id | url | title | text
---|---|---|---|
304716 | https://en.wikipedia.org/wiki/Smart%20Personal%20Objects%20Technology | Smart Personal Objects Technology | The Smart Personal Objects Technology (SPOT) is a discontinued initiative by Microsoft to create intelligent and personal home appliances, consumer electronics, and other objects through new hardware capabilities and software features.
Development of SPOT began as an incubation project initiated by the Microsoft Research division. SPOT was first announced by Bill Gates at the COMDEX computer exposition event in 2002, and additional details were revealed by Microsoft at the 2003 Consumer Electronics Show, where Gates demonstrated a set of prototype smart watches, the first type of device that would support the technology. Rather than conventional forms of connectivity such as 3G or Wi-Fi, SPOT relied on FM broadcasting subcarrier transmission to distribute data.
While several types of electronics eventually supported the technology over its lifecycle, SPOT was considered a commercial failure. Reasons cited for its failure include its subscription-based business model, support limited to North America, the emergence of more efficient and popular forms of data distribution, and the arrival of mobile devices whose features surpassed those SPOT offered.
History
Development
Development of SPOT began as an incubation project led by Microsoft engineer Bill Mitchell and initiated by the Microsoft Research division. Mitchell enlisted the help of Larry Karr, president of SCA Data Systems, to develop the project. Karr had previously worked in the 1980s to develop technology for Atari that would distribute games in a manner distinct from the company's competitors; he proposed FM broadcasting subcarrier transmission as the method of distribution, the same technology later used by Microsoft's SPOT. Microsoft Research and SCA Data Systems ultimately developed the DirectBand subcarrier technology for SPOT. National Semiconductor aided in the development of device chipsets, which featured an ARM7 CPU along with ROM, SRAM, and a 100 MHz RF receiver chip.
SPOT was unveiled by Bill Gates at the annual COMDEX computer exposition event in the fall of 2002. Gates stated that "new devices and technologies will help bring about the next computing revolution" and demonstrated refrigerator magnets that displayed the current time and sports scores, as well as an alarm clock that could display a list of upcoming appointments, traffic updates, and weather forecasts.
At the Consumer Electronics Show of 2003, Microsoft announced that wristwatches would be the first type of device to utilize the technology, in a partnership with watch manufacturers Citizen Watch Co., Fossil, and Suunto. Bill Gates also demonstrated a set of prototype smart watches. SPOT was not Microsoft's first foray into the smartwatch business; the company had previously co-developed the Timex Datalink with Timex in 1994. During CES, Microsoft claimed that the first SPOT-based smartwatches would be released in the fall of that year, and the company released a promotional video displaying an estimated delivery time of fall 2003, but the first devices were delayed until the beginning of 2004.
At the Windows Hardware Engineering Conference of 2003, Gates unveiled a new set of hardware-based navigational controls codenamed XEEL, designed to create a consistent navigation experience across Windows-based devices such as smart phones, tablet PCs, and those powered by SPOT. Microsoft intended XEEL to bring to hardware devices the kind of navigation consistency that the mouse scroll wheel had brought to software interfaces.
In June 2003, Microsoft unveiled its MSN Direct wireless service developed specifically for SPOT, which would be made available across North America. The company stated that the service would enable the delivery of personalized information on devices and, as an example of this functionality, would allow users to receive messages sent from MSN Messenger or calendar appointment reminders from Microsoft Outlook. MSN Direct would use a subscription-based business model, available through monthly or yearly service plans. MSN Direct relied on the DirectBand subcarrier technology developed by Microsoft in conjunction with SCA Data Systems.
Release
The first devices to make use of SPOT were released in 2004 by Fossil and Suunto. Tissot later introduced the first compatible watch to feature a touchscreen, and Swatch released the first compatible watch tailored largely towards younger consumers. As smartwatches were the first type of device to make use of the technology, they became the devices most closely identified with it.
In 2006, Oregon Scientific released the second type of SPOT device, a weather station that displayed regional weather forecasts and various other information. A second generation of smartwatches was also released, designed to address the shortcomings observed in first-generation models. Later that year, Melitta released the third type of device to utilize the technology: a coffeemaker that displayed weather forecasts on an electronic visual display. Garmin released the first SPOT-compatible GPS navigation units in 2007.
In early 2008, Microsoft announced that MSN Direct would be available for Windows Mobile, and in early 2009 the service received additional location-based enhancements.
Discontinuation
Production of SPOT watches ceased in 2008. In 2009, Microsoft announced that it would discontinue the MSN Direct service at the beginning of 2012. The company stated that this decision was due to decreased demand for the service and because of the emergence of more efficient and popular forms of data distribution, such as Wi-Fi. The MSN Direct service continued to support existing SPOT devices until transmissions ceased on January 1, 2012.
Overview
SPOT extended the functionality of traditional devices to include features not originally envisaged for them; a SPOT-powered coffeemaker, for example, could display information such as weather forecasts on an electronic visual display. Smartwatches featured digital watch displays, referred to as Channels, that presented information in a manner that could be customized by a user, who could also specify the default channel to be displayed; this feature was functionally analogous to the home screen commonly seen in mobile operating systems. Additional channels could be downloaded from a specialized website, and a Glance feature allowed a user to cycle through downloaded information.
Manufacturers could also add their own features to SPOT-based devices; as an example, a manufacturer could create its own smartwatch channel in order to distinguish its product from a competitor's product. Each SPOT-based device included a unique identification number used to enable secure authentication and encryption of DirectBand signals. Microsoft also reportedly considered an alarm function for SPOT-based smartwatches that would activate in the event of theft.
SPOT relied on the .NET Micro Framework for the creation and management of embedded device firmware. This technology would later be used for the Windows SideShow feature introduced in Windows Vista, which shares design similarities with SPOT. In 2007, five years after SPOT was announced, Microsoft released the first software development kit for the .NET Micro Framework.
See also
.NET Framework
.NET Compact Framework
Microsoft Band
Smart Display
Windows CE
Windows SideShow
References
2012 disestablishments
Discontinued Microsoft products
Microsoft initiatives
Products introduced in 2002
Smart devices
Smartphones
Smartwatches |
304803 | https://en.wikipedia.org/wiki/Digibox | Digibox | The Digibox is a device marketed by Sky UK in the UK and Ireland to enable home users to receive digital satellite television broadcasts (satellite receiver) from the Astra satellites at 28.2° east. An Internet service was also available through the device, similar in some ways to the American MSN TV, before being discontinued in 2015. The first Digiboxes shipped to consumers in October 1998 when Sky Digital was launched, and the hardware reference design has remained relatively unchanged since then. Compared to other satellite receivers, Digiboxes are severely restricted in functionality. As of 2020, Sky Digiboxes have become largely outmoded, superseded by Sky's latest-generation Sky Q boxes and Sky Glass televisions; the previous-generation Sky+HD boxes are still in use, however.
Base technical details
The Digibox's internal hardware specifications are not publicly disclosed; however, some details are clearly visible on the system. All early boxes except the Pace Javelin feature dual SCART outputs, an RS232 serial port, a dual-output RF modulator with passthrough, and RCA-socketed audio outputs, as well as a 33.6 kbit/s modem and an LNB cable socket. A VideoGuard card slot and a second smart-card reader are fitted to the front (the former for the Sky viewing card, the latter for other interactive cards). All share an identical user interface and EPG, with the exception of Sky+ HD boxes, which have used the new Sky+ HD Guide since early 2009. The DRX595 dropped the RF modulator outputs. A PC-type interface was fitted internally to some early standard boxes but was never utilised by Sky. The latest HD boxes have only a single SCART socket but add an RCA/phono socket for composite video output. All Sky+ and HD boxes have an optical sound output.
The serial port outputs data used for the Sky Gnome and Sky Talker. The Sky Gamepad sends data to the box via the serial port.
Uniquely, the second RF port outputs a 9 V power signal which is used to power 'tvLINK' devices that can be attached to the RF cable next to a TV in a remote room. The tvLINK has an IR detector and sends commands from the remote control back to the Digibox on a 6 MHz carrier on the RF cable. This allows the Sky box to be viewed and controlled from another room by running a single RF cable.
All Digiboxes run on OpenTV (the latest HD boxes now use what is known internally as Project Darwin software) with Sky's EPG software and NDS VideoGuard conditional access. The Digibox receives software updates over the air whenever one is available, even when in standby mode. The software features Sky-controlled channel numbering, the ability to view OpenTV or WapTV applications provided as "interactive" or "teletext" content on channels (with the full Internet-style service available at first via Open... and later Sky Active), parental controls, the ability to order PPV events, and some basic control over lists of favourite channels and show reminders.
Early decoders seemingly support only approximately 700 channels in their channel listing; Sky has announced it is halting launch applications for SD channels, with over 100 awaiting launch, over 600 existing channels, and an average closure rate of one per week.
Digibox remote control
The Digibox remote control comes in four physical designs: blue with the new Sky logo, blue with the old Sky logo (this version was issued from around the first year of the Sky service, although that logo had fallen out of use before the service launched), black with the new Sky logo (Sony boxes only), and silver with the new Sky logo. However, all have multiple variations, with a new version of the Sky remote control produced every year to support the addition of new televisions to its universal remote control capabilities.
Use outside Sky's system
The units are DVB-S compatible and usually carry the DVB logo on the front. However, their use as a DVB-S receiver for anything other than Sky services is seriously limited by their reduced choice of symbol rates (22,000 and 27,500 kilosymbols per second; additionally 28,250 and 29,000 on Sky+ HD) and their inability to store more than 50 (fewer still on some models) non-EPG channels without losing them. Following a software "upgrade", the Digibox will not display programmes from satellites other than those at 28.2° east. Additionally, once any form of parental controls is enabled, the "Other Channels" menu requires PIN entry on every use.
The box also refuses to let users view channels which are free-to-air but flagged as encrypted, which locks out some channels even on the satellites Sky use themselves, such as Free to View channels and the "EPG Background Audio" channel.
Manufacturers and non-common features
Digiboxes have been made by Amstrad, Sony, Thomson, Panasonic, Grundig, and, most commonly, Pace. Although the reference designs were identical, a number of Digibox lines have specific faults or traits, such as failing modems on Grundig units and unstable tuners on older-model Pace boxes. Some units add features not found on other models, such as S-Video sockets on some Grundig units and TOSLINK output on Sony's models.
For a long period in 2004 and 2005, only one Digibox was in production, the Pace DS430N, but Amstrad and Thomson later resumed production with the DRX500 and DSi4212 boxes respectively. Digiboxes for new customers are assigned randomly and cannot be chosen, creating an aftermarket for specific boxes due to their individual features. Sky bought out Amstrad's satellite production in 2007 and now only distribute boxes which initially appeared as Amstrad models but are now badged merely as Sky.
Standardized design
In late 2005, it was announced that all future Digiboxes would have a standardized cosmetic design, while retaining the three current makers, and a slightly redesigned remote control, recoloured white with some blue keys.
Thomson and Amstrad began supplying these boxes in late 2005 (with the DSI4214 and DRX550, respectively), with Pace following in late 2006 (with the DS445NB). The standardized design is known as the "Flow" design.
Other Digiboxes
A second generation of Digibox exists, marketed as Sky+. These PVR units have three versions: the 40 GB PVR1, the 40 GB PVR2, and the PVR3, which is fitted with a 160 GB HDD (80 GB available for user recordings and the remaining 80 GB reserved for use by Sky Anytime). All have dual LNB inputs and an optical digital audio output, as well as all other features of the Sky box. These units are manufactured by Amstrad, Pace and Thomson only, and use a different remote control. USB ports are fitted to the PVR3. When Sky+ was launched, there was an additional £10 monthly charge to access the Sky+ functionality if two or more premium packages were not subscribed to; Sky removed this charge effective 1 July 2007. Without a Sky+ subscription, the Sky+ box reverts to the functionality of a normal Digibox, albeit without Autoview modes.
The Sky+2TB box has 2 TB of recording capacity and is manufactured solely by Thomson. This unit, which is also fitted with USB ports, has since been discontinued.
A third generation of Digibox also exists, with the additional ability to receive DVB-S2 HDTV signals in the MPEG-4 format. The initial sole manufacturer of these Sky+ HD boxes was Thomson, and they made their debut on 22 May 2006 with the launch of HDTV channels on Sky. Although around 17,000 boxes were initially produced, supply could not meet demand and some customers had to wait longer for their Sky HD Digibox. Past manufacturers are Thomson, Pace, Amstrad and Samsung.
These boxes have all the features of a Sky+ box, as well as an HDMI output, and the early Thomson DSI8215 model has component video output. Both SATA and Ethernet ports are also supplied, currently used for the Sky Anytime+ service. In addition to a Sky+ or other Sky subscription, HD versions of the subscription channels require the payment of an extra £10.25 (or €15) per month, although HD channels can be viewed on free-to-air services such as BBC One HD, BBC Two HD, BBC Four HD, CBBC HD, ITV HD, ITVBe HD, Channel 4 HD, NHK World HD and, with a Freesat from Sky card, Channel 5 HD.
On 16 March 2011, Sky launched the Sky HD box, primarily targeted at multiroom subscribers and also used for Freesat from Sky installations. The Sky HD box is not a personal video recorder, meaning it has no hard disk and cannot support the Sky+ functionality. These were later replaced with Sky+HD boxes, which had both sets of functionality.
Card pairing
All Sky boxes, whether Digiboxes, Sky+ or SkyHD, incorporate card pairing. This involves the 'marrying' or 'pairing' of a viewing card to one particular STB, thus preventing one card (and indeed subscription) from being used on multiple Digiboxes. During installation, the engineer initiates a 'callback' from the STB to Sky via modem and telephone line, transmitting details of the viewing card number and the box in which it is installed. Once this step has been completed, it is no longer possible to view premium channels such as Movies or Sports on any box other than the one the card was paired with. The card can still be used to view non-premium channels such as Sky1, but an error message is displayed when attempting to watch a movie or sports channel with an unpaired box-and-card combination. A card can be 're-paired' in some instances, such as STB replacement or multiroom relocation; however, this must be initiated by Sky and cannot be completed by an end-user.
Power consumption
Standard Digiboxes use almost as much power in standby as when active; the "standby" setting merely mutes the sound and cuts off the picture, but internal signal processing continues at the same rate. Sky+ boxes are believed to reduce their power consumption more significantly in standby because they can spin down their hard disks. Power consumption for the standard box varies from around 10 to 18 watts. Most Sky+HD boxes consume up to 60 W when active, falling to around 30 W when the disk is powered down. To some degree, this has been addressed with the latest DRX890/895 HD boxes, which have a maximum power consumption of 45 W but now offer a "deep sleep" mode consuming around 0.5 W.
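To put the figures above in perspective, a short calculation converts them into annual energy use and cost. This is a minimal illustrative sketch: the wattages come from the paragraph above, while the electricity tariff is an assumed value rather than anything stated here.

```python
# Rough annual running cost for the consumption figures quoted above.
# The tariff is an assumption for illustration only.
HOURS_PER_YEAR = 24 * 365
TARIFF_GBP_PER_KWH = 0.30  # assumed unit price; adjust for a real supplier

boxes = {
    "standard Digibox (standby close to active)": 14,   # midpoint of 10-18 W
    "Sky+HD, active": 60,
    "Sky+HD, disk spun down": 30,
    "DRX890/895 in deep sleep": 0.5,
}

for name, watts in boxes.items():
    kwh_per_year = watts * HOURS_PER_YEAR / 1000   # W x hours -> kWh
    cost = kwh_per_year * TARIFF_GBP_PER_KWH
    print(f"{name}: {kwh_per_year:.0f} kWh/year, roughly GBP {cost:.0f}")
```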
See also
Digital television adapter
Set-top box
British Sky Broadcasting
Sky (UK & Ireland)
Astra 28.2°E
References
External links
Sky information: free information at satcure.co.uk
Official Astra consumers/viewers' website
SES satellite operator's official trade/industry site
SES guide to channels broadcasting on Astra satellites
http://www.heyrick.co.uk/ricksworld/digibox/index.html (last updated June 2009)
Television technology
Consumer electronics
Broadband
Satellite television
Amstrad
Sky Group |
305854 | https://en.wikipedia.org/wiki/Text%20messaging | Text messaging | Text messaging, or texting, is the act of composing and sending electronic messages, typically consisting of alphabetic and numeric characters, between two or more users of mobile devices, desktops/laptops, or another type of compatible computer. Text messages may be sent over a cellular network, or may also be sent via an Internet connection.
The term originally referred to messages sent using the Short Message Service (SMS). It has grown beyond alphanumeric text to include multimedia messages using the Multimedia Messaging Service (MMS) containing digital images, videos, and sound content, as well as ideograms known as emoji (happy faces, sad faces, and other icons), and instant messenger applications (usually the term is used when on mobile devices).
Text messages are used for personal, family, business and social purposes. Governmental and non-governmental organizations use text messaging for communication between colleagues. In the 2010s, the sending of short informal messages became an accepted part of many cultures, as happened earlier with emailing. This makes texting a quick and easy way to communicate with friends, family and colleagues, including in contexts where a call would be impolite or inappropriate (e.g., calling very late at night or when one knows the other person is busy with family or work activities). Like e-mail and voicemail and unlike calls (in which the caller hopes to speak directly with the recipient), texting does not require the caller and recipient to both be free at the same moment; this permits communication even between busy individuals. Text messages can also be used to interact with automated systems, for example, to order products or services from e-commerce websites, or to participate in online contests. Advertisers and service providers use direct text marketing to send messages to mobile users about promotions, payment due dates, and other notifications instead of using postal mail, email, or voicemail.
Terminology
The service is referred to by different colloquialisms depending on the region. It may simply be referred to as a "text" in North America, the United Kingdom, Australia, New Zealand, and the Philippines, an "SMS" in most of mainland Europe, or an "MMS" or "SMS" in the Middle East, Africa, and Asia. The sender of a text message is commonly referred to as a "texter".
History
The electrical telegraph systems, developed in the early 19th century, used simple electrical signals to send text messages. In the late 19th century, wireless telegraphy was developed using radio waves.
In 1933, the German Reichspost (Reich postal service) introduced the first "telex" service.
The University of Hawaii began using radio to send digital information as early as 1971, using ALOHAnet. Friedhelm Hillebrand conceptualised SMS in 1984 while working for Deutsche Telekom. Sitting at a typewriter at home, Hillebrand typed out random sentences and counted every letter, number, punctuation mark, and space. Almost every time, the messages contained fewer than 160 characters, thus giving the basis for the character limit of text messaging. With Bernard Ghillebaert of France Télécom, he developed a proposal for the GSM (Groupe Spécial Mobile) meeting in February 1985 in Oslo. The first technical solution evolved in a GSM subgroup under the leadership of Finn Trosby. It was further developed under the leadership of Kevin Holley and Ian Harris (see Short Message Service). SMS forms an integral part of Signalling System No. 7 (SS7). Under SS7, it is a "state" carrying 160 characters of data, coded in the ITU-T "T.56" text format, that has a "sequence lead-in" to determine different language codes and may have special character codes that permit, for example, sending simple graphs as text. This was part of ISDN (Integrated Services Digital Network), and since GSM is based on this, it made its way to the mobile phone. Messages could be sent and received on ISDN phones, and these can send SMS to any GSM phone. The possibility of doing something is one thing and implementing it another, but systems existed from 1988 that sent SMS messages to mobile phones (compare ND-NOTIS).
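The 160-character limit described above is what later forced long texts to be split into multiple parts. The sketch below illustrates that segmentation in a simplified way, assuming the GSM 7-bit alphabet throughout; the 153-characters-per-part figure for concatenated messages reflects the standard GSM concatenation header overhead and is not taken from this article, and real implementations also handle extended characters and UCS-2 fallback.

```python
# Simplified illustration of SMS segmentation under the 160-character limit.
# Assumes every character fits the GSM 7-bit alphabet; real encoders also
# treat extended characters and non-Latin (UCS-2) text differently.

SINGLE_SMS_LIMIT = 160    # one self-contained message
CONCAT_PART_LIMIT = 153   # per part once a concatenation header is added

def split_into_sms_parts(text: str) -> list[str]:
    """Split a long text the way a handset segments a concatenated SMS."""
    if len(text) <= SINGLE_SMS_LIMIT:
        return [text]
    return [text[i:i + CONCAT_PART_LIMIT]
            for i in range(0, len(text), CONCAT_PART_LIMIT)]

message = "Merry Christmas " * 20          # 320 characters
parts = split_into_sms_parts(message)
print(len(message), "characters ->", len(parts), "parts")   # 320 characters -> 3 parts
```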
SMS messaging was used for the first time on 3 December 1992, when Neil Papworth, a 22-year-old test engineer for Sema Group in the UK (now Airwide Solutions), used a personal computer to send the text message "Merry Christmas" via the Vodafone network to the phone of Richard Jarvis, who was at a party in Newbury, Berkshire, which had been organized to celebrate the event. Modern SMS text messaging is usually messaging from one mobile phone to another. Finnish Radiolinja became the first network to offer a commercial person-to-person SMS text messaging service in 1994. When Radiolinja's domestic competitor, Telecom Finland (now part of TeliaSonera), also launched SMS text messaging in 1995 and the two networks offered cross-network SMS functionality, Finland became the first nation where SMS text messaging was offered on a competitive as well as a commercial basis. Although GSM was allowed in the United States, its standard radio frequencies were blocked and awarded to US carriers to use US technology, so little development of mobile messaging services took place there. GSM in the US had to use frequencies allocated for personal communications services (PCS), a band the ITU frequency régime had earmarked for DECT (Digital Enhanced Cordless Telecommunications, a roughly 1000-foot-range picocell technology), but it survived. American Personal Communications (APC), the first GSM carrier in America, provided the first text-messaging service in the United States. Sprint Telecommunications Venture, a partnership of Sprint Corp. and three large cable-TV companies, owned 49 percent of APC. The Sprint venture was the largest single buyer at a government-run spectrum auction that raised $7.7 billion in 1995 for PCS licenses. APC operated under the brand name Sprint Spectrum and launched its service on 15 November 1995 in Washington, D.C. and Baltimore, Maryland. Vice President Al Gore in Washington, D.C. made the initial phone call to launch the network, calling Mayor Kurt Schmoke in Baltimore.
Initial growth of text messaging was slow, with GSM customers in 1995 sending on average only 0.4 messages per month. One factor in the slow take-up of SMS was that operators were slow to set up charging systems, especially for prepaid subscribers, and to eliminate billing fraud, which was possible by changing SMSC settings on individual handsets to use the SMSCs of other operators. Over time, this issue was eliminated by switch billing instead of billing at the SMSC and by new features within SMSCs to allow blocking of foreign mobile users sending messages through them. SMS is available on a wide range of networks, including 3G networks. However, not all text-messaging systems use SMS; some notable alternate implementations of the concept include J-Phone's SkyMail and NTT Docomo's Short Mail, both in Japan. E-mail messaging from phones, as popularized by NTT Docomo's i-mode and the RIM BlackBerry, also typically uses standard mail protocols such as SMTP over TCP/IP. By the end of 2007, text messaging was the most widely used mobile data service, with 74% of all mobile phone users worldwide (2.4 billion out of 3.3 billion phone subscribers) being active users of the Short Message Service. In countries such as Finland, Sweden, and Norway, over 85% of the population use SMS. The European average is about 80%, and North America is rapidly catching up with over 60% active users of SMS. The largest average usage of the service by mobile phone subscribers occurs in the Philippines, with an average of 27 texts sent per day per subscriber.
Uses
Text messaging is most often used between private mobile phone users, as a substitute for voice calls in situations where voice communication is impossible or undesirable (e.g., during a school class or a work meeting). Texting is also used to communicate very brief messages, such as informing someone that you will be late or reminding a friend or colleague about a meeting. As with e-mail, informality and brevity have become an accepted part of text messaging. Text messages can also be used for the remote control of home appliances, and SMS is widely used in domotics systems. Some amateurs have also built their own systems to control (some of) their appliances via SMS. Other methods, such as group messaging, which was patented in 2012 by Andrew Ferry, Devin Peterson, Justin Cowart, Ian Ainsworth, Patrick Messinger, Jacob Delk, Jack Grande, Austin Hughes, Brendan Blake, and Brooks Brasher, are used to involve more than two people in a text messaging conversation. A Flash SMS is a type of text message that appears directly on the main screen without user interaction and is not automatically stored in the inbox. It can be useful in cases such as an emergency (e.g., fire alarm) or confidentiality (e.g., one-time password).
Short message services are developing very rapidly throughout the world. SMS is particularly popular in Europe, Asia (excluding Japan; see below), the United States, Australia, and New Zealand and is also gaining influence in Africa. Popularity has grown to a sufficient extent that the term texting (used as a verb meaning the act of mobile phone users sending short messages back and forth) has entered the common lexicon. Young Asians consider SMS the most popular mobile phone application. Fifty percent of American teens send fifty text messages or more per day, making it their most frequent form of communication. In China, SMS is very popular and has brought service providers significant profit (18 billion short messages were sent in 2001).
It is a very influential and powerful tool in the Philippines, where the average user sends 10–12 text messages a day. The Philippines alone sends on average over 1 billion text messages a day, more than the annual average SMS volume of the countries in Europe, and even China and India. SMS is hugely popular in India, where youngsters often exchange many text messages, and companies provide alerts, infotainment, news, cricket scores updates, railway/airline booking, mobile billing, and banking services on SMS.
Similarly, in 2008, text messaging played a primary role in the implication of former Detroit Mayor Kwame Kilpatrick in an SMS sex scandal. Short messages are particularly popular among young urbanites. In many markets, the service is comparatively cheap. For example, in Australia, a message typically costs between A$0.20 and $0.25 to send (some prepaid services charge $0.01 between their own phones), compared with a voice call, which costs somewhere between $0.40 and $2.00 per minute (commonly charged in half-minute blocks). The service is enormously profitable to the service providers. At a typical length of only 190 bytes (including protocol overhead), more than 350 of these messages per minute can be transmitted at the same data rate as a usual voice call (9 kbit/s). There are also free SMS services available, which are often sponsored, that allow sending and receiving SMS from a PC connected to the Internet. Mobile service providers in New Zealand, such as Vodafone and Telecom NZ, provide up to 2000 SMS messages for NZ$10 per month. Users on these plans send on average 1500 SMS messages every month. Text messaging has become so popular that advertising agencies and advertisers are now jumping into the text messaging business. Services that provide bulk text message sending are also becoming a popular way for clubs, associations, and advertisers to reach a group of opt-in subscribers quickly.
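The throughput comparison quoted above ("more than 350 of these messages per minute" within the data rate of a 9 kbit/s voice call) follows from simple arithmetic; the short sketch below only reproduces that calculation from the figures given in the paragraph.

```python
# Reproducing the arithmetic behind the "more than 350 messages per minute"
# figure, using the numbers quoted above.
voice_call_bps = 9_000    # data rate of a typical voice call, 9 kbit/s
sms_bytes = 190           # typical SMS length including protocol overhead

bytes_per_minute = voice_call_bps / 8 * 60
messages_per_minute = bytes_per_minute / sms_bytes
print(f"{messages_per_minute:.0f} messages per minute")   # about 355
```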
Research suggested that Internet-based mobile messaging would grow to equal the popularity of SMS by 2013, with nearly 10 trillion messages being sent through each technology. Services such as Facebook Messenger, Snapchat, WhatsApp and Viber have led to a decline in the use of SMS in parts of the world.
Research has shown that women are more likely than men to use emoticons in text messages.
Applications
Microblogging
Among many texting trends, a system known as microblogging has surfaced, consisting of a miniaturized blog inspired mainly by people's tendency to jot down informal thoughts and post them online. Microblogging services include websites like Twitter and its Chinese equivalent Weibo (微博). As of 2016, both of these websites were popular.
Emergency services
In some countries, text messages can be used to contact emergency services. In the UK, text messages can be used to call emergency services only after registering with the emergency SMS service. This service is primarily aimed at people who, because of disability, are unable to make a voice call. It has recently been promoted as a means for walkers and climbers to call emergency services from areas where a voice call is not possible due to low signal strength. In the US, there is a move to require both traditional operators and over-the-top messaging providers to support texting to 911. In Asia, SMS is used for tsunami warnings, and in Europe, SMS is used to inform individuals of imminent disasters. Since the location of a handset is known, systems can alert everyone in an area that events, such as an avalanche, have made impassable. A similar system, known as Emergency Alert, is used in Australia to notify the public of impending disasters through both SMS and landline phone calls. These messages can be sent based on either the location of the phone or the address to which the handset is registered.
Reminders of medical appointments
SMS messages are used in some countries as reminders of medical appointments. Missed outpatient clinic appointments cost the National Health Service (England) more than £600 million ($980 million) a year. SMS messages are thought to be more cost-effective, swifter to deliver, and more likely to receive a faster response than letters. A recent study by Sims and colleagues (2012) examined the outcomes of 24,709 outpatient appointments scheduled in mental health services in South-East London. The study found that SMS message reminders could reduce the number of missed psychiatric appointments by 25–28%, representing a potential national yearly saving of over £150 million. Because of the COVID-19 pandemic, medical facilities in the United States are using text messaging to coordinate the appointment process, including reminders, cancellations, and safe check-in. US-based cloud radiology information system vendor AbbaDox includes this in their patient engagement services.
Commercial uses
Short codes
Short codes are special telephone numbers, shorter than full telephone numbers, that can be used to address SMS and MMS messages from mobile phones or fixed phones. There are two types of short codes: dialling and messaging.
Text messaging gateway providers
SMS gateway providers facilitate SMS traffic between businesses and mobile subscribers, being mainly responsible for carrying mission-critical messages, SMS for enterprises, content delivery and entertainment services involving SMS, e.g., TV voting. Considering SMS messaging performance and cost, as well as the level of text messaging services, SMS gateway providers can be classified either as resellers of the text messaging capability of another provider's SMSC or as operators of their own SMSC with SS7 connectivity. SMS gateway providers can provide gateway-to-mobile (Mobile Terminated, or MT) services, and some suppliers can also supply mobile-to-gateway (text-in, or Mobile Originated/MO) services. Many operate text-in services on short codes or mobile number ranges, whereas others use lower-cost geographic text-in numbers.
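As a rough illustration of the gateway-to-mobile (MT) path described above, a business application typically submits a message to the gateway over an HTTP API and the gateway forwards it to the operator's SMSC. The sketch below assumes a purely hypothetical endpoint, parameter names, and authentication scheme; every real provider defines its own interface.

```python
# Minimal sketch of submitting a mobile-terminated (MT) message to an SMS
# gateway. The URL, field names, and auth header are illustrative
# assumptions, not any real provider's API.

import requests  # third-party HTTP client (pip install requests)

GATEWAY_URL = "https://sms-gateway.example.com/api/send"  # hypothetical endpoint

def send_mt_message(api_key: str, recipient: str, text: str) -> bool:
    """Submit one MT message; return True if the gateway accepted it."""
    response = requests.post(
        GATEWAY_URL,
        data={"to": recipient, "message": text},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    return response.status_code == 200

# Example call (would only succeed against a real gateway account):
# send_mt_message("YOUR_API_KEY", "+447700900123", "Your order has shipped.")
```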
Premium content
SMS is widely used for delivering digital content, such as news alerts, financial information, pictures, GIFs, logos and ringtones. Such messages are also known as premium-rated short messages (PSMS). The subscribers are charged extra for receiving this premium content, and the amount is typically divided between the mobile network operator and the value added service provider (VASP), either through revenue share or a fixed transport fee. Services like 82ASK and Any Question Answered have used the PSMS model to enable rapid response to mobile consumers' questions, using on-call teams of experts and researchers. In November 2013, amidst complaints about unsolicited charges on bills, major mobile carriers in the US agreed to stop billing for PSMS in 45 states, effectively ending its use in the United States.
Outside the United States, premium short messages are increasingly being used for "real-world" services. For example, some vending machines now allow payment by sending a premium-rated short message, so that the cost of the item bought is added to the user's phone bill or subtracted from the user's prepaid credits. Recently, premium messaging companies have come under fire from consumer groups due to a large number of consumers racking up huge phone bills. A new type of free-premium or hybrid-premium content has emerged with the launch of text-service websites. These sites allow registered users to receive free text messages when items they are interested in go on sale, or when new items are introduced. An alternative to inbound SMS is based on long numbers (international mobile number format, e.g., +44 7624 805000, or geographic numbers that can handle voice and SMS, e.g., 01133203040), which can be used in place of short codes or premium-rated short messages for SMS reception in several applications, such as TV voting, product promotions and campaigns. Long numbers are internationally available, as well as enabling businesses to have their own number, rather than short codes, which are usually shared across a lot of brands. Additionally, long numbers are non-premium inbound numbers.
In workplaces
The use of text messaging for workplace purposes grew significantly during the mid-2000s. As companies seek competitive advantages, many employees use new technology, collaborative applications, and real-time messaging such as SMS, instant messaging, and mobile communications to connect with teammates and customers. Some practical uses of text messaging include the use of SMS for confirming delivery or other tasks, for instant communication between a service provider and a client (e.g., a stockbroker and an investor), and for sending alerts. Several universities, such as Penn State, have implemented systems for texting campus alerts to students and faculty. As text messaging has proliferated in business, so too have regulations governing its use. One regulation specifically governing the use of text messaging in financial-services firms engaged in stocks, equities, and securities trading is Regulatory Notice 07-59, Supervision of Electronic Communications, December 2007, issued to member firms by the Financial Industry Regulatory Authority. In 07-59, FINRA noted that "electronic communications", "e-mail", and "electronic correspondence" may be used interchangeably and can include such forms of electronic messaging as instant messaging and text messaging. Industry has had to develop new technology to allow companies to archive their employees' text messages.
Security, confidentiality, reliability, and speed of SMS are among the most important guarantees industries such as financial services, energy and commodities trading, health care and enterprises demand in their mission-critical procedures. One way to guarantee such a quality of text messaging lies in introducing SLAs (Service Level Agreement), which are common in IT contracts. By providing measurable SLAs, corporations can define reliability parameters and set up a high quality of their services. Just one of many SMS applications that have proven highly popular and successful in the financial services industry is mobile receipts. In January 2009, Mobile Marketing Association (MMA) published the Mobile Banking Overview for financial institutions in which it discussed the advantages and disadvantages of mobile channel platforms such as Short Message Services (SMS), Mobile Web, Mobile Client Applications, SMS with Mobile Web and Secure SMS.
Mobile interaction services are an alternative way of using SMS in business communications with greater certainty. Typical business-to-business applications are telematics and machine-to-machine communication, in which two applications automatically communicate with each other. Incident alerts are also common, and staff communications are another use in B2B scenarios. Businesses can use SMS for time-critical alerts, updates, and reminders, mobile campaigns, and content and entertainment applications. Mobile interaction can also be used for consumer-to-business interactions, such as media voting and competitions, and consumer-to-consumer interaction, for example with mobile social networking, chatting and dating.
Text messaging is widely used in business settings, as well as in many civil service and non-governmental organization workplaces. The U.S. and Canadian civil services both adopted BlackBerry smartphones in the 2000s.
Group texts
Group texts involve more than two users. In some cases, when one or more people on the group text are offline, in airplane mode, or have their device shut down, a text sent to the group may produce an error message indicating that it did not go through. In such cases, all online or available users in the group will nonetheless have received the message, and re-sending it will only result in some participants receiving it multiple times.
Online SMS services
There are a growing number of websites that allow users to send free SMS messages online. Some websites provide free SMS for promoting premium business packages.
Worldwide use
Europe
Europe follows next behind Asia in terms of the popularity of the use of SMS. In 2003, an average of 16 billion messages was sent each month. Users in Spain sent a little more than fifty messages per month on average in 2003. In Italy, Germany and the United Kingdom, the figure was around 35–40 SMS messages per month. In each of these countries, the cost of sending an SMS message varies from €0.04–0.23, depending on the payment plan (with many contractual plans including all or several texts for free). In the United Kingdom, text messages are charged between £0.05–0.12. Curiously, France has not taken to SMS in the same way, sending just under 20 messages on average per user per month. France has the same GSM technology as other European countries, so the uptake is not hampered by technical restrictions.
In the Republic of Ireland, 1.5 billion messages are sent every quarter, on average 114 messages per person per month. In the United Kingdom, over 1 billion text messages were sent every week. The Eurovision Song Contest organized the first pan-European SMS voting in 2002, as a part of the voting system (there was also voting over traditional landline phone lines). In 2005, the Eurovision Song Contest organized the biggest televoting ever (with SMS and phone voting). During roaming, that is, when a user connects to another network in a different country from their own, the prices may be higher, but in July 2009, EU legislation went into effect limiting this price to €0.11.
Mobile service providers in Finland offer contracts in which users can send 1000 text messages a month for €10. In Finland, which has very high mobile phone ownership rates, some TV channels began "SMS chat", which involved sending short messages to a phone number, and the messages would be shown on TV. Chats are always moderated, which prevents users from sending offensive material to the channel. The craze evolved into quizzes and strategy games and then faster-paced games designed for television and SMS control. Games require users to register their nicknames and send short messages to control a character onscreen. Messages usually cost 0.05 to 0.86 Euro apiece, and games can require the player to send dozens of messages. In December 2003, a Finnish TV channel, MTV3, put a Santa Claus character on-air reading aloud text messages sent in by viewers. On 12 March 2004, the first entirely "interactive" TV channel, VIISI, began operation in Finland. However, SBS Finland Oy took over the channel and turned it into a music channel named The Voice in November 2004. In 2006, the Prime Minister of Finland, Matti Vanhanen, made the news when he allegedly broke up with his girlfriend with a text message. In 2007, the first book written solely in text messages, Viimeiset viestit (Last Messages), was released by Finnish author Hannu Luntiala. It is about an executive who travels through Europe and India.
United States
In the United States, text messaging is very popular; as reported by CTIA in December 2009, the 286 million US subscribers sent 152.7 billion text messages per month, for an average of 534 messages per subscriber per month. The Pew Research Center found in May 2010 that 72% of U.S. adult cellphone users send and receive text messages. In the U.S., SMS is often charged both at the sender and at the destination, but, unlike phone calls, it cannot be rejected or dismissed. The reasons for lower uptake than other countries are varied. Many users have unlimited "mobile-to-mobile" minutes, high monthly minute allotments, or unlimited service. Moreover, "push to talk" services offer the instant connectivity of SMS and are typically unlimited. The integration between competing providers and technologies necessary for cross-network text messaging was not initially available. Some providers originally charged extra for texting, reducing its appeal. In the third quarter of 2006, at least 12 billion text messages were sent on AT&T's network, up almost 15% from the preceding quarter. In the U.S., while texting is mainly popular among people from 13–22 years old, it is also increasing among adults and business users. The age that a child receives his/her first cell phone has also decreased, making text messaging a popular way of communicating. The number of texts sent in the US has gone up over the years as the price has gone down to an average of $0.10 per text sent and received. To convince more customers to buy unlimited text messaging plans, some major cellphone providers have increased the price to send and receive text messages from $.15 to $.20 per message. This is over $1,300 per megabyte. Many providers offer unlimited plans, which can result in a lower rate per text, given sufficient volume.
Japan
Japan was among the first countries to adopt short messages widely, with pioneering non-GSM services including J-Phone's SkyMail and NTT Docomo's Short Mail. Japanese adolescents first began text messaging, because it was a cheaper form of communication than the other available forms. Thus, Japanese theorists created the selective interpersonal relationship theory, claiming that mobile phones can change social networks among young people (classified as 13- to 30-year-olds). They theorized this age group had extensive but low-quality relationships with friends, and mobile phone usage may facilitate improvement in the quality of their relationships. They concluded this age group prefers "selective interpersonal relationships in which they maintain particular, partial, but rich relations, depending on the situation." The same studies showed participants rated friendships in which they communicated face-to-face and through text messaging as being more intimate than those in which they communicated solely face-to-face. This indicates participants make new relationships with face-to-face communication at an early stage, but use text messaging to increase their contact later on. As the relationships between participants grew more intimate, the frequency of text messaging also increased. However, short messaging has been largely rendered obsolete by the prevalence of mobile Internet e-mail, which can be sent to and received from any e-mail address, mobile or otherwise. That said, while usually presented to the user simply as a uniform "mail" service (and most users are unaware of the distinction), the operators may still internally transmit the content as short messages, especially if the destination is on the same network.
China
Text messaging is popular and cheap in China. About 700 billion messages were sent in 2007. Text message spam is also a problem in China. In 2007, 353.8 billion spam messages were sent, up 93% from the previous year, or about 12.44 messages per week per person. The government of the People's Republic of China routinely monitors text messages across the country for illegal content.
Among Chinese migrant workers with little formal education, it is common to refer to SMS manuals when text messaging. These manuals are published as cheap, handy, smaller-than-pocket-size booklets that offer diverse linguistic phrases to utilize as messages.
Philippines
SMS was introduced to selected markets in the Philippines in 1995. In 1998, Philippine mobile service providers launched SMS more widely across the country, with initial television marketing campaigns targeting hearing-impaired users. The service was initially free with subscriptions, but Filipinos quickly exploited the feature to communicate for free instead of using voice calls, which they would be charged for. After telephone companies realized this trend, they began charging for SMS. The rate across networks is 1 peso per SMS (about US$0.023). Even after users were charged for SMS, it remained cheap, about one-tenth of the price of a voice call. This low price led to about five million Filipinos owning a cell phone by 2001. Because of the highly social nature of Philippine culture and the affordability of SMS compared to voice calls, SMS usage shot up. Filipinos used texting not only for social messages but also for political purposes, as it allowed the Filipinos to express their opinions on current events and political issues. It became a powerful tool for Filipinos in promoting or denouncing issues and was a key factor during the 2001 EDSA II revolution, which overthrew then-President Joseph Estrada, who was eventually found guilty of corruption. According to 2009 statistics, there are about 72 million mobile service subscriptions (roughly 80% of the Filipino population), with around 1.39 billion SMS messages being sent daily. Because of the large number of text messages being sent, the Philippines became known as the "text capital of the world" during the late 1990s until the early 2000s.
New Zealand
There are three mobile network companies in New Zealand. Spark NZ (formerly Telecom NZ) was the first telecommunication company in New Zealand. In 2011, Spark was broken into two companies, with Chorus Ltd taking the landline infrastructure and Spark NZ providing services, including over its mobile network. Vodafone NZ acquired mobile network provider BellSouth New Zealand in 1998 and had 2.32 million customers as of July 2013. Vodafone launched the first text messaging service in 1999 and has introduced innovative TXT services like Safe TXT and CallMe. 2degrees Mobile Ltd launched in August 2009. In 2005, around 85% of the adult population had a mobile phone. In general, texting is more popular than making phone calls, as it is viewed as less intrusive and therefore more polite.
Africa
Text messaging is expected to become a key revenue driver for mobile network operators in Africa over the coming years, and it is already slowly gaining influence in the African market. Text messaging has been used, for example, to spread the word about HIV and AIDS. Also, in September 2009, a multi-country campaign in Africa used text messaging to expose stock-outs of essential medicines at public health facilities and put pressure on governments to address the issue.
Social effects
The advent of text messaging made possible new forms of interaction that were not possible before. A person may now carry out a conversation with another user without the constraint of being expected to reply within a short amount of time and without needing to set time aside to engage in conversation. With voice calling, both participants need to be free at the same time. Mobile phone users can maintain communication during situations in which a voice call is impractical, impossible, or unacceptable, such as during a school class or work meeting. Texting has provided a venue for participatory culture, allowing viewers to vote in online and TV polls, as well as receive information while they are on the move. Texting can also bring people together and create a sense of community through "Smart Mobs" or "Net War", which create "people power".
Research has also suggested that text messaging may increase social distance and erode verbal communication skills for many people.
Effect on language
The small phone keypad and the rapidity of typical text message exchanges have given rise to a number of spelling abbreviations: as in the phrase "txt msg", "u" (an abbreviation for "you"), "HMU" ("hit me up"; i.e., call me), or use of camel case, such as in "ThisIsVeryLame". To avoid the even more limited message lengths allowed when using Cyrillic or Greek letters, speakers of languages written in those alphabets often use the Latin alphabet for their own language. In certain languages utilizing diacritic marks, such as Polish, SMS technology created an entirely new variant of written language: characters normally written with diacritic marks (e.g., ą, ę, ś, ż in Polish) are now being written without them (as a, e, s, z) to enable using cell phones without Polish script or to save space in Unicode messages. Historically, this language developed out of shorthand used in bulletin board systems and later in Internet chat rooms, where users would abbreviate some words to allow a response to be typed more quickly, though the amount of time saved was often inconsequential. However, this became much more pronounced in SMS, where mobile phone users either have a numeric keyboard (with older cellphones) or a small QWERTY keyboard (for 2010s-era smartphones), so more effort is required to type each character, and there is sometimes a limit on the number of characters that may be sent. In Mandarin Chinese, numbers that sound similar to words are used in place of those words. For example, the numbers 520 in Chinese (wǔ èr líng) sound like the words for "I love you" (wǒ ài nǐ). The sequence 748 (qī sì bā) sounds like the curse "go to hell" (qù sǐ ba).
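The Polish habit of dropping diacritics mentioned above has a practical motivation: text written with only basic Latin letters can use the GSM 7-bit alphabet (160 characters per message), whereas Polish diacritics normally force the less compact Unicode (UCS-2) encoding, which allows only 70 characters per message. Those per-message limits are standard GSM behaviour rather than figures from this article; the sketch below simply shows the stripping step.

```python
# Stripping Polish diacritics so a message can stay in the GSM 7-bit
# alphabet (160 chars/SMS) instead of UCS-2 (70 chars/SMS).
import unicodedata

def strip_diacritics(text: str) -> str:
    """Replace accented letters with their base letters (ą -> a, ś -> s)."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Note: the Polish letter ł has no combining-mark decomposition and would
# need an explicit mapping in addition to this approach.
print(strip_diacritics("Będę później, przepraszam"))  # "Bede pozniej, przepraszam"
```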
Predictive text software, which attempts to guess words (Tegic's T9 as well as iTap) or letters (Eatoni's LetterWise), reduces the labour of time-consuming input. This makes abbreviations not only less necessary but slower to type than regular words that are in the software's dictionary. However, it makes the messages longer, often requiring the text message to be sent in multiple parts and, therefore, costing more to send. The use of text messaging has changed the way that people talk and write essays, some believing it to be harmful. Children today are receiving cell phones at an age as young as eight years old; more than 35 per cent of children in second and third grade have their own mobile phones. Because of this, the texting language is integrated into the way that students think from an earlier age than ever before. In November 2006, the New Zealand Qualifications Authority approved a move allowing secondary school students to use mobile phone text language in end-of-year exam papers. Highly publicized reports, beginning in 2002, of the use of text language in school assignments caused some to become concerned that the quality of written communication is on the decline, and other reports claim that teachers and professors are beginning to have a hard time controlling the problem. However, the notion that text language is widespread or harmful is refuted by research from linguistic experts.
An article in The New Yorker explores how text messaging has anglicized some of the world's languages. The use of diacritic marks is dropped in languages such as French, as are symbols in Ethiopian languages. In his book, Txtng: the Gr8 Db8 (which translates as "Texting: the Great Debate"), David Crystal states that texters in all eleven languages surveyed use "lol" ("laughing out loud"), "u", "brb" ("be right back"), and "gr8" ("great"), all English-based shorthands. The use of pictograms and logograms in texts is present in every language. They shorten words by using symbols to represent the word, or symbols whose name sounds like a syllable of the word, as in 2day or b4. This is commonly used in other languages as well. Crystal gives examples from several languages, such as Italian, where sei ("six") is used for sei ("you are"), as in dv6 = dove sei ("where are you"), and French k7 = cassette ("cassette"). There is also the use of numeral sequences, substituting for several syllables of a word and creating whole phrases using numerals. For example, in French, a12c4 can be said as à un de ces quatres, "see you around" (literally: "to one of these four [days]"). An example of using symbols in texting and borrowing from English is the use of @. Whenever it is used in texting, its intended use is with the English pronunciation. Crystal gives the example of the Welsh use of @ in @F, pronounced ataf, meaning "to me". In character-based languages such as Chinese and Japanese, numbers are assigned syllables based on the shortened form of the pronunciation of the number, sometimes the English pronunciation of the number. In this way, numbers alone can be used to communicate whole passages, as in Chinese, where "8807701314520" can be literally translated as "Hug hug you, kiss you, whole life, whole life I love you." English influences texting worldwide in varying ways, but always in combination with the individual properties of each language.
American popular culture is also recognized in shorthand. For example, Homer Simpson translates into: ~(_8^(|). Crystal also suggests that texting has led to more creativity in the English language, giving people opportunities to create their own slang, emoticons, abbreviations, acronyms, etc. The feeling of individualism and freedom makes texting more popular and a more efficient way to communicate. Crystal has also been quoted as saying that "In a logical world, text messaging should not have survived." Text messaging did not come out of nowhere: it originally began as a messaging system for sending out emergency information, but it gained immediate popularity with the public. What followed is the SMS we see today, which is a very quick and efficient way of sharing information from person to person. Work by Richard Ling has shown that texting has a gendered dimension and that it plays into the development of teen identity. In addition, we text a very small number of other people; for most people, half of their texts go to 3–5 others.
Research by Rosen et al. (2009) found that young adults who used more language-based textisms (shortcuts such as LOL, 2nite, etc.) in daily writing produced worse formal writing than those who used fewer linguistic textisms in daily writing. However, the exact opposite was true for informal writing. This suggests that the act of using textisms to shorten words may lead young adults to produce more informal writing, which may in turn help them become better "informal" writers. Because of text messaging, teens are writing more, and some teachers see that this comfort with language can be harnessed to make better writers. This new form of communication may encourage students to put their thoughts and feelings into words, and this may be used as a bridge to get them more interested in formal writing.
Joan H. Lee in her thesis What does txting do 2 language: The influences of exposure to messaging and print media on acceptability constraints (2011) associates exposure to text messaging with more rigid acceptability constraints. The thesis suggests that more exposure to the colloquial, Generation Text language of text messaging contributes to being less accepting of words. In contrast, Lee found that students with more exposure to traditional print media (such as books and magazines) were more accepting of both real and fictitious words. The thesis, which garnered international media attention, also presents a literature review of academic literature on the effects of text messaging on language. Texting has also been shown to have had no effect or some positive effects on literacy. According to Plester, Wood and Joshi and their research done on the study of 88 British 10–12-year-old children and their knowledge of text messages, "textisms are essentially forms of phonetic abbreviation" that show that "to produce and read such abbreviations arguably requires a level of phonological awareness (and orthographic awareness) in the child concerned."
Texting while driving
Texting while driving increases distraction behind the wheel and raises the risk of an accident. In 2006, Liberty Mutual Insurance Group conducted a survey with more than 900 teens from over 26 high schools nationwide. The results showed that 87% of students found texting to be "very" or "extremely" distracting. A study by AAA found that 46% of teens admitted to being distracted behind the wheel due to texting. One example of distraction behind the wheel is the 2008 Chatsworth train collision, which killed 25 passengers; the engineer had sent 45 text messages while operating the train. A 2009 experiment with Car and Driver editor Eddie Alterman (which took place at a deserted airfield, for safety reasons) compared texting with drunk driving. The experiment found that texting while driving was more dangerous than being drunk: while being legally drunk added four feet to Alterman's stopping distance at 70 mph, reading an e-mail on a phone added 36 feet, and sending a text message added 70 feet. In 2009, the Virginia Tech Transportation Institute released the results of an 18-month study that involved placing cameras inside the cabs of more than 100 long-haul trucks, which recorded the drivers over a combined driving distance of three million miles. The study concluded that when the drivers were texting, their risk of crashing was 23 times greater than when not texting.
Texting while walking
Due to the proliferation of smart phone applications used while walking, "texting while walking" or "wexting" is the increasingly common practice of people walking while transfixed by their mobile device, looking at nothing but their own screen. The term was first coined in 2015 in New York by Rentrak's chief client officer when discussing time spent with media and various media usage metrics. Text messaging among pedestrians leads to increased cognitive distraction and reduced situation awareness, and may lead to increases in unsafe behavior resulting in injury and death. Recent studies of cell phone use while walking showed that cell phone users recall fewer objects when conversing, walk slower, have altered gait, and are less safe when crossing a street. Additionally, some gait analyses showed that stance phase during overstepping motion and longitudinal and lateral deviation increased during cell phone operation, but step length and clearance did not; a different analysis did find increased step clearance and reduced step length.
It is unclear which processes may be affected by distraction, which types of distraction may affect which cognitive processes, and how individual differences may affect the influence of distraction. Lamberg and Muratori believe that engaging in a dual-task, such as texting while walking, may interfere with working memory and result in walking errors. Their study demonstrated that participants engaged in text messaging were unable to maintain walking speed or retain accurate spatial information, suggesting an inability to adequately divide their attention between two tasks. According to them, the addition of texting while walking with vision occluded increases the demands placed on the working memory system resulting in gait disruptions.
Texting on a phone distracts participants, even when the texting task used is a relatively simple one. Stavrinos et al. investigated the effect of other phone-based activities, such as engaging in conversations or performing cognitive tasks on a phone, and found that participants actually have reduced visual awareness. This finding was supported by Licence et al., who conducted a similar study. For example, texting pedestrians may fail to notice unusual events in their environment, such as a unicycling clown. These findings suggest that tasks requiring the allocation of cognitive resources can affect visual attention even when the task itself does not require participants to avert their eyes from their environment. The act of texting itself seems to impair pedestrians' visual awareness; the distraction produced by texting appears to be a combination of both a cognitive and a visual perceptual distraction. A study conducted by Licence et al. supported some of these findings, particularly that those who text while walking significantly alter their gait. However, they also found that the gait pattern texters adopted was slower and more "protective", and consequently did not increase obstacle contact or tripping in a typical pedestrian context.
There have also been technological approaches to increase the safety and awareness of pedestrians who are (inattentionally) blind to their surroundings while using a smart phone, e.g., using a Kinect or an ultrasound phone cover as a virtual white cane, or using the built-in camera to algorithmically analyze a single picture or a stream of pictures for obstacles, with Wang et al. proposing the use of machine learning to specifically detect incoming vehicles.
Sexting
Sexting is slang for the act of sending sexually explicit or suggestive content between mobile devices using SMS. It contains either text, images, or video that is intended to be sexually arousing. Sexting was reported as early as 2005 in The Sunday Telegraph Magazine, constituting a trend in the creative use of SMS to excite another with alluring messages throughout the day.
Although sexting often takes place consensually between two people, it can also occur against the wishes of a person who is the subject of the content. A number of instances have been reported in which the recipients of sexting have shared the content of the messages with others, with less intimate intentions, such as to impress their friends or embarrass their sender. Celebrities such as Miley Cyrus, Vanessa Hudgens, and Adrienne Bailon have been victims of such abuses of sexting. A 2008 survey by The National Campaign to Prevent Teen and Unplanned Pregnancy and CosmoGirl.com suggested a trend of sexting and other seductive online content being readily shared between teens. One in five teen girls surveyed (22 per cent)—and 11 per cent of teen girls aged 13–16 years old—say they have electronically sent, or posted online, nude or semi-nude images of themselves. One-third (33 per cent) of teen boys and one-quarter (25 per cent) of teen girls say they were shown private nude or semi-nude images. According to the survey, sexually suggestive messages (text, e-mail, and instant messaging) were even more common than images, with 39 per cent of teens having sent or posted such messages, and half of the teens (50 per cent) having received them. A 2012 study that has received wide international media attention was conducted at the University of Utah Department of Psychology by Donald S. Strassberg, Ryan Kelly McKinnon, Michael Sustaíta and Jordan Rullo. They surveyed 606 teenagers ages 14–18 and found that nearly 20 per cent of the students said they had sent a sexually explicit image of themselves via cell phone, and nearly twice as many said that they had received a sexually explicit picture. Of those receiving such a picture, over 25 per cent indicated that they had forwarded it to others.
In addition, of those who had sent a sexually explicit picture, over a third had done so despite believing that there could be serious legal and other consequences if they got caught. Students who had sent a picture by cell phone were more likely than others to find the activity acceptable. The authors conclude: "These results argue for educational efforts such as cell phone safety assemblies, awareness days, integration into class curriculum and teacher training, designed to raise awareness about the potential consequences of sexting among young people." Sexting becomes a legal issue when teens (under 18) are involved, because any nude photos they may send of themselves would put the recipients in possession of child pornography.
In schools
Text messaging has affected students academically by creating an easier way to cheat on exams. In December 2002, a dozen students were caught cheating on an accounting exam through the use of text messages on their mobile phones. In December 2002, Hitotsubashi University in Japan failed 26 students for receiving e-mailed exam answers on their mobile phones. The number of students caught using mobile phones to cheat on exams has increased significantly in recent years. According to Okada (2005), most Japanese mobile phones can send and receive long text messages of between 250 and 3000 characters with graphics, video, audio, and Web links. In England, 287 school and college students were excluded from exams in 2004 for using mobile phones during exams. Some teachers and professors claim that advanced texting features can lead to students cheating on exams. Students in high school and college classrooms are using their mobile phones to send and receive texts during lectures at high rates. Further, published research has established that students who text during college lectures have impaired memories of the lecture material compared to students who do not. For example, in one study, the number of irrelevant text messages sent and received during a lecture covering the topic of developmental psychology was related to students' memory of the lecture.
Bullying
Spreading rumors and gossip by text message, using text messages to bully individuals, or forwarding texts that contain defamatory content is an issue of great concern for parents and schools. Text "bullying" of this sort can cause distress and damage reputations. In some cases, individuals who are bullied online have committed suicide. Harding and Rosenberg (2005) argue that the urge to forward text messages can be difficult to resist, describing text messages as "loaded weapons".
Influence on perceptions of the student
When a student sends an email that contains phonetic abbreviations and acronyms that are common in text messaging (e.g., "gr8" instead of "great"), it can influence how that student is subsequently evaluated. In a study by Lewandowski and Harrington (2006), participants read a student's email sent to a professor that either contained text-messaging abbreviations (gr8, How R U?) or parallel text in standard English (great, How are you?), and then provided impressions of the sender. Students who used abbreviations in their email were perceived as having a less favorable personality and as putting forth less effort on an essay they submitted along with the email. Specifically, abbreviation users were seen as less intelligent, responsible, motivated, studious, dependable, and hard-working. These findings suggest that the nature of a student's email communication can influence how others perceive the student and their work.
Law and crime
Text messaging has been a subject of interest for police forces around the world. One of the issues of concern to law enforcement agencies is the use of encrypted text messages. In 2003, a British company developed a program called Fortress SMS which used 128-bit AES encryption to protect SMS messages. Police have also retrieved deleted text messages to aid them in solving crimes. For example, Swedish police retrieved deleted texts from a cult member who claimed she committed a double murder based on forwarded texts she received. Police in Tilburg, Netherlands, started an SMS alert program, in which they would send a message asking citizens to be vigilant when a burglar was on the loose or a child was missing in their neighbourhood. Several thieves have been caught and missing children found through the SMS alerts, and the service has been expanding to other cities. A Malaysian–Australian company has released a multi-layer SMS security program. Boston police have also turned to text messaging to help stop crime: the Boston Police Department asks citizens to send texts to make anonymous crime tips.
Under some interpretations of sharia law, husbands can divorce their wives by the pronouncement of talaq. In 2003, a court in Malaysia upheld such a divorce pronouncement which was transmitted via SMS.
The Massachusetts Supreme Judicial Court ruled in 2017 that under the state constitution, police require a warrant before obtaining access to text messages without consent.
Social unrest
Texting has on a number of occasions been used to gather large, aggressive crowds. SMS messaging drew a crowd to Cronulla Beach in Sydney, resulting in the 2005 Cronulla riots. Text messages were circulating not only in the Sydney area but in other states as well (Daily Telegraph). The volume of such text messages and e-mails also increased in the wake of the riot. The crowd of 5,000 at stages became violent, attacking certain ethnic groups. The Sutherland Shire Mayor directly blamed heavily circulated SMS messages for the unrest. NSW police considered whether people could be charged over the texting. Retaliatory attacks also used SMS.
The Narre Warren incident, in which a group of 500 partygoers attended a party at Narre Warren in Melbourne, Australia, and rioted in January 2008, was also a response to communication spread by SMS and Myspace. Following the incident, the Police Commissioner wrote an open letter asking young people to be aware of the power of SMS and the Internet. In Hong Kong, government officials find that text messaging helps socially because they can send multiple texts to the community. Officials say it is an easy way of contacting the community or individuals for meetings or events. Texting was used to coordinate gatherings during the 2009 Iranian election protests.
Between 2009 and 2012 the U.S. secretly created and funded a Twitter-like service for Cubans called ZunZuneo, initially based on mobile phone text message service and later with an internet interface. The service was funded by the U.S. Agency for International Development through its Office of Transition Initiatives, who utilized contractors and front companies in the Cayman Islands, Spain and Ireland. A longer-term objective was to organize "smart mobs" that might "renegotiate the balance of power between the state and society." A database about the subscribers was created, including gender, age, and "political tendencies". At its peak ZunZuneo had 40,000 Cuban users, but the service closed as financially unsustainable when U.S. funding was stopped.
In politics
[Image: A recruitment ban in French SMS language: «Slt koi29 on é jamé 2tro @ s batre pour la P. ;-)» = «Salut! Quoi de neuf? On n'est jamais de trop à se battre pour la Paix!» ("Hi! What's new? There are never too many of us fighting for Peace!")]
Text messaging has affected the political world. American campaigns find that text messaging is a much easier, cheaper way of reaching voters than the door-to-door approach. Mexico's president-elect Felipe Calderón sent out millions of text messages in the days immediately preceding his narrow win over Andrés Manuel López Obrador. In January 2001, Joseph Estrada was forced to resign from the post of president of the Philippines. The popular campaign against him was widely reported to have been coordinated with SMS chain letters. A massive texting campaign was credited with boosting youth turnout in Spain's 2004 parliamentary elections. In 2008, Detroit Mayor Kwame Kilpatrick and his Chief of Staff at the time became entangled in a sex scandal stemming from the exchange of over 14,000 text messages that eventually led to his forced resignation, a conviction for perjury, and other charges.
Text messaging has also been used to bring down political leaders. During the 2004 U.S. Democratic and Republican National Conventions, protesters used an SMS-based organizing tool called TXTmob to coordinate their protests. On the last day before the 2004 presidential elections in Romania, a message against Adrian Năstase was widely circulated, breaking the laws that prohibited campaigning on that day. Text messaging has also helped politics by promoting campaigns.
On 20 January 2001, President Joseph Estrada of the Philippines became the first head of state in history to lose power to a smart mob. More than one million Manila residents assembled at the site of the 1986 People Power peaceful demonstrations that toppled the Marcos regime. These people organized themselves and coordinated their actions through text messaging. They were able to bring down a government without using any weapons or violence. Through text messaging, their plans and ideas were communicated to others and successfully implemented. This also encouraged the military to withdraw its support from the regime, and as a result, the Estrada government fell. People were able to converge and unite with the use of their cell phones. "The rapid assembly of the anti-Estrada crowd was a hallmark of early smart mob technology, and the millions of text messages exchanged by the demonstrators in 2001 was, by all accounts, a key to the crowd's esprit de corps."
Use in healthcare
Text messaging is a rapidly growing trend in healthcare. A randomized controlled trial of a text messaging intervention for diabetes in Bangladesh was one of the first robust trials to report improvement in diabetes management in a low- and middle-income country. A systematic review and individual participant data meta-analysis of 3,779 participants reported that mobile phone text messaging could improve blood pressure and body mass index. Another study in people with type 2 diabetes showed that participants were willing to pay a modest amount to receive a diabetes text messaging program in addition to standard care. One survey found that 73% of physicians text other physicians about work, similar to the overall percentage of the population that texts. A 2006 study of reminder messages sent to children and adolescents with type 1 diabetes mellitus showed favorable changes in adherence to treatment. A risk of physician texting is that it could violate the Health Insurance Portability and Accountability Act (HIPAA): where messages can be saved to a phone indefinitely, patient information could be subject to theft or loss, and could be seen by other unauthorized persons. The HIPAA privacy rule requires that any text message involving a medical decision must be available for the patient to access, meaning that any texts that are not documented in an EMR system could be a HIPAA violation.
Medical concerns
The excessive use of the thumb for pressing keys on mobile devices has led to a high rate of a form of repetitive strain injury termed "BlackBerry thumb" (although this refers to strain developed on older Blackberry devices, which had a scroll wheel on the side of the phone). An inflammation of the tendons in the thumb caused by constant text-messaging is also called text-messager's thumb, or texting tenosynovitis. Texting has also been linked as a secondary source in numerous traffic collisions, in which police investigations of mobile phone records have found that many drivers have lost control of their cars while attempting to send or retrieve a text message. Increasing cases of Internet addiction are now also being linked to text messaging, as mobile phones are now more likely to have e-mail and Web capabilities to complement the ability to text.
Etiquette
Texting etiquette refers to what is considered appropriate texting behaviour. These expectations may concern different areas, such as the context in which a text was sent and received/read, who each participant was with when the participant sent or received/read a text message, or what constitutes impolite text messages. At the website of The Emily Post Institute, the topic of texting has spurred several articles with the "do's and don'ts" regarding the new form of communication. One example from the site is: "Keep your message brief. No one wants to have an entire conversation with you by texting when you could just call him or her instead." Another example is: "Don't use all Caps. Typing a text message in all capital letters will appear as though you are shouting at the recipient, and should be avoided."
Expectations for etiquette may differ depending on various factors. For example, expectations for appropriate behaviour have been found to differ markedly between the U.S. and India. Another example is generational differences. In The M-Factor: How the Millennial Generation Is Rocking the Workplace, Lynne Lancaster and David Stillman note that younger Americans often do not consider it rude to answer their cell or begin texting in the middle of a face-to-face conversation with someone else, while older people, less used to the behavior and the accompanying lack of eye contact or attention, find this to be disruptive and ill-mannered. With regard to texting in the workplace, Plantronics studied how people communicate at work and found that 58% of US knowledge workers have increased the use of text messaging for work in the past five years. The same study found that 33% of knowledge workers felt text messaging was critical or very important to success and productivity at work.
Challenges
Spam
In 2002, an increasing trend towards spamming mobile phone users through SMS prompted cellular-service carriers to take steps against the practice, before it became a widespread problem. No major spamming incidents involving SMS had been reported, but the existence of mobile phone spam has been noted by industry watchdogs including Consumer Reports magazine and the Utility Consumers' Action Network (UCAN). In 2005, UCAN brought a case against Sprint for spamming its customers and charging $0.10 per text message. The case was settled in 2006 with Sprint agreeing not to send customers Sprint advertisements via SMS. SMS expert Acision (formerly LogicaCMG Telecoms) reported a new type of SMS malice at the end of 2006, noting the first instances of SMiShing (a cousin to e-mail phishing scams). In SMiShing, users receive SMS messages posing to be from a company, enticing users to phone premium-rate numbers or reply with personal information. Similar concerns were reported by PhonepayPlus, a consumer watchdog in the United Kingdom, in 2012.
Pricing concerns
Concerns have been voiced over the excessive cost of off-plan text messaging in the United States. AT&T Mobility, along with most other service providers, charges texters 20 cents per message if they do not have a messaging plan or if they have exceeded their allotted number of texts. Given that an SMS message is at most 160 bytes in size, this works out to roughly $1,310 per megabyte sent via text message. This is in sharp contrast with the price of unlimited data plans offered by the same carriers, which allow the transmission of hundreds of megabytes of data for monthly prices of about $15 to $45 in addition to a voice plan. As a comparison, a one-minute phone call uses the same amount of network capacity as 600 text messages, meaning that if the same cost-per-traffic formula were applied to phone calls, cell phone calls would cost $120 per minute. With service providers gaining more customers and expanding their capacity, their overhead costs should be decreasing, not increasing. In 2005, text messaging generated nearly 70 billion dollars in revenue, as reported by the industry analysts Gartner, three times as much as Hollywood box office sales that year. World figures showed that over a trillion text messages were sent in 2005.
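The per-megabyte figure above follows directly from the quoted prices. As a quick editorial check (a minimal sketch assuming the 20-cent off-plan rate and a binary megabyte of 1,048,576 bytes), in Python:

cost_per_message = 0.20              # US dollars, off-plan rate quoted above
bytes_per_message = 160              # maximum payload of a single SMS
bytes_per_megabyte = 1024 * 1024     # binary megabyte
cost_per_megabyte = cost_per_message * bytes_per_megabyte / bytes_per_message
print(round(cost_per_megabyte, 2))   # 1310.72, i.e. roughly $1,310 per megabyte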
Although major cellphone providers deny any collusion, fees for out-of-package text messages have increased, doubling from 10 to 20 cents in the United States between 2007 and 2008 alone. On 16 July 2009, Senate hearings were held to look into any breach of the Sherman Antitrust Act. The same trend is visible in other countries; although increasingly widespread flat-rate plans, for example in Germany, make text messaging cheaper, text messages sent abroad still result in higher costs.
Increasing competition
While text messaging is still a growing market, traditional SMS is increasingly challenged by alternative messaging services available on smartphones with data connections. These services are much cheaper and offer more functionality, such as exchanging multimedia content (e.g. photos, videos, or audio notes) and group messaging. Especially in Western countries, some of these services are attracting more and more users.
Security concerns
Consumer SMS should not be used for confidential communication. The contents of common SMS messages are known to the network operator's systems and personnel. Therefore, consumer SMS is not an appropriate technology for secure communications. To address this issue, many companies use an SMS gateway provider based on SS7 connectivity to route the messages. The advantage of this international termination model is the ability to route data directly through SS7, which gives the provider visibility of the complete path of the SMS. This means SMS messages can be sent directly to and from recipients without having to go through the SMS-C of other mobile operators. This approach reduces the number of mobile operators that handle the message; however, it should not be considered as an end-to-end secure communication, as the content of the message is exposed to the SMS gateway provider.
An alternative approach is to use end-to-end security software that runs on both the sending and receiving device, where the original text message is transmitted in encrypted form as a consumer SMS. By using key rotation, the encrypted text messages stored under data retention laws at the network operator cannot be decrypted even if one of the devices is compromised. A problem with this approach is that the communicating devices need to run compatible software. Failure rates without backward notification can be high between carriers. International texting can be unreliable depending on the country of origin, destination, and respective operators (US: "carriers"). Differences in the character sets used for coding can cause a text message sent from one country to another to become unreadable.
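As a rough illustration of the end-to-end approach described above, and not a description of any particular product, the following Python sketch encrypts a message with the symmetric Fernet recipe from the cryptography package before it is handed to the SMS layer; the send_sms call and the key-provisioning step are hypothetical placeholders, and the key rotation mentioned above is not shown:

from cryptography.fernet import Fernet

# Both devices must already share this key; how it is provisioned
# (and rotated) is outside the scope of this sketch.
shared_key = Fernet.generate_key()

# Sender: encrypt the plaintext before handing it to the SMS transport.
token = Fernet(shared_key).encrypt(b"Meet at the station at 7pm")
# send_sms(recipient, token)   # hypothetical transport call; the operator sees only the token

# Receiver: decrypt the token after the SMS arrives.
plaintext = Fernet(shared_key).decrypt(token)
print(plaintext.decode())      # Meet at the station at 7pm

Because the encrypted token is longer than the original text, a message sent this way may need to be split across several concatenated SMS parts.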
In popular culture
Records and competition
The Guinness Book of World Records has a world record for text messaging, currently held by Sonja Kristiansen of Norway. Kristiansen keyed in the official text message, as established by Guinness, in 37.28 seconds. The message is, "The razor-toothed piranhas of the genera Serrasalmus and Pygocentrus are the most ferocious freshwater fish in the world. In reality, they seldom attack a human." In 2005, the record was held by a 24-year-old Scottish man, Craig Crosbie, who completed the same message in 48 seconds, beating the previous time by 19 seconds. The Book of Alternative Records lists Chris Young of Salem, Oregon, as the world-record holder for the fastest 160-character text message where the contents of the message are not provided ahead of time. His record of 62.3 seconds was set on 23 May 2007.
Elliot Nicholls of Dunedin, New Zealand, currently holds the world record for the fastest blindfolded text messaging. A record of a 160-letter text in 45 seconds while blindfolded was set on 17 November 2007, beating the old record of 1 minute 26 seconds set by an Italian in September 2006. Ohio native Andrew Acklin is credited with the world record for most text messages sent or received in a single month, with 200,052. His accomplishments were first recognized by the World Records Academy and later followed up by Ripley's Believe It Or Not 2010: Seeing Is Believing. He has been acknowledged by The Universal Records Database for the most text messages in a single month; however, this has since been broken twice and as of 2010 was listed as 566,607 messages by Fred Lindgren.
In January 2010, LG Electronics sponsored an international competition, the LG Mobile World Cup, to determine the fastest pair of texters. The winners were a team from South Korea, Ha Mok-min and Bae Yeong-ho. On 6 April 2011, SKH Apps released an iPhone app, iTextFast, to allow consumers to test their texting speed and practice the paragraph used by the Guinness Book of World Records. As of 2011, the best time listed on Game Center for that paragraph was 34.65 seconds.
Morse code
A few competitions have been held between expert Morse code operators and expert SMS users. Several mobile phones have Morse code ring tones and alert messages. For example, many Nokia mobile phones have an option to beep "S M S" in Morse code when it receives a short message. Some of these phones could also play the Nokia slogan "Connecting people" in Morse code as a message tone. There are third-party applications available for some mobile phones that allow Morse input for short messages.
Tattle texting
"Tattle texting" can mean either of two different texting trends:
Arena security
Many sports arenas now offer a number where patrons can text report security concerns, like drunk or unruly fans, or safety issues like spills. These programs have been praised by patrons and security personnel as more effective than traditional methods. For instance, the patron doesn't need to leave his seat and miss the event in order to report something important. Also, disruptive fans can be reported with relative anonymity. "Text tattling" also gives security personnel a useful tool to prioritize messages. For instance, a single complaint in one section about an unruly fan can be addressed when convenient, while multiple complaints by several different patrons can be acted upon immediately.
Smart cars
In this context, "tattle texting" refers to an automatic text sent by the computer in an automobile, because a preset condition was met. The most common use for this is for parents to receive texts from the car their child is driving, alerting them to speeding or other issues. Employers can also use the service to monitor their corporate vehicles. The technology is still new and (currently) only available on a few car models.
Common conditions that can be chosen to send a text are listed below; a simplified sketch of these checks follows the list:
Speeding. With the use of GPS, stored maps, and speed limit information, the onboard computer can determine if the driver is exceeding the current speed limit. The device can store this information and/or send it to another recipient.
Range. Parents/employers can set a maximum range from a fixed location, after which a "tattle text" is sent. Not only can this keep children close to home and keep employees from using corporate vehicles inappropriately, but it can also be a crucial tool for quickly identifying stolen vehicles, carjackings, and kidnappings.
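A minimal, purely illustrative sketch of how such onboard checks might work follows; it is not based on any vendor's actual system, and the send_text transport, the speed-limit lookup, and the coordinates are assumed placeholders:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in kilometres.
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def tattle_checks(speed_kmh, limit_kmh, lat, lon, home_lat, home_lon, max_range_km):
    # Returns the alert messages that would be passed to a hypothetical send_text().
    alerts = []
    if speed_kmh > limit_kmh:                                      # speeding condition
        alerts.append(f"Speeding: {speed_kmh:.0f} km/h in a {limit_kmh:.0f} km/h zone")
    if haversine_km(lat, lon, home_lat, home_lon) > max_range_km:  # range condition
        alerts.append("Vehicle has left the permitted range")
    return alerts

# Example: 15 km/h over the limit, still within a 50 km range -> one speeding alert.
print(tattle_checks(95, 80, 48.86, 2.35, 48.85, 2.34, 50))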
See also
Instant messaging
Personal message, also called private message or direct message
Messaging apps
Chat language
Enhanced Messaging Service
Mobile dial code
Operator messaging
Telegram
Tironian notes, scribal abbreviations and ligatures: Roman and medieval abbreviations used to save space on manuscripts and epigraphs
Comparison of user features of messaging platforms
François Viète

François Viète, Seigneur de la Bigotière (1540 – 23 February 1603) was a French mathematician whose work on new algebra was an important step towards modern algebra, due to its innovative use of letters as parameters in equations. He was a lawyer by trade, and served as a privy councillor to both Henry III and Henry IV of France.
Biography
Early life and education
Viète was born at Fontenay-le-Comte in present-day Vendée. His grandfather was a merchant from La Rochelle. His father, Etienne Viète, was an attorney in Fontenay-le-Comte and a notary in Le Busseau. His mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France.
Viète went to a Franciscan school and in 1558 studied law at Poitiers, graduating as a Bachelor of Laws in 1559. A year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots.
Serving Parthenay
In 1564, Viète entered the service of Antoinette d’Aubeterre, Lady Soubise, wife of Jean V de Parthenay-Soubise, one of the main Huguenot military leaders and accompanied him to Lyon to collect documents about his heroic defence of that city against the troops of Jacques of Savoy, 2nd Duke of Nemours just the year before.
The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay, Soubise's twelve-year-old daughter. He taught her science and mathematics and wrote for her numerous treatises on astronomy and trigonometry, some of which have survived. In these treatises, Viète used decimal numbers (twenty years before Stevin's paper) and he also noted the elliptic orbit of the planets, forty years before Kepler and twenty years before Giordano Bruno's death.
Jean V de Parthenay presented him to King Charles IX of France. Viète wrote a genealogy of the Parthenay family and, following the death of Jean V de Parthenay-Soubise in 1566, his biography.
In 1568, Antoinette, Lady Soubise, married her daughter Catherine to Baron Charles de Quellenec and Viète went with Lady Soubise to La Rochelle, where he mixed with the highest Calvinist aristocracy, leaders like Coligny and Condé and Queen Jeanne d’Albret of Navarre and her son, Henry of Navarre, the future Henry IV of France.
In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron De Quellenec, where they claimed the Baron was unable (or unwilling) to provide an heir.
First steps in Paris
In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad Canonem mathematicum liber singularis and wrote new mathematical research by night or during periods of leisure. He was known to dwell on any one question for up to three days, his elbow on the desk, feeding himself without changing position (according to his friend, Jacques de Thou).
In 1572, Viète was in Paris during the St. Bartholomew's Day massacre. That night, Baron De Quellenec was killed after having tried to save Admiral Coligny the previous night. The same year, Viète met Françoise de Rohan, Lady of Garnache, and became her adviser against Jacques, Duke of Nemours.
In 1573, he became a councillor of the Parliament of Brittany, at Rennes, and two years later, he obtained the agreement of Antoinette d'Aubeterre for the marriage of Catherine of Parthenay to Duke René de Rohan, Françoise's brother.
In 1576, Henri, duc de Rohan took him under his special protection, recommending him in 1580 as "maître des requêtes". In 1579, Viète finished the printing of his Canonem mathematicum (Mettayer publisher). A year later, he was appointed maître des requêtes to the parliament of Paris, committed to serving the king. That same year, his success in the trial between the Duke of Nemours and Françoise de Rohan, to the benefit of the latter, earned him the resentment of the tenacious Catholic League.
Exile in Fontenay
Between 1583 and 1585, the League persuaded Henry III to remove Viète from office, Viète having been accused of sympathy with the Protestant cause. Henry of Navarre, at Rohan's instigation, addressed two letters to King Henry III of France on March 3 and April 26, 1585, in an attempt to obtain Viète's restoration to his former office, but he failed.
Viète retired to Fontenay and Beauvoir-sur-Mer, with François de Rohan. He spent four years devoted to mathematics, writing his New Algebra (1591).
Code-breaker to two kings
In 1589, Henry III took refuge in Blois. He commanded the royal officials to be at Tours before 15 April 1589. Viète was one of the first who came back to Tours. He deciphered the secret letters of the Catholic League and other enemies of the king. Later, he had arguments with the classical scholar Joseph Juste Scaliger. Viète triumphed against him in 1590.
After the death of Henry III, Viète became a privy councillor to Henry of Navarre, now Henry IV. He was appreciated by the king, who admired his mathematical talents. Viète was given the position of councillor of the parlement at Tours. In 1590, Viète discovered the key to a Spanish cipher, consisting of more than 500 characters, and this meant that all dispatches in that language which fell into the hands of the French could be easily read.
Henry IV published a letter from Commander Moreo to the King of Spain. The contents of this letter, read by Viète, revealed that the head of the League in France, Charles, Duke of Mayenne, planned to become king in place of Henry IV. This publication led to the settlement of the Wars of Religion. The King of Spain accused Viète of having used magical powers.
In 1593, Viète published his arguments against Scaliger. Beginning in 1594, he was employed exclusively in deciphering the enemy's secret codes.
Gregorian calendar
In 1582, Pope Gregory XIII published his bull Inter gravissimas and ordered Catholic kings to comply with the change from the Julian calendar, based on the calculations of the Calabrian doctor Aloysius Lilius, aka Luigi Lilio or Luigi Giglio. His work was resumed, after his death, by the scientific adviser to the Pope, Christopher Clavius.
Viète accused Clavius, in a series of pamphlets (1600), of introducing corrections and intermediate days in an arbitrary manner, and misunderstanding the meaning of the works of his predecessor, particularly in the calculation of the lunar cycle. Viète gave a new timetable, which Clavius cleverly refuted, after Viète's death, in his Explicatio (1603).
It is said that Viète was wrong. Without doubt, he believed himself to be a kind of "King of Times" as the historian of mathematics, Dhombres, claimed. It is true that Viète held Clavius in low esteem, as evidenced by De Thou:
The Adriaan van Roomen problem
In 1596, Scaliger resumed his attacks from the University of Leyden. Viète replied definitively the following year. In March of that same year, Adriaan van Roomen sought the resolution, by any of Europe's top mathematicians, of a polynomial equation of degree 45. King Henri IV received a snub from the Dutch ambassador, who claimed that there was no mathematician in France, pointing out that Adriaan van Roomen had not asked any Frenchman to solve his problem.
Viète came, saw the problem, and, after leaning on a window for a few minutes, solved it. It was the equation between sin(x) and sin(x/45). He resolved this at once, and said he was able to give at the same time (actually the next day) the solutions to the other 22 problems to the ambassador. "Ut legit, ut solvit" ("as he read it, so he solved it"), he later said. Further, he sent a new problem back to Van Roomen, to be solved with Euclidean tools (ruler and compass): the lost answer to the problem first set by Apollonius of Perga. Van Roomen could not overcome that problem without resorting to a trick (see detail below).
Final years
In 1598, Viète was granted special leave. Henry IV, however, charged him to end the revolt of the Notaries, whom the King had ordered to pay back their fees. Sick and exhausted by work, he left the King's service in December 1602 and received 20,000 écu, which were found at his bedside after his death.
A few weeks before his death, he wrote a final memoir on issues of cryptography which rendered obsolete all the encryption methods of the time.
He died on 23 February 1603, as De Thou wrote, leaving two daughters, Jeanne, whose mother was Barbe Cottereau, and Suzanne, whose mother was Julienne Leclerc. Jeanne, the eldest, died in 1628, having married Jean Gabriau, a councillor of the parliament of Brittany. Suzanne died in January 1618 in Paris.
The cause of Viète's death is unknown. Alexander Anderson, student of Viète and publisher of his scientific writings, speaks of a "praeceps et immaturum autoris fatum." (meeting an untimely end).
Work and thought
New algebra
Background
At the end of the 16th century, mathematics was placed under the dual aegis of the Greeks, from whom it borrowed the tools of geometry, and the Arabs, who provided procedures for the resolution. At the time of Viète, algebra therefore oscillated between arithmetic, which gave the appearance of a list of rules, and geometry which seemed more rigorous. Meanwhile, Italian mathematicians Luca Pacioli, Scipione del Ferro, Niccolò Fontana Tartaglia, Ludovico Ferrari, and especially Raphael Bombelli (1560) all developed techniques for solving equations of the third degree, which heralded a new era.
On the other hand, the German school of the Coss, the Welsh mathematician Robert Recorde (1550) and the Dutchman Simon Stevin (1581) brought an early algebraic notation, the use of decimals and exponents. However, complex numbers remained at best a philosophical way of thinking and Descartes, almost a century after their invention, used them as imaginary numbers. Only positive solutions were considered and using geometrical proof was common.
The task of the mathematicians was in fact twofold. It was necessary to produce algebra in a more geometrical way, i.e., to give it a rigorous foundation; and on the other hand, it was necessary to give geometry a more algebraic sense, allowing the analytical calculation in the plane. Viète and Descartes solved this dual task in a double revolution.
Viète's symbolic algebra
Firstly, Viète gave algebra a foundation as strong as that of geometry. He then ended the algebra of procedures (al-Jabr and al-Muqabala), creating the first symbolic algebra, and claiming that with it, all problems could be solved (nullum non problema solvere, "to leave no problem unsolved").
In his dedication of the Isagoge to Catherine de Parthenay, Viète wrote:
Viète did not know "multiplied" notation (given by William Oughtred in 1631) or the symbol of equality, =, an absence which is more striking because Robert Recorde had used the present symbol for this purpose since 1557, and Guilielmus Xylander had used parallel vertical lines since 1575.
Viète had neither much time nor students able to illustrate his method brilliantly. He took years to publish his work (he was very meticulous) and, most importantly, he made a very specific choice to separate the unknown variables, using consonants for parameters and vowels for unknowns. In this notation he perhaps followed some older contemporaries, such as Petrus Ramus, who designated the points in geometrical figures by vowels, making use of consonants, R, S, T, etc., only when these were exhausted. This choice proved unpopular with future mathematicians, and Descartes, among others, preferred the first letters of the alphabet to designate the parameters and the last letters for the unknowns.
Viète also remained a prisoner of his time in several respects. First, he was an heir of Ramus and did not treat lengths as numbers. His writing kept track of homogeneity, which did not simplify its reading. He failed to recognize the complex numbers of Bombelli and needed to double-check his algebraic answers through geometrical construction. Although he was fully aware that his new algebra was sufficient to give a solution, this concession tainted his reputation.
However, Viète created many innovations: the binomial formula, which would be taken up by Pascal and Newton, and the formulas linking the coefficients of a polynomial to the sums and products of its roots, now called Viète's formulas.
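In modern notation, which Viète himself did not use (he stated such relations only for positive roots and without these sign conventions), the formulas read, for a monic quadratic and cubic:

x^2 + bx + c = (x - r_1)(x - r_2) \;\Rightarrow\; r_1 + r_2 = -b, \quad r_1 r_2 = c

x^3 + ax^2 + bx + c = (x - r_1)(x - r_2)(x - r_3) \;\Rightarrow\; r_1 + r_2 + r_3 = -a, \quad r_1 r_2 + r_1 r_3 + r_2 r_3 = b, \quad r_1 r_2 r_3 = -c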
Geometric algebra
Viète was well skilled in most modern artifices, aiming at the simplification of equations by the substitution of new quantities having a certain connection with the primitive unknown quantities. Another of his works, Recensio canonica effectionum geometricarum, bears a modern stamp, being what was later called an algebraic geometry—a collection of precepts how to construct algebraic expressions with the use of ruler and compass only. While these writings were generally intelligible, and therefore of the greatest didactic importance, the principle of homogeneity, first enunciated by Viète, was so far in advance of his times that most readers seem to have passed it over. That principle had been made use of by the Greek authors of the classic age; but of later mathematicians only Hero, Diophantus, etc., ventured to regard lines and surfaces as mere numbers that could be joined to give a new number, their sum.
The study of such sums, found in the works of Diophantus, may have prompted Viète to lay down the principle that quantities occurring in an equation ought to be homogeneous, all of them lines, or surfaces, or solids, or supersolids — an equation between mere numbers being inadmissible. During the centuries that have elapsed between Viète's day and the present, several changes of opinion have taken place on this subject. Modern mathematicians like to make homogeneous such equations as are not so from the beginning, in order to get values of a symmetrical shape. Viète himself did not see that far; nevertheless, he indirectly suggested the thought. He also conceived methods for the general resolution of equations of the second, third and fourth degrees different from those of Scipione dal Ferro and Lodovico Ferrari, with which he had not been acquainted. He devised an approximate numerical solution of equations of the second and third degrees, wherein Leonardo of Pisa must have preceded him, but by a method which was completely lost.
Above all, Viète was the first mathematician who introduced notations for the problem (and not just for the unknowns). As a result, his algebra was no longer limited to the statement of rules, but relied on an efficient computational algebra, in which the operations act on the letters and the results can be obtained at the end of the calculations by a simple replacement. This approach, which is the heart of contemporary algebraic method, was a fundamental step in the development of mathematics. With this, Viète marked the end of medieval algebra (from Al-Khwarizmi to Stevin) and opened the modern period.
The logic of species
Being wealthy, Viète began to publish at his own expense, for a few friends and scholars in almost every country of Europe, the systematic presentation of his mathematical theory, which he called "species logistic" (from species: symbol), or the art of calculation with symbols (1591).
He described in three stages how to proceed for solving a problem:
As a first step, he summarized the problem in the form of an equation. Viète called this stage the zetetic; it denotes the known quantities by consonants (B, D, etc.) and the unknown quantities by vowels (A, E, etc.).
In the second step, he carried out the analysis. He called this stage the poristic: here the mathematician must discuss the equation and solve it. It yields the characteristic of the problem, the porisma, from which one can move to the next step.
In the last step, the exegetic analysis, he returned to the initial problem and presented its solution through a geometrical or numerical construction based on the porisma.
Among the problems addressed by Viète with this method is the complete resolution of quadratic equations and of a class of third-degree equations (which Viète reduced to quadratic equations). He knew the connection between the positive roots of an equation (which, in his day, were alone thought of as roots) and the coefficients of the different powers of the unknown quantity (see Viète's formulas and their application to quadratic equations). He discovered the formula for deriving the sine of a multiple angle, knowing that of the simple angle, with due regard to the periodicity of sines. This formula must have been known to Viète in 1593.
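A standard low-degree instance of such a multiple-angle relation, given here in modern notation rather than Viète's, is:

\sin 3\theta = 3\sin\theta - 4\sin^{3}\theta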
Viète's formula
In 1593, based on geometrical considerations and through trigonometric calculations he had mastered perfectly, he discovered the first infinite product in the history of mathematics by giving an expression of π, now known as Viète's formula:

\frac{2}{\pi} = \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{2+\sqrt{2}}}{2} \cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} \cdots
He provides 10 decimal places of π by applying Archimedes' method to a polygon with 6 × 2^16 = 393,216 sides.
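Purely as an illustration of how quickly the product above converges (this is an editorial sketch, not part of Viète's own work), a few lines of Python evaluate the truncated product; thirty factors already reproduce π to full double precision:

import math

product = 1.0
a = 0.0
for _ in range(30):          # each pass appends one nested radical
    a = math.sqrt(2.0 + a)   # a_1 = sqrt(2), a_{n+1} = sqrt(2 + a_n)
    product *= a / 2.0       # running partial product approximating 2/pi
pi_estimate = 2.0 / product
print(pi_estimate, math.pi)  # both approximately 3.141592653589793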
Adriaan van Roomen's problem
This famous controversy is told by Tallemant des Réaux in these terms (46th story from the first volume of Les Historiettes. Mémoires pour servir à l’histoire du XVIIe siècle):
This suggests that the Adriaan van Roomen problem is an equation of degree 45, which Viète recognized immediately as involving a chord of an arc of 8° (2π/45 radians). It was then easy to determine the other 22 positive alternatives, the only valid ones at the time.
When, in 1595, Viète published his response to the problem set by Adriaan van Roomen, he proposed finding the resolution of the old problem of Apollonius, namely to find a circle tangent to three given circles. Van Roomen proposed a solution using a hyperbola, with which Viète did not agree, as he was hoping for a solution using Euclidean tools.
Viète published his own solution in 1600 in his work Apollonius Gallus. In this paper, Viète made use of the center of similitude of two circles. His friend De Thou said that Adriaan van Roomen immediately left the University of Würzburg, saddled his horse and went to Fontenay-le-Comte, where Viète lived. According to De Thou, he stayed a month with him, and learned the methods of the new algebra. The two men became friends and Viète paid all van Roomen's expenses before his return to Würzburg.
This resolution had an almost immediate impact in Europe, and Viète earned the admiration of many mathematicians over the centuries. Viète did not work through the degenerate cases (coincident circles, tangent circles, and so on), but recognized that the number of solutions depends on the relative position of the three circles, and outlined the ten resulting situations. Descartes completed (in 1643) the theorem of the three circles of Apollonius, leading to a quadratic equation in 87 terms, each of which is a product of six factors (which, with this method, makes the actual construction humanly impossible).
Religious and political beliefs
Viète was accused of Protestantism by the Catholic League, but he was not a Huguenot. His father was, according to Dhombres. Indifferent in religious matters, he did not adopt the Calvinist faith of Parthenay, nor that of his other protectors, the Rohan family. His call to the parliament of Rennes proved the opposite. At the reception as a member of the court of Brittany, on 6 April 1574, he read in public a statement of Catholic faith.
Nevertheless, Viète defended and protected Protestants his whole life and suffered, in turn, the wrath of the League. It seems that for him, the stability of the state was to be preserved and that, under this requirement, the King's religion did not matter. At that time, such people were called "Politiques".
Furthermore, at his death, he did not want to confess his sins. A friend had to convince him that his own daughter would not find a husband, were he to refuse the sacraments of the Catholic Church. Whether Viète was an atheist or not is a matter of debate.
Publications
Chronological list
Between 1564 and 1568, Viète prepared for his student, Catherine de Parthenay, some textbooks of astronomy and trigonometry and a treatise that was never published: Harmonicon coeleste.
In 1571, Francisci Vietaei Universalium inspectionum ad Canonem mathematicum liber singularis (a book of trigonometry, often abbreviated Canonem mathematicum), which he published at his own expense and with great printing difficulties. This text contains many formulas on the sine and cosine and is unusual in using decimal numbers. The trigonometric tables here exceeded those of Regiomontanus (De Triangulis Omnimodis, 1533) and Rheticus (1543, annexed to De revolutionibus of Copernicus).
In 1589, Deschiffrement d'une lettre escripte par le Commandeur Moreo au Roy d'Espaigne son maître.
In 1590, Deschiffrement description of a letter by the Commander Moreo at Roy Espaigne of his master, Tours: Mettayer.
In 1591:
In artem analyticem isagoge (Introduction to the art of analysis), also known as Algebra Nova (New Algebra) Tours: Mettayer, in 9 folio; the first edition of the Isagoge.
Zeteticorum libri quinque. Tours: Mettayer, in 24 folio; which are the five books of Zetetics, a collection of problems from Diophantus solved using the analytical art.
Between 1591 and 1593, Effectionum geometricarum canonica recensio. Tours: Mettayer, in 7 folio.
In 1593:
Vietae Supplementum geometriae. Tours: Francisci, in 21 folio.
Francisci Vietae Variorum de rebus responsorum mathematics liber VIII. Tours: Mettaye, in 49 folio; about the challenges of Scaliger.
Variorum de rebus mathematicis responsorum liber VIII; the "Eighth Book of Varied Responses" in which he talks about the problems of the trisection of the angle (which he acknowledges that it is bound to an equation of third degree) of squaring the circle, building the regular heptagon, etc.
In 1594, Munimen adversus nova cyclometrica. Paris: Mettayer, in quarto, 8 folio; again, a response against Scaliger.
In 1595, Ad problema quod omnibus mathematicis totius orbis construendum proposuit Adrianus Romanus, Francisci Vietae responsum. Paris: Mettayer, in quarto, 16 folio; about the Adriaan van Roomen problem.
In 1600:
De numerosa potestatum ad exegesim resolutione. Paris: Le Clerc, in 36 folio; work that provided the means for extracting roots and solutions of equations of degree at most 6.
Francisci Vietae Apollonius Gallus. Paris: Le Clerc, in quarto, 13 folio; where he referred to himself as the French Apollonius.
Between 1600 and 1602:
Fontenaeensis libellorum supplicum in Regia magistri relatio Kalendarii vere Gregoriani ad ecclesiasticos doctores exhibita Pontifici Maximi Clementi VIII. Paris: Mettayer, in quarto, 40 folio.
Francisci Vietae adversus Christophorum Clavium expostulatio. Paris: Mettayer, in quarto, 8 folio; his theses against Clavius.
Posthumous publications
1612:
Supplementum Apollonii Galli edited by Marin Ghetaldi.
Supplementum Apollonii Redivivi sive analysis problematis bactenus desiderati ad Apollonii Pergaei doctrinam a Marino Ghetaldo Patritio Regusino hujusque non ita pridem institutam edited by Alexander Anderson.
1615:
Ad Angularum Sectionem Analytica Theoremata F. Vieta primum excogitata at absque ulla demonstratione ad nos transmissa, iam tandem demonstrationibus confirmata edited by Alexander Anderson.
Pro Zetetico Apolloniani problematis a se jam pridem edito in supplemento Apollonii Redivivi Zetetico Apolloniani problematis a se jam pridem edito; in qua ad ea quae obiter inibi perstrinxit Ghetaldus respondetur edited by Alexander Anderson
Francisci Vietae Fontenaeensis, De aequationum — recognitione et emendatione tractatus duo per Alexandrum Andersonum edited by Alexander Anderson
1617: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis edited by Alexander Anderson
1619: Exercitationum Mathematicarum Decas Prima edited by Alexander Anderson
1631: In artem analyticem isagoge. Eiusdem ad logisticem speciosam notae priores, nunc primum in lucem editae. Paris: Baudry, in 12 folio; the second edition of the Isagoge, including the posthumously published Ad logisticem speciosam notae priores.
Reception and influence
During the ascendancy of the Catholic League, Viète's secretary was Nathaniel Tarporley, perhaps one of the more interesting and enigmatic mathematicians of 16th-century England. When he returned to London, Tarporley became one of the trusted friends of Thomas Harriot.
Apart from Catherine de Parthenay, Viète's other notable students were the French mathematician Jacques Aleaume of Orléans, Marino Ghetaldi of Ragusa, Jean de Beaugrand, and the Scottish mathematician Alexander Anderson. They illustrated his theories by publishing his works and continuing his methods. At his death, his heirs gave his manuscripts to Peter Aleaume. The most important posthumous editions are given here:
In 1612: Supplementum Apollonii Galli of Marino Ghetaldi.
From 1615 to 1619: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis by Alexander Anderson.
Francisci Vietae Fontenaeensis de aequationum recognitione et emendatione tractatus duo, per Alexandrum Andersonum. Paris, Laquehay, 1615, in 4, 135 p. The death of Alexander Anderson unfortunately halted the publication.
In 1630, an Introduction en l'art analytic ou nouvelle algèbre (Introduction to the analytic art or new algebra), translated into French with commentary by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin.
The Five Books of François Viette's Zetetics (Les cinq livres des zététiques de François Viette), put into French with expanded commentary by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin, 219 p.
The same year, there appeared an Isagoge by Antoine Vasset (a pseudonym of Claude Hardy), and the following year, a translation into Latin by Beaugrand, which Descartes reportedly received.
In 1648, the corpus of Viète's mathematical works was printed by Frans van Schooten, professor at Leiden University, on the Elzevier presses; he was assisted by Jacques Golius and Mersenne.
The English mathematicians Thomas Harriot and Isaac Newton, the Dutch physicist Willebrord Snellius, and the French mathematicians Pierre de Fermat and Blaise Pascal all used Viète's symbolism.
About 1770, the Italian mathematician Targioni Tozzetti found Viète's Harmonicon coeleste in Florence. Viète had written in it: Describat Planeta Ellipsim ad motum anomaliae ad Terram, which shows that he had adopted the Copernican system and understood the elliptical form of the planets' orbits before Kepler.
In 1841, the French mathematician Michel Chasles was one of the first to reevaluate Viète's role in the development of modern algebra.
In 1847, a letter from François Arago, perpetual secretary of the Academy of Sciences (Paris), announced his intention to write a biography of François Viète.
Between 1880 and 1890, the polytechnician Frédéric Ritter, based in Fontenay-le-Comte, became the first translator of the works of François Viète and, together with Benjamin Fillon, his first modern biographer.
Descartes' views on Viète
Thirty-four years after the death of Viète, the philosopher René Descartes published his method and a book of geometry that changed the landscape of algebra and built on Viète's work, applying it to geometry by removing its requirement of homogeneity. Descartes, accused by Jean-Baptiste Chauveau, a former classmate at La Flèche, of having taken his ideas from Viète, explained in a letter to Mersenne (February 1639) that he had never read those works. Descartes accepted Viète's view of mathematics, in which study should stress the self-evidence of results, and implemented it by translating symbolic algebra into geometric reasoning. The locution mathesis universalis was derived from van Roomen's works.
Elsewhere, Descartes said that Viète's notations were confusing and that he used unnecessary geometric justifications. In some letters, he showed that he understood the programme of the Artem Analyticem Isagoge; in others, he shamelessly caricatured Viète's proposals. One of his biographers, Charles Adam, noted this contradiction.
Current research has not shown the extent of the direct influence of the works of Viète on Descartes. This influence could have been formed through the works of Adriaan van Roomen or Jacques Aleaume at the Hague, or through the book by Jean de Beaugrand.
In his letters to Mersenne, Descartes consciously minimized the originality and depth of the work of his predecessors. "I began," he says, "where Vieta finished." His views prevailed in the 17th century, and mathematicians gained a clear algebraic language free of the requirements of homogeneity. Many contemporary studies have restored the work of the Parthenay family's mathematician, showing that he had the double merit of introducing the first elements of literal calculation and of building a first axiomatization of algebra.
Although Viète was not the first to propose notating unknown quantities by letters (Jordanus Nemorarius had done so earlier), it would be simplistic to reduce his innovations to that one discovery; he should instead be placed at the junction of the algebraic transformations of the late sixteenth and early seventeenth centuries.
See also
Michael Stifel
References
External links
New Algebra (1591) online
Francois Viète: Father of Modern Algebraic Notation
The Lawyer and the Gambler
About Tarporley
Site de Jean-Paul Guichard
L'algèbre nouvelle
1540 births
1603 deaths
People from Fontenay-le-Comte
Pre-19th-century cryptographers
16th-century French mathematicians
Algebraists
French cryptographers
University of Poitiers alumni |
310953 | https://en.wikipedia.org/wiki/Quotient%20%28universal%20algebra%29 | Quotient (universal algebra) | In mathematics, a quotient algebra is the result of partitioning the elements of an algebraic structure using a congruence relation.
Quotient algebras are also called factor algebras. Here, the congruence relation must be an equivalence relation that is additionally compatible with all the operations of the algebra, in the formal sense described below.
Its equivalence classes partition the elements of the given algebraic structure. The quotient algebra has these classes as its elements, and the compatibility conditions are used to give the classes an algebraic structure.
The idea of the quotient algebra abstracts into one common notion the quotient structure of quotient rings in ring theory, quotient groups in group theory, quotient spaces in linear algebra, and quotient modules in representation theory.
Compatible relation
Let $A$ be the set of the elements of an algebra $\mathcal{A}$, and let $E$ be an equivalence relation on the set $A$. The relation $E$ is said to be compatible with (or have the substitution property with respect to) an $n$-ary operation $f$ if $(a_i, b_i) \in E$ for $1 \le i \le n$ implies $(f(a_1, \ldots, a_n), f(b_1, \ldots, b_n)) \in E$ for any $a_i, b_i \in A$. An equivalence relation compatible with all the operations of an algebra is called a congruence with respect to this algebra.
Quotient algebras and homomorphisms
Any equivalence relation $E$ on a set $A$ partitions this set into equivalence classes. The set of these equivalence classes is usually called the quotient set, and denoted $A/E$. For an algebra $\mathcal{A}$, it is straightforward to define the operations induced on the elements of $A/E$ if $E$ is a congruence. Specifically, for any operation $f^{\mathcal{A}}_i$ of arity $n_i$ in $\mathcal{A}$ (where the superscript simply denotes that it is an operation in $\mathcal{A}$, and the subscript enumerates the functions in $\mathcal{A}$ and their arities) define $f^{\mathcal{A}/E}_i : (A/E)^{n_i} \to A/E$ as $f^{\mathcal{A}/E}_i([a_1]_E, \ldots, [a_{n_i}]_E) = [f^{\mathcal{A}}_i(a_1, \ldots, a_{n_i})]_E$, where $[x]_E$ denotes the equivalence class of $x$ generated by $E$ ("$x$ modulo $E$").
For an algebra $\mathcal{A}$, given a congruence $E$ on $\mathcal{A}$, the algebra $\mathcal{A}/E$ with the induced operations is called the quotient algebra (or factor algebra) of $\mathcal{A}$ modulo $E$. There is a natural homomorphism from $\mathcal{A}$ to $\mathcal{A}/E$ mapping every element to its equivalence class. In fact, every homomorphism $h$ determines a congruence relation via the kernel of the homomorphism, $\ker h = \{(a, a') \in A^2 : h(a) = h(a')\}$.
Given an algebra $\mathcal{A}$, a homomorphism $h$ thus defines two algebras homomorphic to $\mathcal{A}$, the image $h(\mathcal{A})$ and $\mathcal{A}/\ker h$. The two are isomorphic, a result known as the homomorphic image theorem or as the first isomorphism theorem for universal algebra. Formally, let $h : \mathcal{A} \to \mathcal{B}$ be a surjective homomorphism. Then there exists a unique isomorphism $g$ from $\mathcal{A}/\ker h$ onto $\mathcal{B}$ such that $g$ composed with the natural homomorphism induced by $\ker h$ equals $h$.
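As a concrete illustration (an added sketch, not part of the original text): take the finite algebra with carrier A = {0, ..., 11} and the single binary operation of addition modulo 12, together with the relation "same remainder modulo 3". The following minimal Python sketch, in which all names are illustrative, checks that this relation is a congruence and builds the induced operation on the three equivalence classes, recovering addition in Z/3Z.

```python
from itertools import product

# Finite algebra: carrier A = {0, ..., 11} with one binary operation, addition mod 12.
A = range(12)

def add12(x, y):
    return (x + y) % 12

# Candidate congruence E: "same remainder modulo 3".
def related(x, y):
    return x % 3 == y % 3

# Compatibility (substitution property): x ~ x' and y ~ y' must imply add12(x, y) ~ add12(x', y').
assert all(
    related(add12(x, y), add12(x2, y2))
    for x, x2, y, y2 in product(A, repeat=4)
    if related(x, x2) and related(y, y2)
)

# The equivalence classes are the elements of the quotient algebra A/E.
classes = {r: frozenset(a for a in A if a % 3 == r) for r in range(3)}
print(classes[1])          # frozenset({1, 4, 7, 10})

# Induced operation: apply add12 to arbitrary representatives, then pass back to classes.
def add_mod_E(r1, r2):
    a, b = min(classes[r1]), min(classes[r2])
    return add12(a, b) % 3

print(add_mod_E(2, 2))     # 1, i.e. the addition table of Z/3Z
```

The natural homomorphism here is the map sending each element to its remainder modulo 3, and its kernel is exactly the congruence E, illustrating the first isomorphism theorem in miniature.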
Congruence lattice
For every algebra $\mathcal{A}$ on the set $A$, the identity relation on $A$ and $A \times A$ are trivial congruences. An algebra with no other congruences is called simple.
Let $\mathrm{Con}(\mathcal{A})$ be the set of congruences on the algebra $\mathcal{A}$. Because congruences are closed under intersection, we can define a meet operation $\wedge : \mathrm{Con}(\mathcal{A}) \times \mathrm{Con}(\mathcal{A}) \to \mathrm{Con}(\mathcal{A})$ by simply taking the intersection of the congruences, $E_1 \wedge E_2 = E_1 \cap E_2$.
On the other hand, congruences are not closed under union. However, we can define the closure of any binary relation $E$, with respect to a fixed algebra $\mathcal{A}$, such that it is a congruence, in the following way: $\langle E \rangle_{\mathcal{A}} = \bigcap \{ F \in \mathrm{Con}(\mathcal{A}) : E \subseteq F \}$. Note that the closure of a binary relation is a congruence and thus depends on the operations in $\mathcal{A}$, not just on the carrier set. Now define $\vee : \mathrm{Con}(\mathcal{A}) \times \mathrm{Con}(\mathcal{A}) \to \mathrm{Con}(\mathcal{A})$ as $E_1 \vee E_2 = \langle E_1 \cup E_2 \rangle_{\mathcal{A}}$.
For every algebra $\mathcal{A}$, $(\mathrm{Con}(\mathcal{A}), \wedge, \vee)$ with the two operations defined above forms a lattice, called the congruence lattice of $\mathcal{A}$.
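To make the join concrete (again an illustrative sketch, not from the original text): on a finite algebra the congruence generated by a binary relation can be computed bottom-up, by repeatedly closing under reflexivity, symmetry, transitivity and compatibility with the operations until nothing changes; for a finite carrier this yields the same congruence as the intersection definition above. A hypothetical Python sketch for the algebra ({0, ..., 11}, + mod 12):

```python
def congruence_closure(pairs, carrier, op):
    """Smallest congruence of (carrier, op) containing the given pairs (finite case)."""
    rel = set(pairs) | {(a, a) for a in carrier}                           # reflexivity
    while True:
        new = {(b, a) for (a, b) in rel}                                   # symmetry
        new |= {(a, d) for (a, b) in rel for (c, d) in rel if b == c}      # transitivity
        new |= {(op(a, c), op(b, d)) for (a, b) in rel for (c, d) in rel}  # compatibility
        if new <= rel:
            return rel
        rel |= new

def op(x, y):
    return (x + y) % 12

E = congruence_closure({(0, 4)}, range(12), op)

classes = {frozenset(b for b in range(12) if (a, b) in E) for a in range(12)}
print(sorted(sorted(c) for c in classes))
# [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]] -- the congruence "equal modulo 4"
```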
Maltsev conditions
If two congruences permute (commute) with the composition of relations as operation, i.e. $\alpha \circ \beta = \beta \circ \alpha$, then their join (in the congruence lattice) is equal to their composition: $\alpha \circ \beta = \alpha \vee \beta$. An algebra is called congruence-permutable if every pair of its congruences permutes; likewise a variety is said to be congruence-permutable if all its members are congruence-permutable algebras.
In 1954, Anatoly Maltsev established the following characterization of congruence-permutable varieties: a variety is congruence-permutable if and only if there exists a ternary term $q(x, y, z)$ such that $q(x, y, y) \approx x \approx q(y, y, x)$; this is called a Maltsev term, and varieties with this property are called Maltsev varieties. Maltsev's characterization explains a large number of similar results in groups (take $q(x, y, z) = x y^{-1} z$), rings, quasigroups, complemented lattices, Heyting algebras, etc. Furthermore, every congruence-permutable algebra is congruence-modular, i.e. its lattice of congruences is a modular lattice as well; the converse is not true, however.
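For example (a standard illustration, not spelled out in the text above), in the variety of groups the term $q(x, y, z) = x y^{-1} z$ serves as a Maltsev term, since both defining identities reduce to cancellation:

$$ q(x, y, y) = x y^{-1} y = x, \qquad q(y, y, x) = y y^{-1} x = x. $$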
After Maltsev's result, other researchers found characterizations based on conditions similar to that found by Maltsev but for other kinds of properties, e.g. in 1967 Bjarni Jónsson found the conditions for varieties having congruence lattices that are distributive (thus called congruence-distributive varieties). Generically, such conditions are called Maltsev conditions.
This line of research led to the Pixley–Wille algorithm for generating Maltsev conditions associated with congruence identities.
See also
Quotient ring
Congruence lattice problem
Lattice of subgroups
Notes
References
Universal algebra |
311819 | https://en.wikipedia.org/wiki/RAR%20%28file%20format%29 | RAR (file format) | RAR is a proprietary archive file format that supports data compression, error recovery and file spanning. It was developed in 1993 by Russian software engineer Eugene Roshal and the software is licensed by win.rar GmbH. The name RAR stands for Roshal Archive.
File format
The filename extensions used by RAR are .rar for the data volume set and .rev for the recovery volume set. Previous versions of RAR split large archives into several smaller files, creating a "multi-volume archive". Numbers were used in the file extensions of the smaller files to keep them in the proper sequence. The first file used the extension .rar, then .r00 for the second, and then .r01, .r02, etc.
RAR compression applications and libraries (including GUI based WinRAR application for Windows, console rar utility for different OSes and others) are proprietary software, to which Alexander L. Roshal, the elder brother of Eugene Roshal, owns the copyright. Version 3 of RAR is based on Lempel-Ziv (LZSS) and prediction by partial matching (PPM) compression, specifically the PPMd implementation of PPMII by Dmitry Shkarin.
The minimum size of a RAR file is 20 bytes. The maximum size of a RAR file is 9,223,372,036,854,775,807 (263−1) bytes, which is about 9,000 PB.
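As an aside (an added sketch, not part of the original article): archives of this family can be recognized by their leading signature bytes. RAR 1.5 through 4.x archives begin with the seven bytes 52 61 72 21 1A 07 00 ("Rar!" followed by 0x1A 0x07 0x00), while RAR5 archives begin with 52 61 72 21 1A 07 01 00; the earliest 1.3 format, as noted below, has no signature at all. A minimal Python sketch distinguishing the two, with a hypothetical file name:

```python
# Minimal sketch: identify a RAR archive generation from its signature bytes.
RAR4_SIG = b"Rar!\x1a\x07\x00"        # RAR 1.5 - 4.x
RAR5_SIG = b"Rar!\x1a\x07\x01\x00"    # RAR 5.0 and later

def rar_generation(path):
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(RAR5_SIG):
        return "RAR5"
    if head.startswith(RAR4_SIG):
        return "RAR4 (1.5-4.x)"
    return "not a RAR archive (or a pre-1.5 archive without a signature)"

print(rar_generation("example.rar"))  # hypothetical file name
```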
Versions
The RAR file format revision history:
1.3 – the first public version, does not have the "Rar!" signature.
1.5 – changes are not known.
2.0 – released with WinRAR 2.0 and Rar for MS-DOS 2.0; features the following changes:
Multimedia compression for true color bitmap images and uncompressed audio.
Up to 1 MB compression dictionary.
Introduces an archive data recovery protection record.
2.9 – released in WinRAR version 3.00. Feature changes in this version include:
File extensions changed from {volume name}.rar, {volume name}.r00, {volume name}.r01, etc. to {volume name}.part001.rar, {volume name}.part002.rar, etc.
Encryption of both file data and file headers.
Improved compression algorithm, using a 4 MB dictionary size and Dmitry Shkarin's PPMII algorithm for file data.
Optional creation of "recovery volumes" (.rev files) with redundancy data, which can be used to reconstruct missing files in a volume set.
Support for archive files larger than 9 GB.
Support for Unicode file names stored in UTF-16 little endian format.
5.0 – supported by WinRAR 5.0 and later. Changes in this version:
Maximum compression dictionary size increased to 1 GB (default for WinRAR 5.x is 32 MB and 4 MB for WinRAR 4.x).
Maximum path length for files in RAR and ZIP archives is increased up to 2048 characters.
Support for Unicode file names stored in UTF-8 format.
Faster compression and decompression.
Multicore decompression support.
Greatly improves recovery.
Optional AES encryption increased from 128-bit to 256-bit.
Optional 256-bit BLAKE2 file hash instead of a default 32-bit CRC32 file checksum.
Optional duplicate file detection.
Optional NTFS hard and symbolic links.
Optional Quick Open Record. Rar4 archives had to be parsed before opening as file names were spread throughout the archive, slowing operation particularly with slower devices such as optical drives, and reducing the integrity of damaged archives. Rar5 can optionally create a "quick open record", a special archive block at the end of the file that contains the names of files included, allowing archives to be opened faster.
Removes specialized compression algorithms for Itanium executables, text, raw audio (WAV), and raw image (BMP) files; consequently some files of these types compress better in the older RAR (4) format with these options enabled than in RAR5.
Notes
Software
Operating system support
Software is available for Microsoft Windows (named WinRAR), Linux, FreeBSD, macOS, and Android; archive extraction is supported natively in Chrome OS. WinRAR supports the Windows graphical user interface (GUI); other versions named RAR run as console commands. Later versions are not compatible with some older operating systems previously supported:
WinRAR v6.10 supports Windows Vista and later.
WinRAR v6.02 is the last version that supports Windows XP.
WinRAR v4.11 is the last version that supports Windows 2000.
WinRAR v3.93 is the last version that supports Windows 95, 98, ME, and NT.
RAR v3.93 is the last version that supports MS-DOS and OS/2 on 32-bit x86 CPUs such as 80386 and later. It supports long file names in a Windows DOS box (except Windows NT), and uses the RSX DPMI extender.
RAR v2.50 is the last version that supports MS-DOS and OS/2 on 16-bit x86 CPUs such as Intel 8086, 8088, and 80286.
Creating RAR files
RAR files can be created only with commercial software WinRAR (Windows), RAR for Android, command-line RAR (Windows, MS-DOS, macOS, Linux, and FreeBSD), and other software that has written permission from Alexander Roshal or uses copyrighted code under license from Roshal. The software license agreements forbid reverse engineering.
Third-party software for extracting RAR files
Several programs can unpack the file format.
RARLAB distributes the C++ source code and binaries for a command-line unrar program. The license permits its use to produce software capable of unpacking, but not creating, RAR archives, without having to pay a fee. It is not a free software license.
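For example (a hedged sketch, not from the article), the unrar command-line program is typically invoked with the "x" command to extract an archive with full paths, or "l" to list its contents; the "-y" switch answers all prompts with yes. From Python this can be wrapped with the standard subprocess module, assuming unrar is installed and on the PATH; the archive and output names below are hypothetical.

```python
import subprocess

def extract_rar(archive, dest="./"):
    # "x" extracts with full paths; "l" would list the archive contents instead.
    # unrar treats a trailing path separator as the extraction destination.
    if not dest.endswith("/"):
        dest += "/"
    subprocess.run(["unrar", "x", "-y", archive, dest], check=True)

extract_rar("example.rar", "out/")  # hypothetical archive and output directory
```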
7-Zip, a free and open-source program, starting from 7-Zip version 15.06 beta can unpack RAR5 archives, using the RARLAB unrar code.
PeaZip is a free RAR unarchiver, licensed under the LGPL, it runs as a RAR extractor on Linux, macOS, and Windows, with a GUI. PeaZip supports both pre-RAR5 .rar files, and files in the new RAR5 format.
The Unarchiver is a proprietary software unarchiver for RAR and other formats. It runs on macOS, and the command-line version, unar, also runs on Windows and on Linux and is free software licensed under the LGPL. It supports all versions of the RAR archive format, including RAR3 and RAR5.
UNRARLIB (UniquE RAR File Library) was an obsolete free software unarchiving library called "unrarlib", licensed under the GPL. It could only decompress archives created by RAR versions prior to 2.9; archives created by RAR 2.9 and later use different formats not supported by this library. The original development-team ended work on this library in 2007.
libarchive, a free and open source library for reading and writing a variety of archive formats, supports all RAR versions including RAR5. The code was written from scratch using RAR's "technote.txt" format description.
Other uses of rar
The filename extension rar is also used by the unrelated Resource Adapter Archive file format.
See also
.cbr
List of archive formats
Comparison of archive formats
Comparison of file archivers
Data corruption, Bit rot, Disc rot
References
External links
RARLAB FTP download website, current and old versions of WinRAR and RAR
RAR 5.0 archive file format
Computer-related introductions in 1993
Archive formats
Russian inventions |
312018 | https://en.wikipedia.org/wiki/VMware | VMware | VMware, Inc. is an American cloud computing and virtualization technology company with headquarters in California. VMware was the first commercially successful company to virtualize the x86 architecture.
VMware's desktop software runs on Microsoft Windows, Linux, and macOS, while its enterprise software hypervisor for servers, VMware ESXi, is a bare-metal hypervisor that runs directly on server hardware without requiring an additional underlying operating system.
History
In 1998, VMware was founded by Diane Greene, Mendel Rosenblum, Scott Devine, Ellen Wang and Edouard Bugnion. Greene and Rosenblum were both graduate students at the University of California, Berkeley. Edouard Bugnion remained the chief architect and CTO of VMware until 2005, and went on to found Nuova Systems (now part of Cisco). For the first year, VMware operated in stealth mode, with roughly 20 employees by the end of 1998. The company was launched officially early in the second year, in February 1999, at the DEMO Conference organized by Chris Shipley. The first product, VMware Workstation, was delivered in May 1999, and the company entered the server market in 2001 with VMware GSX Server (hosted) and VMware ESX Server (hostless).
In 2003, VMware launched VMware Virtual Center, vMotion, and Virtual SMP technology. 64-bit support was introduced in 2004.
On January 9, 2004, under the terms of the definitive agreement announced on December 15, 2003, EMC (now Dell EMC) acquired the company for $625 million in cash. On August 14, 2007, EMC sold 15% of VMware to the public via an initial public offering; the shares rose sharply above the offer price on their first day of trading.
On July 8, 2008, after disappointing financial performance, the board of directors fired VMware co-founder, president and CEO Diane Greene, who was replaced by Paul Maritz, a retired 14-year Microsoft veteran who was heading EMC's cloud computing business unit. Greene had been CEO since the company's founding, ten years earlier. On September 10, 2008, Mendel Rosenblum, the company's co-founder, chief scientist, and the husband of Diane Greene, resigned.
On September 16, 2008, VMware announced a collaboration with Cisco Systems. One result was the Cisco Nexus 1000V, a distributed virtual software switch, an integrated option in the VMware infrastructure.
In April 2011, EMC transferred control of the Mozy backup service to VMware.
On April 12, 2011, VMware released an open-source platform-as-a-service system called Cloud Foundry, as well as a hosted version of the service. This supported application deployment for Java, Ruby on Rails, Sinatra, Node.js, and Scala, as well as support for the MySQL, MongoDB, Redis, and Postgres databases and the RabbitMQ message broker.
In August 2012, Pat Gelsinger was appointed as the new CEO of VMware, coming over from EMC. Paul Maritz went over to EMC as Head of Strategy before moving on to lead the Pivotal spin-off.
In March 2013, VMware announced the corporate spin-off of Pivotal Software, with General Electric making an investment in the company. All of VMware's application- and developer-oriented products, including Spring, tc Server, Cloud Foundry, RabbitMQ, GemFire, and SQLFire were transferred to this organization.
In May 2013, VMware launched its own IaaS service, vCloud Hybrid Service, at its new Palo Alto headquarters (vCloud Hybrid Service was rebranded vCloud Air and subsequently sold to cloud provider OVH), announcing an early access program in a Las Vegas data center. The service is designed to function as an extension of its customer's existing vSphere installations, with full compatibility with existing virtual machines virtualized with VMware software and tightly integrated networking. The service is based on vCloud Director 5.1/vSphere 5.1.
In September 2013, at VMworld San Francisco, VMware announced the general availability of vCloud Hybrid Service and expansion to Sterling, Virginia, Santa Clara, California, Dallas, Texas, and a service beta in the UK. It announced the acquisition of Desktone in October 2013.
In January 2016, in anticipation of Dell's acquisition of EMC, VMware announced a restructuring to reduce about 800 positions, and some executives resigned. The entire development team behind VMware Workstation and Fusion was disbanded and all US developers were immediately fired. On April 24, 2016, maintenance release 12.1.1 was released. On September 8, 2016, VMware announced the release of Workstation 12.5 and Fusion 8.5 as a free upgrade supporting Windows 10 and Windows Server 2016.
In April 2016, VMware president and COO Carl Eschenbach left VMware to join Sequoia Capital, and Martin Casado, VMware's general manager for its Networking and Security business, left to join Andreessen Horowitz. Analysts commented that the cultures at Dell and EMC, and at EMC and VMware, were different, and said they had heard that impending corporate culture collisions and potentially radical pruning of overlapping products would cause many EMC and VMware personnel to leave; VMware CEO Pat Gelsinger, following rumors, categorically denied that he would leave.
In August 2016 VMware introduced the VMware Cloud Provider website.
Mozy was transferred to Dell in 2016 after the merger of Dell and EMC.
In April 2017, according to Glassdoor, VMware was ranked 3rd on the list of highest paying companies in the United States.
In Q2 2017, VMware sold vCloud Air to French cloud service provider OVH.
On January 13, 2021, VMware announced that CEO Pat Gelsinger would be leaving to step in at Intel, where he had spent 30 years of his career and had been the company's first chief technology officer. CFO Zane Rowe became interim CEO while the board searched for a replacement.
On April 15, 2021, it was reported that Dell would spin off its remaining stake in VMware to shareholders, and that the two companies would continue to operate without major changes for at least five years. The spinoff was completed on November 1, 2021.
On May 12, 2021, VMware announced that Raghu Raghuram would take over as CEO.
Acquisitions
Litigation
In March 2015, the Software Freedom Conservancy announced it was funding litigation by Christoph Hellwig in Hamburg, Germany against VMware for alleged violation of his copyrights in its ESXi product.
Hellwig's core claim is that ESXi is a derivative work of the GPLv2-licensed Linux kernel 2.4, and therefore VMware is not in compliance with GPLv2 because it does not publish the source code to ESXi. VMware publicly stated that ESXi is not a derivative of the Linux kernel, refuting Hellwig's core claim. VMware said it offered a way to use Linux device drivers with ESXi, and that code does use some Linux GPLv2-licensed code and so it had published the source, meeting GPLv2 requirements.
The lawsuit was dismissed by the court in July 2016 and Hellwig announced he would file an appeal. The appeal was decided February 2019 and again dismissed by German court, on the basis of not meeting "procedural requirements for the burden of proof of the plaintiff."
Current products
VMware's most notable products are its hypervisors. VMware became well known for its first type 2 hypervisor known as GSX. This product has since evolved into two hypervisor product lines: VMware's type 1 hypervisors running directly on hardware and their hosted type 2 hypervisors.
VMware software provides a completely virtualized set of hardware to the guest operating system. VMware software virtualizes the hardware for a video adapter, a network adapter, and hard disk adapters. The host provides pass-through drivers for guest USB, serial, and parallel devices. In this way, VMware virtual machines become highly portable between computers, because every host looks nearly identical to the guest. In practice, a system administrator can pause operations on a virtual machine guest, move or copy that guest to another physical computer, and there resume execution exactly at the point of suspension. Alternatively, for enterprise servers, a feature called vMotion allows the migration of operational guest virtual machines between similar but separate hardware hosts sharing the same storage (or, with vMotion Storage, separate storage can be used, too). Each of these transitions is completely transparent to any users on the virtual machine at the time it is being migrated.
VMware's products predate the virtualization extensions to the x86 instruction set, and do not require virtualization-enabled processors. On newer processors, the hypervisor is now designed to take advantage of the extensions. However, unlike many other hypervisors, VMware still supports older processors. In such cases, it uses the CPU to run code directly whenever possible (as, for example, when running user-mode and virtual 8086 mode code on x86). When direct execution cannot operate, such as with kernel-level and real-mode code, VMware products use binary translation (BT) to re-write the code dynamically. The translated code gets stored in spare memory, typically at the end of the address space, which segmentation mechanisms can protect and make invisible. For these reasons, VMware operates dramatically faster than emulators, running at more than 80% of the speed that the virtual guest operating system would run directly on the same hardware. In one study VMware claims a slowdown over native ranging from 0–6 percent for the VMware ESX Server.
Desktop software
VMware Workstation, introduced in 1999, was the first product launched by VMware. This software suite allows users to run multiple instances of x86- or x86-64-compatible operating systems on a single physical personal computer. Workstation Pro version 15.5.1 was released in November 2019.
VMware Fusion provides similar functionality for users of the Intel Mac platform, along with full compatibility with virtual machines created by other VMware products.
VMware Workstation Player is freeware for non-commercial use, without requiring a license, and available for commercial use with permission. It is similar to VMware Workstation, with reduced functionality.
Server software
VMware ESXi, an enterprise software product, can deliver greater performance than the freeware VMware Server, due to lower system computational overhead. VMware ESXi, as a "bare-metal" product, runs directly on the server hardware, allowing virtual servers to also use hardware more or less directly. In addition, VMware ESXi integrates into VMware vCenter, which offers extra services.
Cloud management software
VMware Suite – a cloud management platform purpose-built for a hybrid cloud.
VMware Go is a web-based service to guide users of any expertise level through the installation and configuration of VMware vSphere Hypervisor.
VMware Cloud Foundation – Cloud Foundation provides an easy way to deploy and operate a private cloud on an integrated SDDC system.
VMware Horizon View is a virtual desktop infrastructure (VDI) product.
Application management
The VMware Workspace Portal was a self-service app store for workspace management.
Storage and availability
VMware's storage and availability products are composed of two primary offerings:
VMware vSAN (previously called VMware Virtual SAN) is software-defined storage that is embedded in VMware's ESXi hypervisor. The vSphere and vSAN software runs on industry-standard x86 servers to form a hyper-converged infrastructure (HCI). However, operators need servers from the Hardware Compatibility List (HCL) to put one into production. The first release, version 5.5, was released in March 2014. The 6th generation, version 6.6, was released in April 2017. New features available in VMware vSAN 6.6 include native data-at-rest encryption, local protection for stretched clusters, analytics, and optimized solid-state drive performance. Version 6.7, released in April 2018, gave users improved monitoring tools and new workflows, moved closer to feature parity, and shifted the vCenter Server Appliance architecture towards an easier deployment method.
VMware Site Recovery Manager (SRM) automates the failover and failback of virtual machines to and from a secondary site using policy-based management.
Networking and security products
VMware NSX is VMware's network virtualization product marketed using the term software-defined data center (SDDC). The technology included some acquired from the 2012 purchase of Nicira. Software Defined Networking (SDN) allows the same policies that govern Identity and Access Management (IAM) to dictate levels of access to applications and data through a totally converged infrastructure not possible with legacy network and system access methods.
Other products
Workspace ONE allows mobile users to access apps and data.
The VIX (Virtual Infrastructure eXtension) API allows automated or scripted management of a computer virtualized using either VMware's vSphere, Workstation, Player, or Fusion products. VIX provides bindings for the programming languages C, Perl, Visual Basic, VBscript and C#.
Herald is a communications protocol from VMware for more reliable Bluetooth communication and range finding across mobile devices. Herald code is available under an open-source license and was implemented in the Australian Government's COVIDSafe app for contact tracing on 19 December 2020.
See also
Comparison of platform virtualization software
Hardware virtualization
Hypervisor
VMware VMFS
References
External links
American companies established in 1998
1998 establishments in California
Cloud computing providers
Companies based in Palo Alto, California
Software companies established in 1998
Companies listed on the New York Stock Exchange
Dell EMC
Software companies based in the San Francisco Bay Area
2007 initial public offerings
Software companies of the United States |
312028 | https://en.wikipedia.org/wiki/Packet%20Switch%20Stream | Packet Switch Stream | In the United Kingdom, Packet Switch Stream (PSS) was an X.25-based packet-switched network, provided by the British Post Office Telecommunications and then British Telecommunications starting in 1980. After a period of pre-operational testing with customers (mainly UK universities and computer manufacturers at this early phase) the service was launched as a commercial service on 20 August 1981. The experimental predecessor network (EPSS) formally closed down on 31 July 1981 after all the existing connections had been moved to PSS.
Description
Companies and individual users could connect to the PSS network using the full X.25 interface, via a dedicated four-wire telephone circuit using a PSS analog modem and, later on, once problems with 10–100 ms transmission failures in the PCM voice-based transmission equipment used by the early Kilostream service had been resolved, via a Kilostream digital access circuit (actually a baseband modem). In this early-1980s era, installation lead times for suitable four-wire analog lines in the UK could be more than six months.
Companies and individual users could also connect into the PSS network using a basic non-error correcting RS232/V.24 asynchronous character based interface via an X.3/X.28/X.29 PAD (Packet Assembler/Disassembler) service oriented to the then prevalent dumb terminal market place. The PAD service could be connected to via a dedicated four-wire telephone circuit using a PSS analog modem and later on via a Kilostream digital access circuit. However most customers, for cost reasons, chose to dial up via an analog modem over the then UK analog telephony network to their nearest public PAD, via published phone numbers, using an ID/password provided as a subscription service.
The present-day analogy of ISPs offering always-on broadband and dial-up internet services applies here. Some customers connected to the PSS network via the X.25 service and bought their own PADs. PSS was one of the first telecommunications networks in the UK to be fully liberalised, in that customers could connect their own equipment to the network. This was before privatisation and the creation of British Telecommunications plc (BT) in 1984.
Connectivity to databases and mainframe systems
PSS could be used to connect to a variety of online databases and mainframe systems. Of particular note was the use of PSS for the first networked Clearing House Automated Payment System (CHAPS). This was a network system used to transfer all payments over £10,000 (in early-1980s monetary value) between the major UK banks and other major financial institutions based in the UK. It replaced a paper-based system that operated in the City of London using electric vehicles similar to milk floats. Logica (now LogicaCMG) designed the CHAPS system and incorporated an encryption system able to cope with HDLC bit stuffing on X.25 links.
Speeds
There was a choice of different speeds of PSS lines; the faster the line, the more it cost to rent. The highest and lowest speed lines were provided by the Megastream and Kilostream services, at 2 Mbit/s and 256 kbit/s respectively. On analog links 2,400 bit/s, 4,800 bit/s, 9,600 bit/s and 48 kbit/s were offered. Individual users could link into PSS, on a pay-as-you-go basis, by using a 110, 300, 1200/75, 1,200 or 2,400 bit/s PSTN modem to connect a data terminal equipment (DTE) terminal to a local PSS exchange. Note: in those days 2,400 bit/s modems were quite rare; 1,200 bit/s was the usual speed in the 1980s, although 110 and 300 bit/s modems were not uncommon.
History of implementation
The International Packet Switch Stream (IPSS) was an international X.25 network service launched by the international division of BT, through which PSS was linked to other packet-switched networks around the world. This started in about 1978, before PSS went into operation, owing to the high demand for affordable access to US-based database and other network services. A PAD service was provided by IPSS to this market in advance of the PSS launch.
For a brief time the EEC operated a packet switched network, Euronet, and a related project Diane to encourage more database and network services to develop in Europe. These connections moved over to PSS and other European networks as commercial X.25 services launched.
Later on the InterStream gateway between the Telex network and PSS was introduced based on a low speed PAD interface.
The network was initially based upon a dedicated modular packet switch using DCC's TP 4000 communication processor hardware. The operating system and the packet switching software was developed by Telenet (later on GTE Telenet). At the time of PSS's launch this was in advance of both Telenet's own network and most others that used general purpose mini-computers as packet switches.
BT bought Telenet's system via Plessey Controls of Poole, Dorset, which also sold telex and traffic-light systems.
Later on, BT used Telematics packet switches for the Vascom network to support the Prestel service, and also bought the Tymnet network from McDonnell Douglas.
In the words of BT's own history:
British Telecom purchased the Tymnet network systems business and its associated applications activities from the McDonnell Douglas Corporation on 19 November (1989) for $355 million. Its activities included TYMNET, the public network business, plus its associated private and hybrid (mixed public and private) network activities, the OnTyme electronic mail service, the Card Service processing business, and EDI*Net, the US market leader in electronic data interchange.
BT Tymnet anticipated developing an end to end managed network service for multi-national customers, and developing dedicated or hybrid networks that embraced major trading areas. Customers would be able to enjoy one-stop-shopping for global data networks, and a portfolio of products designed for a global market place.
These services were subsequently offered by BT Global Network Services, and subsequently by Concert as part of Concert Global Network Services after the Concert joint venture company was launched on 15 June 1994.
Exchange for other assets
It is believed BT subsequently exchanged major US elements of the Tymnet business with MCI for other assets when the proposed merger of their two businesses was thwarted by MCI's purchase by WorldCom.
The last PSS node in the UK was finally switched off Wednesday, June 28, 2006.
Network management had been run on a system of 24 Prime 63xx and 48xx computers running modified versions of Revisions 20 and 22 of the Primos operating system. These network management systems were based in London and Manchester. Packet switches were installed at major trunk exchanges in most major conurbations in the UK.
The DNICs used by IPSS and PSS were 2341 and 2342 respectively.
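For context (an illustrative sketch, not from the article): subscribers on X.25 networks were addressed by X.121 international data numbers, in which the first four digits form the Data Network Identification Code (DNIC) and the remaining digits, up to ten, form the network terminal number. A minimal Python sketch splitting a hypothetical PSS address under those assumptions:

```python
# Minimal sketch: split an X.121 international data number into its
# DNIC (first four digits) and network terminal number (the remainder).
def split_x121(address):
    digits = address.replace(" ", "")
    if not digits.isdigit() or not 5 <= len(digits) <= 14:
        raise ValueError("not a plausible X.121 address")
    return digits[:4], digits[4:]

# Hypothetical subscriber number on PSS (DNIC 2342).
dnic, ntn = split_x121("2342 12345678")
print(dnic, ntn)   # 2342 12345678
```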
BT's attitude to packet switching was ambivalent at best. Compared with France's Transpac, which was run as a separate commercial company with dedicated management and saw X.25 packet switching as a core offering, BT's then senior management regarded packet switching as a passing phase until the telecommunications nirvana of ISDN's 64 kbit/s for everyone arrived.
Even in its recent history BT's senior management stated that the Internet was "not fit for purpose".
Investment challenges
PSS suffered from inconsistent investment during its early years: sometimes too little, sometimes too much, and mostly for the wrong reasons.
Investments in value added network services (VANS) and in BT's own access-level packet switching hardware delayed operating profit. This in turn dented PSS's already low credibility with BT's management still further. Despite healthy demand for basic X.25 services and an obvious trend towards more demanding, bandwidth-intensive applications that required investment in more powerful switches, a decision was made instead to develop BT's own hardware and network applications.
In the midst of this, IBM (the then market leader in computing) and BT attempted to launch a joint venture, called Jove, for managed SNA services in the UK. For a time, significant extra expenditure was allowed for BT's data services, of which PSS was the major part, as one concern of regulators was that this joint venture might damage work on Open Systems Interconnection. This only made cost control worse and delayed operating profit further. Eventually the UK government decided the SNA joint venture was anti-competitive and vetoed it, but not before PSS management had been allowed to commit to large investments that caused serious problems later.
One of the few successful value added applications was the transaction phone, used by retailers to validate credit card transactions and prevent fraud. It was believed that putting a packet switch in every local telephone exchange would allow this and other low-bandwidth applications to drive revenue. The lesson of Tymnet's similar transaction phone, which simply used a dial-up link to a standard PAD-based service, was not followed. Each low-end packet switch installed added costs for floor space, power, etc. without any significant value added revenue resulting. Nor were these switches adequate for X.25 host traffic.
Ideas such as Epad, a more user-friendly menu-based interface than X.28, were rendered obsolete by the advent of windowed clients on PCs.
As the added value services, collectively named PSS Plus, added significant costs and headcount while contributing virtually no revenue, a change in PSS's management eventually resulted. A decision was eventually made to put some of the basic network services people into senior positions and to try to launch what had been developed, but this proved to be a major mistake. An exodus of the people who had been developing the value added network services helped reduce some costs, but significant ongoing expenditure had already been committed to manufacturing packet switch hardware and to using the very expensive Tandem computers in existing VANS. Operating profit was still not achieved, and a further change in management followed, with McKinsey consultants being called in.
McKinsey's recommendation that increasing revenue while cutting costs was required to turn the business around was duly followed by the new management, and an operating profit was achieved in about 1988. This rested on running PSS efficiently and cutting the VANS as much as possible. PSS was then merged with other failing businesses, such as Prestel, as it became part of a larger Managed Network Services division that was used to fix or close BT's problem businesses.
Later history
PSS eventually went the way of all X.25 networks, overwhelmed by the internet and, more significantly, by the internet's superior application suite and cost model. BT did not capitalise as well as other packet-switch operators, owing to subsequent mistakes concerning the internet, Tymnet, BT's North American operations and the Concert Global Services venture with AT&T.
BT's failure to become the major ISP in its own home market, unlike every other former PTT, and the success of Dixons' Freeserve, Demon and Energis-based virtual ISPs in the same sector, have only recently been recovered from. This changed only after BT replaced its most senior management, who were fixated on circuit switching and ISDN based on System X/Y telephone exchanges, and embraced broadband and the internet lock, stock and barrel. An emergency rights issue also helped resolve the debt from acquiring second- or third-ranked old-style telcos around the world.
Now BT appears to be inheriting a dominant position in the global network services market, based on packet switching, as CSC and Reuters sell their networks to BT, and as the commodity price of IP services carried over its core 21st-century MPLS network for voice and data finally gives it the real cost efficiencies that packet switching always promised.
See also
Internet in the United Kingdom § History
NPL network
Telecommunications in the United Kingdom
External links
Pictures of the BT PSS equipment
BT Group
General Post Office
History of computing in the United Kingdom
History of telecommunications in the United Kingdom
Packets (information technology)
X.25 |
313793 | https://en.wikipedia.org/wiki/Medium%20of%20exchange | Medium of exchange | In economics, a medium of exchange is any item that is widely acceptable in exchange for goods and services. In modern economies, the most commonly used medium of exchange is currency.
Mediums of exchange are assumed to have arisen in human societies in antiquity, as awareness grew of the limitations of barter. The form of the medium of exchange follows that of a token, which has been further refined as money. A medium of exchange is considered one of the functions of money. It acts as an intermediary instrument that can be used to acquire any good or service, avoiding the limitation of barter, where what one party wants has to be matched with what the other has to offer.
Most forms of money are categorised as mediums of exchange, including commodity money, representative money, cryptocurrency, and most commonly fiat money. Representative and fiat money most widely exist in digital form as well as physical tokens, for example coins and notes.
Overcoming the limitations of barter
In a barter transaction, one valuable good is exchanged for another of approximately equivalent value. William Stanley Jevons described how a widely accepted medium allows each barter exchange to be split into a sale and a purchase, overcoming the difficulties of barter described below. A medium of exchange is deemed to eliminate the need for a coincidence of wants.
Want of coincidence
A barter exchange requires a party who has what you desire and desires what you have. A medium of exchange removes that requirement, allowing an individual to sell and buy from various parties via an intermediary instrument.
Want of a measure of value
A barter market theoretically requires that a value be known for every pair of commodities, which is both impractical to arrange and impractical to maintain. If all exchanges go 'through' an intermediate medium, such as money, then goods can be priced in terms of that one medium. The medium of exchange therefore allows the relative values of items in the marketplace to be set and adjusted with ease. This is a dimension of the modern fiat money system referred to as a "unit of account".
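To illustrate the scale of the problem (an added arithmetic example, not from the original text): direct barter among n commodities requires a separate exchange ratio for every pair of goods, whereas pricing everything in one medium requires only n prices:

$$ \binom{n}{2} = \frac{n(n-1)}{2} \ \text{pairwise ratios; e.g. for } n = 100 \text{ goods, } \frac{100 \cdot 99}{2} = 4950 \ \text{ratios versus just } 100 \ \text{prices.} $$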
Want of means of subdivision
A barter transaction requires that both objects being bartered be of equivalent value. A medium of exchange is able to be subdivided into small enough units to approximate the value of any good or service.
Transactions over time
A barter transaction typically happens on the spot or over a short period of time. Money, on the other hand, also functions as a store of value, until what is wanted becomes available.
Mutual impedance with store-of-value function
An ideal medium of exchange is spread throughout the marketplace to allow individuals with exchange potential to buy and sell. When money serves the function of a store of value, as fiat money does, there are conflicting drivers of monetary policy. This is because a store of value can become more valuable if it is scarce in the marketplace. When the medium of exchange is scarce, traders will pay to rent it (interest), which acts as an impedance to trade. In stable or deflationary environments, interest is a net transfer of wealth from debtor to creditor with the opposite transfer under inflationary environments.
Medium of exchange and measure of value
Fiat currencies function as money with "no intrinsic value" but rather with exchange value, which facilitates a measurable value of exchange. The market measures or sets the real value of various goods and services using the medium of exchange as a unit of measure, i.e., the standard or yardstick for the measurement of wealth. There is no other alternative to the mechanism used by the market to set, determine, or measure the value of various goods and services. Determination of price is an essential condition for justice in exchange, efficient allocation of resources, economic growth, welfare and justice.
The most important and essential function of a medium of exchange is to be widely acceptable and have relatively stable purchasing power (real value). Therefore, the following characteristics are essential:
Value common assets
Common and accessible
Constant utility
Low cost of preservation
Transportability
Divisibility
High market value in relation to volume and weight
Recognisability
Resistance to counterfeiting
To serve as a measure of value, a medium of exchange requires constant inherent value of its own or must be firmly linked to a definite basket of goods and services. Furthermore, constant intrinsic value and stable purchasing power are needed. Gold was long popular as a medium of exchange and store of value because it is chemically inert and convenient to move, since even small amounts of gold have a considerable and relatively constant value.
Some critics of the prevailing system of fiat money argue that fiat money is the root cause of the continuum of economic crises, since it leads to the dominance of fraud, corruption, and manipulation, precisely because it does not satisfy the criteria for a medium of exchange cited above. Specifically, prevailing fiat money is free-floating and, depending upon its supply, the market finds or sets a value for it that continues to change as the supply of money shifts with respect to the economy's demand. Increasing the free-floating money supply relative to the needs of the economy reduces the basket of goods and services that a unit of money commands. Therefore, it is not a unit or standard measure of wealth, and such manipulation impedes the market mechanism for setting or determining just prices. This leads to a situation where no value-related economic data is just or reliable. On the other hand, Chartalists claim that the ability to manipulate the value of fiat money is an advantage, in that fiscal stimulus is more easily available in times of economic crisis.
Requisites needed
Although the unit of account must be in some way related to the medium of exchange in use, e.g. by ensuring that coinage is issued in denominations of that unit, making accounting simpler to perform, it is more often the case that media of exchange have no natural relationship to that unit and must be 'minted' as having that value. Further, there may be variances in the quality of the underlying good which may not have a fully agreed commodity grading. The difference between the two functions becomes obvious when one considers the fact that coins were very often 'shaved': precious metal was removed from them, leaving them still useful as an identifiable coin in the marketplace, for a certain number of units in trade, but no longer containing the quantity of metal supplied by the coin's minter. It was observed as early as Oresme and Copernicus, and then in 1558 by Sir Thomas Gresham, that "bad" money drives out "good" in any marketplace (Gresham's law states "Where legal tender laws exist, bad money drives out good money"). A more precise definition follows: "A currency that is artificially overvalued by law will drive out of circulation a currency that is artificially undervalued by that law." Gresham's law is therefore a specific application of the general law of price controls. A common explanation is that people will always keep the less adulterated, less clipped, less filed, less trimmed coin, and offer the other in the marketplace for the full units for which it is marked. Inevitably, the bad coins are proffered and the good ones retained.
The ability of banks, as financial intermediaries between ultimate savers and borrowers, to generate a medium of exchange marked higher than a fiat currency's store of value is the basis of banking. Central banking is based on the principle that no medium requires more than the guarantee of the state that it can be redeemed for payment of debt as "legal tender"; therefore, all money equally backed by the state is considered good money within that state. So long as that state produces anything of value to others, the medium of exchange has some value, and the currency may also be useful as a standard of deferred payment among others.
Of all functions of money, the medium of exchange function has historically been the most problematic due to counterfeiting, the systematic and deliberate creation of bad money with no authorization to do so, leading to the driving out of the good money entirely.
Other functions rely not on recognition of some token or weight of metal in a marketplace, where time to detect any counterfeit is limited and benefits for successful passing-off are high, but on more stable long term social contracts: one cannot easily force a whole society to accept a different standard of deferred payment, require even small groups of people to uphold a floor price for a store of value, still less to re-price everything and rewrite all accounts to a unit of account (the most stable function). Thus it tends to be the medium of exchange function that constrains what can be used as a form of financial capital.
It was once common in the United States to widely accept a check (cheque) as a medium of exchange, with several parties endorsing it, perhaps multiple times, before it would eventually be deposited for its value in units of account and thus redeemed. This practice became less common as it was exploited by forgers and led to a domino effect of bounced checks – a forerunner of the kind of fragility that electronic systems would eventually bring.
In the age of electronic money it was, and remains, common to use very long strings of difficult-to-reproduce numbers, generated by encryption methods, to authenticate transactions and commitments as having come from trusted parties. Thus the medium of exchange function has become wholly a part of the marketplace and its signals, and is utterly integrated with the unit of account function, so that, given the integrity of the public key system on which these are based, they become to that degree inseparable. This has clear advantages – counterfeiting is difficult or impossible unless the whole system is compromised, say by a new factoring algorithm. But at that point, the entire system is broken and the whole infrastructure is obsolete – new keys must be re-generated and the new system will also depend on some assumptions about difficulty of factoring.
Due to this inherent fragility, which is even more profound with electronic voting, some economists argue that units of account should not ever be abstracted or confused with the nominal units or tokens used in exchange. A medium is simply a medium, and should not be mistaken for the message.
See also
Authentication
Cheque
Commodity money
Forgery
History of money
Identity theft
Private currency
References
Bibliography
Jones, Robert A. "The Origin and Development of Media of Exchange." Journal of Political Economy 84 (Nov. 1976): 757-775.
External links
Linguistic and Commodity Exchanges – Examines the structural differences between barter and monetary commodity exchanges and oral and written linguistic exchanges.
Currency |
313833 | https://en.wikipedia.org/wiki/Swiss%20Army%20knife | Swiss Army knife | The Swiss Army knife is a multi-tool pocketknife manufactured by Victorinox. The term "Swiss Army knife" was coined by American soldiers after World War II after they had trouble pronouncing the German word "Offiziersmesser", meaning "officer's knife".
The Swiss Army knife generally has a main spearpoint blade plus other blades and tools such as screwdrivers, a can opener, a saw blade, a pair of scissors, and many others. These are stowed inside the handle of the knife through a pivot point mechanism. The handle is traditionally a red color, with either a Victorinox or Wenger "cross" logo or, for Swiss military issue knives, the coat of arms of Switzerland. Other colors, textures, and shapes have appeared over the years.
Originating in Ibach, Switzerland, the Swiss Army knife was first produced in 1891 when the Karl Elsener company, which later became Victorinox, won the contract to produce the Swiss Army's Modell 1890 knife from the previous German manufacturer. In 1893, the Swiss cutlery company Paul Boéchat & Cie, which later became Wenger SA, received its first contract from the Swiss military to produce model 1890 knives; the two companies split the contract for provision of the knives from 1908 until Victorinox acquired Wenger in 2005. A cultural icon of Switzerland, both the design of the knife and its versatility have worldwide recognition. The term "Swiss Army knife" has acquired usage as a figure of speech indicating extreme utility applicable to more or less any scenario at hand.
History
Origins
The Swiss Army Knife was not the first multi-use pocket knife. In 1851, in Moby-Dick (chapter 107), Herman Melville mentions the "Sheffield contrivances, assuming the exterior – though a little swelled – of a common pocket knife; but containing, not only blades of various sizes, but also screw-drivers, cork-screws, tweezers, awls, pens, rulers, nail-filers and countersinkers."
During the late 1880s, the Swiss Army decided to purchase a new folding pocket knife for their soldiers. This knife was to be suitable for use by the army in opening canned food and for maintenance of the Swiss service rifle, the Schmidt–Rubin, which required a screwdriver for assembly and disassembly.
In January 1891, the knife received the official designation Modell 1890. The knife had a blade, reamer, can opener, screwdriver, and grips made out of dark oak wood that some say was later partly replaced with ebony wood. At that time no Swiss company had the necessary production capacity, so the initial order for 15,000 knives was placed with the German knife manufacturer Wester & Co. from Solingen, Germany. These knives were delivered in October 1891.
In 1891, Karl Elsener, then owner of a company that made surgical equipment, set out to manufacture the knives in Switzerland itself. At the end of 1891 Elsener began production of the Modell 1890 knives, in direct competition with the Solingen company. He incurred financial losses doing so, as Wester & Co was able to produce the knives at a lower cost. Elsener was on the verge of bankruptcy when, in 1896, he developed an improved knife, intended for the use by officers, with tools attached on both sides of the handle using a special spring mechanism, allowing him to use the same spring to hold them in place. This new knife was patented on 12 June 1897, with a second, smaller cutting blade, a corkscrew, and wood fibre grips, under the name of Schweizer Offiziers- und Sportmesser ("Swiss officer's and sports knife"). While the Swiss military did not commission the knife, it was successfully marketed internationally, restoring Elsener's company to prosperity.
Elsener used the Swiss coat of arms to identify his knives beginning in 1909. With slight modifications, this is still the company logo. Also in 1909, on the death of his mother, Elsener named his company "Victoria", after her given name, in her honour.
In 1893 the second industrial cutler of Switzerland, Paul Boéchat & Cie, headquartered in Delémont in the French-speaking region of Jura, started selling a similar product. Its general manager, Théodore Wenger, acquired the company and renamed it the Wenger Company.
Victorinox and Wenger
In 1908 the Swiss government split the contract between Victorinox and Wenger, placing half the orders with each.
By mutual agreement, Wenger advertised "the Genuine Swiss Army Knife" and Victorinox used the slogan, "the Original Swiss Army Knife".
Elsener's son Carl renamed the company in 1921 to "Victorinox", incorporating the abbreviation "inox" for acier inoxydable, the French term for stainless steel which they started to use that year.
During 1961–2005, the pocket knives issued to the Swiss military were produced exclusively by Victorinox and Wenger.
On 26 April 2005, Victorinox acquired Wenger, once again becoming the sole supplier of knives to the Military of Switzerland. Victorinox at first kept the Wenger brand intact, but on 30 January 2013, the company announced that the Wenger brand of knives would be abandoned in favour of Victorinox.
The press release stated that Wenger's factory in Delémont would continue to produce knives and that all employees at the site would retain their jobs. It further elaborated that an assortment of items from the Wenger line-up would remain in production under the Victorinox brand name. Wenger's US headquarters would be merged with Victorinox's location in Monroe, Connecticut, while Wenger's watch and licensing business would continue as a separate brand: SwissGear.
Up until 2008 Victorinox AG and Wenger SA supplied about 50,000 knives to the military of Switzerland each year, and manufactured many more for export, mostly to the United States. Commercial knives can be distinguished by their cross logos; the Victorinox cross logo is surrounded by a shield while the Wenger cross logo is surrounded by a slightly rounded square.
Victorinox registered the words "Swiss Army" and "Swiss Military" as a trademark in the US and was sued at Bern cantonal commercial court by the Swiss Confederacy (represented by Armasuisse, the authority representing the actual Swiss military), in October 2018. After an initial hearing Victorinox agreed to cede the registration in the United States of the term "Swiss military" to Armasuisse in return for an exclusive licence to market perfumes under the same name.
Features, tools, and parts
Tools and components
There are various models of the Swiss Army knife with different tool combinations. Though Victorinox does not provide custom knives, it has produced many variations to suit individual users.
Main tools:
Large blade, imprinted on the blade shank of Victorinox models with "VICTORINOX SWISS MADE" to verify the knife's authenticity.
Small blade
Nail file / nail cleaner
Nail file / nail cleaner / metal file / metal saw
Wood saw
Fish scaler / hook disgorger / ruler in cm and inches
Scissors
Electrician's blade / wire scraper
Pruning blade
Pharmaceutical spatula (cuticle pusher)
Cyber Tool (bit driver)
Pliers / wire cutter / wire crimper
LED light
USB flash drive
Magnifying lens
Phillips screwdriver
Hoof cleaner
Shackle opener / marlinspike
Can opener / 3 mm slotted screwdriver
Cap opener / 6 mm slotted screwdriver / wire stripper
Combination tool containing cap opener / can opener / 5 mm slotted screwdriver / wire stripper
Smaller tools:
Keyring
Reamer
Multipurpose hook
2mm slotted screwdriver
Chisel
Corkscrew or Phillips driver
Mini screwdriver (designed to fit within the corkscrew)
Scale tools:
Tweezers
Toothpick
Pressurized ballpoint pen (retractable on some smaller models; can also be used to set DIP switches)
Stainless pin
Digital clock / alarm / timer / altimeter / thermometer / barometer
Three Victorinox SAK models had a butane lighter: the Swissflame, Campflame, and Swisschamp XXLT, first introduced in 2002 and then discontinued in 2005. The models were never sold in the United States due to lack of safety features. They used a standard piezoelectric ignition system for easy and quick ignition with adjustable flame, and were designed for operation at altitudes up to above sea level and continuous operation of 10 minutes.
In January 2010, Victorinox announced the Presentation Master models, released in April 2010. The technological tools included a laser pointer, and detachable flash drive with fingerprint reader. Victorinox now sells an updated version called the Slim Jetsetter, with "a premium software package that provides ultra secure data encryption, automatic backup functionality, secure web surfing capabilities, file and email synchronization between the drive and multiple computers, Bluetooth pairing and much more. On the hardware side of things, biometric fingerprint technology, laser pointers, LED lights, Bluetooth remote control and of course, the original Swiss Army Knife implements – blade, scissors, nail file, screwdriver, key ring and ballpoint pen are standard. **Not every feature is available on every model within the collection."
In 2006, Wenger produced a knife called "The Giant" that included every implement the company ever made, with 87 tools and 141 different functions. It was recognized by Guinness World Records as the world's most multifunctional penknife. It retails for about €798 or US$1,000, though some vendors charge much higher prices.
In the same year, Victorinox released the SwissChamp XAVT, consisting of 118 parts and 80 functions with a retail price of $425. The Guinness Book of Records recognizes a unique 314-blade Swiss Army-style knife made in 1991 by Master Cutler Hans Meister as the world's largest penknife, weighing .
Locking mechanisms
Some Swiss Army knives have locking blades to prevent accidental closure. Wenger was the first to offer a "PackLock" for the main blade on several of their standard 85 mm models. Several large Wenger and Victorinox models have a locking blade secured by a slide lock that is operated with an unlocking button integrated in the scales. Some Victorinox 111 mm series knives have a double liner lock that secures both the cutting blade and the large slotted screwdriver/cap opener/wire stripper combination tool, which is designed for prying.
Design and materials
Rivets and flanged bushings made from brass hold all machined steel parts and other tools, separators and the scales together. The rivets are made by cutting and pointing appropriately sized bars of solid brass.
The separators between the tools have been made from aluminium alloy since 1951. This makes the knives lighter. Previously these separating layers were made of nickel-silver.
The martensitic stainless steel alloy used for the cutting blades is optimized for high toughness and corrosion resistance and has a composition of 15% chromium, 0.60% silicon, 0.52% carbon, 0.50% molybdenum, and 0.45% manganese and is designated X55CrMo14 or DIN 1.4110 according to Victorinox. After a hardening process at 1040 °C and annealing at 160 °C the blades achieve an average hardness of 56 HRC. This steel hardness is suitable for practical use and easy resharpening, but less than achieved in stainless steel alloys used for blades optimized for high wear resistance. According to Victorinox the martensitic stainless steel alloy used for the other parts is X39Cr13 (aka DIN 1.4031, AISI/ASTM 420) and for the springs X20Cr13 (aka DIN 1.4021, but still within AISI/ASTM 420).
The steel used for the wood saws, scissors and nail files has a steel hardness of HRC 53, the screwdrivers, tin openers and awls have a hardness of HRC 52, and the corkscrew and springs have a hardness of HRC 49.
The metal saws and files, in addition to the special case hardening, are also subjected to a hard chromium plating process so that iron and steel can also be filed and cut.
Although red cellulose acetate butyrate (CAB) scales (sold under trade names such as Cellidor, Tenite and Tenex) are the most common on Swiss Army knives, scales are also available in many colors and in alternative materials such as more resilient nylon and aluminium. Many textures, colors and shapes now appear on the Swiss Army knife. Since 2006 the scales on some knife models have incorporated textured rubber non-slip inlays, intended to give sufficient grip with moist or wet hands; the rubber also provides some impact protection for such edged scales. A modding community has also developed, ranging from professionally produced custom models combining novel materials, colors, finishes and occasionally new tools (such as firesteels or tool 'blades' mounting replaceable surgical scalpel blades) to the replacement of standard scales (handles) with new versions in natural materials such as buffalo horn. In addition to 'limited edition' production runs, numerous examples from basic to professional-level customizations of standard knives (such as retrofitting pocket clips, one-off scales created using 3D printing techniques, decoration using anodization and new scale materials) can be found by searching for "SAK mods".
Assembly
During assembly, all components are placed on several brass rivets. The first components are generally an aluminium separator and a flat steel spring. Once a layer of tools is installed, another separator and spring are placed for the next layer of tools. This process is repeated until all the desired tool layers and the finishing separator are installed. Once the knife is built, the metal parts are fastened by adding brass flanged bushings to the rivets. The excess length of the rivets is then cut off to make them flush with the bushings. Finally, the remaining length of the rivets is flattened into the flanged bushings.
After the assembly of the metal parts, the blades on smaller knives are sharpened to a 15° angle, resulting in a 30° V-shaped steel cutting edge. From sized knives the blades are sharpened to a 20° angle, resulting in a 40° V-shaped steel cutting edge. Chisel ground blades are sharpened to a 25° angle, resulting in a 25° asymmetric steel cutting edge where only one side is ground and the other is deburred and remains flat. The blades are then checked with a laser-reflecting goniometer to verify the angle of the cutting edges.
Finally, scales are applied. Slightly undersized holes incorporated into the inner surface enclose the bushings, which have truncated cone cross-section and are slightly undercut, forming a one-way interference fit when pressed into the generally softer and more elastic scale material. The result is a tight adhesive-free connection that nonetheless permits new identical-pattern scales to be quickly and easily applied.
Sizes
Victorinox models are available in , , , , , , and lengths when closed. The thickness of the knives varies depending on the number of tool layers included. The models offer the most variety in tool configurations in the Victorinox model line with as many as 15 layers.
Wenger models are available in , , , , and lengths when closed. Thickness varies depending on the number of tool layers included. The models offer the most variety in tool configurations in the Wenger model line, with as many as 10 layers.
Knives issued by the Swiss Armed Forces
Since the first issue as personal equipment in 1891 the Soldatenmesser (Soldier Knives) issued by the Swiss Armed Forces have been revised several times. There are five different main Modelle (models). Their model numbers refer to the year of introduction in the military supply chain. Several main models have been revised over time and therefore exist in different Ausführungen (executions), also denoted by the year of introduction. The issued models of the Swiss Armed Forces are:
Modell 1890
Modell 1890 Ausführung 1901
Modell 1908
Modell 1951
Modell 1951 Ausführung 1954
Modell 1951 Ausführung 1957
Modell 1961
Modell 1961 Ausführung 1965
Modell 1961 Ausführung 1978
Modell 1961 Ausführung 1994
Soldatenmesser 08 (Soldier Knife 08)
Soldier Knives are issued to every recruit or member of the Swiss Armed Forces and the knives issued to officers have never differed from those issued to non-commissioned officers and privates. A model incorporating a corkscrew and scissors was produced as an officer's tool, but was deemed not "essential for survival", leaving officers to purchase it individually.
Soldier knife model 1890
The Soldier Knife model 1890 had a spear point blade, reamer, can-opener, screwdriver and grips made out of oak wood scales (handles) that were treated with rapeseed oil for greater toughness and water-repellency, which made them black in color. The wooden grips of the Modell 1890 tended to crack and chip so in 1901 these were changed to a hard reddish-brown fiber similar in appearance to wood. The knife was long, thick and weighed .
Soldier knife model 1908
The Soldier Knife model 1908 had a clip point blade rather than the 1890s spear point blade, still with the fiber scales, carbon steel tools, nickel-silver bolster, liners, and divider. The knife was long, thick and weighed . The contract with the Swiss Army split production equally between the Victorinox and Wenger companies.
Soldier knife model 1951
The soldier Knife model 1951 had fiber scales, nickel-silver bolsters, liners, and divider, and a spear point blade. This was the first Swiss Armed Forces issue model where the tools were made of stainless steel. The screwdriver now had a scraper arc on one edge. The knife was long, thick and weighed .
Soldier knife model 1961
The Soldier Knife model 1961 has a long knurled alox handle with the Swiss crest, a drop point blade, a reamer, a blade combining bottle opener, screwdriver, and wire stripper, and a combined can-opener and small screwdriver. The knife was thick and weighed
The 1961 model also contains a brass spacer, which allows the knife, with the screwdriver and the reamer extended simultaneously, to be used to assemble the SIG 550 and SIG 510 assault rifles: the knife serves as a restraint to the firing pin during assembly of the lock. The Soldier Knife model 1961 was manufactured only by Victorinox and Wenger and was the first issued knife bearing the Swiss Coat of Arms on the handle.
Soldier knife 08
In 2007 the Swiss Government made a request for new updated soldier knives for the Swiss military for distribution in late 2008. The evaluation phase of the new soldier knife began in February 2008, when Armasuisse issued an invitation to tender. A total of seven suppliers from Switzerland and other countries were invited to participate in the evaluation process. Functional models submitted by suppliers underwent practical testing by military personnel in July 2008, while laboratory tests were used to assess compliance with technical requirements. A cost-benefit analysis was conducted and the model with the best price/performance ratio was awarded the contract. The order for 75,000 soldier knives plus cases was worth . This equates to a purchase price per knife plus case as of October 2009.
Victorinox won the contest with a knife based on the One-Hand German Army Knife as issued by the German Bundeswehr and released in the civilian model lineup, with the addition of a toothpick and tweezers stored in the nylon grip scales (side cover plates), as the One-Hand Trekker/Trailmaster model. Mass production of the new Soldatenmesser 08 (Soldier Knife 08) for the Swiss Armed Forces started in December 2008, and the knife was first issued to the Swiss Armed Forces beginning with the first basic training sessions of 2009.
The Soldier Knife 08 has an long ergonomic dual-density handle with non-slip TPU (a rubbery thermoplastic elastomer) inlays incorporated in the green polyamide 6 grip shells, a double liner locking system, a one-hand long locking drop point blade with a partly wavy-serrated chisel grind (optimized for right-handed use), a wood saw, a can opener with a small slotted screwdriver, a locking bottle opener with a large slotted screwdriver and wire stripper/bender, a reamer, a Phillips (PH2) screwdriver and a diameter split keyring. The Soldier Knife 08 width is , thickness is , overall length opened is and it weighs . The Soldier Knife 08 is manufactured only by Victorinox.
Knives issued by other militaries
The armed forces of more than 20 different nations have issued or approved the use of various versions of Swiss army knives made by Victorinox, among them the forces of Germany, France, the Netherlands, Norway, Malaysia and the United States (NSN 1095-01-653-1166 Knife, Combat).
Space program
The Swiss Army knife has been present in space missions carried out by NASA since the late 1970s. In 1978, NASA sent a letter of confirmation to Victorinox regarding a purchase of 50 knives known as the Master Craftsman model. In 1985, Edward M. Payton, brother of astronaut Gary E. Payton, sent a letter to Victorinox, asking about getting a Master Craftsman knife after seeing the one his brother used in space. There are other stories of repairs conducted in space using a Swiss Army knife.
Cultural impact
The Swiss Army knife has been added to the collection of the Museum of Modern Art in New York and Munich's State Museum of Applied Art for its design. The term "Swiss Army" currently is a registered trademark owned by Victorinox AG and its subsidiary, Wenger SA.
In both the original television series MacGyver as well as its 2016 reboot, character Angus MacGyver frequently uses different Swiss Army knives in various episodes to solve problems and construct simple objects.
The term "Swiss Army knife" has entered popular culture as a metaphor for usefulness and adaptability. The multi-purpose nature of the tool has also inspired a number of other gadgets.
A particularly large Wenger knife model, Wenger 16999, has inspired a large number of humorous reviews on Amazon.
When Judge Roger Benitez of the U.S. District Court for the Southern District of California overturned California's 30-year-old ban on assault weapons in Miller v. Bonta, he compared the Swiss Army knife to the AR-15 rifle in the first sentence of his opinion: "Like the Swiss Army Knife, the popular AR-15 rifle is a perfect combination of home defense weapon and homeland defense equipment." In response, California Governor Gavin Newsom wrote that the comparison "completely undermines the credibility of this decision".
See also
Gerber multitool
Leatherman
Pocketknife
Swiss Army Man, a 2016 film that uses absurdist humor to manipulate a man's corpse like a multi-tool
References
Further reading
The Knife and its History – Written on the occasion of the centennial anniversary of Victorinox. Printed in Switzerland in 1984. Begins with 117 pages covering the history of world cutlery, beginning in the Stone Age; many black-and-white prints from old books. 72 pages on the history of the Victorinox company; color photos of the factory, production, and knives. There is also a German edition, Das Messer und seine Geschichte. A large-format hardback.
Swiss Army Knife Companion: The Improbable History of the World's Handiest Knife, by Rick Wall. Printed in US, 1986. A joking view of the SAK. 61 pages, paperback booklet. Rick was the president of the now-defunct Swiss Army Knife Society.
Swiss Army Knife Handbook: The Official History and Owner's Guide, by Kathryn Kane. Printed in US, 1988. Practical information on the tools, modifications, uses. Good drawings, done by the author. 93 pages, paperback booklet. Published by the Swiss Army Knife Society.
Die Lieferanten von Schweizer Soldatenmessern Seit 1891, by Martin Frosch, a binder-format in German with drawings dealing mainly with the technical details of the Soldier model up through 1988.
A Collector's Guide to Victorinox 58 mm Pocket Knives. Published about 1990 by the author, Daniel J. Jacquart, President of the Victorinox SAK Society. 173 pages enumerating the models, scale materials, colors. Binder format with black & white photos.
A Fervour Over Knives: Celebrating the centennial of Wenger. Printed in Switzerland in 1993. Eight pages on the history of cutlery, 28 pages on the Delémont region of the 19th century, its iron, forges, waters, businesses. 97 pages on the Wenger company; striking color photographs of production and knives. 1200 copies in French, 800 in German, 500 in English. Large-format hardback, wider than tall.
Swiss Army Knives: A Collectors Companion, by Derek Jackson. Published in London, printed in the United Arab Emirates, 1999; a 2nd edition printed in China, 2003. 35 pages on the history of cutlery; 157 pages on Victorinox knives, brief history of the company, almost no mention of Wenger; no history of models or development of tools; nice photographs. Much of it is material reproduced from Victorinox's The Knife and its History. A first boxed edition included a Soldier with Carl Elsener's signature engraved on the blade; the second edition was sometimes accompanied by one of a limited run (1 of 5,000) 2008 Soldier, last of the Model 1961.
A friend in need, printed by Victorinox. The first edition no title and no date; a second edition dated 2003. 60 pages (2nd edition 56 pages) of true stories about lives saved, emergencies handled, situations resolved with the SAK. A small pamphlet.
The Swiss Army Knife, by Peter Hayden. Printed in England, 2005. A children's story in which an SAK plays a briefly passing role. With humorous illustrations. 63 pages paperback.
The Swiss Army Knife Owner's Manual, by Michael M. Young, 2011. Published by the author, printed in the USA. 224 page paperback with 96 color photos and several drawings. Comprehensive in breadth and depth, literate and sometimes humorous. Chapters on the history of the Victorinox and Wenger companies and the factories, the development of the Soldier and Officer models, charts of the main models made by both companies, care and safe use, improvised uses, results of physical tests, repairs and modifications, true stories.
Les couteaux du soldat de l'Armée suisse, by Robert Moix, 2013. An informative summary in French, with many photos, of the many types and the various manufacturers of the pocketknife issued to the Swiss Army.
Victorinox Swiss Army Knife Whittling Book, by Chris Lubkemann, 2015. "43 easy projects" to carve with an SAK.
External links
Victorinox manufacturer's website
SAKWiki
Products introduced in 1891
Camping equipment
Mechanical hand tools
Pocket knives
Swiss inventions
Victorinox |
315533 | https://en.wikipedia.org/wiki/Churn | Churn | Churn may refer to:
Churn drill, a large-diameter drilling machine for boring large holes in the ground
Dairy-product terms
Butter churn, device for churning butter
Churning (butter), the process of creating butter out of milk or cream
Milk churn, container for milk transportation
Chuck Churn (born 1930), Major League Baseball pitcher in 1957–1959
Geography
Devils Churn, Pacific inlet in Lincoln County, Oregon, U.S.
England:
River Churn, river running through Gloucestershire
Churn railway station (inactive)
British Columbia, Canada:
Churn Creek
Churn Creek Provincial Park, in Churn Creek Protected Area
Music
Churn (Shihad album), 1993
Churn (Seven Mary Three album), 1994
Business
Product churning, a business practice whereby more of the product is sold than is beneficial to the consumer
Churning (stock trade), the excessive buying and selling of a client's stocks by a trader to generate large commission fees
Churn rate, a measure of the number of individuals or items moving into or out of a collective over a specific period of time.
Technology
Churning (cipher), an encryption function used in the ITU G.983.1 standard.
Other
Manthan, an Indian word meaning churn |
315794 | https://en.wikipedia.org/wiki/SD%20card | SD card | Secure Digital, officially abbreviated as SD, is a proprietary non-volatile memory card format developed by the SD Association (SDA) for use in portable devices.
The standard was introduced in August 1999 by joint efforts between SanDisk, Panasonic (Matsushita) and Toshiba as an improvement over MultiMediaCards (MMCs), and has become the industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property rights associated with SD memory cards and SD host and ancillary products.
The companies also formed the SD Association (SDA), a non-profit organization, in January 2000 to promote and create SD Card standards. SDA today has about 1,000 member companies. The SDA uses several trademarked logos owned and licensed by SD-3C to enforce compliance with its specifications and assure users of compatibility.
History
1999–2002: Creation
In 1999, SanDisk, Panasonic (Matsushita), and Toshiba agreed to develop and market the Secure Digital (SD) Memory Card. The card was derived from the MultiMediaCard (MMC) and provided digital rights management based on the Secure Digital Music Initiative (SDMI) standard and for the time, a high memory density.
It was designed to compete with the Memory Stick, a DRM product that Sony had released the year before. Developers predicted that DRM would induce wide use by music suppliers concerned about piracy.
The trademarked "SD" logo was originally developed for the Super Density Disc, which was the unsuccessful Toshiba entry in the DVD format war. For this reason the D within the logo resembles an optical disc.
At the 2000 Consumer Electronics Show (CES) trade show, the three companies announced the creation of the SD Association (SDA) to promote SD cards. The SD Association, headquartered in San Ramon, California, United States, started with about 30 companies and today consists of about 1,000 product manufacturers that make interoperable memory cards and devices. Early samples of the SD card became available in the first quarter of 2000, with production quantities of 32 and 64 MB cards available three months later.
2003: Mini cards
The miniSD form was introduced at March 2003 CeBIT by SanDisk Corporation which announced and demonstrated it. The SDA adopted the miniSD card in 2003 as a small form factor extension to the SD card standard. While the new cards were designed especially for mobile phones, they are usually packaged with a miniSD adapter that provides compatibility with a standard SD memory card slot.
2004–2005: Micro cards
The microSD removable miniaturized Secure Digital flash memory cards were originally named T-Flash or TF, abbreviations of TransFlash. TransFlash and microSD cards are functionally identical allowing either to operate in devices made for the other. microSD (and TransFlash) cards are electrically compatible with larger SD cards and can be used in devices that accept SD cards with the help of a passive adapter, which contains no electronic components, only metal traces connecting the two sets of contacts. Unlike the larger SD cards, microSD does not offer a mechanical write protect switch, thus an operating-system-independent way of write protecting them does not exist in the general case. SanDisk conceived microSD when its Chief Technology Officer (CTO) and the CTO of Motorola concluded that current memory cards were too large for mobile phones.
The card was originally called T-Flash, but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that T-Mobile owned the trademark on T-(anything), and the name was changed to TransFlash.
At CTIA Wireless 2005, the SDA announced the small microSD form factor along with SDHC secure digital high capacity formatting in excess of 2 GB with a minimum sustained read and write speed of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard. The SDA approved the final microSD specification on July 13, 2005. Initially, microSD cards were available in capacities of 32, 64, and 128 MB.
The Motorola E398 was the first mobile phone to contain a TransFlash (later microSD) card. A few years later, its competitors began using microSD cards.
2006–2008: SDHC and SDIO
The SDHC format, announced in January 2006, brought improvements such as 32 GB storage capacity and mandatory support for FAT32 file system. In April, the SDA released a detailed specification for the non-security related parts of the SD memory card standard and for the Secure Digital Input Output (SDIO) cards and the standard SD host controller.
In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the miniSDHC card has the same form factor as the older miniSD card but the HC card requires HC support built into the host device. Devices that support miniSDHC work with miniSD and miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD card. Since 2008, miniSD cards are no longer produced, due to market domination of the even smaller microSD cards.
2009–2019: SDXC
The storage density of memory cards increased significantly throughout the 2010s, allowing even the earliest devices to support the SDXC standard – such as the Samsung Galaxy S III and Samsung Galaxy Note II mobile phones – to expand their available storage to several hundred gigabytes.
2009
In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s. SDXC cards are formatted with the exFAT filesystem by default. SDXC was announced at Consumer Electronics Show (CES) 2009 (January 7–10). At the same show, SanDisk and Sony also announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s. But only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D (also known as Rebel T2i) Digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed.
2010
In early 2010, commercial SDXC cards appeared from Toshiba (64 GB), Panasonic (64 GB and 48 GB), and SanDisk (64 GB).
2011
In early 2011, Centon Electronics, Inc. (64 GB and 128 GB) and Lexar (128 GB) began shipping SDXC cards rated at Speed Class 10. Pretec offered cards from 8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC card. Kingmax released a comparable product in 2011.
2012
In April 2012, Panasonic introduced MicroP2 card format for professional video applications. The cards are essentially full-size SDHC or SDXC UHS-II cards, rated at UHS Speed Class U1. An adapter allows MicroP2 cards to work in current P2 card equipment.
2013
Panasonic MicroP2 cards shipped in March 2013 and were the first UHS-II compliant products on the market; the initial offering included a 32 GB SDHC card and a 64 GB SDXC card. Later that year, Lexar released the first 256 GB SDXC card, based on 20 nm NAND flash technology.
2014
In February 2014, SanDisk introduced the first 128 GB microSDXC card, which was followed by a 200 GB microSDXC card in March 2015. September 2014 saw SanDisk announce the first 512 GB SDXC card.
2016
Samsung announced the world's first EVO Plus 256 GB microSDXC card in May 2016, and in September 2016 Western Digital (SanDisk) announced that a prototype of the first 1 TB SDXC card would be demonstrated at Photokina.
2017
In August 2017, SanDisk launched a 400 GB microSDXC card.
2018
In January 2018, Integral Memory unveiled its 512 GB microSDXC card. In May 2018, PNY launched a 512 GB microSDXC card. In June 2018 Kingston announced its Canvas series of MicroSD cards which were capable of capacities up to 512 GB, in three variations, Select, Go!, and React.
2019
In February 2019, Micron and SanDisk unveiled their microSDXC cards of 1 TB capacity.
2019–present: SDUC
The Secure Digital Ultra Capacity (SDUC) format supports cards up to 128 TB and offers speeds up to 985 MB/s.
Capacity
Secure Digital includes five card families available in three sizes. The five families are the original Standard-Capacity (SDSC), the High-Capacity (SDHC), the eXtended-Capacity (SDXC), the Ultra-Capacity (SDUC) and the SDIO, which combines input/output functions with data storage. The three form factors are the original size, the mini size, and the micro size. Electrically passive adapters allow a smaller card to fit and function in a device built for a larger card. The SD card's small footprint is an ideal storage medium for smaller, thinner, and more portable electronic devices.
SD (SDSC)
The second-generation Secure Digital (SDSC or Secure Digital Standard Capacity) card was developed to improve on the MultiMediaCard (MMC) standard, which continued to evolve, but in a different direction. Secure Digital changed the MMC design in several ways:
Asymmetrical shape of the sides of the SD card prevent inserting it upside down (whereas an MMC goes in most of the way but makes no contact if inverted).
Most SD cards are thick, compared to for MMCs. The SD specification defines a card called Thin SD with a thickness of 1.4 mm, but they occur only rarely, as the SDA went on to define even smaller form factors.
The card's electrical contacts are recessed beneath the surface of the card, protecting them from contact with a user's fingers.
The SD specification envisioned capacities and transfer rates exceeding those of MMC, and both of these functionalities have grown over time. For a comparison table, see below.
While MMC uses a single pin for data transfers, the SD card added a four-wire bus mode for higher data rates.
The SD card added Content Protection for Recordable Media (CPRM) security circuitry for digital rights management (DRM) content-protection.
Addition of a write-protect notch
Full-size SD cards do not fit into the slimmer MMC slots, and other issues also affect the ability to use one format in a host device designed for the other.
SDHC
The Secure Digital High Capacity (SDHC) format, announced in January 2006 and defined in version 2.0 of the SD specification, supports cards with capacities up to 32 GB. The SDHC trademark is licensed to ensure compatibility.
SDHC cards are physically and electrically identical to standard-capacity SD cards (SDSC). The major compatibility issues between SDHC and SDSC cards are the redefinition of the Card-Specific Data (CSD) register in version 2.0 (see below), and the fact that SDHC cards are shipped preformatted with the FAT32 file system.
Version 2.0 also introduces a High-speed bus mode for both SDSC and SDHC cards, which doubles the original Standard Speed clock to produce 25 MB/s.
SDHC host devices are required to accept older SD cards. However, older host devices do not recognize SDHC or SDXC memory cards, although some devices can do so through a firmware upgrade. Older Windows operating systems released before Windows 7 require patches or service packs to support access to SDHC cards.
SDXC
The Secure Digital eXtended Capacity (SDXC) format, announced in January 2009 and defined in version 3.01 of the SD specification, supports cards up to 2 TB, compared to a limit of 32 GB for SDHC cards in the SD 2.0 specification. SDXC adopts Microsoft's exFAT file system as a mandatory feature.
Version 3.01 also introduced the Ultra High Speed (UHS) bus for both SDHC and SDXC cards, with interface speeds from 50 MB/s to 104 MB/s for the four-bit UHS-I bus. (This figure has since been exceeded: SanDisk's formerly proprietary technology reaches 170 MB/s read, Lexar's 1066x line runs at 160 MB/s read and 120 MB/s write over UHS-I, and Kingston's Canvas Go! Plus also reaches 170 MB/s.)
Version 4.0, introduced in June 2011, allows speeds of 156 MB/s to 312 MB/s over the four-lane (two differential lanes) UHS-II bus, which requires an additional row of physical pins.
Version 5.0 was announced in February 2016 at CP+ 2016, and added "Video Speed Class" ratings for UHS cards to handle higher resolution video formats like 8K. The new ratings define a minimal write speed of 90 MB/s.
SDUC
The Secure Digital Ultra Capacity (SDUC) format, described in the SD 7.0 specification, and announced in June 2018, supports cards up to 128 TB and offers speeds up to 985 MB/s, regardless of form factor, either micro or full size, or interface type including UHS-I, UHS-II, UHS-III or SD Express. The SD Express interface can also be used with SDHC and SDXC cards.
exFAT filesystem
SDXC and SDUC cards are normally formatted using the exFAT file system, thereby restricting their out-of-the-box use to a limited set of operating systems. Therefore, exFAT-formatted SDXC cards are not a universally readable exchange medium. However, SD cards can be reformatted to any file system required.
Windows Vista (SP1) and later and OS X (10.6.5 and later) have native support for exFAT. (Windows XP and Server 2003 can support exFAT via an optional update from Microsoft.)
Most BSD and Linux distributions did not, for legal reasons, though Microsoft later open-sourced the exFAT specification and permitted the inclusion of an exFAT driver beginning with Linux kernel 5.4. Users of older kernels or BSD can manually install third-party implementations of exFAT (as a FUSE module) in order to be able to mount exFAT-formatted volumes. However, SDXC cards can be reformatted to use any file system (such as ext4, UFS, or VFAT), alleviating the restrictions associated with exFAT availability.
Except for the change of file system, SDXC cards are mostly backward compatible with SDHC readers, and many SDHC host devices can use SDXC cards if they are first reformatted to the FAT32 file system.
Nevertheless, in order to be fully compliant with the SDXC card specification, some SDXC-capable host devices are firmware-programmed to expect exFAT on cards larger than 32 GB. Consequently, they may not accept SDXC cards reformatted as FAT32, even if the device supports FAT32 on smaller cards (for SDHC compatibility). Therefore, even if a file system is supported in general, it is not always possible to use alternative file systems on SDXC cards at all depending on how strictly the SDXC card specification has been implemented in the host device. This bears a risk of accidental loss of data, as a host device may treat a card with an unrecognized file system as blank or damaged and reformat the card.
The SD Association provides a formatting utility for Windows and Mac OS X that checks and formats SD, SDHC, SDXC, and SDUC cards.
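For hosts that only understand FAT32, the reformatting mentioned above can also be done from the command line; the following minimal Python sketch assumes a Linux system with dosfstools installed, uses a placeholder device path, and is not a substitute for the SD Association's own formatter, which remains the recommended tool.

# Hedged sketch: reformat an SDXC card partition as FAT32 on Linux so that
# FAT32-only SDHC hosts can use it. DEVICE is a placeholder - double-check it,
# since formatting the wrong device destroys its contents.
import subprocess

DEVICE = "/dev/sdX1"   # hypothetical partition node; replace with the real one

# mkfs.vfat ships with dosfstools; -F 32 selects FAT32 and -n sets the volume label.
subprocess.run(["mkfs.vfat", "-F", "32", "-n", "SDCARD", DEVICE], check=True)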
Comparison
Speed
SD card speed is customarily rated by its sequential read or write speed. The sequential performance aspect is the most relevant for storing and retrieving large files (relative to block sizes internal to the flash memory), such as images and multimedia. Small data (such as file names, sizes and timestamps) falls under the much lower speed limit of random access, which can be the limiting factor in some use cases.
With early SD cards, a few card manufacturers specified the speed as a "times" ("×") rating, which compared the average speed of reading data to that of the original CD-ROM drive. This was superseded by the Speed Class Rating, which guarantees a minimum rate at which data can be written to the card.
The newer families of SD card improve card speed by increasing the bus rate (the frequency of the clock signal that strobes information into and out of the card). Whatever the bus rate, the card can signal to the host that it is "busy" until a read or a write operation is complete. Compliance with a higher speed rating is a guarantee that the card limits its use of the "busy" indication.
Bus
Default Speed
In the Default Speed bus mode, SD cards read and write at up to 12.5 MB/s.
High Speed
High Speed mode (up to 25 MB/s) was introduced in specification version 1.10 to support digital cameras.
Ultra High Speed (UHS)
The Ultra High Speed (UHS) bus is available on some SDHC and SDXC cards. The following ultra-high speeds are specified:
UHS-I
Specified in SD version 3.01. Supports a clock frequency of 100 MHz (a quadrupling of the original "Default Speed"), which in four-bit transfer mode can transfer up to 50 MB/s (SDR50). UHS-I cards declared as UHS104 (SDR104) also support a clock frequency of 208 MHz, which can transfer up to 104 MB/s. Double data rate operation at 50 MHz (DDR50) is also specified in version 3.01, and is mandatory for microSDHC and microSDXC cards labeled as UHS-I. In this mode, four bits are transferred when the clock signal rises and another four bits when it falls, transferring an entire byte on each full clock cycle, hence 50 MB/s operation can be achieved using a 50 MHz clock.
There is a proprietary UHS-I extension, primarily from SanDisk, that increases the transfer speed further to 170 MB/s, called DDR208 (or DDR200). Unlike UHS-II, it does not use additional pins; it achieves this by using the 208 MHz frequency of the standard SDR104 mode, but with DDR transfers. This extension has since been used by Lexar for their 1066x series (160 MB/s), the Kingston Canvas Go Plus (170 MB/s) and the MyMemory PRO SD card (180 MB/s).
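The bus-mode figures quoted in this section follow directly from clock frequency, bus width and single- versus double-data-rate signalling; the short Python calculation below merely restates that arithmetic and says nothing about real-world card performance.

# Theoretical SD bus throughput: clock (MHz) x bus width (bits) x transfers per
# clock cycle, divided by 8 to get megabytes per second.
def bus_mb_per_s(clock_mhz: float, bus_bits: int = 4, ddr: bool = False) -> float:
    transfers = 2 if ddr else 1
    return clock_mhz * bus_bits * transfers / 8

print(bus_mb_per_s(25))             # Default Speed: 12.5 MB/s
print(bus_mb_per_s(50))             # High Speed:    25.0 MB/s
print(bus_mb_per_s(100))            # UHS-I SDR50:   50.0 MB/s
print(bus_mb_per_s(208))            # UHS-I SDR104: 104.0 MB/s
print(bus_mb_per_s(50, ddr=True))   # UHS-I DDR50:   50.0 MB/s
print(bus_mb_per_s(208, ddr=True))  # DDR208 ceiling: 208 MB/s (shipping cards reach 160-180 MB/s)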
UHS-II
Specified in version 4.0, further raises the data transfer rate to a theoretical maximum of 156 MB/s (full-duplex) or 312 MB/s (half-duplex) using an additional row of pins (a total of 17 pins for full-size and 16 pins for micro-size cards). While first implementations in compact system cameras were seen three years after specification (2014), it took many more years until UHS-II was implemented on a regular basis. At the beginning of 2021, there were more than 50 DSLR and compact system cameras using UHS-II.
UHS-III
Version 6.0, released in February 2017, added two new data rates to the standard. FD312 provides 312 MB/s while FD624 doubles that. Both are full-duplex. The physical interface and pin-layout are the same as with UHS-II, retaining backward compatibility.
Cards that comply with UHS show Roman numerals 'I', 'II' or 'III' next to the SD card logo, and report this capability to the host device. Use of UHS-I requires that the host device command the card to drop from 3.3-volt to 1.8-volt operation over the I/O interface pins and select the four-bit transfer mode, while UHS-II requires 0.4-volt operation.
The higher speed rates are achieved by using a two-lane low voltage (0.4 V pp) differential interface. Each lane is capable of transferring up to 156 MB/s. In full-duplex mode, one lane is used for Transmit while the other is used for Receive. In half-duplex mode both lanes are used for the same direction of data transfer allowing a double data rate at the same clock speed. In addition to enabling higher data rates, the UHS-II interface allows for lower interface power consumption, lower I/O voltage and lower electromagnetic interference (EMI).
SD Express
The SD Express bus was released in June 2018 with SD specification 7.0. It uses a single PCIe lane to provide full-duplex 985 MB/s transfer speed. Supporting cards must also implement the NVM Express storage access protocol. The Express bus can be implemented by SDHC, SDXC, and SDUC cards. For legacy application use, SD Express cards must also support High Speed bus and UHS-I bus. The Express bus re-uses the pin layout of UHS-II cards and reserves the space for additional two pins that may be introduced in the future.
Hosts which implement version 7.0 of the spec allow SD Cards to do direct memory access, which increases the attack surface of the host dramatically in the face of malicious SD cards.
Version 8.0 was announced on 19 May 2020, with support for two PCIe lanes with additional row of contacts and PCIe 4.0 transfer rates, for a maximum bandwidth of 3938 MB/s.
microSD Express
In February 2019, the SD Association announced microSD Express. The microSD Express cards offer PCI Express and NVMe interfaces, as the June 2018 SD Express release did, alongside the legacy microSD interface for continued backwards compatibility. The SDA also released visual marks to denote microSD Express memory cards to make matching the card and device easier for optimal device performance.
Bus speed Comparison
Compatibility
Note: if a card reader implements the DDR208 mode over the UHS-I pins, it can perform at about 180 MB/s with applicable UHS-I cards.
Class
The SD Association defines standard speed classes for SDHC/SDXC cards indicating minimum performance (minimum serial data writing speed). Both read and write speeds must exceed the specified value. The specification defines these classes in terms of performance curves that translate into the following minimum read-write performance levels on an empty card and suitability for different applications:
The SD Association defines three types of Speed Class ratings: the original Speed Class, UHS Speed Class, and Video Speed Class.
(Original) Speed Class
Speed Class ratings 2, 4, and 6 assert that the card supports the respective number of megabytes per second as a minimum sustained write speed for a card in a fragmented state.
Class 10 asserts that the card supports 10 MB/s as a minimum non-fragmented sequential write speed and uses a High Speed bus mode. The host device can read a card's speed class and warn the user if the card reports a speed class that falls below an application's minimum need. By comparison, the older "×" rating measured maximum speed under ideal conditions, and was vague as to whether this was read speed or write speed.
The graphical symbol for the speed class has a number encircled with 'C' (C2, C4, C6, and C10).
UHS Speed Class
UHS-I and UHS-II cards can use UHS Speed Class rating with two possible grades: class 1 for minimum write performance of at least 10 MB/s ('U1' symbol featuring number 1 inside 'U') and class 3 for minimum write performance of 30 MB/s ('U3' symbol featuring 3 inside 'U'), targeted at recording 4K video. Before November 2013, the rating was branded UHS Speed Grade and contained grades 0 (no symbol) and 1 ('U1' symbol). Manufacturers can also display standard speed class symbols (C2, C4, C6, and C10) alongside, or in place of UHS speed class.
UHS memory cards work best with UHS host devices. The combination lets the user record HD resolution videos with tapeless camcorders while performing other functions. It is also suitable for real-time broadcasts and capturing large HD videos.
Video Speed Class
Video Speed Class defines a set of requirements for UHS cards to match the modern MLC NAND flash memory and supports progressive 4K and 8K video with minimum sequential writing speeds of 6 – 90 MB/s. The graphical symbols use a stylized 'V' followed by a number designating write speed (i.e. V6, V10, V30, V60, and V90).
Comparison
Application Performance Class
Application Performance Class is a newly defined standard from the SD Specification 5.1 and 6.0 which not only define sequential Writing Speeds but also mandates a minimum IOPS for reading and writing. Class A1 requires a minimum of 1500 reading and 500 writing operations per second, while class A2 requires 4000 and 2000 IOPS. A2 class cards require host driver support as they use command queuing and write caching to achieve their higher speeds. If used in an unsupported host, they might even be slower than other A1 cards, and if power is lost before cached data is actually written from the card's internal RAM to the card's internal flash RAM, that data is likely to be lost.
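To put the IOPS floors in perspective, they translate into fairly modest throughput when multiplied by a small transfer size; the sketch below assumes 4 KiB per random operation, an assumption made for illustration rather than a figure quoted from the specification here.

# Convert Application Performance Class IOPS floors into approximate MB/s,
# assuming each random operation moves 4 KiB (illustrative assumption).
KIB = 1024

def iops_to_mb_per_s(iops: int, op_bytes: int = 4 * KIB) -> float:
    return iops * op_bytes / 1_000_000

for label, read_iops, write_iops in [("A1", 1500, 500), ("A2", 4000, 2000)]:
    print(label,
          round(iops_to_mb_per_s(read_iops), 2), "MB/s random read,",
          round(iops_to_mb_per_s(write_iops), 2), "MB/s random write")
# A1: ~6.1 MB/s read, ~2.0 MB/s write; A2: ~16.4 MB/s read, ~8.2 MB/s write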
"×" rating
The "×" rating, that was used by some card manufacturers and made obsolete by speed classes, is a multiple of the standard CD-ROM drive speed of 150 KB/s (approximately 1.23 Mbit/s). Basic cards transfer data at up to six times (6×) the CD-ROM speed; that is, 900 kbit/s or 7.37 Mbit/s. The 2.0 specification defines speeds up to 200×, but is not as specific as Speed Classes are on how to measure speed. Manufacturers may report best-case speeds and may report the card's fastest read speed, which is typically faster than the write speed. Some vendors, including Transcend and Kingston, report their cards' write speed. When a card lists both a speed class and an "×" rating, the latter may be assumed a read speed only.
Real-world performance
In applications that require sustained write throughput, such as video recording, the device might not perform satisfactorily if the SD card's class rating falls below a particular speed. For example, a high-definition camcorder may require a card of not less than Class 6, suffering dropouts or corrupted video if a slower card is used. Digital cameras with slow cards may take a noticeable time after taking a photograph before being ready for the next, while the camera writes the first picture.
The speed class rating does not totally characterize card performance. Different cards of the same class may vary considerably while meeting class specifications. A card's speed depends on many factors, including:
The frequency of soft errors that the card's controller must re-try
Write amplification: The flash controller may need to overwrite more data than requested. This has to do with performing read-modify-write operations on write blocks, freeing up (the much larger) erase blocks, while moving data around to achieve wear leveling.
File fragmentation: where there is not sufficient space for a file to be recorded in a contiguous region, it is split into non-contiguous fragments. This does not cause rotational or head-movement delays as with electromechanical hard drives, but may decrease speed — for instance, by requiring additional reads and computation to determine where on the card the file's next fragment is stored.
In addition, speed may vary markedly between writing a large amount of data to a single file (sequential access, as when a digital camera records large photographs or videos) and writing a large number of small files (a random-access use common in smartphones). A study in 2012 found that, in this random-access use, some Class 2 cards achieved a write speed of 1.38 MB/s, while all cards tested of Class 6 or greater (and some of lower Classes; lower Class does not necessarily mean better small-file performance), including those from major manufacturers, were over 100 times slower. In 2014, a blogger measured a 300-fold performance difference on small writes; this time, the best card in this category was a class 4 card.
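One crude way to observe this difference is to time a single large write against many small synced file writes on a mounted card; the sketch below does so with plain file I/O. The mount point is a placeholder, the methodology is far less rigorous than the studies cited above, and absolute numbers will vary with the host, filesystem and card.

# Crude comparison of one large sequential write vs. many small 4 KiB file
# writes on a mounted SD card. MOUNT is a placeholder path.
import os, time

MOUNT = "/media/sdcard"        # hypothetical mount point of the card under test
SEQ_BYTES = 32 * 1024 * 1024   # one 32 MiB sequential write
SMALL_BYTES = 4 * 1024         # 4 KiB per small file
SMALL_COUNT = 1000             # number of small files

def fsynced_write(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the data out of the OS cache to the card

start = time.time()
fsynced_write(os.path.join(MOUNT, "seq.bin"), os.urandom(SEQ_BYTES))
seq_rate = SEQ_BYTES / (time.time() - start) / 1_000_000

chunk = os.urandom(SMALL_BYTES)
start = time.time()
for i in range(SMALL_COUNT):
    fsynced_write(os.path.join(MOUNT, "small_%04d.bin" % i), chunk)
small_rate = SMALL_COUNT * SMALL_BYTES / (time.time() - start) / 1_000_000

# Clean up the test files afterwards.
os.remove(os.path.join(MOUNT, "seq.bin"))
for i in range(SMALL_COUNT):
    os.remove(os.path.join(MOUNT, "small_%04d.bin" % i))

print("sequential write:  %.2f MB/s" % seq_rate)
print("small-file writes: %.2f MB/s" % small_rate)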
Features
Card security
Cards can protect their contents from erasure or modification, prevent access by non-authorized users, and protect copyrighted content using digital rights management.
Commands to disable writes
The host device can command the SD card to become read-only (to reject subsequent commands to write information to it). There are both reversible and irreversible host commands that achieve this.
Write-protect notch
Most full-size SD cards have a "mechanical write protect switch" allowing the user to advise the host computer that the user wants the device to be treated as read-only. This does not protect the data on the card if the host is compromised: "It is the responsibility of the host to protect the card. The position of the write protect switch is unknown to the internal circuitry of the card." Some host devices do not support write protection, which is an optional feature of the SD specification, and drivers and devices that do obey a read-only indication may give the user a way to override it.
The switch is a sliding tab that covers a notch in the card. The miniSD and microSD formats do not directly support a write protection notch, but they can be inserted into full-size adapters which do.
When looking at the SD card from the top, the right side (the side with the beveled corner) must be notched.
On the left side, there may be a write-protection notch. If the notch is omitted, the card can be read and written. If the card is notched, it is read-only. If the card has a notch and a sliding tab which covers the notch, the user can slide the tab upward (toward the contacts) to declare the card read/write, or downward to declare it read-only. The diagram to the right shows an orange sliding write-protect tab in both the unlocked and locked positions.
Cards sold with content that must not be altered are permanently marked read-only by having a notch and no sliding tab.
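The convention can be summarized as: the host treats the card as writable exactly when the notch position is absent or covered, and as read-only when the notch is exposed. A minimal sketch of that decision, from the host's point of view (the card itself cannot sense the switch):

# Model of the write-protect convention described above. This reflects how a
# host interprets the mechanical notch; the card's circuitry cannot detect it.
def treated_as_writable(notch_present: bool, notch_covered: bool) -> bool:
    if not notch_present:
        return True            # no notch at all: always read/write
    return notch_covered       # tab slid up (covering the notch): read/write

print(treated_as_writable(False, False))  # True  - card without a notch
print(treated_as_writable(True, True))    # True  - tab up, notch covered
print(treated_as_writable(True, False))   # False - notch exposed: read-only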
Card password
A host device can lock an SD card using a password of up to 16 bytes, typically supplied by the user. A locked card interacts normally with the host device except that it rejects commands to read and write data. A locked card can be unlocked only by providing the same password. The host device can, after supplying the old password, specify a new password or disable locking. Without the password (typically, in the case that the user forgets the password), the host device can command the card to erase all the data on the card for future re-use (except card data under DRM), but there is no way to gain access to the existing data.
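The behavior described above can be modeled as a small state machine: a password of at most 16 bytes gates locking and unlocking, while a forced erase needs no password but destroys the stored data. The following is only a toy model of that described behavior, not the actual SD lock/unlock command set.

# Toy model of the SD card password-lock behavior described above (not the
# real command protocol): lock/unlock need the password; force-erase does not,
# but it destroys all stored data.
class LockableCard:
    MAX_PWD_LEN = 16

    def __init__(self):
        self._password = None
        self.locked = False
        self.data = b"user data"

    def set_password(self, old, new):
        if self._password is not None and old != self._password:
            raise PermissionError("old password does not match")
        if len(new) > self.MAX_PWD_LEN:
            raise ValueError("password may be at most 16 bytes")
        self._password = new

    def lock(self, password):
        if password != self._password:
            raise PermissionError("wrong password")
        self.locked = True

    def unlock(self, password):
        if password != self._password:
            raise PermissionError("wrong password")
        self.locked = False

    def read(self):
        if self.locked:
            raise PermissionError("card is locked")
        return self.data

    def force_erase(self):
        # No password needed, but the existing data is gone for good.
        self.data = b""
        self._password = None
        self.locked = False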
Windows Phone 7 devices use SD cards designed for access only by the phone manufacturer or mobile provider. An SD card inserted into the phone underneath the battery compartment becomes locked "to the phone with an automatically generated key" so that "the SD card cannot be read by another phone, device, or PC". Symbian devices, however, are some of the few that can perform the necessary low-level format operations on locked SD cards. It is therefore possible to use a device such as the Nokia N8 to reformat the card for subsequent use in other devices.
smartSD cards
A smartSD memory card is a microSD card with an internal "secure element" that allows the transfer of ISO 7816 Application Protocol Data Unit commands to, for example, JavaCard applets running on the internal secure element through the SD bus.
Some of the earliest versions of microSD memory cards with secure elements were developed in 2009 by DeviceFidelity, Inc., a pioneer in near field communication (NFC) and mobile payments, with the introduction of In2Pay and CredenSE products, later commercialized and certified for mobile contactless transactions by Visa in 2010. DeviceFidelity also adapted the In2Pay microSD to work with the Apple iPhone using the iCaisse, and pioneered the first NFC transactions and mobile payments on an Apple device in 2010.
Various implementations of smartSD cards have been done for payment applications and secured authentication. In 2012 Good Technology partnered with DeviceFidelity to use microSD cards with secure elements for mobile identity and access control.
microSD cards with Secure Elements and NFC (near field communication) support are used for mobile payments, and have been used in direct-to-consumer mobile wallets and mobile banking solutions, some of which were launched by major banks around the world, including Bank of America, US Bank, and Wells Fargo, while others were part of innovative new direct-to-consumer neobank programs such as moneto, first launched in 2012.
microSD cards with Secure Elements have also been used for secure voice encryption on mobile devices, which allows for one of the highest levels of security in person-to-person voice communications. Such solutions are heavily used in intelligence and security.
In 2011, HID Global partnered with Arizona State University to launch campus access solutions for students using microSD with Secure Element and MiFare technology provided by DeviceFidelity, Inc. This was the first time regular mobile phones could be used to open doors without need for electronic access keys.
Vendor enhancements
Vendors have sought to differentiate their products in the market through various vendor-specific features:
Integrated Wi-Fi – Several companies produce SD cards with built-in Wi-Fi transceivers supporting static security (WEP 40-, 104-, and 128-bit, WPA-PSK, and WPA2-PSK). The card lets any digital camera with an SD slot transmit captured images over a wireless network, or store the images on the card's memory until it is in range of a wireless network. Examples include: Eye-Fi / SanDisk, Transcend Wi-Fi, Toshiba FlashAir, Trek Flucard, PQI Air Card and LZeal ez Share. Some models geotag their pictures.
Pre-loaded content – In 2006, SanDisk announced Gruvi, a microSD card with extra digital rights management features, which they intended as a medium for publishing content. SanDisk again announced pre-loaded cards in 2008, under the slotMusic name, this time not using any of the DRM capabilities of the SD card. In 2011, SanDisk offered various collections of 1000 songs on a single slotMusic card for about $40, now restricted to compatible devices and without the ability to copy the files.
Integrated USB connector – The SanDisk SD Plus product can be plugged directly into a USB port without needing a USB card reader. Other companies introduced comparable products, such as the Duo SD product of OCZ Technology and the 3 Way (microSDHC, SDHC, and USB) product of A-DATA, which was available in 2008 only.
Different colors – SanDisk has used various colors of plastic or adhesive label, including a "gaming" line in translucent plastic colors that indicated the card's capacity.
Integrated display – In 2006, A-DATA announced a Super Info SD card with a digital display that provided a two-character label and showed the amount of unused memory on the card.
SDIO cards
An SDIO (Secure Digital Input Output) card is an extension of the SD specification to cover I/O functions. SDIO cards are only fully functional in host devices designed to support their input-output functions (typically PDAs like the Palm Treo, but occasionally laptops or mobile phones). These devices can use the SD slot to support GPS receivers, modems, barcode readers, FM radio tuners, TV tuners, RFID readers, digital cameras, and interfaces to Wi-Fi, Bluetooth, Ethernet, and IrDA. Many other SDIO devices have been proposed, but it is now more common for I/O devices to connect using the USB interface.
SDIO cards support most of the memory commands of SD cards. SDIO cards can be structured as eight logical cards, although currently, the typical way that an SDIO card uses this capability is to structure itself as one I/O card and one memory card.
The SDIO and SD interfaces are mechanically and electrically identical. Host devices built for SDIO cards generally accept SD memory cards without I/O functions. However, the reverse is not true, because host devices need suitable drivers and applications to support the card's I/O functions. For example, an HP SDIO camera usually does not work with PDAs that do not list it as an accessory. Inserting an SDIO card into any SD slot causes no physical damage nor disruption to the host device, but users may be frustrated that the SDIO card does not function fully when inserted into a seemingly compatible slot. (USB and Bluetooth devices exhibit comparable compatibility issues, although to a lesser extent thanks to standardized USB device classes and Bluetooth profiles.)
The SDIO family comprises Low-Speed and Full-Speed cards. Both types of SDIO cards support SPI and one-bit SD bus types. Low-Speed SDIO cards are allowed to also support the four-bit SD bus; Full-Speed SDIO cards are required to support the four-bit SD bus. To use an SDIO card as a "combo card" (for both memory and I/O), the host device must first select four-bit SD bus operation. Two other unique features of Low-Speed SDIO are a maximum clock rate of 400 kHz for all communications, and the use of Pin 8 as "interrupt" to try to initiate dialogue with the host device.
Ganging cards together
The one-bit SD protocol was derived from the MMC protocol, which envisioned the ability to put up to three cards on a bus of common signal lines. The cards use open collector interfaces, where a card may pull a line to the low voltage level; the line is at the high voltage level (because of a pull-up resistor) if no card pulls it low. Though the cards shared clock and signal lines, each card had its own chip select line to sense that the host device had selected it.
The SD protocol envisioned the ability to gang 30 cards together without separate chip select lines. The host device would broadcast commands to all cards and identify the card to respond to the command using its unique serial number.
In practice, cards are rarely ganged together because open-collector operation has problems at high speeds and increases power consumption. Newer versions of the SD specification recommend separate lines to each card.
Compatibility
Host devices that comply with newer versions of the specification provide backward compatibility and accept older SD cards. For example, SDXC host devices accept all previous families of SD memory cards, and SDHC host devices also accept standard SD cards.
Older host devices generally do not support newer card formats, and even when they might support the bus interface used by the card, there are several factors that arise:
A newer card may offer greater capacity than the host device can handle (over 4 GB for SDHC, over 32 GB for SDXC).
A newer card may use a file system the host device cannot navigate (FAT32 for SDHC, exFAT for SDXC)
Use of an SDIO card requires the host device be designed for the input/output functions the card provides.
The hardware interface of the card was changed starting with version 2.0 (new high-speed bus clocks, redefinition of storage capacity bits) and the SDHC family (Ultra-high speed (UHS) bus)
UHS-II has physically more pins but is backward compatible with UHS-I and non-UHS, for both slot and card.
Some vendors produced SDSC cards above 1 GB before the SDA had standardized a method of doing so.
Markets
Due to their compact size, Secure Digital cards are used in many consumer electronic devices, and have become a widespread means of storing several gigabytes of data in a small size. Devices in which the user may remove and replace cards often, such as digital cameras, camcorders, and video game consoles, tend to use full-sized cards. Devices in which small size is paramount, such as mobile phones, action cameras such as the GoPro Hero series, and camera drones, tend to use microSD cards.
Mobile phones
The microSD card has helped propel the smartphone market by giving both manufacturers and consumers greater flexibility and freedom.
While cloud storage depends on a stable internet connection and sufficiently large data plans, memory cards in mobile devices provide location-independent and private storage expansion with much higher transfer rates and no latency, enabling applications such as photography and video recording. While data stored internally on a bricked device is inaccessible, data stored on the memory card can often be salvaged and accessed externally by the user as a mass storage device. A benefit over USB On-The-Go storage expansion is better ergonomics, since nothing protrudes from the phone. Use of a memory card also protects the phone's non-replaceable internal storage from wear caused by write-heavy applications such as extensive camera use or hosting a portable FTP server over Wi-Fi Direct. As memory card technology develops, users of existing mobile devices can expand their storage further and at a lower price over time.
Recent versions of major operating systems such as Windows Mobile and Android allow applications to run from microSD cards, creating possibilities for new usage models for SD cards in mobile computing markets, as well as freeing up internal storage space.
SD cards are not the most economical solution in devices that need only a small amount of non-volatile memory, such as station presets in small radios. They may also not present the best choice for applications that require higher storage capacities or speeds as provided by other flash card standards such as CompactFlash. These limitations may be addressed by evolving memory technologies, such as the new SD 7.0 specifications which allow storage capabilities of up to 128 TB.
Many personal computers of all types, including tablets and mobile phones, use SD cards, either through built-in slots or through an active electronic adapter. Adapters exist for the PC Card, ExpressCard, USB, FireWire, and parallel printer port interfaces. Active adapters also let SD cards be used in devices designed for other formats, such as CompactFlash. The FlashPath adapter lets SD cards be used in a floppy disk drive.
Some devices such as the Samsung Galaxy Fit (2011) and Samsung Galaxy Note 8.0 (2013) have an SD card compartment located externally and accessible by hand, while it is located under the battery cover on other devices. More recent mobile phones use a pin-hole ejection system for the tray which houses both the memory card and SIM card.
Counterfeits
Commonly found on the market are mislabeled or counterfeit Secure Digital cards that report a fake capacity or run slower than labeled.
Software tools exist to check and detect counterfeit products. Detection of counterfeit cards usually involves copying files with random data to the SD card until the card's capacity is reached, and copying them back. The files that were copied back can be tested either by comparing checksums (e.g. MD5), or trying to compress them. The latter approach leverages the fact that counterfeited cards let the user read back files, which then consist of easily compressible uniform data (for example, repeating 0xFFs).
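As a rough illustration of this approach, the sketch below writes files of random data until the card reports it is full and then verifies their MD5 checksums on read-back. It is a simplified stand-in for dedicated verification tools, and the mount point is an assumption.

    import hashlib, os

    CARD_MOUNT = "/media/sdcard"        # assumption: mount point of the card under test
    CHUNK = 64 * 1024 * 1024            # write the card full in 64 MiB files

    digests = []
    index = 0
    while True:
        data = os.urandom(CHUNK)
        path = os.path.join(CARD_MOUNT, f"fill_{index:05d}.bin")
        try:
            with open(path, "wb") as f:
                f.write(data)
                os.fsync(f.fileno())
        except OSError:                  # card reports "no space left on device"
            break
        digests.append(hashlib.md5(data).hexdigest())
        index += 1

    for i, expected in enumerate(digests):   # read everything back and compare
        path = os.path.join(CARD_MOUNT, f"fill_{i:05d}.bin")
        with open(path, "rb") as f:
            actual = hashlib.md5(f.read()).hexdigest()
        if actual != expected:
            print(f"Mismatch in file {i}: the real capacity is likely smaller than labeled")

On a counterfeit card, writes beyond the true capacity typically wrap around or are silently discarded, so the read-back checksums no longer match.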
Digital cameras
SD/MMC cards replaced Toshiba's SmartMedia as the dominant memory card format used in digital cameras. In 2001, SmartMedia had achieved nearly 50% use, but by 2005 SD/MMC had achieved over 40% of the digital camera market and SmartMedia's share had plummeted by 2007.
At this time, all the leading digital camera manufacturers used SD in their consumer product lines, including Canon, Casio, Fujifilm, Kodak, Leica, Nikon, Olympus, Panasonic, Pentax, Ricoh, Samsung, and Sony. Formerly, Olympus and Fujifilm used xD-Picture Cards (xD cards) exclusively, while Sony only used Memory Stick; by early 2010 all three supported SD.
Some prosumer and professional digital cameras continued to offer CompactFlash (CF), either on a second card slot or as the only storage, as CF supports much higher maximum capacities and historically was cheaper for the same capacity.
Secure Digital memory cards can be used in Sony XDCAM EX camcorders with an adapter and in Panasonic P2 card equipment with a MicroP2 adapter.
Personal computers
Although many personal computers accommodate SD cards as an auxiliary storage device using a built-in slot, or can accommodate SD cards by means of a USB adapter, SD cards cannot be used as the primary hard disk through the onboard ATA controller, because none of the SD card variants support ATA signalling. Primary hard disk use requires a separate SD controller chip or an SD-to-CompactFlash converter. However, on computers that support bootstrapping from a USB interface, an SD card in a USB adapter can be the primary hard disk, provided it contains an operating system that supports USB access once the bootstrap is complete.
In laptop and tablet computers, memory cards in an integrated card reader offer an ergonomic benefit over USB flash drives, which stick out of the device and could be bumped while the device is transported, potentially damaging the USB port. Memory cards have a unified shape and do not occupy a USB port when inserted into a computer's dedicated card slot.
Since late 2009, newer Apple computers with installed SD card readers have been able to boot in macOS from SD storage devices, when properly formatted to Mac OS Extended file format and the default partition table set to GUID Partition Table. (See Other file systems below).
SD cards are increasingly used by owners of vintage computers such as the 8-bit Atari family. For example, the SIO2SD interface (SIO is the Atari port for connecting external devices) loads software from SD cards; an entire 8-bit Atari software library can fit on a single SD card of less than 4–8 GB (as of 2019).
Embedded systems
In 2008, the SDA specified Embedded SD, "leverag[ing] well-known SD standards" to enable non-removable SD-style devices on printed circuit boards. However, this standard was not adopted by the market, and the MMC standard instead became the de facto standard for embedded systems. SanDisk provides such embedded memory components under the iNAND brand.
Most modern microcontrollers have built-in SPI logic that can interface to an SD card operating in its SPI mode, providing non-volatile storage. Even if a microcontroller lacks the SPI feature, the feature can be emulated by bit banging. For example, a home-brew hack combines spare General Purpose Input/Output (GPIO) pins of the processor of the Linksys WRT54G router with MMC support code from the Linux kernel. This technique can achieve throughput of up to .
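As an illustration of SPI-mode access from a microcontroller, the following is a minimal MicroPython-flavored sketch of the very first initialization step (sending CMD0 to put the card into idle state). The bus number, chip-select pin, and clock rate are assumptions for a generic board; a full driver would continue with CMD8, ACMD41, and block reads.

    from machine import SPI, Pin

    spi = SPI(1, baudrate=400000, polarity=0, phase=0)  # assumption: bus 1, slow init clock
    cs = Pin(4, Pin.OUT, value=1)                       # assumption: chip select on GPIO 4

    spi.write(b"\xff" * 10)          # at least 74 clock cycles with CS high to wake the card

    cs.value(0)
    spi.write(b"\x40\x00\x00\x00\x00\x95")   # CMD0 (GO_IDLE_STATE) with its fixed CRC byte
    response = spi.read(8, 0xff)             # clock out dummy bytes while polling for R1
    cs.value(1)

    if 0x01 in response:             # R1 = 0x01 means the card entered SPI idle state
        print("card answered CMD0; continue with CMD8 and ACMD41")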
Music distribution
Prerecorded microSDs have been used to commercialize music under the brands slotMusic and slotRadio by SanDisk and MQS by Astell&Kern.
Technical details
Physical size
The SD card specification defines three physical sizes. The SD and SDHC families are available in all three sizes, but the SDXC and SDUC families are not available in the mini size, and the SDIO family is not available in the micro size. Smaller cards are usable in larger slots through use of a passive adapter.
Standard
SD (SDSC), SDHC, SDXC, SDIO, SDUC
A thinner variant, Thin SD (as thin as MMC), is rare.
miniSD
miniSD, miniSDHC, miniSDIO
microSD
The micro form factor is the smallest SD card format.
microSD, microSDHC, microSDXC, microSDUC
Transfer modes
Cards may support various combinations of the following bus types and transfer modes. The SPI bus mode and one-bit SD bus mode are mandatory for all SD families, as explained in the next section. Once the host device and the SD card negotiate a bus interface mode, the usage of the numbered pins is the same for all card sizes.
SPI bus mode: Serial Peripheral Interface Bus is primarily used by embedded microcontrollers. This bus type supports only a 3.3-volt interface. This is the only bus type that does not require a host license.
One-bit SD bus mode: Separate command and data channels and a proprietary transfer format.
Four-bit SD bus mode: Uses extra pins plus some reassigned pins. It uses the same protocol as the one-bit SD bus mode but with one command line and four data lines for faster data transfer. All SD cards support this mode. UHS-I and UHS-II require this bus type.
Two differential lines SD UHS-II mode: Uses two low-voltage differential interfaces to transfer commands and data. UHS-II cards include this interface in addition to the SD bus modes.
The physical interface comprises 9 pins, except that the miniSD card adds two unconnected pins in the center and the microSD card omits one of the two VSS (Ground) pins.
Interface
Command interface
SD cards and host devices initially communicate through a synchronous one-bit interface, where the host device provides a clock signal that strobes single bits in and out of the SD card. The host device thereby sends 48-bit commands and receives responses. The card can signal that a response will be delayed, but the host device can abort the dialogue.
Through issuing various commands, the host device can:
Determine the type, memory capacity, and capabilities of the SD card
Command the card to use a different voltage, different clock speed, or advanced electrical interface
Prepare the card to receive a block to write to the flash memory, or read and reply with the contents of a specified block.
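For illustration, the 48-bit command framing described above can be reproduced in a few lines of Python. The layout (start bit 0, transmission bit 1, 6-bit command index, 32-bit argument, 7-bit CRC, end bit 1) follows the published SD/MMC frame format; the helper names are our own.

    def crc7(data: bytes) -> int:
        """CRC-7 over the first five command bytes (polynomial x^7 + x^3 + 1)."""
        crc = 0
        for byte in data:
            for i in range(8):
                bit = (byte >> (7 - i)) & 1
                feedback = bit ^ ((crc >> 6) & 1)
                crc = (crc << 1) & 0x7F
                if feedback:
                    crc ^= 0x09
        return crc

    def sd_command(index: int, argument: int) -> bytes:
        """Build a 48-bit frame: start bit 0, transmission bit 1, 6-bit command
        index, 32-bit argument, 7-bit CRC, end bit 1."""
        head = bytes([0x40 | (index & 0x3F)]) + argument.to_bytes(4, "big")
        return head + bytes([(crc7(head) << 1) | 0x01])

    # CMD0 (GO_IDLE_STATE) with a zero argument produces the well-known
    # byte sequence 40 00 00 00 00 95.
    assert sd_command(0, 0) == bytes.fromhex("400000000095")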
The command interface is an extension of the MultiMediaCard (MMC) interface. SD cards dropped support for some of the commands in the MMC protocol, but added commands related to copy protection. By using only commands supported by both standards until determining the type of card inserted, a host device can accommodate both SD and MMC cards.
Electrical interface
All SD card families initially use a 3.3 volt electrical interface. On command, SDHC and SDXC cards can switch to 1.8 V operation.
At initial power-up or card insertion, the host device selects either the Serial Peripheral Interface (SPI) bus or the one-bit SD bus by the voltage level present on Pin 1. Thereafter, the host device may issue a command to switch to the four-bit SD bus interface, if the SD card supports it. For various card types, support for the four-bit SD bus is either optional or mandatory.
After determining that the SD card supports it, the host device can also command the SD card to switch to a higher transfer speed. Until determining the card's capabilities, the host device should not use a clock speed faster than 400 kHz. SD cards other than SDIO (see below) have a "Default Speed" clock rate of 25 MHz. The host device is not required to use the maximum clock speed that the card supports. It may operate at less than the maximum clock speed to conserve power. Between commands, the host device can stop the clock entirely.
Achieving higher card speeds
The SD specification defines four-bit-wide transfers. (The MMC specification supports this and also defines an eight-bit-wide mode; MMC cards with extended bits were not accepted by the market.) Transferring several bits on each clock pulse improves the card speed. Advanced SD families have also improved speed by offering faster clock frequencies and double data rate in a high-speed differential interface (UHS-II).
File system
Like other types of flash memory card, an SD card of any SD family is a block-addressable storage device, in which the host device can read or write fixed-size blocks by specifying their block number.
MBR and FAT
Most SD cards ship preformatted with one or more MBR partitions, where the first or only partition contains a file system. This lets them operate like the hard disk of a personal computer. Per the SD card specification, an SD card is formatted with MBR and the following file system:
For SDSC cards:
Capacity of less than 32,680 logical sectors (smaller than 16 MB): FAT12 with partition type 01h and BPB 3.0 or EBPB 4.1
Capacity of 32,680 to 65,535 logical sectors (between 16 MB and 32 MB): FAT16 with partition type 04h and BPB 3.0 or EBPB 4.1
Capacity of at least 65,536 logical sectors (larger than 32 MB): FAT16B with partition type 06h and EBPB 4.1
For SDHC cards:
Capacity of less than 16,450,560 logical sectors (smaller than 7.8 GB): FAT32 with partition type 0Bh and EBPB 7.1
Capacity of at least 16,450,560 logical sectors (larger than 7.8 GB): FAT32 with partition type 0Ch and EBPB 7.1
For SDXC cards: exFAT with partition type 07h
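Expressed as code, the prescribed mapping from capacity to file system looks roughly like the following sketch. Capacities are counted in 512-byte logical sectors, and the card-family parameter is an assumption about information the host already has from the card's registers.

    def prescribed_format(family: str, sectors: int) -> str:
        """Return the file system the SD specification prescribes for a card."""
        if family == "SDSC":
            if sectors < 32_680:        # smaller than 16 MB
                return "FAT12 (partition type 01h)"
            if sectors < 65_536:        # 16 MB to 32 MB
                return "FAT16 (partition type 04h)"
            return "FAT16B (partition type 06h)"
        if family == "SDHC":
            if sectors < 16_450_560:    # smaller than about 7.8 GB
                return "FAT32 (partition type 0Bh)"
            return "FAT32 (partition type 0Ch)"
        if family == "SDXC":
            return "exFAT (partition type 07h)"
        raise ValueError("unknown card family")

    print(prescribed_format("SDHC", 31_250_000))   # a 16 GB card: FAT32 (type 0Ch)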
Most consumer products that take an SD card expect that it is partitioned and formatted in this way. Universal support for FAT12, FAT16, FAT16B, and FAT32 allows the use of SDSC and SDHC cards on most host computers with a compatible SD reader, to present the user with the familiar method of named files in a hierarchical directory tree.
On such SD cards, standard utility programs such as Mac OS X's Disk Utility or Windows' SCANDISK can be used to repair a corrupted filing system and sometimes recover deleted files. Defragmentation tools for FAT file systems may be used on such cards. The resulting consolidation of files may provide a marginal improvement in the time required to read or write the file, but not an improvement comparable to defragmentation of hard drives, where storing a file in multiple fragments requires additional physical, and relatively slow, movement of a drive head. Moreover, defragmentation performs writes to the SD card that count against the card's rated lifespan. The write endurance of the physical memory is discussed in the article on flash memory; newer technology to increase the storage capacity of a card provides worse write endurance.
When reformatting an SD card with a capacity of at least 32 MB (65,536 logical sectors or more), but not more than 2 GB, FAT16B with partition type 06h and EBPB 4.1 is recommended if the card is for a consumer device. (FAT16B is also an option for 4 GB cards, but it requires the use of 64 KB clusters, which are not widely supported.) FAT16B does not support cards above 4 GB at all.
The SDXC specification mandates the use of Microsoft's proprietary exFAT file system, which sometimes requires appropriate drivers (e.g. exfat-utils/exfat-fuse on Linux).
Other file systems
Because the host views the SD card as a block storage device, the card does not require MBR partitions or any specific file system. The card can be reformatted to use any file system the operating system supports. For example:
Under Windows, SD cards can be formatted using NTFS and, on later versions, exFAT.
Under macOS, SD cards can be partitioned as GUID devices and formatted with either HFS Plus or APFS file systems or still use exFAT.
Under Unix-like operating systems such as Linux or FreeBSD, SD cards can be formatted using the UFS, Ext2, Ext3, Ext4, btrfs, HFS Plus, ReiserFS or F2FS file system. Additionally under Linux, HFS Plus file systems may be accessed for read/write if the "hfsplus" package is installed, and partitioned and formatted if "hfsprogs" is installed. (These package names are correct under Debian, Ubuntu etc., but may differ on other Linux distributions.)
Any recent version of the above can format SD cards using the UDF file system.
Additionally, as with live USB flash drives, an SD card can have an operating system installed on it. Computers that can boot from an SD card (either using a USB adapter or inserted into the computer's flash media reader) instead of the hard disk drive may thereby be able to recover from a corrupted hard disk drive. Such an SD card can be write-locked to preserve the system's integrity.
The SD Standard allows only the above-mentioned Microsoft FAT file systems, and any card produced for the market must be preloaded with the corresponding standard file system on delivery. If an application or user reformats the card with a non-standard file system, proper operation of the card, including interoperability, cannot be assured.
Risks of reformatting
Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of FAT12, FAT16 or FAT32. In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient. The SD Association provides freely-downloadable SD Formatter software to overcome these problems for Windows and Mac OS X.
SD/SDHC/SDXC memory cards have a "Protected Area" on the card for the SD standard's security function. Neither standard formatters nor the SD Association formatter will erase it. The SD Association suggests that devices or software which use the SD security function may format it.
Power consumption
The power consumption of SD cards varies with speed mode, manufacturer, and model.
During transfer it may be in the range of 66–330 mW (20–100 mA at a supply voltage of 3.3 V). Specifications from TwinMos Technologies list a maximum of 149 mW (45 mA) during transfer. Toshiba lists 264–330 mW (80–100 mA). Standby current is much lower, less than 0.2 mA for one 2006 microSD card. If there is data transfer for significant periods, battery life may be reduced noticeably; for reference, the capacity of smartphone batteries is typically around 6 Wh (Samsung Galaxy S2: 1650 mAh @ 3.7 V).
Modern UHS-II cards can consume up to 2.88 W, if the host device supports bus speed mode SDR104 or UHS-II. Minimum power consumption in the case of a UHS-II host is 720 mW.
Storage capacity and compatibilities
All SD cards let the host device determine how much information the card can hold, and the specification of each SD family gives the host device a guarantee of the maximum capacity a compliant card reports.
By the time the version 2.0 (SDHC) specification was completed in June 2006, vendors had already devised 2 GB and 4 GB SD cards, either as specified in Version 1.01, or by creatively reading Version 1.00. The resulting cards do not work correctly in some host devices.
SDSC cards above 1 GB
A host device can ask any inserted SD card for its 128-bit identification string (the Card-Specific Data or CSD). In standard-capacity cards (SDSC), 12 bits identify the number of memory clusters (ranging from 1 to 4,096) and 3 bits identify the number of blocks per cluster (which decode to 4, 8, 16, 32, 64, 128, 256, or 512 blocks per cluster). The host device multiplies these figures (as shown in the following section) with the number of bytes per block to determine the card's capacity in bytes.
SD version 1.00 assumed 512 bytes per block. This permitted SDSC cards up to 4,096 × 512 × 512 B = 1 GB, for which there are no known incompatibilities.
Version 1.01 let an SDSC card use a 4-bit field to indicate 1,024 or 2,048 bytes per block instead. Doing so enabled cards with 2 GB and 4 GB capacity, such as the Transcend 4 GB SD card and the Memorette 4 GB SD card.
Early SDSC host devices that assume 512-byte blocks therefore do not fully support the insertion of 2 GB or 4 GB cards. In some cases, the host device can read data that happens to reside in the first 1 GB of the card. If the assumption is made in the driver software, success may be version-dependent. In addition, any host device might not support a 4 GB SDSC card, since the specification lets it assume that 2 GB is the maximum for these cards.
Storage capacity calculations
The format of the Card-Specific Data (CSD) register changed between version 1 (SDSC) and version 2.0 (which defines SDHC and SDXC).
Version 1
In version 1 of the SD specification, capacities up to 2 GB are calculated by combining fields of the CSD as follows:
Capacity = (C_SIZE + 1) × 2^(C_SIZE_MULT + READ_BL_LEN + 2)
where
0 ≤ C_SIZE ≤ 4095,
0 ≤ C_SIZE_MULT ≤ 7,
READ_BL_LEN is 9 (for 512 bytes/sector) or 10 (for 1024 bytes/sector)
Later versions state (at Section 4.3.2) that a 2 GB SDSC card shall set its READ_BL_LEN (and WRITE_BL_LEN) to indicate 1024 bytes, so that the above computation correctly reports the card's capacity; but that, for consistency, the host device shall not request (by CMD16) block lengths over 512 B.
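As a worked example, the version 1 formula can be evaluated directly; the field values below describe the maximal 2 GB SDSC case discussed above.

    def sdsc_capacity(c_size: int, c_size_mult: int, read_bl_len: int) -> int:
        """Version 1 CSD: capacity in bytes."""
        return (c_size + 1) * 2 ** (c_size_mult + read_bl_len + 2)

    # A 2 GB SDSC card reports the maximum field values with 1024-byte blocks.
    print(sdsc_capacity(c_size=4095, c_size_mult=7, read_bl_len=10))  # 2147483648 bytes (2 GiB)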
Versions 2 and 3
In the definition of SDHC cards in version 2.0, the C_SIZE portion of the CSD is 22 bits and it indicates the memory size in multiples of 512 KB (the C_SIZE_MULT field is removed and READ_BL_LEN is no longer used to compute capacity). Two bits that were formerly reserved now identify the card family: 0 is SDSC; 1 is SDHC or SDXC; 2 and 3 are reserved. Because of these redefinitions, older host devices do not correctly identify SDHC or SDXC cards nor their correct capacity.
SDHC cards are restricted to reporting a capacity not over 32 GB.
SDXC cards are allowed to use all 22 bits of the C_SIZE field. An SDHC card that did so (reported C_SIZE > 65,375 to indicate a capacity of over 32 GB) would violate the specification. A host device that relied on C_SIZE rather than the specification to determine the card's maximum capacity might support such a card, but the card might fail in other SDHC-compatible host devices.
Capacity is calculated thus:
Capacity = (C_SIZE + 1) × 524288
where for SDHC
4112 ≤ C_SIZE ≤ 65375
≈2 GB ≤ Capacity ≤ ≈32 GB
where for SDXC
65535 ≤ C_SIZE
≈32 GB ≤ Capacity ≤ 2 TB
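The version 2.0 calculation is simpler, as the short sketch below shows; the example value corresponds to a 64 GB SDXC card.

    def sdhc_sdxc_capacity(c_size: int) -> int:
        """Version 2.0 CSD: capacity in bytes, counted in 512 KB units."""
        return (c_size + 1) * 524_288

    print(sdhc_sdxc_capacity(131_071))   # 68719476736 bytes (64 GiB)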
Capacities above 4 GB can only be achieved by following version 2.0 or later versions. In addition, capacities equal to 4 GB must also do so to guarantee compatibility.
Openness of specification
Like most memory card formats, SD is covered by numerous patents and trademarks. Excluding SDIO cards, royalties for SD card licenses are imposed for manufacture and sale of memory cards and host adapters (US$1,000 per year plus membership at US$1,500 per year).
Early versions of the SD specification were available under a non-disclosure agreement (NDA) prohibiting development of open-source drivers. However, the system was eventually reverse-engineered and free software drivers provided access to SD cards not using DRM. Subsequent to the release of most open-source drivers, the SDA provided a simplified version of the specification under a less restrictive license helping reduce some incompatibility issues.
Under a disclaimer agreement, the simplified specification released by the SDA in 2006 – as opposed to the full SD specification – was later extended to the physical layer, ASSD extensions, SDIO, and SDIO Bluetooth Type-A.
The Simplified Specification is available.
Again, most of the information had already been discovered and Linux had a fully free driver for it. Still, building a chip conforming to this specification caused the One Laptop per Child project to claim "the first truly Open Source SD implementation, with no need to obtain an SDI license or sign NDAs to create SD drivers or applications."
The proprietary nature of the complete SD specification affects embedded systems, laptop computers, and some desktop computers; many desktop computers do not have card slots, instead using USB-based card readers if necessary. These card readers present a standard USB mass storage interface to memory cards, thus separating the operating system from the details of the underlying SD interface. However, embedded systems (such as portable music players) usually gain direct access to SD cards and thus need complete programming information. Desktop card readers are themselves embedded systems; their manufacturers have usually paid the SDA for complete access to the SD specifications. Many notebook computers now include SD card readers not based on USB; device drivers for these essentially gain direct access to the SD card, as do embedded systems.
The SPI-bus interface mode is the only type that does not require a host license for accessing SD cards.
SD Express/UHS-II Verification Program (SVP)
The SD Association (SDA) developed the SD Express/UHS-II Verification Program (SVP) to verify the electronic interfaces of members' UHS-II and SD Express card, host, and ancillary products. Products passing SVP may be listed on the SDA website as a Verified Product. SVP gives both consumers and businesses higher confidence that products passing SVP meet the interface standards, ensuring compatibility.
SVP tests products for compliance against the SDA's Physical Test Guideline. Products eligible for SVP include card/host/ancillary products using SD Express, with PCI Express (PCIe) interface or SD UHS-II interface. The SDA selected Granite River Labs (GRL) as the first test provider with labs located in Japan, Taiwan and US. SVP is a voluntary program available exclusively to SDA members. Members may choose to have products passing SVP tests listed on the SDA website.
PCIe and UHS-II interfaces are both high-speed differential interfaces, and meeting their specifications' demanding requirements is important to assure proper operation and interoperability. The SVP serves the market by assuring better interoperability and by publishing a list of SVP Verified Products. This list allows members to promote their products and allows both consumers and OEMs to have more confidence when selecting products on the list.
For a limited time, the SDA is subsidizing SVP costs and is providing its members with additional discount options via a Test Shuttle volume discount program. Test Shuttle leverages multiple members submitting products of the same type for bulk testing. Companies interested in creating products using SDA specifications and participating in SVP can join the SDA at https://www.sdcard.org/join/.
Comparison to other flash memory formats
Overall, SD is less open than CompactFlash or USB flash memory drives. Those open standards can be implemented without paying for licensing, royalties, or documentation. (CompactFlash and USB flash drives may require licensing fees for the use of the SDA's trademarked logos.)
However, SD is much more open than Sony's Memory Stick, for which no public documentation nor any documented legacy implementation is available. All SD cards can be accessed freely using the well-documented SPI bus.
xD cards are simply 18-pin NAND flash chips in a special package and support the standard command set for raw NAND flash access. Although the raw hardware interface to xD cards is well understood, the layout of its memory contents—necessary for interoperability with xD card readers and digital cameras—is totally undocumented. The consortium that licenses xD cards has not released any technical information to the public.
Data recovery
A malfunctioning SD card can be repaired using specialized equipment, as long as the middle part, containing the flash storage, is not physically damaged; in this way the controller can be circumvented. This is harder or even impossible in the case of a monolithic card, where the controller resides on the same physical die.
See also
Comparison of memory cards
Flash memory
Microdrive
Serial Peripheral Interface Bus (SPI)
Universal Flash Storage
References
External links
SD simplified specifications
How to Use MMC/SDC elm-chan.org, December 26, 2019
Optimizing Linux with cheap flash drives lwn.net
Flash memory card: design, and List of cards and their characteristics linaro
Independent SD Card Speed Tests
Types of Memory Cards and Sizes
Computer-related introductions in 1999
Computer storage devices
Japanese inventions
Solid-state computer storage media |
318439 | https://en.wikipedia.org/wiki/Text%20mining | Text mining | Text mining, also referred to as text data mining, similar to text analytics, is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles.
High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005), three different perspectives of text mining can be distinguished: information extraction, data mining, and a KDD (Knowledge Discovery in Databases) process. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP), different types of algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information.
A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted.
The document is the basic element when starting with text mining. Here, a document is defined as a unit of textual data that normally exists in many types of collections.
Text analytics
The term text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation. The term is roughly synonymous with text mining; indeed, Ronen Feldman modified a 2000 description of "text mining" in 2004 to describe "text analytics". The latter term is now used more frequently in business settings while "text mining" is used in some of the earliest application areas, dating to the 1980s, notably life-sciences research and government intelligence.
The term text analytics also describes the application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80 percent of business-relevant information originates in unstructured form, primarily text. These techniques and processes discover and present knowledge – facts, business rules, and relationships – that is otherwise locked in textual form, impenetrable to automated processing.
Text analysis processes
Subtasks—components of a larger text-analytics effort—typically include:
Dimensionality reduction is an important technique for pre-processing data. It is used to identify the root word for actual words and to reduce the size of the text data.
Information retrieval or identification of a corpus is a preparatory step: collecting or identifying a set of textual materials, on the Web or held in a file system, database, or content corpus manager, for analysis.
Although some text analytics systems apply exclusively advanced statistical methods, many others apply more extensive natural language processing, such as part of speech tagging, syntactic parsing, and other types of linguistic analysis.
Named entity recognition is the use of gazetteers or statistical techniques to identify named text features: people, organizations, place names, stock ticker symbols, certain abbreviations, and so on.
Disambiguation—the use of contextual clues—may be required to decide where, for instance, "Ford" can refer to a former U.S. president, a vehicle manufacturer, a movie star, a river crossing, or some other entity.
Recognition of Pattern Identified Entities: Features such as telephone numbers, e-mail addresses, quantities (with units) can be discerned via regular expression or other pattern matches.
Document clustering: identification of sets of similar text documents.
Coreference: identification of noun phrases and other terms that refer to the same object.
Relationship, fact, and event extraction: identification of associations among entities and other information in text.
Sentiment analysis involves discerning subjective (as opposed to factual) material and extracting various forms of attitudinal information: sentiment, opinion, mood, and emotion. Text analytics techniques are helpful in analyzing sentiment at the entity, concept, or topic level and in distinguishing opinion holder and opinion object.
Quantitative text analysis is a set of techniques stemming from the social sciences where either a human judge or a computer extracts semantic or grammatical relationships between words in order to find out the meaning or stylistic patterns of, usually, a casual personal text for purposes such as psychological profiling.
Pre-processing usually involves tasks such as tokenization, filtering and stemming.
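A minimal pre-processing sketch using the NLTK toolkit (mentioned under Software applications below) could look as follows; it assumes the punkt and stopwords resources have been downloaded, and the example sentence is illustrative only.

    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize

    # One-time resource downloads, if not already present:
    # import nltk; nltk.download("punkt"); nltk.download("stopwords")

    text = "Text mining derives high-quality information from large collections of documents."
    tokens = word_tokenize(text.lower())                              # tokenization
    stop = set(stopwords.words("english"))
    filtered = [t for t in tokens if t.isalpha() and t not in stop]   # filtering
    stemmer = PorterStemmer()
    stems = [stemmer.stem(t) for t in filtered]                       # stemming
    print(stems)   # e.g. ['text', 'mine', 'deriv', 'inform', ...]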
Applications
Text mining technology is now broadly applied to a wide variety of government, research, and business needs. All these groups may use text mining for records management and searching documents relevant to their daily activities. Legal professionals may use text mining for e-discovery, for example. Governments and military groups use text mining for national security and intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data (i.e., addressing the problem of unstructured data), to determine ideas communicated through text (e.g., sentiment analysis in social media) and to support scientific discovery in fields such as the life sciences and bioinformatics. In business, applications are used to support competitive intelligence and automated ad placement, among numerous other activities.
Security applications
Many text mining software packages are marketed for security applications, especially monitoring and analysis of online plain text sources such as Internet news, blogs, etc. for national security purposes. It is also involved in the study of text encryption/decryption.
Biomedical applications
A range of text mining applications in the biomedical literature has been described, including computational approaches to assist with studies in protein docking, protein interactions, and protein-disease associations. In addition, with large patient textual datasets in the clinical field, such as demographic information in population studies and adverse event reports, text mining can facilitate clinical studies and precision medicine. Text mining algorithms can facilitate the stratification and indexing of specific clinical events in large patient textual datasets of symptoms, side effects, and comorbidities from electronic health records, event reports, and reports from specific diagnostic tests. One online text mining application in the biomedical literature is PubGene, a publicly accessible search engine that combines biomedical text mining with network visualization. GoPubMed is a knowledge-based search engine for biomedical texts. Text mining techniques also enable the extraction of unknown knowledge from unstructured documents in the clinical domain.
Software applications
Text mining methods and software are also being researched and developed by major firms, including IBM and Microsoft, to further automate the mining and analysis processes, and by different firms working in the area of search and indexing in general as a way to improve their results. Within the public sector, much effort has been concentrated on creating software for tracking and monitoring terrorist activities. For study purposes, Weka software is one of the most popular options in the scientific world, acting as an excellent entry point for beginners. For Python programmers, there is an excellent toolkit called NLTK for more general purposes. For more advanced programmers, there is also the Gensim library, which focuses on word embedding-based text representations.
Online media applications
Text mining is being used by large media companies, such as the Tribune Company, to clarify information and to provide readers with greater search experiences, which in turn increases site "stickiness" and revenue. Additionally, on the back end, editors are benefiting by being able to share, associate and package news across properties, significantly increasing opportunities to monetize content.
Business and marketing applications
Text analytics is being used in business, particularly, in marketing, such as in customer relationship management. Coussement and Van den Poel (2008) apply it to improve predictive analytics models for customer churn (customer attrition). Text mining is also being applied in stock returns prediction.
Sentiment analysis
Sentiment analysis may involve analysis of movie reviews for estimating how favorable a review is for a movie.
Such an analysis may need a labeled data set or labeling of the affectivity of words.
Resources for affectivity of words and concepts have been made for WordNet and ConceptNet, respectively.
Text has been used to detect emotions in the related area of affective computing. Text-based approaches to affective computing have been used on multiple corpora such as student evaluations, children's stories and news stories.
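For example, a lexicon-based sentiment score for a movie review can be obtained with NLTK's VADER analyzer. The sketch below assumes the vader_lexicon resource has been downloaded and is not tied to any particular study mentioned above.

    from nltk.sentiment import SentimentIntensityAnalyzer
    # import nltk; nltk.download("vader_lexicon")   # one-time resource download

    sia = SentimentIntensityAnalyzer()
    review = "The film is beautifully shot, but the plot drags and the ending is disappointing."
    scores = sia.polarity_scores(review)
    print(scores)                                   # 'neg', 'neu', 'pos' and a 'compound' score
    print("favorable" if scores["compound"] > 0 else "unfavorable")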
Scientific literature mining and academic applications
The issue of text mining is of importance to publishers who hold large databases of information needing indexing for retrieval. This is especially true in scientific disciplines, in which highly specific information is often contained within the written text. Therefore, initiatives have been taken such as Nature's proposal for an Open Text Mining Interface (OTMI) and the National Institutes of Health's common Journal Publishing Document Type Definition (DTD) that would provide semantic cues to machines to answer specific queries contained within the text without removing publisher barriers to public access.
Academic institutions have also become involved in the text mining initiative:
The National Centre for Text Mining (NaCTeM), is the first publicly funded text mining centre in the world. NaCTeM is operated by the University of Manchester in close collaboration with the Tsujii Lab, University of Tokyo. NaCTeM provides customised tools, research facilities and offers advice to the academic community. They are funded by the Joint Information Systems Committee (JISC) and two of the UK research councils (EPSRC & BBSRC). With an initial focus on text mining in the biological and biomedical sciences, research has since expanded into the areas of social sciences.
In the United States, the School of Information at University of California, Berkeley is developing a program called BioText to assist biology researchers in text mining and analysis.
The Text Analysis Portal for Research (TAPoR), currently housed at the University of Alberta, is a scholarly project to catalogue text analysis applications and create a gateway for researchers new to the practice.
Methods for scientific literature mining
Computational methods have been developed to assist with information retrieval from scientific literature. Published approaches include methods for searching, determining novelty, and clarifying homonyms among technical reports.
Digital humanities and computational sociology
The automatic analysis of vast textual corpora has created the possibility for scholars to analyze millions of documents in multiple languages with very limited manual intervention. Key enabling technologies have been parsing, machine translation, topic categorization, and machine learning.
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by quantitative narrative analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object.
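A simplified subject-verb-object extraction of this kind can be sketched with the spaCy library. The dependency labels used are those of spaCy's English models, the en_core_web_sm model is assumed to be installed, and the example sentences are illustrative only.

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The committee approved the budget. The senator criticized the committee.")

    triplets = []
    for token in doc:
        if token.dep_ == "nsubj" and token.head.pos_ == "VERB":
            verb = token.head
            objects = [c for c in verb.children if c.dep_ in ("dobj", "obj")]
            for obj in objects:
                triplets.append((token.text, verb.lemma_, obj.text))

    print(triplets)   # e.g. [('committee', 'approve', 'budget'), ('senator', 'criticize', 'committee')]

Aggregating such triplets over a large corpus yields the actor networks described above, which can then be analyzed with standard network tools.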
Content analysis has been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items. Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents. The analysis of readability, gender bias and topic bias was demonstrated in Flaounas et al. showing how different topics have different gender biases and levels of readability; the possibility to detect mood patterns in a vast population by analyzing Twitter content was demonstrated as well.
Software
Text mining computer programs are available from many commercial and open source companies and sources. See List of text mining software.
Intellectual property law
Situation in Europe
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is illegal. In the UK in 2014, on the recommendation of the Hargreaves review, the government amended copyright law to allow text mining as a limitation and exception. It was the second country in the world to do so, following Japan, which introduced a mining-specific exception in 2009. However, owing to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law does not allow this provision to be overridden by contractual terms and conditions.
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licenses for Europe. The fact that the focus on the solution to this legal issue was licenses, and not limitations and exceptions to copyright law, led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.
Situation in the United States
US copyright law, and in particular its fair use provisions, means that text mining in America, as well as other fair use countries such as Israel, Taiwan and South Korea, is viewed as being legal. As text mining is transformative, meaning that it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one such use being text and data mining.
Implications
Until recently, websites most often used text-based searches, which only found documents containing specific user-defined words or phrases. Now, through use of a semantic web, text mining can find content based on meaning and context (rather than just by a specific word). Additionally, text mining software can be used to build large dossiers of information about specific people and events. For example, large datasets based on data extracted from news reports can be built to facilitate social networks analysis or counter-intelligence. In effect, the text mining software may act in a capacity similar to an intelligence analyst or research librarian, albeit with a more limited scope of analysis. Text mining is also used in some email spam filters as a way of determining the characteristics of messages that are likely to be advertisements or other unwanted material. Text mining plays an important role in determining financial market sentiment.
Future
Increasing interest is being paid to multilingual data mining: the ability to gain information across languages and cluster similar items from different linguistic sources according to their meaning.
The challenge of exploiting the large proportion of enterprise information that originates in "unstructured" form has been recognized for decades. It is recognized in the earliest definition of business intelligence (BI), in an October 1958 IBM Journal article by H.P. Luhn, A Business Intelligence System, which describes a system that will:
"...utilize data-processing machines for auto-abstracting and auto-encoding of documents and for creating interest profiles for each of the 'action points' in an organization. Both incoming and internally generated documents are automatically abstracted, characterized by a word pattern, and sent automatically to appropriate action points."
Yet as management information systems developed starting in the 1960s, and as BI emerged in the '80s and '90s as a software category and field of practice, the emphasis was on numerical data stored in relational databases. This is not surprising: text in "unstructured" documents is hard to process. The emergence of text analytics in its current form stems from a refocusing of research in the late 1990s from algorithm development to application, as described by Prof. Marti A. Hearst in the paper Untangling Text Data Mining:
For almost a decade the computational linguistics community has viewed large text collections as a resource to be tapped in order to produce better text analysis algorithms. In this paper, I have attempted to suggest a new emphasis: the use of large online text collections to discover new facts and trends about the world itself. I suggest that to make progress we do not need fully artificial intelligent text analysis; rather, a mixture of computationally-driven and user-guided analysis may open the door to exciting new results.
Hearst's 1999 statement of need fairly well describes the state of text analytics technology and practice a decade later.
See also
Concept mining
Document processing
Full text search
List of text mining software
Market sentiment
Name resolution (semantics and text extraction)
Named entity recognition
News analytics
Ontology learning
Record linkage
Sequential pattern mining (string and sequence mining)
w-shingling
Web mining, a task that may involve text mining (e.g. first find appropriate web pages by classifying crawled web pages, then extract the desired information from the text content of these pages considered relevant)
References
Citations
Sources
Ananiadou, S. and McNaught, J. (Editors) (2006). Text Mining for Biology and Biomedicine. Artech House Books.
Bilisoly, R. (2008). Practical Text Mining with Perl. New York: John Wiley & Sons.
Feldman, R., and Sanger, J. (2006). The Text Mining Handbook. New York: Cambridge University Press.
Hotho, A., Nürnberger, A. and Paaß, G. (2005). "A brief survey of text mining". In Ldv Forum, Vol. 20(1), p. 19-62
Indurkhya, N., and Damerau, F. (2010). Handbook Of Natural Language Processing, 2nd Edition. Boca Raton, FL: CRC Press.
Kao, A., and Poteet, S. (Editors). Natural Language Processing and Text Mining. Springer.
Konchady, M. Text Mining Application Programming (Programming Series). Charles River Media.
Manning, C., and Schutze, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
Miner, G., Elder, J., Hill, T., Nisbet, R., Delen, D. and Fast, A. (2012). Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications. Elsevier Academic Press.
McKnight, W. (2005). "Building business intelligence: Text data mining in business intelligence". DM Review, 21-22.
Srivastava, A., and Sahami, M. (2009). Text Mining: Classification, Clustering, and Applications. Boca Raton, FL: CRC Press.
Zanasi, A. (Editor) (2007). Text Mining and its Applications to Intelligence, CRM and Knowledge Management. WIT Press.
External links
Marti Hearst: What Is Text Mining? (October, 2003)
Automatic Content Extraction, Linguistic Data Consortium
Automatic Content Extraction, NIST
Artificial intelligence applications
Applied data mining
Computational linguistics
Natural language processing
Statistical natural language processing
Text
Verisign
Verisign Inc. is an American company based in Reston, Virginia, United States, that operates a diverse array of network infrastructure, including two of the Internet's thirteen root nameservers, the authoritative registry for the .com, .net, and .name generic top-level domains and the .cc and .tv country-code top-level domains, and the back-end systems for the .jobs, .gov, and .edu sponsored top-level domains. Verisign also offers a range of security services, including managed DNS, distributed denial-of-service (DDoS) attack mitigation and cyber-threat reporting.
In 2010, Verisign sold its authentication business unit – which included Secure Sockets Layer (SSL) certificate, public key infrastructure (PKI), Verisign Trust Seal, and Verisign Identity Protection (VIP) services – to Symantec for $1.28 billion. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Symantec later sold this unit to DigiCert in 2017. On October 25, 2018, NeuStar, Inc. acquired VeriSign’s Security Service Customer Contracts. The acquisition effectively transferred Verisign Inc.’s Distributed Denial of Service (DDoS) protection, Managed DNS, DNS Firewall and fee-based Recursive DNS services customer contracts.
Verisign's former chief financial officer (CFO) Brian Robins announced in August 2010 that the company would move from its original location of Mountain View, California, to Dulles in Northern Virginia by 2011 due to 95% of the company's business being on the East Coast. The company is incorporated in Delaware.
History
Verisign was founded in 1995 as a spin-off of the RSA Security certification services business. The new company received licenses to key cryptographic patents held by RSA (set to expire in 2000) and a time-limited non-compete agreement. The new company served as a certificate authority (CA) and its initial mission was "providing trust for the Internet and Electronic Commerce through our Digital Authentication services and products". Prior to selling its certificate business to Symantec in 2010, Verisign had more than 3 million certificates in operation for everything from military to financial services and retail applications, making it the largest CA in the world.
In 2000, Verisign acquired Network Solutions, which operated the .com, .net and .org TLDs under agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and the United States Department of Commerce. Those core registry functions formed the basis for Verisign's naming division, which by then had become the company's largest and most significant business unit. In 2002, Verisign was charged with violation of the Securities Exchange Act. Verisign divested the Network Solutions retail (domain name registrar) business in 2003, retaining the domain name registry (wholesale) function as its core Internet addressing business.
For the year ended December 31, 2010, Verisign reported revenue of $681 million, up 10% from $616 million in 2009. Verisign operates two businesses, Naming Services, which encompasses the operation of top-level domains and critical Internet infrastructure, and Network Intelligence and Availability (NIA) Services, which encompasses DDoS mitigation, managed DNS and threat intelligence.
On August 9, 2010, Symantec completed its approximately $1.28 billion acquisition of Verisign's authentication business, including the Secure Sockets Layer (SSL) Certificate Services, the Public Key Infrastructure (PKI) Services, the Verisign Trust Services, the Verisign Identity Protection (VIP) Authentication Service, and the majority stake in Verisign Japan. The deal capped a multi-year effort by Verisign to narrow its focus to its core infrastructure and security business units. Following ongoing controversies regarding Symantec's handling of certificate validation, which culminated in Google untrusting Symantec-issued certificates in its Chrome web browser, Symantec sold this unit to DigiCert in 2017 for $950 million.
Verisign's share price tumbled in early 2014, hastened by the U.S. government's announcement that it would "relinquish oversight of the Internet's domain-naming system to a non-government entity". Ultimately ICANN chose to continue VeriSign's role as the root zone maintainer and the two entered into a new contract in 2016.
Naming services
Verisign's core business is its naming services division. The division operates the authoritative domain name registries for two of the Internet's most important top-level domains, .com and .net. It is also the contracted registry operator for the .name and .gov top-level domains as well as the country code top-level domains .cc (Cocos Islands) and .tv (Tuvalu). In addition, Verisign is the primary technical subcontractor for the .edu and .jobs top-level domains for their respective registry operators, which are non-profit organizations; in this role Verisign maintains the zone files for these particular domains and hosts the domains from their domain servers. Registry operators are the "wholesalers" of Internet domain names, while domain name registrars act as the "retailers", working directly with consumers to register a domain name address.
Verisign also operates two of the Internet's thirteen "root servers" which are identified by the letters A-M (Verisign operates the “A” and “J” root servers). The root servers form the top of the hierarchical Domain Name System that supports most modern Internet communication. Verisign also generates the globally recognized root zone file and is also responsible for processing changes to that file once they are ordered by ICANN via IANA and approved by the U.S. Department of Commerce. Changes to the root zone were originally distributed via the A root server, but now they are distributed to all thirteen servers via a separate distribution system which Verisign maintains. Verisign is the only one of the 12 root server operators to operate more than one of the thirteen root nameservers. The A and J root servers are "anycasted” and are no longer operated from any of the company's own datacenters as a means to increase redundancy and availability and mitigate the threat of a single point of failure. In 2016 the Department of Commerce ended its role in managing the Internet's DNS and transferred full control to ICANN. While this initially negatively impacted VeriSign's stock, ICANN eventually chose to contract with Verisign to continue its role as the root zone maintainer.
VeriSign's naming services division dates back to 1993, when Network Solutions was awarded a contract by the National Science Foundation to manage and operate the civilian side of the Internet's domain name registrations. Network Solutions was the sole registrar for all of the Internet's non-governmental generic top-level domains until 1998, when ICANN was established and the new system of competitive registrars was implemented. As a result of these new policies, Network Solutions divided itself into two divisions. The NSI Registry division was established to manage the authoritative registries that the company would still operate and was separated from the customer-facing registrar business that would have to compete with other registrars. The divisions were even geographically split, with the NSI Registry moving from the corporate headquarters in Herndon, Virginia, to nearby Dulles, Virginia. In 2000, VeriSign purchased Network Solutions, taking over its role in the Internet DNS. The NSI Registry division eventually became VeriSign's naming services division, while the remainder of Network Solutions was later sold by Verisign in 2003 to Pivotal Equity Group.
Company properties
Following the sale of its authentication services division in 2010, Verisign relocated from its former headquarters in Mountain View, California, to the headquarters of the naming division in Sterling, Virginia (originally NSI Registry's headquarters). Verisign began shopping that year for a new permanent home shortly after moving. They signed a lease for 12061 Bluemont Way in Reston, the former Sallie Mae headquarters, in 2010 and decided to purchase the building in September 2011. They have since terminated their lease of their current space in two buildings at Lakeside@Loudoun Technology Center. The company completed its move at the end of November 2011. The new headquarters is located in the Reston Town Center development which has become a major commercial and business hub for the region. In addition to its Reston headquarters, Verisign owns three data center properties. One at 22340 Dresden Street in Dulles, Virginia not far from its corporate headquarters (within the large Broad Run Technology Park), one at 21 Boulden Circle in New Castle, Delaware, and a third in Fribourg, Switzerland. Their three data centers are mirrored so that a disaster at one data center has a minimal impact on operations. Verisign also leases an office suite in downtown Washington, D.C., on K street where its government relations office is located. It also has leased server space in numerous internet data centers around the world where the DNS constellation resolution sites are located, mostly at major internet peering facilities. One such facility is at the Equinix Ashburn Datacenter in Ashburn, Virginia, one of the world's largest datacenters and internet transit hubs.
Controversies
2001: Code signing certificate mistake
In January 2001, Verisign mistakenly issued two Class 3 code signing certificates to an individual claiming to be an employee of Microsoft. The mistake was not discovered and the certificates were not revoked until two weeks later during a routine audit. Because Verisign code-signing certificates do not specify a Certificate Revocation List Distribution Point, there was no way for them to be automatically detected as having been revoked, placing Microsoft's customers at risk. Microsoft had to later release a special security patch in order to revoke the certificates and mark them as being fraudulent.
2002: Domain transfer law suit
In 2002, Verisign was sued for domain slamming – transferring domains from other registrars to themselves by making the registrants believe they were merely renewing their domain name. Although they were found not to have broken the law, they were barred from suggesting that a domain was about to expire or claim that a transfer was actually a renewal.
2003: Site Finder legal case
In September 2003, Verisign introduced a service called Site Finder, which redirected Web browsers to a search service when users attempted to go to non-existent or domain names. ICANN asserted that Verisign had overstepped the terms of its contract with the U.S. Department of Commerce, which in essence grants Verisign the right to operate the DNS for and , and Verisign shut down the service. Subsequently, Verisign filed a lawsuit against ICANN in February 2004, seeking to gain clarity over what services it could offer in the context of its contract with ICANN. The claim was moved from federal to California state court in August 2004. In late 2005 Verisign and ICANN announced a proposed settlement which defined a process for the introduction of new registry services in the registry. The documents concerning these settlements are available at ICANN.org. The ICANN comments mailing list archive documents some of the criticisms that have been raised regarding the settlement. Additionally Verisign was involved in the matter decided by the Ninth circuit.
2003: Gives up .org domain
In keeping with ICANN's charter to introduce competition to the domain name marketplace, Verisign agreed to give up its operation of the .org top-level domain in 2003 in exchange for a continuation of its contract to operate .com, which at the time had more than 34 million registered addresses.
2005: Retains .net domain
In mid-2005, the existing contract for the operation of .net expired and five companies, including Verisign, bid for management of it. Verisign enlisted numerous IT and telecom heavyweights including Microsoft, IBM, Sun Microsystems, MCI, and others, to assert that Verisign had a perfect record operating .net. They proposed Verisign continue to manage .net due to its critical importance as the domain underlying numerous "backbone" network services. Verisign was also aided by the fact that several of the other bidders were foreign-based, which raised concerns in national security circles. On June 8, 2005, ICANN announced that Verisign had been approved to operate .net until 2011. More information on the bidding process is available at ICANN. On July 1, 2011, ICANN announced that VeriSign's approval to operate .net was extended another six years, until 2017.
2010: Data breach and disclosure controversy
In February 2012 Verisign revealed that their network security had been repeatedly breached in 2010. Verisign stated that the breach did not impact the Domain Name System (DNS) that they maintain, but would not provide details about the loss of data. Verisign was widely criticized for not disclosing the breach earlier and apparently attempting to hide the news in an October 2011 SEC filing.
Because of the lack of details provided by Verisign, it was not clear whether the breach impacted the Certificate Signing business, acquired by Symantec in late 2010. According to Oliver Lavery, the Director of Security and Research for nCircle "Can we trust any site using Verisign SSL certificates? Without more clarity, the logical answer is no”.
2010: Web site domain seizures
On November 29, 2010, the U.S. Immigration and Customs Enforcement (U.S. ICE) issued seizure orders against 82 web sites with Internet addresses that were reported to be involved in the illegal sale and distribution of counterfeit goods. As the registry operator for the affected domains, Verisign performed the required takedowns of the 82 sites under order from law enforcement. InformationWeek reported that "Verisign will say only that it received sealed court orders directing certain actions to be taken with respect to specific domain names". The removal of the 82 websites was cited as an impetus for the launch of "the Dot-P2P Project" in order to create a decentralized DNS service without centralized registry operators. Following the disappearance of WikiLeaks during the following week and its forced move to wikileaks.ch, a Swiss domain, the Electronic Frontier Foundation warned of the dangers of having key pieces of Internet infrastructure such as DNS name translation under corporate control.
2012: Web site domain seizure
In March 2012, the US government declared that it has the right to seize any domains ending in .com, .net, .cc, .tv, .name and .org because the companies administering the domains are based in the US. The US government can seize the domains ending in .com, .net, .cc, .tv and .name by serving a court-order on VeriSign, which manages those domains. The .org domain is managed by the Virginia based non-profit Public Interest Registry. In March 2012, Verisign shut down the sports-betting site Bodog.com after receiving a court order, even though the domain name was registered to a Canadian company.
References
External links
Digicert SSL Certificates - formerly from Verisign
Oral history interview with James Bidzos, Charles Babbage Institute University of Minnesota, Minneapolis. Bidzos discusses his leadership of software security firm RSA Data Security as it sought to commercialize encryption technology as well as his role in creating the RSA Conference and founding Verisign. Oral history interview 2004, Mill Valley, California.
Internet technology companies of the United States
American companies established in 1995
Domain Name System
Computer companies established in 1995
Companies based in Reston, Virginia
Former certificate authorities
Radio-frequency identification
Domain name registries
1995 establishments in Virginia
DDoS mitigation companies
1998 initial public offerings
Corporate spin-offs
Coding theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
There are four types of coding:
Data compression (or source coding)
Error control (or channel coding)
Cryptographic coding
Line coding
Data compression attempts to remove redundancy from the data from a source in order to transmit it more efficiently. For example, ZIP data compression makes data files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination.
Error correction adds extra data bits to make the transmission of data more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.
History of coding theory
In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.
The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth.
Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance.
In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3.
Source coding
The aim of source coding is to take the source data and make it smaller.
Definition
Data can be seen as a random variable X : Ω → 𝒳, where x ∈ 𝒳 appears with probability P[X = x].
Data are encoded by strings (words) over an alphabet Σ.
A code is a function C : 𝒳 → Σ* (or Σ+ if the empty string is not part of the alphabet).
C(x) is the code word associated with x.
The length of the code word C(x) is written as l(C(x)).
The expected length of a code C is l(C) = Σ_{x ∈ 𝒳} l(C(x)) · P[X = x].
The concatenation of code words is C(x1, …, xk) = C(x1) C(x2) … C(xk).
The code word of the empty string is the empty string itself: C(ε) = ε.
Properties
C is non-singular if it is injective.
C is uniquely decodable if its extension C* (which maps each sequence x1…xk to the concatenation C(x1)…C(xk)) is injective.
C is instantaneous (prefix-free) if no code word C(x1) is a prefix of another code word C(x2) (and vice versa).
Principle
Entropy of a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information.
Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding.
Various techniques used by source coding schemes try to achieve the limit of entropy of the source: C(x) ≥ H(x), where H(x) is the entropy of the source (its bitrate) and C(x) is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source.
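The sketch below illustrates this bound numerically for an assumed toy symbol distribution: it computes the source entropy H(X) and the expected length of a Huffman code built for the same probabilities. The specific symbols and probabilities are made up for illustration.

```python
# Minimal sketch: source entropy vs. expected length of a Huffman code
# for an assumed toy symbol distribution.
import heapq
import math

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Entropy H(X) in bits per symbol.
H = -sum(p * math.log2(p) for p in probs.values())

# Build a Huffman code with a priority queue of (probability, tie-breaker, code map).
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, code1 = heapq.heappop(heap)
    p2, _, code2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in code1.items()}
    merged.update({s: "1" + c for s, c in code2.items()})
    counter += 1
    heapq.heappush(heap, (p1 + p2, counter, merged))
code = heap[0][2]

# Expected code length l(C) = sum over symbols of P[x] * l(C(x)).
L = sum(probs[s] * len(cw) for s, cw in code.items())

print(f"entropy H(X)         = {H:.3f} bits/symbol")
print(f"expected length l(C) = {L:.3f} bits/symbol  (l(C) >= H(X))")
```

For this dyadic distribution the Huffman code meets the entropy exactly (1.75 bits per symbol); for general distributions the expected length can exceed the entropy by up to one bit per symbol.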
Example
Facsimile transmission uses a simple run length code. Source coding removes all data superfluous to the need of the transmitter, decreasing the bandwidth required for transmission.
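A minimal sketch of the run-length idea, assuming a toy black-and-white scanline represented as a string of 0s and 1s; real fax standards such as ITU-T T.4 use more elaborate codes on top of the run lengths.

```python
# Minimal sketch: run-length encoding of a fax-like black/white scanline.
from itertools import groupby

def rle_encode(bits):
    # Encode as (symbol, run length) pairs.
    return [(symbol, len(list(run))) for symbol, run in groupby(bits)]

def rle_decode(pairs):
    return "".join(symbol * length for symbol, length in pairs)

scanline = "0" * 8 + "1" * 5 + "0" * 13 + "1" * 3  # mostly white (0) with short black runs
encoded = rle_encode(scanline)
print(encoded)                        # [('0', 8), ('1', 5), ('0', 13), ('1', 3)]
assert rle_decode(encoded) == scanline
```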
Channel coding
The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words and can correct or at least detect many errors. While not mutually exclusive, performance in these areas is a trade off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches.
CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disk.
Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we will examine the three repetitions bit by bit and take a majority vote. The twist on this is that we don't merely send the bits in order. We interleave them. The block of data bits is first divided into 4 smaller blocks. Then we cycle through the block and send one bit from the first, then the second, etc. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code, this may not appear effective. However, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used.
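The following sketch illustrates the principle with made-up data: each bit is repeated three times, a simple block interleaver spreads the repetitions across the stream, and a burst of channel errors is then corrected by the bit-by-bit majority vote. The block sizes and error positions are illustrative assumptions, not the CD's actual cross-interleaved Reed–Solomon scheme.

```python
# Minimal sketch: a 3x repetition code plus a simple block interleaver.
# A burst of channel errors is spread across different code words by the
# interleaver, so the bit-by-bit majority vote can still recover the data.

def repeat3_encode(bits):
    return [b for b in bits for _ in range(3)]

def repeat3_decode(coded):
    # Majority vote over each group of three received bits.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

def interleave(bits, rows):
    # Write the stream into a matrix row by row, read it out column by column.
    cols = len(bits) // rows
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows):
    cols = len(bits) // rows
    out = [0] * len(bits)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
coded = repeat3_encode(data)          # 24 bits
sent = interleave(coded, rows=3)      # spread each repetition across the stream

# Channel adds a burst of three consecutive bit flips.
received = sent[:]
for i in range(10, 13):
    received[i] ^= 1

decoded = repeat3_decode(deinterleave(received, rows=3))
print(decoded == data)  # True: the burst landed in three different triples
```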
Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise, present in the telephone network and also modeled better as a continuous disturbance. Cell phones are subject to rapid fading. The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again there are a class of channel codes that are designed to combat fading.
Linear codes
The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched.
Algebraic coding theory is basically divided into two major types of codes:
Linear block codes
Convolutional codes
It analyzes the following three properties of a code – mainly:
Code word length
Total number of valid code words
The minimum distance between two valid code words, using mainly the Hamming distance, sometimes also other distances like the Lee distance
Linear block codes
Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property.
Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n,m,dmin) where
n is the length of the codeword, in symbols,
m is the number of source symbols that will be used for encoding at once,
dmin is the minimum Hamming distance for the code.
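As a concrete illustration, the sketch below enumerates the binary Hamming (7,4) code from one common systematic generator matrix and checks its parameters (n, m, dmin) = (7, 4, 3) by brute force; the particular matrix layout is one conventional choice, not the only one.

```python
# Minimal sketch: the binary Hamming (7,4) block code, illustrating the
# parameters (n, m, dmin) = (7, 4, 3) by brute-force enumeration.
from itertools import product

# One common systematic generator matrix G: codeword = message · G (mod 2).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(msg) for msg in product([0, 1], repeat=4)]

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

dmin = min(hamming_distance(a, b)
           for i, a in enumerate(codewords)
           for b in codewords[i + 1:])

print(f"n = 7, m = 4, number of valid code words = {len(codewords)}")
print(f"minimum Hamming distance dmin = {dmin}")  # 3: corrects any single-bit error
```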
There are many types of linear block codes, such as
Cyclic codes (e.g., Hamming codes)
Repetition codes
Parity codes
Polynomial codes (e.g., BCH codes)
Reed–Solomon codes
Algebraic geometric codes
Reed–Muller codes
Perfect codes
Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12) Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r − 1, 2^r − 1 − r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes.
Another code property is the number of neighbors that a single codeword may have.
Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.
Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping, one of the best-known shaping codes.
Convolutional codes
The idea behind a convolutional code is to make every codeword symbol be the weighted sum of the various input message symbols. This is like convolution used in LTI systems to find the output of a system, when you know the input and impulse response.
So the output of a convolutional encoder is generally found by convolving the input bits against the states of the encoder's shift registers.
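A minimal sketch of such an encoder, assuming the commonly cited rate-1/2, constraint-length-3 code with generator polynomials 7 and 5 (octal); real systems add details such as puncturing and more careful termination.

```python
# Minimal sketch: a rate-1/2, constraint-length-3 convolutional encoder with
# generator polynomials 111 and 101 (octal 7 and 5). Each output pair is a
# mod-2 convolution (XOR) of the current input bit with the bits held in the
# encoder's shift register.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                # two memory elements (constraint length 3)
    out = []
    for b in bits + [0, 0]:       # append tail bits to flush the register
        window = [b] + state
        out.append(sum(x * y for x, y in zip(window, g1)) % 2)
        out.append(sum(x * y for x, y in zip(window, g2)) % 2)
        state = [b] + state[:-1]  # shift the register
    return out

message = [1, 0, 1, 1]
print(conv_encode(message))  # [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```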
Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. In many cases, they generally offer greater simplicity of implementation over a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normally XOR gates. The decoder can be implemented in software or firmware.
The Viterbi algorithm is the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load. They rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low noise environments.
Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices.
Cryptographic coding
Cryptography or cryptographic coding is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that block adversaries; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
Line coding
A line code (also called digital baseband modulation or digital baseband transmission method) is a code chosen for use within a communications system for baseband transmission purposes. Line coding is often used for digital data transport.
Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of a digital data on a transmission link is called line encoding. The common types of line encoding are unipolar, polar, bipolar, and Manchester encoding.
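The sketch below shows these four schemes applied to a short made-up bit stream, representing each as a list of idealised signal levels; polarity and timing conventions differ between standards, so this is illustrative only.

```python
# Minimal sketch: common line codes for the bit stream 1 0 1 1 0, shown as
# lists of idealised signal levels (+1, 0, -1). Conventions vary in practice.

def unipolar(bits):              # 1 -> +1, 0 -> 0
    return [+1 if b else 0 for b in bits]

def polar_nrz(bits):             # 1 -> +1, 0 -> -1
    return [+1 if b else -1 for b in bits]

def bipolar_ami(bits):           # 1 -> alternating +1/-1, 0 -> 0
    out, level = [], +1
    for b in bits:
        if b:
            out.append(level)
            level = -level
        else:
            out.append(0)
    return out

def manchester(bits):            # each bit becomes a mid-bit transition
    out = []
    for b in bits:
        out.extend([-1, +1] if b else [+1, -1])
    return out

bits = [1, 0, 1, 1, 0]
for name, coder in [("unipolar", unipolar), ("polar NRZ", polar_nrz),
                    ("bipolar AMI", bipolar_ami), ("Manchester", manchester)]:
    print(f"{name:12s} {coder(bits)}")
```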
Other applications of coding theory
Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected and that multiple signals can be sent on the same channel.
Another application of codes, used in some mobile phone systems, is code-division multiple access (CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones. When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as a low-level noise.
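A toy illustration of the spreading idea, assuming two users, length-8 Walsh codes and a noiseless channel; real CDMA systems use much longer, carefully designed sequences and must handle noise, power control and synchronisation.

```python
# Minimal sketch of direct-sequence CDMA: two users spread their data bits
# with orthogonal Walsh codes, the chips are summed on a shared channel, and
# each receiver recovers its own bits by correlating with its own code.

WALSH = {
    "user_a": [+1, +1, +1, +1, -1, -1, -1, -1],
    "user_b": [+1, -1, +1, -1, +1, -1, +1, -1],
}

def spread(bits, code):
    # Map bit 1 -> +1, bit 0 -> -1, then multiply by the spreading code.
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(signal, code):
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

data_a = [1, 0, 1]
data_b = [0, 0, 1]

# Shared channel: chip-by-chip sum of both users' transmissions.
channel = [a + b for a, b in zip(spread(data_a, WALSH["user_a"]),
                                 spread(data_b, WALSH["user_b"]))]

print(despread(channel, WALSH["user_a"]) == data_a)  # True
print(despread(channel, WALSH["user_b"]) == data_b)  # True
```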
Another general class of codes are the automatic repeat-request (ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplest wide area network protocols use ARQ. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet. Is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP.
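The sketch below simulates a bare-bones stop-and-wait ARQ exchange with a simple checksum and a one-bit sequence number; it is an illustrative toy, not the behaviour of any particular protocol such as TCP or SDLC.

```python
# Minimal sketch of stop-and-wait ARQ: the sender adds a check value and a
# 1-bit sequence number; when the check fails at the receiver, the frame is
# retransmitted. The sequence number would let the receiver discard duplicate
# frames (not exercised here, where only the check value gets corrupted).
import random

random.seed(1)

def checksum(payload):
    return sum(payload.encode()) % 256

def channel(frame, corrupt_prob=0.3):
    # Randomly corrupt the frame's check value to model channel errors.
    if random.random() < corrupt_prob:
        return {**frame, "check": (frame["check"] + 1) % 256}
    return frame

def send(messages):
    delivered, expected_seq = [], 0
    for i, payload in enumerate(messages):
        seq = i % 2
        while True:
            frame = channel({"seq": seq, "payload": payload,
                             "check": checksum(payload)})
            if frame["check"] != checksum(frame["payload"]):
                continue                      # check failed: retransmit
            if frame["seq"] == expected_seq:  # new frame: accept it
                delivered.append(frame["payload"])
                expected_seq ^= 1
            break                             # acknowledged
    return delivered

msgs = ["hello", "coding", "theory"]
print(send(msgs) == msgs)  # True
```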
Group testing
Group testing uses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in the Second World War when the United States Army Air Forces needed to test its soldiers for syphilis.
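A minimal sketch of the adaptive, binary-splitting approach, assuming a perfectly reliable pooled test; it locates a handful of defective items among a thousand with a few dozen tests rather than a thousand individual ones.

```python
# Minimal sketch of adaptive group testing: recursively split any pool whose
# test comes back positive until the defective items are isolated.

def pooled_test(group, defectives):
    return any(item in defectives for item in group)   # one "test" of a pool

def find_defectives(items, defectives, counter):
    counter[0] += 1
    if not pooled_test(items, defectives):
        return []                       # whole pool is clean
    if len(items) == 1:
        return list(items)              # isolated a defective item
    mid = len(items) // 2
    return (find_defectives(items[:mid], defectives, counter) +
            find_defectives(items[mid:], defectives, counter))

population = list(range(1000))
defectives = {137, 842}
tests_used = [0]
found = find_defectives(population, defectives, tests_used)

print(found)          # [137, 842]
print(tests_used[0])  # far fewer than 1000 individual tests
```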
Analog coding
Information is encoded analogously in the neural networks of brains, in analog signal processing, and analog electronics. Aspects of analog coding include analog error correction,
analog data compression and analog encryption.
Neural coding
Neural coding is a neuroscience-related field concerned with how sensory and other information is represented in the brain by networks of neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information, and that neurons follow the principles of information theory and compress information, and detect and correct
errors in the signals that are sent throughout the brain and wider nervous system.
See also
Coding gain
Covering code
Error correction code
Folded Reed–Solomon code
Group testing
Hamming distance, Hamming weight
Lee distance
List of algebraic coding theory topics
Spatial coding and MIMO in multiple antenna research
Spatial diversity coding is spatial coding that transmits replicas of the information signal along different spatial paths, so as to increase the reliability of the data transmission.
Spatial interference cancellation coding
Spatial multiplex coding
Timeline of information theory, data compression, and error correcting codes
Notes
References
Elwyn R. Berlekamp (2014), Algebraic Coding Theory, World Scientific Publishing (revised edition).
MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003.
Vera Pless (1982), Introduction to the Theory of Error-Correcting Codes, John Wiley & Sons, Inc.
Randy Yates, A Coding Theory Tutorial.
Error detection and correction
Sky UK
Sky UK Limited is a British broadcaster and telecommunications company that provides television and broadband Internet services, fixed line and mobile telephone services to consumers and businesses in the United Kingdom. It is a subsidiary of Sky Group and, from 2018 onwards, part of Comcast. It is the UK's largest pay-TV broadcaster, with 12.7 million customers for its digital satellite TV platform as of the end of 2019. Sky's flagship products are Sky Q and the internet-based Sky Glass, and its flagship channels are Sky Showcase, Sky Sports and Sky Atlantic.
Formed as British Sky Broadcasting (BSkyB) in November 1990 through the merger of Sky Television and British Satellite Broadcasting, it grew into a major media company by the end of the decade, notably owning all the television broadcasting rights for the Premier League and almost all the domestic rights of Hollywood films. Following BSkyB's acquisition of Sky Italia and a majority interest in Sky Deutschland in 2014, its holding company British Sky Broadcasting Group plc changed its name to Sky plc (now Sky Group Limited). The UK subsidiary's name was changed from British Sky Broadcasting Limited to Sky UK Limited, and continuing to trade as "Sky".
Sky UK Limited is a wholly owned subsidiary of Comcast-owned Sky Group, with its current company directors (including that of Sky Ireland) being Executive Vice-President Stephen van Rooyen and Chief Commercial Officer (CCO) Lyssa McGowen. Its corporate headquarters are at the Sky Studios in Isleworth.
History
Origins
The present service can trace its heritage back to 1990, when BSkyB's predecessors, Sky Television and British Satellite Broadcasting, encrypted their respective film channels – Sky Movies and The Movie Channel – which required viewers to get decoding equipment and a subscription to watch the channels. After the two companies merged, subscribers could get access to both channels, and two years later, the sports channel Sky Sports also became encrypted.
Premier League football
In the autumn of 1991, talks were held over the broadcast rights for the Premier League for a five-year period from the 1992 season. ITV were the existing rights holders and fought hard to retain the new rights, increasing their offer from £18m to £34m per year to keep control. BSkyB joined forces with the BBC to make a counter bid: the BBC was given the highlights of most of the matches, while BSkyB, paying £304m for the Premier League rights, gained a monopoly on live matches, with up to 60 per year from the 1992 season. Murdoch described sport as a "battering ram" for pay-television, providing a strong customer base. A few weeks after the deal, ITV went to the High Court to seek an injunction, as it believed its bid details had been leaked before the decision was taken. ITV also asked the Office of Fair Trading to investigate, since it believed Rupert Murdoch's media empire had influenced the deal via its newspapers. A few days later, neither action had taken effect; ITV believed that BSkyB had been telephoned and informed of ITV's £262m bid, and that the Premier League had advised BSkyB to increase its counter bid.
BSkyB retained the rights with a £670m deal covering 1997–2001, but was challenged by ONdigital for the rights from 2001 to 2004 and was forced up to £1.1 billion, which gave it 66 live games a year.
Following a lengthy legal battle with the European Commission, which deemed the exclusivity of the rights to be against the interests of competition and the consumer, BSkyB's monopoly came to an end from the 2007–08 season. In May 2006, the Irish broadcaster Setanta Sports was awarded two of the six Premier League packages that the English FA offered to broadcasters. Sky picked up the remaining four for £1.3bn.
In February 2015, Sky bid £4.2bn for a package of 120 premier league games across the three seasons from 2016. This represented an increase of 70% on the previous contract and was said to be £1bn more than the company had expected to pay. The move has been followed by staff cuts, increased subscription prices (including 9% in Sky's family package) and the dropping of the 3D channel.
Sky Multichannels
In September 1993, BSkyB launched Sky Multichannels which was the present digital platform's analogue predecessor. Sky Multichannels was a subscription package that gave access not only to Sky's own channels but also those of third party broadcasters.
The service started on 1 September 1993. It was based on an idea by then CEO Sam Chisholm and chairman Rupert Murdoch of converting the company business strategy to an entirely fee-based concept. The new package included four channels formerly available free-to-air, broadcasting on Astra's satellites, as well as introducing new channels. The service continued until the closure of BSkyB's analogue service on 27 September 2001, due to the expansion of the Sky Digital platform after its launch three years before. Some of the channels did broadcast either in the clear or soft encrypted (whereby a Videocrypt decoder was required to decode, without a subscription card) prior to their addition to the Sky Multichannels package. Within two months of the launch, BSkyB gained 400,000 new subscribers, with the majority taking at least one premium channel as well, which helped BSkyB reach 3.5 million households by mid-1994. Michael Grade criticised the operations in front of the Select Committee on National Heritage, mainly for the lack of original programming on many of the new channels.
Launch of Sky Digital
BSkyB's digital service was officially launched on 1 October 1998 under the name Sky Digital, although small-scale tests were carried out before then. At this time the use of the Sky Digital brand made an important distinction between the new service and Sky's analogue services. Key selling points were the improvement in picture and sound quality, increased number of channels and an interactive service branded Open...., later called Sky Active. BSkyB competed with the ONdigital (later ITV Digital) terrestrial offering and cable services. Within 30 days, over 100,000 digiboxes had been sold, which helped bolster BSkyB's decision to give away free digiboxes and minidishes from May 1999.
In addition to most channels from the Sky Multichannels package, many of which broadcast additional hours on Sky Digital, Sky Digital launched with several new channels that were exclusive to the digital offer.
The switch-over from analogue to digital proceeded relatively quickly. In 1998, there were 6 million 'multichannel' TV homes in the UK (i.e. homes that watch television other than the traditional analogue terrestrial), and over half of these homes watched television using BSkyB's analogue service. BSkyB's digital service surpassed the analogue service in terms of subscribers in late 1999.
By June 2000 the service had 3.6 million subscribers, which gave BSkyB 8.988 million subscribers across all platforms. This substantial growth reflected BSkyB's 34% share of viewers in multi-channel homes (up from 13.4% from 1999).
BSkyB's analogue service ended in October 2001, and the digital service would eventually be marketed as just 'Sky'.
By June 2005, the number of digital subscribers had increased to 7.8m, and Sky produced 38,375 hours of sport in 2005.
In November 2005, in partnership with Vodafone, Sky Mobile TV was launched which was the UK's first commercially available mobile TV service. Vodafone live! customers with 3G enabled handsets would receive the service.
2010s
Sky's direct-to-home satellite service became available in 10 million homes in 2010, making it Europe's first pay-TV platform to achieve that milestone. Confirming it had reached its target, the broadcaster said its reach into 36% of households in the UK represented an audience of more than 25m people. The target was first announced in August 2004; since then an additional 2.4m customers had subscribed to Sky's direct-to-home service. Media commentators had debated whether the figure could be reached as the growth in subscriber numbers elsewhere in Europe flattened.
In December, the UK's parliament heard a claim that a subscription to Sky was 'often damaging' to welfare recipients, along with alcohol, tobacco and gambling. Conservative MP Alec Shelbrooke was proposing the payments of benefits and tax credits on a "Welfare Cash Card", in the style of the American Supplemental Nutrition Assistance Program, that could be used to buy only "essentials".
In 2016, Sky launched its new TV and entertainment service called Sky Q.
On 1 March 2018, it was reported that Sky UK had concluded successful negotiations with Netflix to offer Sky subscribers access to its international streaming service.
Comcast, the largest cable TV provider in the United States, outbid 21st Century Fox on 22 September 2018 in an auction for control of Sky UK. On 4 October 2018, Fox sold their stake to Comcast, giving the latter a 76.8% controlling stake. On 12 October 2018, Comcast announced it would compulsorily acquire the rest of Sky after its bid gained acceptance from 95.3% of the broadcaster's shareholders, with the company being delisted by early 2019. Sky was delisted on 7 November 2018 after Comcast acquired all remaining shares.
2020s
On 17 September 2020, Sky Arts became the first premium Sky channel to become available on the free-to-air terrestrial Freeview service, joining Sky News and a couple of channels which trace their lineage back to Flextech (Pick and Challenge).
On 28 July 2021, Sky announced that its flagship channel Sky One would shut down on 1 September, to be replaced by two channels; Sky Showcase, showing a mixture of content from other Sky Channels, and Sky Max, showing Sky's original programming and entertainment previously shown on Sky One.
On 7 October 2021, Sky announced a new all-in-one TV set called Sky Glass. It is designed to support streaming of Sky TV and streaming service shows over WiFi, eliminating the need for a satellite dish or box. It launched on 18 October 2021, with three sizes available: 43-inch, 55-inch and 65-inch.
Services
Digital terrestrial television
Sky initially faced competition from the ONdigital digital terrestrial television service (later renamed ITV Digital). ITV Digital failed for numerous reasons, including, but not limited to numerous administrative and technical failures, nervous investors after a large down-turn in the advertising market and the dot com crash, and Sky's aggressive marketing and domination of premium sporting rights.
While Sky had been excluded from being a part of the ONdigital consortium, thereby making them a competitor by default, Sky was able to join ITV Digital's free-to-air replacement, Freeview, in which it holds an equal stake with the BBC, ITV, Channel 4 and Arqiva. Prior to October 2005, three Sky channels were available on this platform: Sky News, Sky Three, and Sky Sports News. Initially, Sky provided Sky Travel to the service. However, this was replaced by Sky Three on 31 October 2005, which was itself later re-branded as 'Pick TV' in 2011.
On 8 February 2007, Sky announced its intention to replace its three free-to-air digital terrestrial channels with four subscription channels. It was proposed that these channels would offer a range of content from the Sky portfolio including sport (including English Premier League Football), films, entertainment and news. The announcement came a day after Setanta Sports confirmed that it would launch in March as a subscription service on the digital terrestrial platform, and on the same day that NTL's services re-branded as Virgin Media. However, industry sources believe Sky will be forced to shelve plans to withdraw its channels from Freeview and replace them with subscription channels, due to possible lost advertising revenue.
Video on demand
Sky initially faced increased competition from telecommunications providers to deliver pay television services over existing telephone lines using ADSL. Such providers are able to offer "triple-play" or "quad-play" packages combining landline telephone, broadband Internet, mobile telephone and pay television services.
To compete with these providers, in October 2005, Sky bought the broadband Internet service provider Easynet for £211 million. This acquisition allowed Sky to start offering a Sky-branded broadband service as well as a "triple play" package combining satellite television, land-line telephone and Broadband service. Sky also offers some streaming live TV channels to a computer using Microsoft's Silverlight.
In early 2012, Sky released an update to its Sky Anytime service. This update offers customers the chance to buy and rent films from the Sky Store.
On 26 September 2012, Sky relaunched its "Anytime+" on-demand-via-broadband service as "On Demand" as the BBC's iPlayer joined the line-up of channels offering catch-up TV on the company's Sky+ HD box – linked to a router, the signal from which was recorded before viewing. The BBC was making the preceding week's programmes available alongside ITV, Channel 4's All 4, Channel 5 and the partly BBC Worldwide-owned UKTV, as well as Sky's own channels.
Sky Go
Sky Go is provided free of charge for Sky (satellite TV) subscribers and allows them to watch channels live and on demand through an internet connection on a computer or mobile device.
On 29 May 2009, it was confirmed that Sky Go would be made available on the Xbox 360. In November 2011 Sony Computer Entertainment struck a deal with Sky to bring some of its shows to the PlayStation Store Video Store. Users are able buy individual TV episodes in SD or HD. On 3 December 2014, Sky Go became available on the PlayStation 4 under the name "TV from Sky", followed by the PlayStation 3 on 29 January 2015.
Sky Broadband
On 1 March 2013, it was announced that Sky would buy O2's and Be's broadband services from Telefónica for £180 million up front plus another £20 million once customers have been transferred. Telefónica said the deal would allow it to concentrate on providing better mobile services, including rolling out 4G.
Sky offers broadband, including its superfast Sky Fibre service, using ADSL2+ and fibre-optic technology provided over the Openreach network.
Sky Mobile
On 21 October 2016, it was announced that public pre-registration for Sky's new mobile network, Sky Mobile, would take place from 31 October 2016. The network will operate as a Full MVNO, utilising the O2 radio access network infrastructure, and O2's full network speeds and 4G+.
On 5 January 2017, Sky Mobile went live to the public across the UK, branding itself as the "Smarter Network", with tariffs focused mainly on data rather than traditional calls and texts, saving consumers money otherwise wasted on unused minutes and texts. 1GB of data costs £10.00 per month, 5GB costs £15.00 per month and 10GB costs £20.00 per month. With all of those data tariffs, the customer can choose from two different call and text packages: 'Pay as you use', costing 10p per minute of calls and 10p per text message sent, or Unlimited Calls & Texts for £10.00 per month. The Unlimited Calls & Texts package is free for new or existing Sky TV customers using the Sky Mobile network. Since launch, Sky has reduced the cost of its tariffs: as of March 2021, 2GB starts at £6.00 per month, 8GB at £10.00 per month, 10GB at £12.00 per month, 25GB at £15.00 per month, 30GB at £20 per month and 60GB at £30 per month, and Sky has also expanded the Sky VIP offering to mobile plans. It has also expanded the "piggybank" facility to allow customers to "cash in" piggybank data to bring the monthly cost of a phone down.
As of 30 March 2017, Sky Mobile is offering handset deals. Products are available from manufacturers such as Samsung, Sony, LG and Apple.
Sky Store
Sky Store has a library of films from Sky Cinema that can be rented or bought, either via an app or physical DVD/Blu-ray copies by post. Sky Store is available on Sky Q boxes as well as through apps on devices such as computers and mobile devices. It is available to anyone with a compatible device and does not require a Sky TV subscription.
NOW
An over-the-top contract-free television service from Sky. The service is provided on a NOW device or through an app on selected computers, mobile devices, set-top boxes and smart TVs. NOW is separate from the core Sky TV service.
Products
Sky Digibox
Sky launched with a set top box known as the Sky Digibox, using the slogans "What do you want to watch?", "Entertainment your way" and the current slogan "Believe in Better". This was followed by Sky+, a digital video recorder with an internal hard drive which allows viewers to 'pause live television' (by switching from a live feed to a paused real-time recording that can be restarted at any point) and schedule programmes to record in the future.
In later years the Sky+ box and then the Sky+ HD box replaced the original Digibox. The first photos of a prototype Sky HD box began appearing in magazines in August 2005. Sky launched HDTV services in May 2006. All Sky+ HD boxes incorporate a version of Sky+ using a 300GB, 500GB, or 1TB hard drive (of which 160GB, 250GB or 500GB are available to the user) to accommodate the necessary extra data.
Sky+
Sky initially charged an additional subscription fee for using a Sky+ PVR with their service; waiving the charge for subscribers whose package included two or more premium channels. This changed as from 1 July 2007, and now customers that subscribe to any Sky package have Sky+ included at no extra charge. Customers that do not subscribe to Sky's channels can still pay a monthly fee to enable Sky+ functions. In September 2007, Sky launched a new TV advertising campaign targeting Sky+ at women. As of 31 March 2008, Sky had 3,393,000 Sky+ users.
In January 2010 Sky discontinued the Sky+ Box, limited the standard Sky Box to Multiroom upgrade only and started to issue the Sky+ HD Box as standard, thus giving all new subscribers the functions of Sky+. In February 2011 Sky discontinued the non-HD variant of its Multiroom box, offering a smaller version of the SkyHD box without Sky+ functionality.
Sky+ HD
Sky launched its HDTV service, Sky+ HD, on 22 May 2006. Prior to its launch, Sky claimed that 40,000 people had registered to receive the HD service. In the week before the launch, rumours started to surface that Sky was having supply issues with its set top box (STB) from manufacturer Thomson. On Thursday 18 May 2006, and continuing through the weekend before launch, people were reporting that Sky had either cancelled or rescheduled its installation. Finally, the BBC reported that 17,000 customers had yet to receive the service due to failed deliveries. On 31 March 2012, Sky announced the total number of homes with Sky+ HD was 4,222,000.
In June 2012, Sky launched a new EPG for Sky+ HD boxes. The update included a new modernised look and improved functionality. As of 1 October 2012, Sky Anytime was rebranded as Sky On Demand, which included ITV Player and Demand 5. BBC iPlayer followed in late autumn, with 4oD (which changed to All 4 on 30 March 2015) launching in early 2013.
Sky 3D
Sky began to broadcast programmes in 3D in April 2010. This included new 3D channels, including a Sky Sports 3D and Sky Movies 3D. Sky previously experimented with 3D broadcasting by broadcasting an Arsenal vs Manchester United football game live in 3D in nine pubs situated throughout the United Kingdom.
Sky Q
On 18 November 2015, Sky announced Sky Q, a range of products and services to be available in 2016. The Sky Q range consists of three set top boxes (Sky Q 1TB, Sky Q 2TB and Sky Q Mini), a broadband router (Sky Q Hub) and mobile applications.
The Sky Q set top boxes introduce a new user interface, Wi-Fi hotspot functionality, Power-line and Bluetooth connectivity and a new touch-sensitive remote control. The Sky Q Mini set top boxes connect to the Sky Q set top boxes with a Wi-Fi or Power-line connection rather than receive their own satellite feeds. This allows all set top boxes in a household to share recordings and other media. Sky Q Mini boxes are not capable of UHD playback due to hardware limitations.
Sky Q became available to order on 9 February 2016.
Unlike Sky, Sky+, and Sky+ HD boxes, Sky Q boxes remain the property of Sky and must be returned when the contract ends. Charges of up to £140 per item apply for unreturned items – Sky claims this charge does not buy the equipment, which must still be given back.
4K UHD
The Sky Q 2TB set top box is capable of receiving and displaying 4K UHD broadcasts. HDR was added on 27 May 2020, alongside the launch of Sky Nature and Sky Documentary channels. The HLG format is to be included on a select number of UHD VoD downloads, starting with Sky Nature before being added to other channel content. UHD broadcasts started on 13 August 2016, with the first live Premier League football match of the 2016/17 season, Hull vs Leicester City. UHD broadcasts are available free of charge to Sky Q 2TB multiscreen customers with any other relevant subscriptions.
Sky Glass
On 7 October 2021, Sky announced an in-house Smart TV line known as Sky Glass, which features a 4K quantum dot display, integrated Dolby Atmos surround sound speakers, and voice controls. They also support a webcam developed in partnership with Microsoft (and similar in functionality to former Xbox accessory Kinect), which will be used for motion controls and gestures, games, and social viewing of programmes with others. It is subsidised as part of a Sky Ultimate TV subscription. The televisions are designed to stream video over an internet connection; a satellite dish is therefore not required. The TV set includes a backup Freeview tuner in case a broadband connection is not available. Sky Glass was released on 18 October 2021.
Television channels
Sky, and its sister companies, operate a number of channels in the UK and Ireland with some being joint ventures with other companies:
Entertainment
Sky Showcase (+1 available)
Sky Max
Sky Atlantic (+1 available)
Sky Comedy
Sky Witness (+1 available)
Sky Replay
Syfy
Comedy Central (+1 available) (joint venture with Paramount Global)
Comedy Central Extra (joint venture)
Pick (+1 available)
Challenge
Lifestyle
Blaze (part-time +1 available) (joint venture - A+E Networks UK)
Factual
Sky Arts
Sky Crime (+1 available)
Sky Documentaries
Sky History (+1 available) (joint venture - A+E Networks UK)
Sky History 2 (joint venture - A+E Networks UK)
Sky Nature
Crime+Investigation (+1 available) (joint venture - A+E Networks UK)
News
Sky News
Sky News Arabia (joint venture)
Sports
Sky Sports Main Event
Sky Sports Premier League
Sky Sports Football
Sky Sports Cricket
Sky Sports Golf
Sky Sports F1
Sky Sports Action
Sky Sports Arena
Sky Sports Mix
Sky Sports News
Sky Sports NFL
Sky Sports Racing
Movies (Films)
Sky Cinema Premiere
Sky Cinema Select
Sky Cinema Hits
Sky Cinema Greats
Sky Cinema Animation
Sky Cinema Family
Sky Cinema Action
Sky Cinema Comedy
Sky Cinema Thriller
Sky Cinema Drama
Sky Cinema Sci-Fi & Horror
Kids
Nickelodeon (+1 available) (joint venture with Paramount Global)
Nick Jr. (+1 available) (joint venture with Paramount Global)
Nick Jr. Too (joint venture with Paramount Global)
Nicktoons (joint venture with Paramount Global)
BabyTV (joint venture)
Other - A+E Networks UK
A+E Networks UK is a joint venture between Sky UK and the American company A+E Networks; it includes the channels noted above and the following Freeview channel:
TCC (licence held under the name TCC Broadcasting Ltd)
Marketing
Sky (formerly marketed as Sky Digital) is the brand name for Sky plc's United Kingdom digital satellite television and telecommunications services. Slogans that have been used for marketing include "What do you want to watch?", "Entertainment your way" and the current slogan "Believe in Better". Sky has also aired several advertisements featuring characters from Minions, Inside Out, Kung Fu Panda 3, The Secret Life of Pets, The Lego Batman Movie, Despicable Me 3 and Monster Family.
Broadcasting
Transmission
When Sky Digital was launched in 1998, the new service used the Astra 2A satellite, located at the 28.2°E orbital position, unlike the analogue service, which was broadcast from 19.2°E. It was subsequently joined by further Astra satellites as well as Eutelsat's Eurobird 1 (now Eutelsat 33C) at 28.5°E, enabling the company to launch a new all-digital service, Sky, with the potential to carry hundreds of television and radio channels. The old position was shared with broadcasters from several European countries, while the new position came to be used almost exclusively for channels that broadcast to the United Kingdom.
New Astra satellites joined the position in 2000 and 2001, and the number of channels available to customers increased accordingly. This trend continued with the launch of Eurobird 1 (now Eutelsat 33C) in 2001. Channels occasionally received new numbering; in early 2006, the majority of channels were renumbered, with some receiving single-digit changes whilst others received entirely new numbers.
It was the country's most popular digital TV service until it was overtaken by Freeview in April 2007.
Prior to the migration to Astra 2E, 2F and 2G, Sky was transmitted from the Astra satellites located at 28.2° east (2A/2C/2E/2F) and Eutelsat's Eutelsat 33C satellite at 28.5°E.
As of 2019, Astra 2E, 2F and 2G are the sole satellites used by Sky UK; a number of services are carried via narrow UK-only spot beams, with the other services downlinked on a Europe-wide footprint. The UK-only spot beams are intentionally tightly focused over mainland UK, but may still be receivable elsewhere depending on location and access to a sufficiently large dish and sensitive LNB.
Eutelsat 33C was subsequently moved to 33° East, then moved again to 133° West and renamed as 'Eutelsat 133 West A' to serve transponders offering European and African language services.
Low-noise block converter
The equipment provided includes a universal Ku band LNB (9.75/10.600 GHz local oscillators), which is fitted at the end of the dish arm and pointed at the correct satellite constellation; most digital receivers will receive the free-to-air channels. Some broadcasts are free-to-air and unencrypted, some are encrypted but do not require a monthly subscription (known as free-to-view), some are encrypted and require a monthly subscription, and some are pay-per-view services. To view the encrypted content, a VideoGuard UK equipped receiver (all of which are dedicated to the Sky service and cannot be used to decrypt other services) must be used. Unofficial CAMs are now available to view the service, although use of them breaks the user's contract with Sky and invalidates the user's rights to use the card.
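A universal LNB mixes the received Ku-band signal down to an intermediate frequency (IF) that the receiver can tune, using its 9.75 GHz local oscillator for the low band and 10.60 GHz for the high band. The following Python sketch illustrates that arithmetic; the example transponder frequencies are illustrative rather than Sky's actual channel assignments.

```python
# Illustrative sketch of how a universal Ku-band LNB maps downlink
# frequencies to the intermediate frequency (IF) a satellite receiver tunes.
# The transponder values below are examples, not Sky's actual assignments.

LOW_BAND_LO_GHZ = 9.75    # local oscillator used for roughly 10.7-11.7 GHz
HIGH_BAND_LO_GHZ = 10.60  # local oscillator used for roughly 11.7-12.75 GHz

def intermediate_frequency(downlink_ghz: float) -> tuple[float, str]:
    """Return the IF (in MHz) and the band a universal LNB would use."""
    if downlink_ghz < 11.7:
        lo, band = LOW_BAND_LO_GHZ, "low band (22 kHz tone off)"
    else:
        lo, band = HIGH_BAND_LO_GHZ, "high band (22 kHz tone on)"
    return (downlink_ghz - lo) * 1000, band

for freq in (10.714, 11.778, 12.207):  # example downlink frequencies in GHz
    if_mhz, band = intermediate_frequency(freq)
    print(f"{freq:.3f} GHz -> IF {if_mhz:.0f} MHz ({band})")
```

The resulting IF values fall in the 950–2150 MHz range carried over the coaxial cable to the receiver, which is why one LNB design can serve the whole Ku band.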
Standard definition broadcasts
Sky's standard definition broadcasts are in DVB-compliant MPEG-2, with the Sky Cinema and Sky Box Office channels including optional Dolby Digital soundtracks for recent films, although these are only accessible with a Sky+ box. Sky+ HD material is broadcast using MPEG-4 and most of the HD material uses the DVB-S2 standard. Interactive services and 7-day EPG use the proprietary OpenTV system, with set-top boxes including modems for a return path. Sky News, amongst other channels, provides a pseudo-video on demand interactive service by broadcasting looping video streams.
Digital satellite receivers
Sky utilises the VideoGuard pay-TV scrambling system owned by NDS, a Cisco Systems company. There are tight controls over use of VideoGuard decoders; they are not available as stand-alone DVB CAMs (conditional-access modules). Sky has design authority over all digital satellite receivers capable of receiving their service. The receivers, though designed and built by different manufacturers, must conform to the same user interface look-and-feel as all the others. This extends to the Personal video recorder (PVR) offering (branded Sky+).
Sky no longer markets its products for use on satellite systems, concentrating on fixed-line delivery only; Sky satellite receivers will be phased out.
Electronic programme guide
Technology
Sky maintains an electronic programme guide (EPG) which provides information about upcoming programmes and a list of channels. TV channels available on Sky are assigned a three-digit logical channel number, which can be entered on a remote control to access the channel and which determines the order in which channels are listed. Radio channels similarly receive a four-digit EPG number (a 0 prefix plus three digits).
The EPG differs depending on the viewer's location due to limited regional availability of certain channels or conditions relating to their must-carry status. For example, this ensures that viewers get access to the correct BBC or ITV region or that S4C gets a prominent listing in Wales.
All channels are grouped into categories depending on their content; which section of the EPG a channel is allocated to is determined by rules set by Sky.
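As a rough illustration of how such rules might order a line-up, the sketch below groups channels by genre and sorts them into genre-specific number blocks. The genre blocks and channel entries are invented for illustration only and are not Sky's actual rules or allocations.

```python
# Illustrative sketch of genre-based EPG allocation; the number blocks and
# channel list are invented examples, not Sky's real rules or line-up.

GENRE_BLOCKS = {"Entertainment": 100, "Documentaries": 120, "Sport": 400}

channels = [
    ("Example Sport 1", "Sport"),
    ("Example Entertainment A", "Entertainment"),
    ("Example Docs", "Documentaries"),
    ("Example Entertainment B", "Entertainment"),
]

def allocate(channel_list):
    """Assign each channel the next free number in its genre block."""
    next_free = dict(GENRE_BLOCKS)
    epg = []
    for name, genre in sorted(channel_list, key=lambda c: (GENRE_BLOCKS[c[1]], c[0])):
        epg.append((next_free[genre], name))
        next_free[genre] += 1
    return epg

for number, name in allocate(channels):
    print(f"{number:03d}  {name}")
```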
Sky has no veto over the presence of channels on its EPG, with open access being enforced as part of its operating licence from Ofcom. Any channel which can get carriage on a suitable beam of a satellite at 28° East is entitled to an entry on Sky's EPG for a fee, ranging from £15–100,000. Third-party channels which opt for encryption receive incentives ranging from discounted or free EPG entries to free carriage on a Sky-leased transponder, or actual payment for being carried. Even in this case, however, Sky has no control over the channel's content or over carriage issues such as picture quality.
In October 2007, Sky announced that it would not accept new applications to launch channels on its EPG, citing "very significant memory constraints" on many of its older digiboxes.
In June 2012, Sky launched a new EPG dubbed "Darwin" for its Sky+ HD receivers, offering a more modern, refreshed interface and some improved functionality. Newer Sky Q UHD receivers use different hardware and therefore run a different software stack.
Numbering system
The EPG numbering is altered frequently as new channels launch or receive new numbers. On a few occasions, the EPG has been substantially reorganised. For example:
In early 2006, most channels received new numbering. This shake-up was intended to split the original ten categories into sixteen. For example, several channels that had been listed under the 'Entertainment' category were moved into a new 'Lifestyle & Culture' category, while the 'News & Documentaries' category was split into two. The 'Specialist' category, which had included shopping, dating, gambling, international and adult channels, was split into several genres.
Following the integration of Living TV Group into Sky in early 2011, several prominent slots were freed up as many channels were closed down. Broadband TV News reported that it was the biggest reshuffle in EPG positions for over a decade, with MTV, Comedy Central, Universal Channel, Syfy, News Corporation's FX, and 40 HD channels moving to more prominent places.
Documentaries and other channels moved towards general entertainment programming, and many other channel categories spilled over into other parts of the EPG. Sky reshuffled the channels again on 1 May 2018, merging the documentaries and entertainment categories; many documentary channels moved to more prominent slots previously occupied by time-shift and standard-definition channels, which were given their own areas in the 200s and 800s, the 800s also being filled with SD channels from other categories.
Competition
On 12 July 2011, former Prime Minister Gordon Brown claimed that Sky's largest shareholder – News Corporation – attempted to affect government policy with regards to the BBC in pursuit of its own commercial interests. He went further, in a speech in Parliament on 13 July 2011, stating:
"Mr James Murdoch, which included his cold assertion that profit not standards was what mattered in the media, underpinned an ever more aggressive News International and Sky agenda under his and Mrs Brooks' leadership that was brutal in its simplicity. Their aim was to cut the BBC licence fee, to force BBC online to charge for its content, for the BBC to sell off its commercial activities, to open up more national sporting events to bids from Sky and move them away from the BBC, to open up the cable and satellite infrastructure market, and to reduce the power of their regulator, Ofcom. I rejected those policies."
On 13 July 2011, MP Chris Bryant stated to the House of Commons, in the Parliamentary Debate on the Rupert Murdoch and News Corporation Bid for Sky that the company was anti-competitive:
"The company has lots of technological innovation that only a robust entrepreneur could to bring to British society, but it has also often been profoundly anti-competitive. I believe that the bundling of channels so as to increase the profit and make it impossible for others to participate in the market is anti-competitive. I believe that the way in which the application programming interface—the operating system—has been used has been anti-competitive and that Sky has deliberately set about selling set-top boxes elsewhere, outside areas where they have proper rights. If one visits a flat in Spain where a British person lives, one finds that they mysteriously manage to have a Sky box there even though it is registered to a house in the United Kingdom."
Virgin Media dispute
Virgin Media (re-branded in 2007 from NTL:Telewest) started to offer a high-definition television (HDTV) capable set top box, although from 30 November 2006 until 30 July 2009 it carried only one linear HD channel, BBC HD, after the conclusion of the ITV HD trial. Virgin Media claimed that other HD channels were "locked up" or otherwise withheld from its platform, although Virgin Media did in fact have an option to carry Channel 4 HD in the future. Nonetheless, the linear channels were not offered, with Virgin Media instead concentrating on its video-on-demand service to carry a modest selection of HD content. Virgin Media nevertheless made a number of statements over the years suggesting that more linear HD channels were on the way.
In 2007, Sky and Virgin Media became involved in a dispute over the carriage of Sky channels on cable TV. The failure to renew the existing carriage agreements negotiated with NTL and Telewest resulted in Virgin Media removing the basic channels from the network on 1 March 2007. Virgin Media claimed that Sky had substantially increased the asking price for the channels, a claim which Sky denied, on the basis that their new deal offered "substantially more value" by including HD channels and Video On Demand content which was not previously carried by cable.
In response, Sky ran a number of TV, radio and print advertisements claiming that Virgin Media 'doubted the value' of the channels concerned, at first urging Virgin Media customers to call their cable operator to show their support for Sky, and later urging Virgin Media customers to migrate to Sky to continue receiving the channels. The broadcasting regulator Ofcom subsequently found these adverts in breach of their code.
The availability (at an extra charge) of Sky's premium sport and movie services was not affected by the dispute, and Sky Sports 3 was offered as a replacement to Sky One on many Virgin Media packages. This impasse continued for twenty-one months, with both companies initiating High Court proceedings. Amongst Virgin Media's claims to the court (denied by Sky) were that Sky had unfairly reduced the amount which it paid to VMTV for the carriage of Virgin Media's own channels on satellite.
Eventually, on 4 November 2008 it was announced that an agreement had been struck for Sky's basic channels – including Sky One, Sky Two, Sky Three, Sky News, Sky Sports News, Sky Arts 1, Sky Arts 2, Sky Real Lives and Sky Real Lives 2 to return to Virgin Media from 13 November 2008 until 12 June 2011. In exchange, Sky would be provided continued carriage of Virgin Media Television's channels – Living, Livingit, Bravo, Bravo +1, Trouble, Challenge and Virgin1 for the same period.
The agreements include fixed annual carriage fees of £30m for the channels with both channel suppliers able to secure additional capped payments if their channels meet certain performance-related targets. Currently there is no indication as to whether the new deal includes the additional Video On Demand and High Definition content which had previously been offered by Sky. As part of the agreements, both Sky and Virgin Media agreed to terminate all High Court proceedings against each other relating to the carriage of their respective basic channels.
Discovery Networks dispute
On 25 January 2017, Discovery Networks announced that it was in a dispute with Sky UK over the carriage fees Sky paid to the broadcaster.
The broadcaster threatened to black out its channels on the Sky platform, including Eurosport, Discovery Channel, TLC, Animal Planet, Investigation Discovery, DMAX, Discovery Turbo, Discovery Shed, Discovery Science, Discovery History and Home & Health, unless Sky accepted its request for fair pricing. Sky indicated that it would not bow down, and that the channels would likely become unavailable from 1 February 2017.
The same day, Discovery sent out a press release on the dispute and also blocked access to its own website with the news. Discovery stated 'enough is enough' and claimed that Sky was paying less for its channels than it had ten years earlier, even though, Discovery said, its viewing share had grown by more than 20% over that period. Managing Director Susanna Dinnage stated in the press release: "We believe Sky is using what we consider to be its dominant market position to further its own commercial interest over those of viewers and independent broadcasters. The vitality of independent broadcasters like Discovery and plurality in TV is under threat."
In response, Sky stated that they had been "overpaying Discovery for years." It comes after Sky had paid £4.2 billion on Premier League rights for the following three seasons.
The channel 'blackout' would have also affected the Sky-owned NOW TV, with the removal of Discovery Channel from both the live stream and the on-demand service.
On 31 January 2017 at around 21:00, Sky UK revealed that they would continue to broadcast the Discovery Networks Channels by releasing the following statement:
"Great news, we can confirm that Sky will continue to carry the Discovery and Eurosport channels. This means you can still watch channels including: Animal Planet, Discovery HD, Discovery History, Discovery Home & Health, Discovery Science, Discovery Shed, Discovery Turbo, DMAX, Eurosport1, Eurosport2, Investigation Discovery, TLC, and Quest."
At the time of the statement release, it had not been revealed how much Sky UK had paid Discovery Networks.
Litigation
In July 2013, the English High Court of Justice found that Microsoft's use of the term "SkyDrive" infringed on Sky's right to the "Sky" trademark. On 31 July 2013, Sky and Microsoft announced their settlement, in which Microsoft will not appeal the ruling, and will rename its SkyDrive cloud storage service after an unspecified "reasonable period of time to allow for an orderly transition to a new brand," plus "financial and other terms, the details of which are confidential". On 27 January 2014, Microsoft announced "that SkyDrive will soon become OneDrive" and "SkyDrive Pro" becomes "OneDrive for Business".
Hello Games was in legal negotiations with Sky over the trademark on the word "Sky" used for the title of their video game No Man's Sky for three years. The issue was ultimately settled in June 2016, allowing Hello Games to continue to use the name.
See also
Sky
Sky Deutschland
Sky Ireland
Sky Italia
Sky España
Sky+
Sky+ HD
Sky Q
Sky Betting and Gaming
Sky Go
Sky Vision
Sky Magazine
Astra
Digibox
NOW
Freesat from Sky
Team Sky
References
External links
Sky Vision — official production and distribution arm of Sky TV
Sky TV Guide — online 'electronic programme guide' (EPG)
List of channels on Sky (UK and Ireland) — at www.TVChannelLists.com wiki site
British brands
Television networks in the United Kingdom
Companies based in the London Borough of Hounslow
British companies established in 1994
Telecommunications companies established in 1994
Digital television in the United Kingdom
Direct broadcast satellite services
Isleworth
Media and communications in the London Borough of Hounslow
Mass media companies based in London
Sky Group
Sports television in the United Kingdom
7-Zip
7-Zip is a free and open-source file archiver, a utility used to place groups of files within compressed containers known as "archives". It is developed by Igor Pavlov and was first released in 1999. 7-Zip has its own archive format called 7z, but can read and write several others.
The program can be used from a command-line interface as the command p7zip, or through a graphical user interface that also features shell integration. Most of the 7-Zip source code is under the LGPL-2.1-or-later license; the unRAR code, however, is under the LGPL-2.1-or-later license with an "unRAR restriction", which states that developers are not permitted to use the code to reverse-engineer the RAR compression algorithm.
Since version 21.01 alpha, preliminary Linux support has been added to the upstream instead of the p7zip project.
Formats
7z
By default, 7-Zip creates 7z-format archives with a .7z file extension. Each archive can contain multiple directories and files. As a container format, 7z achieves security or size reduction through a stacked combination of filters that look for similarities throughout the data. These can consist of pre-processors, compression algorithms, and encryption filters.
The core 7z compression uses a variety of algorithms, the most common of which are bzip2, PPMd, LZMA2, and LZMA. Developed by Pavlov, LZMA is a relatively new system, making its debut as part of the 7z format. LZMA uses an LZ-based sliding dictionary of up to 3840 MB in size, backed by a range coder.
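The LZMA and LZMA2 algorithms are also available outside 7-Zip itself, for example through Python's standard-library lzma module (built on liblzma). The sketch below is offered only as an illustration of dictionary-based LZMA compression, not as 7-Zip's own code; it compresses a sample buffer with an explicitly chosen dictionary size.

```python
import lzma

# Highly repetitive sample data compresses extremely well with a
# dictionary-based coder such as LZMA2.
data = b"the quick brown fox jumps over the lazy dog\n" * 2000

# Explicitly configure an LZMA2 filter chain; dict_size is the sliding
# dictionary size in bytes (here 16 MiB, far below the format's maximum).
filters = [{"id": lzma.FILTER_LZMA2, "dict_size": 16 * 1024 * 1024}]
compressed = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)

print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({len(compressed) / len(data):.1%} of original)")

# Round-trip to confirm lossless decompression.
assert lzma.decompress(compressed) == data
```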
The native 7z file format is open and modular. File names are stored as Unicode.
In 2011, TopTenReviews found that the 7z compression was at least 17% better than ZIP, and 7-Zip's own site has since 2002 reported that while compression ratio results are very dependent upon the data used for the tests, "Usually, 7-Zip compresses to 7z format 30–70% better than to zip format, and 7-Zip compresses to zip format 2–10% better than most other zip-compatible programs."
The 7z file format specification is distributed with the program's source code, in the "doc" sub-directory.
Other formats
7-Zip supports a number of other compression and non-compression archive formats (both for packing and unpacking), including ZIP, gzip, bzip2, xz, tar, and WIM. The utility also supports unpacking APM, ar, ARJ, chm, cpio, deb, FLV, JAR, LHA/LZH, LZMA, MSLZ, Office Open XML, onepkg, RAR, RPM, smzip, SWF, XAR, and Z archives and cramfs, DMG, FAT, HFS, ISO, MBR, NTFS, SquashFS, UDF, and VHD disk images. 7-Zip supports the ZIPX format for unpacking only. It has had this support since at least version 9.20, which was released in late 2010.
7-Zip can open some MSI files, allowing access to the meta-files within along with the main contents. Some Microsoft CAB (LZX compression) and NSIS (LZMA) installer formats can be opened. Similarly, some Microsoft executable programs (.EXEs) that are self-extracting archives or otherwise contain archived content (e.g., some setup files) may be opened as archives.
When compressing ZIP or gzip files, 7-Zip uses its own DEFLATE encoder, which may achieve higher compression, but at lower speed, than the more common zlib DEFLATE implementation. The 7-Zip deflate encoder implementation is available separately as part of the AdvanceCOMP suite of tools.
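The trade-off described here, spending more encoder effort to produce a denser DEFLATE stream, can be observed within zlib itself by varying its compression level. The sketch below is a generic illustration of that trade-off in Python, not a measurement of 7-Zip's encoder.

```python
import time
import zlib

# A moderately repetitive sample buffer; any sizable input would do.
data = b"DEFLATE trades encoder effort for output size. " * 50000

for level in (1, 6, 9):  # zlib's fastest, default, and best-ratio settings
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"level {level}: {len(compressed):>8} bytes "
          f"({ratio:.2%} of input) in {elapsed * 1000:.1f} ms")
```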
The decompression engine for RAR archives was developed using freely available source code of the unRAR program, which has a licensing restriction against creation of a RAR compressor. 7-Zip v15.06 and later support extraction of files in the RAR5 format. Some backup systems use formats supported by archiving programs such as 7-Zip; e.g., some Android backups are in tar format, and can be extracted by archivers such as 7-Zip.
7-Zip ZS, a port of 7-Zip FM with Zstandard .zst (and other formats) support, is developed by Tino Reichardt.
Modern7z, a Zstandard .zst (and other formats) plugin for 7-Zip FM, is developed by Denis Anisimov (TC4shell).
File manager
7-Zip comes with a file manager along with the standard archiver tools. The file manager has a toolbar with options to create an archive, extract an archive, test an archive to detect errors, copy, move, and delete files, and open a file properties menu exclusive to 7-Zip. The file manager, by default, displays hidden files because it does not follow Windows Explorer's policies. The tabs show name, modification time, original and compressed sizes, attributes, and comments (4DOS descript.ion format).
When going up one directory from the root, all drives, removable or internal, appear. Going up again shows a list with four options:
Computer: loads the drives list
Documents: loads user's documents, usually at %UserProfile%\My Documents
Network: loads a list of all network clients connected
\\.: Same as "Computer" except loads the drives in low-level NTFS access. This results in critical drive files and deleted files still existing on the drive to appear. (NOTE: As of November 2020, access to the active partition in low-level mode is not allowed for currently unknown reasons.)
Features
7-Zip supports:
Encryption via the 256-bit AES cipher, which can be applied to both file data and the 7z directory hierarchy. When the hierarchy is encrypted, users are required to supply a password to see the filenames contained within the archive. The WinZip-developed AES encryption standard for ZIP files is also available in 7-Zip to encrypt ZIP archives with 256-bit AES, but it does not offer filename encryption as in 7z archives.
Volumes of dynamically variable sizes, allowing use for backups on removable media such as writable CDs and DVDs
Usability as a basic orthodox file manager when used in dual panel mode
Multiple-core CPU threading
Opening EXE files as archives, allowing the decompression of data from inside many "Setup" or "Installer" or "Extract" type programs without having to launch them
Unpacking archives with corrupted filenames, renaming the files as required
Creating self-extracting single-volume archives
Command-line interface
Graphical user interface. The Windows version comes with its own GUI; however, p7zip uses the GUI of the Unix/Linux Archive Manager.
Calculating checksums in the formats CRC-32, CRC-64, SHA-1, or SHA-256 for files on disk, available either via command line or Explorer's context menu
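Equivalent checksums can be computed outside 7-Zip with standard libraries; the following Python sketch, shown purely for comparison with 7-Zip's output rather than as part of 7-Zip itself, calculates CRC-32, SHA-1, and SHA-256 for a file.

```python
import hashlib
import sys
import zlib

def file_digests(path: str) -> dict[str, str]:
    """Compute CRC-32, SHA-1 and SHA-256 of a file, reading it in chunks."""
    crc = 0
    sha1 = hashlib.sha1()
    sha256 = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            crc = zlib.crc32(chunk, crc)
            sha1.update(chunk)
            sha256.update(chunk)
    return {
        "CRC32": f"{crc & 0xFFFFFFFF:08X}",
        "SHA1": sha1.hexdigest().upper(),
        "SHA256": sha256.hexdigest().upper(),
    }

if __name__ == "__main__":
    for name, value in file_digests(sys.argv[1]).items():
        print(f"{name:7} {value}")
```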
Variants
Two command-line versions are provided: 7z.exe, using external libraries; and a standalone executable 7za.exe, containing built-in modules, but with compression/decompression support limited to 7z, ZIP, gzip, bzip2, Z and tar formats. A 64-bit version is available, with support for large memory maps, leading to faster compression. All versions support multi-threading.
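As a rough illustration of driving one of these command-line builds from a script, the sketch below shells out to the standalone 7za executable to create a password-protected 7z archive with encrypted headers. It assumes 7za is on the PATH and uses the a (add), -t7z, -mx, -p and -mhe=on switches; treat the exact invocation as a sketch rather than authoritative usage, and note the hypothetical file paths.

```python
import subprocess

def create_encrypted_archive(archive: str, paths: list[str], password: str) -> None:
    """Create a 7z archive with AES-256 encrypted data and headers.

    Assumes the standalone 7za executable is available on the PATH.
    """
    cmd = [
        "7za", "a",          # "a" adds files to an archive
        "-t7z",              # force the 7z container format
        "-mx=9",             # maximum compression level
        f"-p{password}",     # encrypt file data with AES-256
        "-mhe=on",           # also encrypt the headers (hides filenames)
        archive,
        *paths,
    ]
    # Note: passing the password on the command line exposes it to other
    # local processes; acceptable for a sketch, not for production use.
    subprocess.run(cmd, check=True)

# Example with hypothetical paths:
# create_encrypted_archive("backup.7z", ["documents/", "notes.txt"], "s3cret")
```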
The 7za.exe version of 7-Zip is available for Unix-like operating systems (including Linux, FreeBSD, and macOS), FreeDOS, OpenVMS, AmigaOS 4, and MorphOS under the p7zip project.
Software development kit
7-Zip has a LZMA SDK which was originally dual-licensed under both the GNU LGPL and Common Public License, with an additional special exception for linked binaries. On 2 December 2008, the SDK was placed by Igor Pavlov in the public domain.
Security
On older versions, self-extracting archives were vulnerable to arbitrary code execution through DLL hijacking: they loaded and ran a DLL named UXTheme.dll if it was present in the same folder as the executable file. The 7-Zip 16.03 release notes say that the installer and SFX modules have added protection against the DLL preloading attack.
Versions of 7-Zip prior to 18.05 contain an arbitrary code execution vulnerability in the module for extracting files from RAR archives (), a vulnerability that was fixed on 30 April 2018.
Reception and usage
Snapfiles.com in 2012 rated 7-Zip 4.5 stars out of 5, noting, "[its] interface and additional features are fairly basic, but the compression ratio is outstanding".
On TechRepublic in 2009, Justin James found the detailed settings for Windows File Manager integration were "appreciated" and called the compression-decompression benchmark utility "neat". And though the archive dialog has settings that "will confound most users", he concluded: "7-Zip fits a nice niche in between the built-in Windows capabilities and the features of the paid products, and it is able to handle a large variety of file formats in the process."
Between 2002 and 2016, 7-Zip was downloaded 410 million times from SourceForge alone.
The software has received awards. In 2007, SourceForge granted it community choice awards for "Technical Design" and for "Best Project". In 2013, Tom's Hardware conducted a compression speed test comparing 7-Zip, MagicRAR, WinRAR, and WinZip; they concluded that 7-Zip beat the others with regard to compression speed, ratio, and size, and awarded the software the 2013 Tom's Hardware Elite award.
See also
Comparison of archive formats
Comparison of file archivers
List of archive formats
References
External links
7-Zip Portable at PortableApps.com
1999 software
Cross-platform free software
Disk image extractors
File archivers
Free data compression software
Free file managers
Free multilingual software
Free software programmed in C
Free software programmed in C++
Portable software
Software using the LGPL license
Windows compression software
DGCA (computing)
In computing, DGCA is a freeware compression utility created in 2001. DGCA is also a compressed archive format, the next generation of 'GCA'. DGCA has a better compression ratio than ZIP, stronger encryption and Unicode filenames. However, DGCA is not a major compression format.
See also
List of archive formats
Comparison of file archivers
External links
Tottemo Gohan
Archive formats
Data compression software
Windows 7
Windows 7 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009. It is the successor to Windows Vista, released nearly three years earlier. It remained an operating system for use on personal computers, including home and business desktops, laptops, tablet PCs and media center PCs, and was itself replaced by Windows 8 in October 2012, a little over three years after its release.
Windows 7's original release received updates and technical support until April 9, 2013, after which installation of Service Pack 1 was required for users to continue receiving support and updates. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time. The last supported version of Windows based on this operating system, Windows Embedded POSReady 7, was released on July 1, 2011. Extended support ended on January 14, 2020, over ten years after the release of Windows 7, after which the operating system ceased receiving further support for most users. A paid support program is available for enterprises, providing security updates for Windows 7 for up to three years after the official end of life. Windows Embedded POSReady 7, the last Windows 7 variant, continued to receive security updates until October 2021.
Windows 7 was intended to be an incremental upgrade to Microsoft Windows, addressing Windows Vista's poor critical reception while maintaining hardware and software compatibility. Windows 7 continued improvements on Windows Aero user interface with the addition of a redesigned taskbar that allows pinned applications, and new window management features. Other new features were added to the operating system, including libraries, the new file-sharing system HomeGroup, and support for multitouch input. A new "Action Center" was also added to provide an overview of system security and maintenance information, and tweaks were made to the User Account Control system to make it less intrusive. Windows 7 also shipped with updated versions of several stock applications, including Internet Explorer 8, Windows Media Player, and Windows Media Center.
Unlike Vista, Windows 7 received critical acclaim, with critics considering the operating system to be a major improvement over its predecessor because of its improved performance, its more intuitive interface, fewer User Account Control popups, and other improvements made across the platform. Windows 7 was a major success for Microsoft; even before its official release, pre-order sales for the operating system on the online retailer Amazon.com had surpassed previous records. In just six months, over 100 million copies had been sold worldwide, increasing to over 630 million licenses by July 2012. By January 2018, Windows 10 surpassed Windows 7 as the most popular version of Windows worldwide. , 12.76% of traditional PCs running Windows are running Windows 7. It still remains popular in countries such as Syria, China, India, and Venezuela.
Development history
Originally, a version of Windows codenamed "Blackcomb" was planned as the successor to Windows XP and Windows Server 2003 in 2000. Major features were planned for Blackcomb, including an emphasis on searching and querying data and an advanced storage system named WinFS to enable such scenarios. However, an interim, minor release, codenamed "Longhorn," was announced for 2003, delaying the development of Blackcomb. By the middle of 2003, however, Longhorn had acquired some of the features originally intended for Blackcomb. After three major malware outbreaks—the Blaster, Nachi, and Sobig worms—exploited flaws in Windows operating systems within a short time period in August 2003, Microsoft changed its development priorities, putting some of Longhorn's major development work on hold while developing new service packs for Windows XP and Windows Server 2003. Development of Longhorn (Windows Vista) was also restarted, and thus delayed, in August 2004. A number of features were cut from Longhorn. Blackcomb was renamed Vienna in early 2006, and was later canceled in 2007 due to the scope of the project.
When released, Windows Vista was criticized for its long development time, performance issues, spotty compatibility with existing hardware and software at launch, changes affecting the compatibility of certain PC games, and unclear assurances by Microsoft that certain computers shipping with XP before launch would be "Vista Capable" (which led to a class-action lawsuit), among other critiques. As such, the adoption of Vista in comparison to XP remained somewhat low. In July 2007, six months following the public release of Vista, it was reported that the next version of Windows would then be codenamed Windows 7, with plans for a final release within three years. Bill Gates, in an interview with Newsweek, suggested that Windows 7 would be more "user-centric". Gates later said that Windows 7 would also focus on performance improvements. Steven Sinofsky later expanded on this point, explaining in the Engineering Windows 7 blog that the company was using a variety of new tracing tools to measure the performance of many areas of the operating system on an ongoing basis, to help locate inefficient code paths and to help prevent performance regressions. Senior Vice President Bill Veghte stated that Windows Vista users migrating to Windows 7 would not find the kind of device compatibility issues they encountered migrating from Windows XP. An estimated 1,000 developers worked on Windows 7. These were broadly divided into "core operating system" and "Windows client experience", in turn organized into 25 teams of around 40 developers on average.
In October 2008, it was announced that Windows 7 would also be the official name of the operating system. There has been some confusion over naming the product Windows 7, while versioning it as 6.1 to indicate its similar build to Vista and increase compatibility with applications that only check major version numbers, similar to Windows 2000 and Windows XP both having 5.x version numbers. The first external release to select Microsoft partners came in January 2008 with Milestone 1, build 6519. Speaking about Windows 7 on October 16, 2008, Microsoft CEO Steve Ballmer confirmed compatibility between Windows Vista and Windows 7, indicating that Windows 7 would be a refined version of Windows Vista.
At PDC 2008, Microsoft demonstrated Windows 7 with its reworked taskbar. On December 27, 2008, the Windows 7 Beta was leaked onto the Internet via BitTorrent. According to a performance test by ZDNet, Windows 7 Beta beat both Windows XP and Vista in several key areas, including boot and shutdown time and working with files, such as loading documents. Other areas did not beat XP, including PC Pro benchmarks for typical office activities and video editing, which remain identical to Vista and slower than XP. On January 7, 2009, the x64 version of the Windows 7 Beta (build 7000) was leaked onto the web, with some torrents being infected with a trojan. At CES 2009, Microsoft CEO Steve Ballmer announced the Windows 7 Beta, build 7000, had been made available for download to MSDN and TechNet subscribers in the format of an ISO image. The stock wallpaper of the beta version contained a digital image of the Betta fish.
The release candidate, build 7100, became available for MSDN and TechNet subscribers, and Connect Program participants on April 30, 2009. On May 5, 2009, it became available to the general public, although it had also been leaked onto the Internet via BitTorrent. The release candidate was available in five languages and expired on June 1, 2010, with shutdowns every two hours starting March 1, 2010. Microsoft stated that Windows 7 would be released to the general public on October 22, 2009, less than three years after the launch of its predecessor. Microsoft released Windows 7 to MSDN and Technet subscribers on August 6, 2009. Microsoft announced that Windows 7, along with Windows Server 2008 R2, was released to manufacturing in the United States and Canada on July 22, 2009. Windows 7 RTM is build 7600.16385.090713-1255, which was compiled on July 13, 2009, and was declared the final RTM build after passing all Microsoft's tests internally.
Features
New and changed
Among Windows 7's new features are advances in touch and handwriting recognition, support for virtual hard disks, improved performance on multi-core processors, improved boot performance, DirectAccess, and kernel improvements. Windows 7 adds support for systems using multiple heterogeneous graphics cards from different vendors (Heterogeneous Multi-adapter), a new version of Windows Media Center, a Gadget for Windows Media Center, improved media features, XPS Essentials Pack and Windows PowerShell being included, and a redesigned Calculator with multiline capabilities including Programmer and Statistics modes along with unit conversion for length, weight, temperature, and several others. Many new items have been added to the Control Panel, including ClearType Text Tuner, Display Color Calibration Wizard, Gadgets, Recovery, Troubleshooting, Workspaces Center, Location and Other Sensors, Credential Manager, Biometric Devices, System Icons, and Display. Windows Security Center has been renamed to Windows Action Center (Windows Health Center and Windows Solution Center in earlier builds), which encompasses both security and maintenance of the computer. ReadyBoost on 32-bit editions now supports up to 256 gigabytes of extra allocation. Windows 7 also supports images in RAW image format through the addition of Windows Imaging Component-enabled image decoders, which enables raw image thumbnails, previewing and metadata display in Windows Explorer, plus full-size viewing and slideshows in Windows Photo Viewer and Windows Media Center. Windows 7 also has a native TFTP client with the ability to transfer files to or from a TFTP server.
The taskbar has seen the biggest visual changes, where the old Quick Launch toolbar has been replaced with the ability to pin applications to the taskbar. Buttons for pinned applications are integrated with the task buttons. These buttons also enable Jump Lists to allow easy access to common tasks, and files frequently used with specific applications. The revamped taskbar also allows the reordering of taskbar buttons. To the far right of the system clock is a small rectangular button that serves as the Show desktop icon. By default, hovering over this button makes all visible windows transparent for a quick look at the desktop. In touch-enabled displays such as touch screens, tablet PCs, etc., this button is slightly (8 pixels) wider in order to accommodate being pressed by a finger. Clicking this button minimizes all windows, and clicking it a second time restores them.
Window management in Windows 7 has several new features: Aero Snap maximizes a window when it is dragged to the top, left, or right of the screen. Dragging windows to the left or right edges of the screen allows users to snap software windows to either side of the screen, such that the windows take up half the screen. When a user moves windows that were snapped or maximized using Snap, the system restores their previous state. Snap functions can also be triggered with keyboard shortcuts. Aero Shake hides all inactive windows when the active window's title bar is dragged back and forth rapidly.
Windows 7 includes 13 additional sound schemes, titled Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savanna, and Sonata. Internet Spades, Internet Backgammon and Internet Checkers, which were removed from Windows Vista, were restored in Windows 7. Users are able to disable or customize many more Windows components than was possible in Windows Vista. New additions to this list of components include Internet Explorer 8, Windows Media Player 12, Windows Media Center, Windows Search, and Windows Gadget Platform. A new version of Microsoft Virtual PC, newly renamed as Windows Virtual PC was made available for Windows 7 Professional, Enterprise, and Ultimate editions. It allows multiple Windows environments, including Windows XP Mode, to run on the same machine. Windows XP Mode runs Windows XP in a virtual machine, and displays applications within separate windows on the Windows 7 desktop. Furthermore, Windows 7 supports the mounting of a virtual hard disk (VHD) as a normal data storage, and the bootloader delivered with Windows 7 can boot the Windows system from a VHD; however, this ability is only available in the Enterprise and Ultimate editions. The Remote Desktop Protocol (RDP) of Windows 7 is also enhanced to support real-time multimedia application including video playback and 3D games, thus allowing use of DirectX 10 in remote desktop environments. The three application limit, previously present in the Windows Vista and Windows XP Starter Editions, has been removed from Windows 7. All editions include some new and improved features, such as Windows Search, Security features, and some features new to Windows 7, that originated within Vista. Optional BitLocker Drive Encryption is included with Windows 7 Ultimate and Enterprise. Windows Defender is included; Microsoft Security Essentials antivirus software is a free download. All editions include Shadow Copy, which—every day or so—System Restore uses to take an automatic "previous version" snapshot of user files that have changed. Backup and restore have also been improved, and the Windows Recovery Environment—installed by default—replaces the optional Recovery Console of Windows XP.
A new system known as "Libraries" was added for file management; users can aggregate files from multiple folders into a "Library." By default, libraries for categories such as Documents, Pictures, Music, and Video are created, consisting of the user's personal folder and the Public folder for each. The system is also used as part of a new home networking system known as HomeGroup; devices are added to the network with a password, and files and folders can be shared with all other devices in the HomeGroup, or with specific users. The default libraries, along with printers, are shared by default, but the personal folder is set to read-only access by other users, and the Public folder can be accessed by anyone.
Windows 7 includes improved globalization support through a new Extended Linguistic Services API to provide multilingual support (particularly in Ultimate and Enterprise editions). Microsoft also implemented better support for solid-state drives, including the new TRIM command, and Windows 7 is able to identify a solid-state drive uniquely. Native support for USB 3.0 is not included because of delays in the finalization of the standard. At WinHEC 2008 Microsoft announced that color depths of 30-bit and 48-bit would be supported in Windows 7 along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB.
For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to simplify development of installation packages and shorten application install times. Windows 7, by default, generates fewer User Account Control (UAC) prompts because it allows digitally signed Windows components to gain elevated privileges without a prompt. Additionally, users can now adjust the level at which UAC operates using a sliding scale.
Removed
Certain capabilities and programs that were a part of Windows Vista are no longer present or have been changed, resulting in the removal of certain functionalities; these include the classic Start Menu user interface, some taskbar features, Windows Explorer features, Windows Media Player features, Windows Ultimate Extras, Search button, and InkBall. Four applications bundled with Windows Vista—Windows Photo Gallery, Windows Movie Maker, Windows Calendar and Windows Mail—are not included with Windows 7 and were replaced by Windows Live-branded versions as part of the Windows Live Essentials suite.
Editions
Windows 7 is available in six different editions, of which the Home Premium, Professional, and Ultimate were available at retail in most countries, and as pre-loaded software on most new computers. Home Premium and Professional were aimed at home users and small businesses respectively, while Ultimate was aimed at enthusiasts. Each edition of Windows 7 includes all of the capabilities and features of the edition below it, and adds additional features oriented towards their market segments; for example, Professional adds additional networking and security features such as Encrypting File System and the ability to join a domain. Ultimate contained a superset of the features from Home Premium and Professional, along with other advanced features oriented towards power users, such as BitLocker drive encryption; unlike Windows Vista, there were no "Ultimate Extras" add-ons created for Windows 7 Ultimate. Retail copies were available in "upgrade" and higher-cost "full" version licenses; "upgrade" licenses require an existing version of Windows to install, while "full" licenses can be installed on computers with no existing operating system.
The remaining three editions were not available at retail, of which two were available exclusively through OEM channels as pre-loaded software. The Starter edition is a stripped-down version of Windows 7 meant for low-cost devices such as netbooks. In comparison to Home Premium, Starter has reduced multimedia functionality, does not allow users to change their desktop wallpaper or theme, disables the "Aero Glass" theme, does not have support for multiple monitors, and can only address 2GB of RAM. Home Basic was sold only in emerging markets, and was positioned in between Home Premium and Starter. The highest edition, Enterprise, is functionally similar to Ultimate, but is only sold through volume licensing via Microsoft's Software Assurance program.
All editions aside from Starter support both IA-32 and x86-64 architectures, Starter only supports 32-bit systems. Retail copies of Windows 7 are distributed on two DVDs: one for the IA-32 version and the other for x86-64. OEM copies include one DVD, depending on the processor architecture licensed. The installation media for consumer versions of Windows 7 are identical, the product key and corresponding license determines the edition that is installed. The Windows Anytime Upgrade service can be used to purchase an upgrade that unlocks the functionality of a higher edition, such as going from Starter to Home Premium, and Home Premium to Ultimate. Most copies of Windows 7 only contained one license; in certain markets, a "Family Pack" version of Windows 7 Home Premium was also released for a limited time, which allowed upgrades on up to three computers. In certain regions, copies of Windows 7 were only sold in, and could only be activated in a designated region.
Support lifecycle
Support for Windows 7 without Service Pack 1 ended on April 9, 2013, after 3 years, 8 months, and 18 days, requiring users to update in order to continue receiving updates and support. Microsoft ended the sale of new retail copies of Windows 7 in October 2014, and the sale of new OEM licenses for Windows 7 Home Basic, Home Premium, and Ultimate ended on October 31, 2014. OEM sales of PCs with Windows 7 Professional pre-installed ended on October 31, 2016. The sale of non-Professional OEM licenses was stopped on October 31, 2014. Support for Windows Vista ended on April 11, 2017, requiring users to upgrade in order to continue receiving updates and support.
Mainstream support for Windows 7 ended on January 13, 2015. Extended support for Windows 7 ended on January 14, 2020. In August 2019, Microsoft announced that it would offer 'free' extended security updates to some business users.
On September 7, 2018, Microsoft announced a paid "Extended Security Updates" service that will offer additional updates for Windows 7 Professional and Enterprise for up to three years after the end of extended support.
Variants of Windows 7 for embedded systems and thin clients have different support policies: Windows Embedded Standard 7 support ended in October 2020, while Windows Thin PC and Windows Embedded POSReady 7 had support until October 2021. Windows Embedded Standard 7 and Windows Embedded POSReady 7 also get Extended Security Updates for up to three years after their end of extended support dates. However, unlike Windows XP with its embedded-edition updates, these embedded-edition updates cannot be installed on non-embedded Windows 7 editions with a simple registry hack; instead, a more complex patching tool, which allows the installation of pirated Extended Security Updates, ended up being the only way for consumer variants to continue receiving updates. The Extended Security Updates service on Windows Embedded POSReady 7 will expire on October 14, 2024. This will mark the final end of the Windows NT 6.1 product line after 15 years, 2 months, and 17 days.
In March 2019, Microsoft announced that it would display notifications to users informing users of the upcoming end of support, and direct users to a website urging them to purchase a Windows 10 upgrade or a new computer.
In August 2019, researchers reported that "all modern versions of Microsoft Windows" may be at risk for "critical" system compromise because of design flaws of hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability, , that potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol, allowing for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, , based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is currently available. As of January 15, 2020, Windows Update is blocked from running on Windows 7.
In September 2019, Microsoft announced that it would provide free security updates for Windows 7 on federally-certified voting machines through the 2020 United States elections.
System requirements
Additional requirements to use certain features:
Windows XP Mode (Professional, Ultimate and Enterprise): Requires an additional 1 GB of RAM and additional 15 GB of available hard disk space. The requirement for a processor capable of hardware virtualization has been lifted.
Windows Media Center (included in Home Premium, Professional, Ultimate and Enterprise), requires a TV tuner to receive and record TV.
Extent of hardware support
Physical memory
The maximum amount of RAM that Windows 7 supports varies depending on the product edition and on the processor architecture: the IA-32 (32-bit) editions support up to 4 GB (2 GB for Starter), while the x64 editions support up to 8 GB (Home Basic), 16 GB (Home Premium), or 192 GB (Professional, Enterprise, and Ultimate).
Processor limits
Windows 7 Professional and higher editions support up to two physical processors (CPU sockets), whereas the Starter, Home Basic, and Home Premium editions support only one. Physical processors with multiple cores, hyper-threading, or both implement more than one logical processor per physical processor. The x86 editions of Windows 7 support up to 32 logical processors; x64 editions support up to 256 (four processor groups of 64).
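The logical-processor count that these limits apply to is the product of sockets, cores per socket, and hardware threads per core. The short sketch below checks a hypothetical machine configuration against the edition limits quoted above.

```python
# Hypothetical configuration check against the Windows 7 limits quoted above.

MAX_SOCKETS_PROFESSIONAL = 2     # Professional and higher editions
MAX_LOGICAL_X86 = 32             # 32-bit editions
MAX_LOGICAL_X64 = 256            # 64-bit editions

def logical_processors(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Total logical processors exposed to the operating system."""
    return sockets * cores_per_socket * threads_per_core

# Example: a two-socket machine with 8-core, hyper-threaded CPUs.
count = logical_processors(sockets=2, cores_per_socket=8, threads_per_core=2)
print(f"{count} logical processors")                     # 32
print("within x64 limit:", count <= MAX_LOGICAL_X64)     # True
print("within x86 limit:", count <= MAX_LOGICAL_X86)     # True (exactly 32)
```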
In January 2016, Microsoft announced that it would no longer support Windows platforms older than Windows 10 on any future Intel-compatible processor lines, citing difficulties in reliably allowing the operating system to operate on newer hardware. Microsoft stated that effective July 17, 2017, devices with Intel Skylake CPUs were only to receive the "most critical" updates for Windows 7 and 8.1, and only if they have been judged not to affect the reliability of Windows 7 on older hardware. For enterprise customers, Microsoft issued a list of Skylake-based devices "certified" for Windows 7 and 8.1 in addition to Windows 10, to assist them in migrating to newer hardware that can eventually be upgraded to 10 once they are ready to transition. Microsoft and their hardware partners provide special testing and support for these devices on 7 and 8.1 until the July 2017 date.
On March 18, 2016, in response to criticism from enterprise customers, Microsoft delayed the end of support and non-critical updates for Skylake systems to July 17, 2018, but stated that they would also continue to receive security updates through the end of extended support. In August 2016, citing a "strong partnership with our OEM partners and Intel", Microsoft retracted the decision and stated that it would continue to support Windows 7 and 8.1 on Skylake hardware through the end of their extended support lifecycle. However, the restrictions on newer CPU microarchitectures remain in force.
In March 2017, a Microsoft knowledge base article announced a policy which implies that devices using Intel Kaby Lake, AMD Bristol Ridge, or AMD Ryzen processors would be blocked from using Windows Update entirely. In addition, official Windows 7 device drivers are not available for the Kaby Lake and Ryzen platforms.
Security updates released since March 2018 contain bugs which affect processors that do not support SSE2 extensions, including all Pentium III processors. Microsoft initially stated that it would attempt to resolve the issue, and prevented installation of the affected patches on these systems. However, on June 15, 2018, Microsoft retroactively modified its support documents to remove the promise that this bug would be resolved, replacing it with a statement suggesting that users obtain a newer processor. This effectively ends future patch support for Windows 7 on these systems.
Updates
Service Pack 1
Windows 7 Service Pack 1 (SP1) was announced on March 18, 2010. A beta was released on July 12, 2010. The final version was released to the public on February 22, 2011. At the time of release, it was not made mandatory. It was available via Windows Update, direct download, or by ordering the Windows 7 SP1 DVD. The service pack is on a much smaller scale than those released for previous versions of Windows, particularly Windows Vista.
Windows 7 Service Pack 1 adds support for Advanced Vector Extensions (AVX), a 256-bit instruction set extension for processors, and improves IKEv2 by adding additional identification fields such as E-mail ID to it. In addition, it adds support for Advanced Format 512e as well as additional Identity Federation Services. Windows 7 Service Pack 1 also resolves a bug related to HDMI audio and another related to printing XPS documents.
In Europe, the automatic nature of the BrowserChoice.eu feature was dropped in Windows 7 Service Pack 1 in February 2011 and remained absent for 14 months despite Microsoft reporting that it was still present, subsequently described by Microsoft as a "technical error." As a result, in March 2013, the European Commission fined Microsoft €561 million to deter companies from reneging on settlement promises.
Platform Update
The Platform Update for Windows 7 SP1 and Windows Server 2008 R2 SP1 was released on February 26, 2013 after a pre-release version had been released on November 5, 2012. It is also included with Internet Explorer 10 for Windows 7.
It includes enhancements to Direct2D, DirectWrite, Direct3D, Windows Imaging Component (WIC), Windows Advanced Rasterization Platform (WARP), Windows Animation Manager (WAM), XPS Document API, H.264 Video Decoder and JPEG XR decoder. However support for Direct3D 11.1 is limited as the update does not include DXGI/WDDM 1.2 from Windows 8, making unavailable many related APIs and significant features such as stereoscopic frame buffer, feature level 11_1 and optional features for levels 10_0, 10_1 and 11_0.
Disk Cleanup update
In October 2013, a Disk Cleanup Wizard addon was released that lets users delete outdated Windows updates on Windows 7 SP1, thus reducing the size of the WinSxS directory. This update backports some features found in Windows 8.
Windows Management Framework 5.0
Windows Management Framework 5.0 includes updates to Windows PowerShell 5.0, Windows PowerShell Desired State Configuration (DSC), Windows Remote Management (WinRM), and Windows Management Instrumentation (WMI). It was released on February 24, 2016 and was eventually superseded by Windows Management Framework 5.1.
Convenience rollup
In May 2016, Microsoft released a "Convenience rollup update for Windows 7 SP1 and Windows Server 2008 R2 SP1," which contains all patches released between the release of SP1 and April 2016. The rollup is not available via Windows Update, and must be downloaded manually. This package can also be integrated into a Windows 7 installation image.
Since October 2016, all security and reliability updates are cumulative. Downloading and installing updates that address individual problems is no longer possible, but the number of updates that must be downloaded to fully update the OS is significantly reduced.
Monthly update rollups (July 2016-January 2020)
In June 2018, Microsoft announced that it would move Windows 7 to a monthly update model beginning with updates released in September 2018, two years after it had switched the rest of its supported operating systems to that model.
With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows 7 reached its end of life - one package containing security and quality updates, and a smaller package that contained only the security updates. Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup.
Installing the preview rollup package released for Windows 7 on March 19, 2019, or any later released rollup package, makes Windows more reliable. This change was made so Microsoft could continue to service the operating system while avoiding “version-related issues”.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020).
The last non-extended security update rollup packages were released on January 14, 2020, the last day that Windows 7 had extended support.
End of support (after January 14, 2020)
On January 14, 2020, Windows 7 support ended with Microsoft no longer providing security updates or fixes after that date, except for subscribers of the Windows 7 Extended Security Updates. However, there have been two updates that have been issued to non-ESU subscribers:
In February 2020, Microsoft released an update via Windows Update to fix a black wallpaper issue caused by the January 2020 update for Windows 7.
In June 2020, Microsoft released an update via Windows Update to roll out the new Chromium-based Microsoft Edge to Windows 7 and 8.1 machines that are not connected to Active Directory. Users, e.g. those on Active Directory, can download Edge from Microsoft's website.
In a support document, Microsoft has stated that a full-screen upgrade warning notification would be displayed on Windows 7 PCs on all editions except the Enterprise edition after January 15. The notification does not appear on machines connected to Active Directory, machines in kiosk mode, or machines subscribed for Extended Security Updates.
Reception
Critical reception
Windows 7 received critical acclaim, with critics noting the increased usability and functionality when compared with its predecessor, Windows Vista. CNET gave Windows 7 Home Premium a rating of 4.5 out of 5 stars, stating that it "is more than what Vista should have been, [and] it's where Microsoft needed to go". PC Magazine rated it a 4 out of 5 saying that Windows 7 is a "big improvement" over Windows Vista, with fewer compatibility problems, a retooled taskbar, simpler home networking and faster start-up. Maximum PC gave Windows 7 a rating of 9 out of 10 and called Windows 7 a "massive leap forward" in usability and security, and praised the new Taskbar as "worth the price of admission alone." PC World called Windows 7 a "worthy successor" to Windows XP and said that speed benchmarks showed Windows 7 to be slightly faster than Windows Vista. PC World also named Windows 7 one of the best products of the year.
In its review of Windows 7, Engadget said that Microsoft had taken a "strong step forward" with Windows 7 and reported that speed is one of Windows 7's major selling points—particularly for the netbook sets. Laptop Magazine gave Windows 7 a rating of 4 out of 5 stars and said that Windows 7 makes computing more intuitive, offered better overall performance including a "modest to dramatic" increase in battery life on laptop computers. TechRadar gave Windows 7 a rating of 5 out of 5 stars, concluding that "it combines the security and architectural improvements of Windows Vista with better performance than XP can deliver on today's hardware. No version of Windows is ever perfect, but Windows 7 really is the best release of Windows yet." USA Today and The Telegraph also gave Windows 7 favorable reviews.
Nick Wingfield of The Wall Street Journal wrote, "Visually arresting," and "A pleasure." Mary Branscombe of Financial Times wrote, "A clear leap forward." Gizmodo wrote, "Windows 7 Kills Snow Leopard." Don Reisinger of CNET wrote, "Delightful." David Pogue of The New York Times wrote, "Faster." J. Peter Bruzzese and Richi Jennings of Computerworld wrote, "Ready."
Some Windows Vista Ultimate users have expressed concerns over Windows 7 pricing and upgrade options. Windows Vista Ultimate users wanting to upgrade from Windows Vista to Windows 7 had to either pay $219.99 to upgrade to Windows 7 Ultimate or perform a clean install, which requires them to reinstall all of their programs.
The changes to User Account Control on Windows 7 were criticized for being potentially insecure, as an exploit was discovered allowing untrusted software to be launched with elevated privileges by exploiting a trusted component. Peter Bright of Ars Technica argued that "the way that the Windows 7 UAC 'improvements' have been made completely exempts Microsoft's developers from having to do that work themselves. With Windows 7, it's one rule for Redmond, another one for everyone else." Microsoft's Windows kernel engineer Mark Russinovich acknowledged the problem, but noted that malware can also compromise a system when users agree to a prompt.
Sales
In July 2009, in only eight hours, pre-orders of Windows 7 at amazon.co.uk surpassed the demand which Windows Vista had in its first 17 weeks. It became the highest-grossing pre-order in Amazon's history, surpassing sales of the previous record holder, the seventh Harry Potter book. After 36 hours, 64-bit versions of Windows 7 Professional and Ultimate editions sold out in Japan. Two weeks after its release its market share had surpassed that of Snow Leopard, released two months previously as the most recent update to Apple's Mac OS X operating system. According to Net Applications, Windows 7 reached a 4% market share in less than three weeks; in comparison, it took Windows Vista seven months to reach the same mark. As of February 2014, Windows 7 had a market share of 47.49% according to Net Applications; in comparison, Windows XP had a market share of 29.23%.
On March 4, 2010, Microsoft announced that it had sold more than 90 million licenses.
By April 23, 2010, more than 100 million copies had been sold in six months, making it Microsoft's fastest-selling operating system. As of June 23, 2010, Windows 7 had sold 150 million copies, making it the fastest-selling operating system in history, with seven copies sold every second. Based on worldwide data taken during June 2010 from Windows Update, 46% of Windows 7 PCs ran the 64-bit edition of Windows 7. According to Stephen Baker of the NPD Group, during April 2010, 77% of PCs sold at retail in the United States were pre-installed with the 64-bit edition of Windows 7. As of July 22, 2010, Windows 7 had sold 175 million copies. On October 21, 2010, Microsoft announced that more than 240 million copies of Windows 7 had been sold. Three months later, on January 27, 2011, Microsoft announced total sales of 300 million copies of Windows 7. On July 12, 2011, the sales figure was refined to over 400 million end-user licenses and business installations. As of July 9, 2012, over 630 million licenses had been sold; this number includes licenses sold to OEMs for new PCs.
Antitrust concerns
As with other Microsoft operating systems, Windows 7 was studied by United States federal regulators who oversee the company's operations following the 2001 United States v. Microsoft Corp. settlement. According to status reports filed, the three-member panel began assessing prototypes of the new operating system in February 2008. Michael Gartenberg, an analyst at Jupiter Research, said, "[Microsoft's] challenge for Windows 7 will be how can they continue to add features that consumers will want that also don't run afoul of regulators."
In order to comply with European antitrust regulations, Microsoft proposed the use of a "ballot" screen containing download links to competing web browsers, thus removing the need for a version of Windows completely without Internet Explorer, as previously planned. Microsoft announced that it would discard the separate version for Europe and ship the standard upgrade and full packages worldwide, in response to criticism involving Windows 7 E and concerns from manufacturers about possible consumer confusion if a version of Windows 7 with Internet Explorer were shipped later, after one without Internet Explorer.
As with the previous version of Windows, an N version, which does not come with Windows Media Player, has been released in Europe, but only for sale directly from Microsoft sales websites and selected others.
See also
BlueKeep, a security vulnerability discovered in May 2019 that affected most Windows NT-based computers up to Windows 7
References
Further reading
External links
Windows 7 Service Pack 1 (SP1)
Windows 7 SP1 update history
2009 software
IA-32 operating systems
X86-64 operating systems
Finitely generated module
In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R, or a module of finite type.
Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules all of which are defined below. Over a Noetherian ring the concepts of finitely generated, finitely presented and coherent modules coincide.
A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group.
Definition
The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan.
The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map
f : Rn → M
for some n (that is, M is a quotient of a free module of finite rank).
If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example, the set {1} and the set of the prime numbers are generating sets of Z viewed as a Z-module, but a generating set formed from prime numbers has at least two elements.
In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements: this is the dimension theorem for vector spaces).
Any module is the union of the directed set of its finitely generated submodules.
A module M is finitely generated if and only if any increasing chain Mi of submodules with union M stabilizes: i.e., there is some i such that Mi = M. This fact with Zorn's lemma implies that every nonzero finitely generated module admits maximal submodules. If any increasing chain of submodules stabilizes (i.e., any submodule is finitely generated), then the module M is called a Noetherian module.
Examples
If a module is generated by one element, it is called a cyclic module.
Let R be an integral domain with K its field of fractions. Then every finitely generated R-submodule I of K is a fractional ideal: that is, there is some nonzero r in R such that rI is contained in R. Indeed, one can take r to be the product of the denominators of the generators of I. If R is Noetherian, then every fractional ideal arises in this way.
Finitely generated modules over the ring of integers Z coincide with the finitely generated abelian groups. These are completely classified by the structure theorem, taking Z as the principal ideal domain.
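As a brief worked illustration (the specific group below is chosen here for illustration and does not come from the article), the structure theorem can be made explicit by putting a relation matrix into Smith normal form over the integers:

% Relations 2e1 + 4e2 = 0 and 6e2 = 0 define M = Z^2 / <(2,4), (0,6)>.
% A single column operation (a change of basis of Z^2) diagonalises the relation matrix:
\[
\begin{pmatrix} 2 & 4 \\ 0 & 6 \end{pmatrix}
\xrightarrow{\,C_2 \to C_2 - 2C_1\,}
\begin{pmatrix} 2 & 0 \\ 0 & 6 \end{pmatrix},
\qquad\text{so}\qquad
M \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/6\mathbb{Z},
\]
% a direct sum of cyclic groups with invariant factors 2 and 6 (with 2 dividing 6), as the structure theorem predicts.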
Finitely generated (say left) modules over a division ring are precisely finite dimensional vector spaces (over the division ring).
Some facts
Every homomorphic image of a finitely generated module is finitely generated. In general, submodules of finitely generated modules need not be finitely generated. As an example, consider the ring R = Z[X1, X2, ...] of all polynomials in countably many variables. R itself is a finitely generated R-module (with {1} as generating set). Consider the submodule K consisting of all those polynomials with zero constant term. Since every polynomial contains only finitely many terms, any finite subset of K involves only finitely many of the variables, say X1, ..., Xm, and the submodule it generates is contained in the ideal (X1, ..., Xm), which does not contain Xm+1; hence the R-module K is not finitely generated.
In general, a module is said to be Noetherian if every submodule is finitely generated. A finitely generated module over a Noetherian ring is a Noetherian module (and indeed this property characterizes Noetherian rings): A module over a Noetherian ring is finitely generated if and only if it is a Noetherian module. This resembles, but is not exactly Hilbert's basis theorem, which states that the polynomial ring R[X] over a Noetherian ring R is Noetherian. Both facts imply that a finitely generated commutative algebra over a Noetherian ring is again a Noetherian ring.
More generally, an algebra (e.g., ring) that is a finitely generated module is a finitely generated algebra. Conversely, if a finitely generated algebra is integral (over the coefficient ring), then it is a finitely generated module. (See integral element for more.)
Let 0 → M′ → M → M′′ → 0 be an exact sequence of modules. Then M is finitely generated if M′, M′′ are finitely generated. There are some partial converses to this. If M is finitely generated and M′′ is finitely presented (which is stronger than finitely generated; see below), then M′ is finitely generated. Also, M is Noetherian (resp. Artinian) if and only if M′, M′′ are Noetherian (resp. Artinian).
Let B be a ring and A its subring such that B is a faithfully flat right A-module. Then a left A-module F is finitely generated (resp. finitely presented) if and only if the B-module B ⊗A F is finitely generated (resp. finitely presented).
Finitely generated modules over a commutative ring
For finitely generated modules over a commutative ring R, Nakayama's lemma is fundamental. Sometimes, the lemma allows one to prove phenomena of finite-dimensional vector spaces for finitely generated modules. For example, if f : M → M is a surjective R-endomorphism of a finitely generated module M, then f is also injective, and hence is an automorphism of M. This says simply that M is a Hopfian module. Similarly, an Artinian module M is coHopfian: any injective endomorphism f is also a surjective endomorphism.
Any R-module is an inductive limit of finitely generated R-submodules. This is useful for weakening an assumption to the finite case (e.g., the characterization of flatness with the Tor functor.)
An example of a link between finite generation and integral elements can be found in commutative algebras. To say that a commutative algebra A is a finitely generated ring over R means that there exists a set of elements G = {x1, ..., xn} of A such that the smallest subring of A containing G and R is A itself. Because the ring product may be used to combine elements, more than just R-linear combinations of elements of G are generated. For example, a polynomial ring R[x] is finitely generated by {1,x} as a ring, but not as a module. If A is a commutative algebra (with unity) over R, then the following two statements are equivalent:
A is a finitely generated R module.
A is both a finitely generated ring over R and an integral extension of R.
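As an illustration of this equivalence (the example is supplied here and is not part of the original article): the subring Z[√2] = {a + b√2 : a, b in Z} of the real numbers is generated as a Z-module by the two elements 1 and √2, and correspondingly √2 is integral over Z, being a root of x^2 − 2. By contrast, Z[1/2] is a finitely generated ring over Z (generated by 1/2) but is not integral over Z, and indeed it is not finitely generated as a Z-module, since any finite set of its elements has bounded powers of 2 in the denominators.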
Generic rank
Let M be a finitely generated module over an integral domain A with the field of fractions K. Then the dimension dimK(M ⊗A K) is called the generic rank of M over A. This number is the same as the number of maximal A-linearly independent vectors in M, or equivalently the rank of a maximal free submodule F of M (cf. rank of an abelian group). Since (M/F) ⊗A K = 0, M/F is a torsion module. When A is Noetherian, by generic freeness, there is an element f (depending on M) such that the localization M[1/f] is a free A[1/f]-module. Then the rank of this free module is the generic rank of M.
Now suppose the integral domain A is generated as an algebra over a field k by finitely many homogeneous elements of degrees d1, ..., dr. Suppose M is graded as well, and let PM(t) = Σn dimk(Mn) t^n be the Poincaré series of M.
By the Hilbert–Serre theorem, there is a polynomial F such that PM(t) = F(t) / ∏i (1 − t^di). Then F(1) is the generic rank of M.
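For a quick check of this formula (an illustrative computation added here, not taken from the article), take A = k[x] graded by degree, so that r = 1 and d1 = 1. Then PA(t) = 1 + t + t^2 + ... = 1/(1 − t), so F(t) = 1 and F(1) = 1, which agrees with the generic rank of A as a module over itself. For the torsion module M = A/(x), one gets PM(t) = 1 = (1 − t)/(1 − t), so F(t) = 1 − t and F(1) = 0, consistent with generic rank 0.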
A finitely generated module over a principal ideal domain is torsion-free if and only if it is free. This is a consequence of the structure theorem for finitely generated modules over a principal ideal domain, the basic form of which says a finitely generated module over a PID is a direct sum of a torsion module and a free module. But it can also be shown directly as follows: let M be a torsion-free finitely generated module over a PID A and F a maximal free submodule. Let f be a nonzero element of A such that fM ⊆ F. Then fM is free, since it is a submodule of a free module and A is a PID. But now the map M → fM, x ↦ fx, is an isomorphism since M is torsion-free, and so M is free as well.
By the same argument as above, a finitely generated module over a Dedekind domain A (or more generally a semi-hereditary ring) is torsion-free if and only if it is projective; consequently, a finitely generated module over A is a direct sum of a torsion module and a projective module. A finitely generated projective module over a Noetherian integral domain has constant rank and so the generic rank of a finitely generated module over A is the rank of its projective part.
Equivalent definitions and finitely cogenerated modules
The following conditions are equivalent to M being finitely generated (f.g.):
For any family of submodules {Ni | i ∈ I} in M, if Σi∈I Ni = M, then Σi∈F Ni = M for some finite subset F of I.
For any chain of submodules {Ni | i ∈ I} in M, if ∪i∈I Ni = M, then Ni = M for some i in I.
If φ : ⊕i∈I R → M is an epimorphism, then the restriction ⊕i∈F R → M is an epimorphism for some finite subset F of I.
From these conditions it is easy to see that being finitely generated is a property preserved by Morita equivalence. The conditions are also convenient to define a dual notion of a finitely cogenerated module M. The following conditions are equivalent to a module being finitely cogenerated (f.cog.):
For any family of submodules {Ni | i ∈ I} in M, if ∩i∈I Ni = {0}, then ∩i∈F Ni = {0} for some finite subset F of I.
For any chain of submodules {Ni | i ∈ I} in M, if ∩i∈I Ni = {0}, then Ni = {0} for some i in I.
If φ : M → ∏i∈I Mi is a monomorphism, where each Mi is an R-module, then the induced map M → ∏i∈F Mi is a monomorphism for some finite subset F of I.
Both f.g. modules and f.cog. modules have interesting relationships to Noetherian and Artinian modules, and the Jacobson radical J(M) and socle soc(M) of a module. The following facts illustrate the duality between the two conditions. For a module M:
M is Noetherian if and only if every submodule N of M is f.g.
M is Artinian if and only if every quotient module M/N is f.cog.
M is f.g. if and only if J(M) is a superfluous submodule of M, and M/J(M) is f.g.
M is f.cog. if and only if soc(M) is an essential submodule of M, and soc(M) is f.g.
If M is a semisimple module (such as soc(N) for any module N), it is f.g. if and only if f.cog.
If M is f.g. and nonzero, then M has a maximal submodule and any quotient module M/N is f.g.
If M is f.cog. and nonzero, then M has a minimal submodule, and any submodule N of M is f.cog.
If N and M/N are f.g. then so is M. The same is true if "f.g." is replaced with "f.cog."
Finitely cogenerated modules must have finite uniform dimension. This is easily seen by applying the characterization using the finitely generated essential socle. Somewhat asymmetrically, finitely generated modules do not necessarily have finite uniform dimension. For example, an infinite direct product of nonzero rings is a finitely generated (cyclic!) module over itself, however it clearly contains an infinite direct sum of nonzero submodules. Finitely generated modules do not necessarily have finite co-uniform dimension either: any ring R with unity such that R/J(R) is not a semisimple ring is a counterexample.
Finitely presented, finitely related, and coherent modules
Another formulation is this: a finitely generated module M is one for which there is an epimorphism mapping Rk onto M :
f : Rk → M.
Suppose now there is an epimorphism
φ : F → M
for a module M and free module F.
If the kernel of φ is finitely generated, then M is called a finitely related module. Since M is isomorphic to F/ker(φ), this basically expresses that M is obtained by taking a free module and introducing finitely many relations within F (the generators of ker(φ)).
If the kernel of φ is finitely generated and F has finite rank (i.e. F=Rk), then M is said to be a finitely presented module. Here, M is specified using finitely many generators (the images of the k generators of F=Rk) and finitely many relations (the generators of ker(φ)). See also: free presentation. Finitely presented modules can be characterized by an abstract property within the category of R-modules: they are precisely the compact objects in this category.
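As a small illustration (supplied here, not drawn from the article), the Z-module Z/6Z is finitely presented with one generator and one relation: there is an exact sequence Z → Z → Z/6Z → 0 in which the first map is multiplication by 6, so here F = Rk = Z with k = 1 and ker(φ) = 6Z is generated by the single element 6.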
A coherent module M is a finitely generated module whose finitely generated submodules are finitely presented.
Over any ring R, coherent modules are finitely presented, and finitely presented modules are both finitely generated and finitely related. For a Noetherian ring R, finitely generated, finitely presented, and coherent are equivalent conditions on a module.
Some crossover occurs for projective or flat modules. A finitely generated projective module is finitely presented, and a finitely related flat module is projective.
It is true also that the following conditions are equivalent for a ring R:
R is a right coherent ring.
The module R, considered as a right module over itself, is a coherent module.
Every finitely presented right R-module is coherent.
Although coherence seems like a more cumbersome condition than finitely generated or finitely presented, it is nicer than them since the category of coherent modules is an abelian category, while, in general, neither finitely generated nor finitely presented modules form an abelian category.
See also
Integral element
Artin–Rees lemma
Countably generated module
Finite algebra
References
Textbooks
Bourbaki, Nicolas, Commutative algebra. Chapters 1–7. Translated from the French. Reprint of the 1989 English translation. Elements of Mathematics (Berlin). Springer-Verlag, Berlin, 1998. xxiv+625 pp.
Module theory
Secret decoder ring
A secret decoder ring (or secret decoder) is a device that allows one to decode a simple substitution cipher—or to encrypt a message by working in the opposite direction.
As inexpensive toys, secret decoders have often been used as promotional items by retailers, as well as radio and television programs, from the 1930s through to the current day.
Decoders, whether badges or rings, are an entertaining way for children to tap into a common fascination with encryption, ciphers, and secret codes, and are used to send hidden messages back and forth to one another.
History
Secret decoders are generally circular scales, descendants of the cipher disk developed in the 15th century by Leon Battista Alberti. Rather than the complex polyalphabetic Alberti cipher method, the decoders for children invariably use simple Caesar cipher substitutions.
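Such a Caesar-style substitution, mapping letters to shifted letters or to numbers, can be sketched in a few lines of Python. The sketch below is purely illustrative (the offset and message are invented) and does not reproduce any particular toy's actual scheme:

import string

ALPHABET = string.ascii_uppercase  # A-Z

def encode(message, offset=0):
    """Map each letter to a number from 1 to 26, after a Caesar-style shift."""
    numbers = []
    for ch in message.upper():
        if ch in ALPHABET:
            index = (ALPHABET.index(ch) + offset) % 26
            numbers.append(index + 1)  # badge-style numbering 1-26 rather than 0-25
    return numbers

def decode(numbers, offset=0):
    """Reverse the mapping, turning numbers 1-26 back into letters."""
    return "".join(ALPHABET[(n - 1 - offset) % 26] for n in numbers)

# Invented example: with offset 0 this is the plain A=1 ... Z=26 scheme.
secret = encode("DRINK MORE OVALTINE", offset=3)
print(secret)                    # e.g. [7, 21, 12, ...]
print(decode(secret, offset=3))  # DRINKMOREOVALTINE (spaces are dropped)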
The most well-known example started in 1934 with the Ovaltine company's sponsored radio program Little Orphan Annie. The show's fan club, "Radio Orphan Annie's Secret Society", distributed a member's handbook that included a simple substitution cipher with a resulting numeric cipher text. This was followed the next year with a membership pin that included a cipher disk - enciphering the letters A-Z to numbers 1-26. From 1935 to 1940, metal decoders were produced for the promotion. From 1941 on, paper decoders were produced. Similar metal badges and pocket decoders continued with the Captain Midnight radio and television programs.
None of these early decoders were in the form of finger rings; however, "secret compartment" rings were common radio program premiums. In the early 1960s, secret decoder rings appeared - notably in conjunction with the Jonny Quest television program sponsored by PF Shoes. A later, less ornate, decoder ring was offered by Kix Cereals.
Today, high quality, stainless steel decoder rings for children and adults are being produced by companies such as Retroworks and Think Geek.
Messages
Ovaltine and other companies that marketed early decoders to children often included "secret messages" on their radio shows aimed at children. These could be decoded for a preview of the next episode of the show.
Film references
The film A Christmas Story (1983) depicts the Little Orphan Annie radio show transmitting a secret message that deciphered to: "Be sure to drink your Ovaltine", unlike the actual broadcasts' secret code segments, which usually previewed the upcoming episode.
Decoder rings are mentioned by Arnold Schwarzenegger's character in Last Action Hero.
A "Drogan's Decoder Ring" is mentioned in the 1985 comedy movie Spies Like Us between characters played by Stephen Hoye and Dan Aykroyd.
Laura Petrie mentions her husband, Rob's "Captain Midnight Decoder Ring," in Season 5, episode 27 of The Dick Van Dyke Show.
See also
Caesar cipher
Cipher disk
Jefferson disk
References
Encryption devices
History of cryptography
Mechanical puzzles
1930s toys
Television in Finland
Television was introduced in Finland in 1955. Color television started in 1971. Prior to 1986, Yle monopolized Finnish television. All terrestrial analogue stations stopped broadcasting on 1 September 2007 after the introduction of digital television; cable providers were allowed to continue analog broadcasting in their networks until 1 March 2008.
Typically, foreign-language content is subtitled, retaining the original language soundtrack. This includes interview responses in news or magazine programmes not given in the main language of that programme. Foreign programming intended for children is, however, usually dubbed into one of the national languages. Regardless of the intended audience or original language, many shows receive a Finnish and/or Swedish title which is used in programme schedules.
In 2016 it was said that 47% of people watch via terrestrial antenna, 43% via cable, 11% via IPTV and 4% via satellite.
Digital terrestrial
Digital terrestrial television was launched on 21 August 2001. The analogue networks continued its broadcasts alongside the digital ones until 1 September 2007, when they were shut down nationwide.
Before the analogue switchoff, the terrestrial network had three multiplexes: MUX A, MUX B and MUX C. MUX A contained the channels of the public broadcaster Yleisradio and MUX B was shared between the two commercial broadcasters: MTV3 and Nelonen. MUX C contained channels of various other broadcasters. After the analogue closedown, a fourth multiplex named MUX E was launched.
In addition to the free-to-air broadcasts, two companies provide encryption cards for pay television: Canal Digital and PlusTV. Canal Digital was the first to launch, originally only offering four Canal+ channels (the Disney Channel was added later on). PlusTV was launched in November 2006, originally only broadcasting MTV3 Max and Subtv Juniori (later adding Subtv Leffa and Urheilu+kanava). Both packages got more channels with the launch of MUX E in September 2007: SVT Europa and MTV3 Fakta were added to PlusTV and KinoTV was added to Canal Digital, while Discovery Channel, Eurosport, MTV Finland and Nickelodeon were added to both packages.
September 2007 also saw the launch of the SveaTV package in Ostrobothnia which broadcasts channels from Sweden.
The digital channel YLE Extra was closed on 31 December 2007 and was replaced by YLE TV1+, a simulcast of TV1 with subtitles included in the video stream. TV1+ was closed on 4 August 2008 due to its low viewing share.
Finland has started the DVB-T2 switchover, which will be completed on 31 March 2020.
Cable
Analogue cable television was switched off in Finland on 1 March 2008, but digital cable television is widespread all over the country and its infrastructure is used for cable internet services.
The major cable operators are DNA, Welho and TTV, operating in Turku, Helsinki and Tampere areas. All pay television uses digital broadcasts, DVB-C set-top boxes have been available since 2001.
Satellite
Digital satellite television started in the Nordic countries, including Finland, with the Multichoice Nordic pay-TV platform in 1996. The first set-top boxes available were manufactured by Nokia and Pace. The service merged with Canal Digital in late 1997. The competing pay television provider Viasat and Yle's channel TV Finland started digital broadcasts in 1999.
Canal Digital launched some HDTV channels, such as Discovery HD, in its digital pay-TV package during 2006. The pan-European HDTV channel Euro1080 HD1 is also available in Finland.
List of channels
All Yle channels are broadcast free-to-air and so are a few commercial ones including MTV3, Nelonen, Sub, Jim, TV5, FOX and Kutonen. Yle channels are state owned and are funded by a ring fenced so-called "Yle tax".
Most of the channels are the same throughout mainland Finland. In Ostrobothnia and Åland there is an extra multiplex available which provides encrypted channels from Sweden, along with respective local stations. Due to overlapping signals, Russian, Swedish, Norwegian and Estonian stations can also be received near the border areas, and vice versa.
DVB-T Channels
DVB-T2 channels
Viewing shares
Notes
See also
Media of Finland
References
External links
Viestintävirasto - Finnish Communications Regulatory Authority
Digita - Terrestrial Broadcast Operator
DNA - Terrestrial Broadcast Operator
Finnpanel - Measures TV viewing and radio listening
Telkku.com - TV-guide
Hushmail
Hushmail is an encrypted proprietary web-based email service offering PGP-encrypted e-mail and vanity domain service. Hushmail uses OpenPGP standards. If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext. In July 2016, the company launched an iOS app that offers end-to-end encryption and full integration with the webmail settings. The company is located in Vancouver, British Columbia, Canada.
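The password fallback described above can be illustrated conceptually with a password-derived symmetric key. The following sketch uses the third-party Python cryptography library and is only a minimal illustration of the general idea, not Hushmail's actual implementation; the password, message and key-derivation parameters are invented for the example:

import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the password shared out of band."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=390000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

# Sender side: encrypt the message with a password; the recipient is given a hint
# to help recall it, and the ciphertext is stored for later pickup.
salt = os.urandom(16)
token = Fernet(key_from_password("correct horse battery staple", salt)).encrypt(b"Meeting moved to 3 pm")

# Recipient side: the stored message can only be read with the same password.
print(Fernet(key_from_password("correct horse battery staple", salt)).decrypt(token).decode())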
History
Hushmail was founded by Cliff Baltzley in 1999 after he left Ultimate Privacy.
Accounts
Individuals
There is one type of paid account, Hushmail Premium, which provides 10GB of storage, as well as IMAP and POP3 service. Hushmail offers a two-week free trial of this account.
Businesses
The standard business account provides the same features as the paid individual account, plus other features like vanity domain, email forwarding, catch-all email and user admin. A standard business plan with email archiving is also available. Features like secure forms and email archiving can be found in the healthcare and legal industry-specific plans.
Additional security features include hidden IP addresses in e-mail headers, two-step verification and HIPAA compliant encryption.
Instant messaging
An instant messaging service, Hush Messenger, was offered until July 1, 2011.
Compromises to email privacy
Hushmail received favorable reviews in the press. It was believed that possible threats, such as demands from the legal system to reveal the content of traffic through the system, were not imminent in Canada, unlike in the United States, and that if data were to be handed over, encrypted messages would be available only in encrypted form.
Developments in November 2007 led to doubts amongst security-conscious users about Hushmail's security, specifically concern over a possible backdoor. The issue originated with the non-Java version of the Hush system. It performed the encrypt/decrypt steps on Hush's servers and then used SSL to transmit the data to the user. The data is available as cleartext during this small window of time, and the passphrase can be captured at this point, facilitating the decryption of all stored messages and of future messages using this passphrase. Hushmail stated that the Java version is also vulnerable, in that the company may be compelled to deliver a compromised Java applet to a user.
Hushmail supplied cleartext copies of private email messages associated with several addresses at the request of law enforcement agencies under a Mutual Legal Assistance Treaty with the United States: e.g. in the case of United States v. Stumbo. In addition, the contents of emails between Hushmail addresses were analyzed, and 12 CDs were supplied to U.S. authorities. Hushmail privacy policy states that it logs IP addresses in order "to analyze market trends, gather broad demographic information, and prevent abuse of our services."
Hush Communications, the company that provides Hushmail, states that it will not release any user data without a court order from the Supreme Court of British Columbia, Canada, and that other countries seeking access to user data must apply to the government of Canada via an applicable Mutual Legal Assistance Treaty. Hushmail states, "...that means that there is no guarantee that we will not be compelled, under a court order issued by the Supreme Court of British Columbia, Canada, to treat a user named in a court order differently, and compromise that user's privacy" and "[...]if a court order has been issued by the Supreme Court of British Columbia compelling us to reveal the content of your encrypted email, the "attacker" could be Hush Communications, the actual service provider."
See also
Comparison of mail servers
Comparison of webmail providers
References
External links
Cryptographic software
Webmail
Internet privacy software
OpenPGP
Internet properties established in 1999
Sveriges Television
Sveriges Television AB ("Sweden's Television Stock Company"), shortened to SVT, is the Swedish national public state-controlled television broadcaster, funded by a public service tax on personal income set by the Riksdag (national parliament). Prior to 2019, SVT was funded by a television licence fee payable by all owners of television sets. The Swedish public broadcasting system is largely modelled after the system used in the United Kingdom, and Sveriges Television shares many traits with its British counterpart, the BBC.
SVT is a public limited company that can be described as a quasi-autonomous non-government organisation. Together with the other two public broadcasters, Sveriges Radio and Sveriges Utbildningsradio, it is owned by an independent foundation, Förvaltningsstiftelsen för Sveriges Radio AB, Sveriges Television AB och Sveriges Utbildningsradio AB. The foundation's board consists of 13 politicians, representing the political parties in the Riksdag and appointed by the Government. The foundation in turn appoints the members of the SVT board. SVT's regulatory framework is governed by Swedish law. Originally, SVT and Sveriges Radio were a joint company, but since 1979 they and Sveriges Utbildningsradio are sister companies sharing some joint services.
SVT maintained a monopoly in domestic terrestrial broadcasting from its start in 1956 until the privately held TV4 started broadcasting terrestrially in 1992. It is barred from accepting advertisements except in the case of sponsors for sporting events. Until the launch of the Swedish language satellite television channel TV3 in 1987, Sveriges Television provided the only Swedish television available to the public. SVT is still the biggest TV network in Sweden, with an audience share of 36.4 percent.
History
When radio broadcasting was first organized in the 1920s, it was decided to adopt a model similar to that of the British Broadcasting Company in the United Kingdom. Radio would be a monopoly funded by a licence fee and organized as a limited company, AB Radiotjänst ("Radio Service Ltd."), owned by the radio industry and the press. The transmitters were owned by the state through Telegrafverket and the press held a monopoly on newscasts through Tidningarnas Telegrambyrå. AB Radiotjänst was one of 23 founding broadcasting organizations of the European Broadcasting Union in 1950.
Tidningarnas Telegrambyrå lost its monopoly on newscasts de jure in 1947 and de facto in 1956, but otherwise the same model would be applied to television.
It was decided to start test transmissions of television in June 1954. The first transmissions were made on 29 October 1954 from the Royal Institute of Technology in Stockholm.
In 1956 the Riksdag decided that television broadcasting should continue on a permanent basis and on 4 September Radiotjänst initiated official transmissions from the new Nacka transmitter. A television licence for those owning a television set was introduced in October of that year.
Regularly scheduled television programming began in 1957. At the same time, Radiotjänst was renamed Sveriges Radio (SR) and its ownership was changed. The state and the press would have equal 40% shares, while the company itself would own 20% (in 1967, the state increased its share to 60% at the expense of the press).
In 1958, the first newscast, Aktuellt, was broadcast. During the 1960s the establishment of a second TV channel was frequently discussed. These discussions resulted in the launch of TV2 on 5 December 1969. The original channel became TV1 and it was intended that the two channels would broadcast in "stimulating competition" within the same company.
The first stage of the main headquarters building and TV studios for Sveriges Television, called TV-huset, was inaugurated on Oxenstiernsgatan in the Östermalm district in Stockholm on 30 October 1967. The completion of the second stage of TV-huset and its official opening was on 5 December 1969, the same day as the start of operations of TV2, making it one of the largest television studios in Europe at that time.
1970 saw the start of the first regional programme, Sydnytt from Malmö. More regional news programmes launched in 1972 and the entire country was covered by regional news programmes by 1987 when ABC from Stockholm began.
When TV2 started the news programmes were reorganized. Aktuellt was replaced by TV-nytt, which was responsible for the main 19.30 bulletin on TV1 as well as news updates on both channels. In addition, the two channels would get one "commentary bulletin" each. TV2's was entitled Rapport and TV1's was Nu.
In 1972, the news was reorganized once again. Rapport was moved to the 19.30 slot on TV2 while Aktuellt was revived, to broadcast at 18.00 and 21.00 on TV1. These timeslots would mostly stay unchanged for the following decades.
In 1966, the first colour broadcast was made, with regular colour broadcasts being introduced in 1970. Teletext started in 1978.
At the end of the 1970s, SR was reorganized. From 1 July 1979, Sveriges Radio AB became the mother of four companies: Sveriges Riksradio (RR) for national radio, Sveriges Lokalradio AB (LRAB) for local radio, Sveriges Utbildningsradio (UR) for educational broadcasting and Sveriges Television (SVT) for television. SVT would provide all television broadcasting, except for educational programming which was the responsibility of Sveriges Utbildningsradio. The abbreviation SVT was chosen over the arguably more logical "STV" as that abbreviation was already occupied by Scottish Television in the EBU. The Swedish EBU membership is currently jointly held by SVT, SR and UR.
The two channels were reorganized in 1987. TV1 was renamed Kanal 1 and contained almost all programmes produced in Stockholm, while TV2 consisted of the ten regional districts and the Rapport news desk.
Broadcasting in Nicam Stereo was made permanent in 1988. This year also saw the launch of a channel called SVT World in southern Finland, broadcasting content from SVT for Finland-Swedes. The channel, which was later renamed SVT4, was rebranded as SVT Europa in 1997, when it started broadcasting to all of Europe via satellite. Following its expansion into Asia and Africa, it was rebranded as SVT World in 2005.
In 1992, the Riksdag decided that Sveriges Radio would be reorganized once again, this time into three independent companies (with RR and LRAB merged). From 1994, they would be owned by three independent foundations. The three foundations later became one.
In 1990, the television broadcasting day would usually begin at 16.00 and end before 24.00. The 1990s saw an increase in broadcasting hours, with the addition of reruns in the afternoon, a morning show, and lunch-time news bulletins. SVT also met competition from new commercial broadcasters. TV3 became the first channel to break SVT's monopoly on television in Sweden and in 1992 the newly elected right-wing parliamentary majority allowed TV4 to start terrestrial broadcasting. TV4 soon established nationwide coverage and in 1995 passed TV2 in the overall ratings to become the nation's most viewed channel.
In 1996, the channels were once again reorganized. The previous organization and competition between the two channels disappeared as they became part of a single organization. Kanal 1 and TV2 were renamed SVT1 and SVT2. The first season of Expedition: Robinson (Survivor) was shown in 1997.
The first digital terrestrial television (DTT) broadcasts took place in 1999. SVT started six new channels: the news channel SVT24 and five regional channels. 2000 saw the reorganization of the news desks. Aktuellt, Rapport, and SVT24 all came under the control of one central news desk.
In 2001 a new logo and new programme schedules, among other things, were introduced. This made SVT1 the broader mainstream channel with higher ratings and SVT2 the narrower channel. The main news bulletins at 19.30 and 21.00 switched channels, with Aktuellt now shown on SVT2 and Rapport on SVT1.
The regional channels were closed at the beginning of 2002 and replaced by SVT Extra. In December 2002, a new channel known as Barnkanalen began showing children's programmes during the day. On 24 February 2003 SVT24 and SVT Extra were renamed 24, a theme channel for news and sports. Also in 2003, all the SVT channels dropped their encryption in the DTT network.
On 25 June 2003, SVT broadcast its first programme with 5.1 sound on DTT. The first 5.1 show was Allsång på Skansen. In November 2004, SVT added two audio streams that read out the translation subtitles on SVT1 and SVT2. The knowledge-oriented channel Kunskapskanalen started broadcasting in September 2004.
The switch-off of analogue transmitters started in 2005 on Gotland. By 2007 all analogue transmissions from SVT had ceased.
SVT started VODcasting a number of programmes in February 2006. Altogether three broadcasters competed to be the first one to VODcast in Sweden. In the end, all three started in the same week.
SVT made its first broadcasts in high-definition television during the 2006 FIFA World Cup on a channel operated in co-operation with TV4 AB. Regular high-definition broadcasting started on the SVT HD channel on 22 October 2006. The first programme was the film Lost in Translation, followed the next day by a 50th anniversary tribute to television in Sweden, which was the first live entertainment programme to be broadcast in high definition in Sweden. On 25 August 2008, new logos and channel identities were introduced on the network with Barnkanalen renamed SVTB and 24 returning to its former name of SVT24, while SVT1 began carrying Regionala Nyheter (regional news bulletins) for the first time.
SVT was the host broadcaster for the 1975, 1985, 1992, 2000, 2013, and 2016 Eurovision Song Contests.
In 2018, the Riksdag voted to replace the traditional TV licensing system with a new public-service fee based on personal income tax.
Programming
News
News programmes are an important part of SVT. Since 1972 there have been two main news programmes: Rapport and Aktuellt (translated "Report" and "Current [events]", respectively). The two news programmes had completely separate organizations, meaning a lot of duplicated coverage was provided. After some co-operation in the 1990s, the two programmes were allowed to merge in 2000 with the newly created SVT24 to form a single organization. The different programme names and identities were kept, however. Eventually, Rapport has become the main news programme, and Aktuellt will only broadcast one bulletin per day from autumn 2007.
The main national news bulletins are Rapport, broadcast at 18.00 and 19.30, and Aktuellt which reports in greater depth at 21.00. Additionally, shorter news bulletins are shown in the mornings and throughout the day on SVT1, SVT2, and SVT24. These are styled SVT Nyheter. SVT also broadcasts video news on the Internet through a service called Play Rapport.
SVT provides news programmes in various minority languages: Uutiset in Finnish, Nyhetstecken in Swedish Sign Language, and, in co-operation with NRK and Yle, Ođđasat in Northern Sami, as well as special editions of Sverige idag in Meänkieli and Romani.
There are also regional news bulletins
– on SVT1 at 18.33 on Mondays to Fridays and 18.10 on Sundays, as well as 19.55 daily except Saturdays
– on SVT2 at 21.46 on Mondays to Thursdays and 21.25 on Fridays
ABC from Stockholm (Stockholm and Uppsala)
Gävledala from Falun (Dalarna and Gävleborg)
Mittnytt from Sundsvall (Västernorrland)
Nordnytt from Luleå (Norrbotten)
Östnytt from Norrköping (Östergötland, Södermanland and Gotland)
Smålandsnytt from Växjö (Kronoberg, Kalmar and Jönköping)
SVT Nyheter Jämtland from Östersund (Jämtland)
SVT Nyheter Väst from Gothenburg (Västra Götaland and Halland)
Sydnytt from Malmö (Skåne and Blekinge)
Tvärsnytt from Örebro (Örebro and Västmanland)
Värmlandsnytt from Karlstad (Värmland)
Västerbottensnytt from Umeå (Västerbotten)
Party political orientation
A survey in 1999 claimed that 33 percent of the journalists working for SVT and SR supported the Left Party, which was about the same proportion as among journalists employed in commercial broadcasting and the print media, but significantly higher than among the general public, only 15 percent of whom supported the Left Party. Support for the Left Party, the Green Party and the Liberal Party was stronger among journalists on SVT and SR than among the general public, while the Moderate Party, the Social Democrats, and the Christian Democrats had significantly less support among SVT and SR journalists than they did among the public at large. The study nevertheless concluded that the private political opinions of the journalists had little impact on their work and that news stories are treated the same regardless of the political colour of individual journalists. However, SJF, the organisation amongst whose members the study was partly conducted, is a trade union, which could have skewed the representativity of the sample and lessened the validity of the survey, as it is possible that it excluded some right-wing journalists. The University of Gothenburg also conducted another study, during the 2006 Swedish general election, comparing SVT's news programme Rapport to the country's five largest newspapers. The study concluded that Rapport's coverage of the election was the most balanced of them all.
Entertainment
Entertainment shows on Fridays and Saturdays are, together with popular sports, the programmes that attract the largest audiences.
Melodifestivalen (1959–present), the preselection for the Eurovision Song Contest, is very popular in Sweden. The final generally gets around 4 million viewers.
Expedition Robinson (1997–2004, 2009–2012), the original Swedish version of Survivor. Sveriges Television was the first network to broadcast this reality television series in 1997. The show, with a name alluding to Robinson Crusoe, was a major hit in Sweden. The show, which consistently held high ratings, was concluded after its seventh and final year on the network (the final season aired 2003–2004). The popular series was continued on the commercial channel TV3, but with much lower ratings. A new season aired in 2009, now on the commercial channel TV4 and simply titled Robinson.
På spåret (1987–present), a popular entertainment show in which celebrities answer questions related to different locations. A film of a train journey towards the location, cut down to extremely high speed, is shown, and the sooner the contestants stop it the higher the points. In later years car journeys have also been filmed in the same way. Humorous and well-hidden clues are given verbally during the journey. The name of the show means "on the track" in English. The show is one of very few Swedish original ideas and has been syndicated in other countries as well.
Så ska det låta (1997–present), the Swedish adaptation of The Lyrics Board, which has on several occasions reached more than 3 million viewers.
Allsång på Skansen (1979–present), broadcast live from Skansen in Stockholm, is a popular summer show featuring sing-alongs with Swedish folk music. The first sing-along at Skansen was held in 1935. Radio transmissions of the event started shortly after that. The sing-along at Skansen has been a tradition every summer since then.
Antikrundan (1989–present), the Swedish version of Antiques Roadshow, which has often attracted approximately 2 million viewers.
Sveriges Television hosted the Eurovision Song Contest 1985, 1992, 2000, 2013, and 2016, and also as a part of Sveriges Radio hosted the 1975 contest.
Drama
SVT produces drama in several genres and forms.
Rederiet (1992–2002) was one of the most popular soap operas in Sweden.
Regional programming
Regional content is entirely restricted to news, which is broadcast on SVT1 and SVT2. The eighteen news programmes are: ABC, Blekingenytt, Gävledala, Hallandsnytt, Jämtlandsnytt, Jönköpingsnytt, Mittnytt, Nordnytt, Smålandsnytt, Sydnytt, Tvärsnytt, Uppland, Värmlandsnytt, Västerbottensnytt, Västmanlandsnytt, Västnytt and Östnytt. The regional news programmes are broadcast on SVT1 at 18.33–18.45 on Mondays to Fridays (18.10–18.15 on Sundays), with a follow-up bulletin at 19.55–20.00. SVT2 also broadcasts regional news following Aktuellt at 21.46–21.56 on Mondays to Thursdays and 21.25–21.30 on Fridays. There are no regional news bulletins on Saturdays.
Children
Kalles klätterträd ran on Sveriges Television starting in 1975 and grew to become one of the most popular children's programmes of the 1970s. The children's strand Bolibompa was broadcast every day at 18.00 on SVT1, before moving to SVTB in August 2008.
Foreign programming
SVT also airs foreign programming, primarily from the United States, United Kingdom and other Nordic countries, in their original audio with Swedish subtitles, as is the case on other Nordic television channels.
The only cases in which dubbing is widespread is in programming aimed directly at children who are not expected to have learned reading skills yet. However, for some programmes, viewers may also access 'talking subtitles' through their remote where someone reads the subtitles to viewers (though 'talking subtitles' are not strictly in sync with the original audio as dubbing is). These same practices are also done for segments of local programmes that contain foreign language dialogue.
33% of the national first-time broadcasts consisted of foreign content in 2005. Of all acquired programming (including Swedish programming not produced by SVT) 27% came from the United States, 22% from the United Kingdom, 13% from Sweden, 13% from the other Nordic countries, 6% from France, 4% from Germany and 9% from the rest of Europe.
SVT often cooperates with the other Nordic public broadcasters via Nordvision. Thus, many Danish, Norwegian, Icelandic and Finnish programming air on SVT, while DR, NRK, YLE, KVF and RÚV show Swedish programmes. When there is major breaking news out of Denmark however, SVT may also source live coverage from TV2.
Channels
SVT has five regular channels broadcasting to Sweden:
SVT1 – The main channel with broad and regional content. The 10 most seen Swedish TV shows in 2006 were shown on this channel. SVT1 HD simulcasts in high definition.
SVT2 – A channel with slightly narrower programming with an emphasis on culture, current affairs and documentaries. SVT2 HD simulcasts in high definition.
Barnkanalen (The Children's Channel) – Programmes for children and (pre-)teens.
Kunskapskanalen (The Knowledge Channel) – Broadcasting debates, seminars and documentaries in cooperation with UR.
SVT24 – Reruns of programmes from SVT1 and SVT2 in the evening and continuous news updates during the night. Shares frequency with Barnkanalen.
In addition to these channels, SVT has a special events channel called SVT Extra. It is generally unused and was (as of 2006) last used for live coverage during the 2004 Summer Olympic Games. In 2006, SVT launched a high-definition channel called SVT HD, simulcasting HD versions of programmes on the other SVT channels.
All channels, except SVT1 HD and SVT2 HD, are available in most of Sweden through the digital terrestrial television network and encrypted from Thor and Sirius satellites. Until September 2005, both SVT1 and SVT2 were available nationwide via analogue terrestrial transmitters. Cable networks are required to broadcast four SVT channels for free in either digital or analogue form.
SVT World, a mix of the SVT channels, is broadcast on satellite and worldwide via IPTV, and also as a terrestrial channel in Swedish-speaking areas of southern Finland. For rights reasons, SVT World does not show acquired material, such as movies, sport, or English language programming.
SVT1, SVT2, Barnkanalen, SVT24, and Kunskapskanalen are also available through DTT on Åland and can be distributed on Finnish cable networks. In Ostrobothnia, Finland, SVT1, SVT2, SVTB and SVT24 are transmitted through DTT as pay TV to the Swedish-speaking population. The signals from the terrestrial transmitters in Sweden can be received in some areas of Denmark and Norway as well as in northernmost Finland near Sweden. With special equipment reception of Swedish terrestrial transmitters is possible even on some parts of the Finnish coast as well as the Polish and German coast closest to Sweden. Cable networks in the Nordic countries generally redistribute SVT1 and SVT2 often for an additional monthly subscription charge in addition to the subscriber's main package. Some Nordic hotels, especially in Denmark and Norway, also offer SVT1 or SVT2 to guests.
SVT considers its website, svt.se, a channel in its own right. SVT also provides an on-demand service called SVT Play, through which most of the programmes produced for SVT and aired on its channels are available. However, most non-news and non-current-affairs programmes on SVT Play are only available for viewing in Sweden.
Organization
The executive management of SVT is handled by a CEO, appointed by the board. The CEO of SVT is currently Hanna Stjärne, who took over the role from Eva Hamilton in 2015. The Chairman of the Board is Lars Engqvist, deputy Prime Minister in the previous Social Democratic government.
SVT is divided into eight operative programme-producing units. Four of these are located in Stockholm, while the other four are based at regional studios around the country and were formed in 2000 by merging the ten earlier regional transmission areas:
Malmö – SVT Syd (SVT Malmö and SVT Växjö)
Gothenburg – SVT Väst (SVT Göteborg)
Norrköping – SVT Mellansverige (SVT Falun, Dövas TV Leksand, SVT Karlstad, SVT Örebro and SVT Norrköping)
Umeå – SVT Nord (formerly SVT Luleå, SVT Umeå and SVT Sundsvall)
These four district areas produce networked output and co-ordinate ten of the eleven regional news services broadcast daily on SVT1.
The Stockholm-based units are:
SVT Nyheter & Samhälle – national news, current affairs, documentaries and the regional news service ABC.
SVT Sport
SVT Fiktion – drama, entertainment, youth and children's programming.
SVTi – multimedia and interactive services.
Chair of the Board of Directors
Anna-Greta Leijon, 1994–2000
Allan Larsson, 2000–2005
Lars Engqvist, 2005–present
Chief executive officers
Before Sveriges Television was formed in 1978, television broadcasting was controlled by channel controllers. Nils Erik Baehrentz was the controller between 1958 and 1968. He was succeeded by Håkan Unsgaard, who became TV1's controller in 1968, and Örjan Wallquist, who became the TV2 controller in 1969.
Magnus Faxén, 1978–1981
Sam Nilsson, 1981–1999
Maria Curman, 2000–2001
Christina Jutterström, 2001–2006
Eva Hamilton, 2006–2014
Hanna Stjärne, 2014–present
When Sam Nilsson retired, the executive chair was split between a CEO and a Programme Officer; the latter position was abolished in 2007.
Mikael Olsson, 2000–2000
Leif Jakobsson, 2001–2007
Share of viewing figures
Since the arrival of commercial television, SVT's combined viewing share has declined steadily, and digital channels have also provided competition. The commercial TV4 became the most watched station in 1995 and maintained its position until 2002, when SVT1 regained the status. TV4 became the most watched channel again in 2006.
The combined viewing share of the SVT channels declined from 50% in 1997 to 40% in 2005. SVT was the most watched network in Sweden with a share of 38.3% in 2006, although all three major commercial channels attract a higher share of 15- to 24-year-olds than the two SVT channels combined.
Audience trust
A media study released in 2020 showed that trust in SVT programming polarised the audience. Between 80% and 90% of viewers who supported the liberal or left-leaning parties (the Green Party, the Swedish Social Democratic Party, the Liberals, the Left Party and the Centre Party) had high trust in SVT, whereas fewer viewers who supported the conservative-leaning parties did so: the Christian Democrats (60%), the Moderate Party (54%) and the Sweden Democrats (30%). According to the study, this made SVT a more polarising issue among the audience than US president Donald Trump.
See also
List of Swedish television channels
Radiotjänst i Kiruna
Sveriges Utbildningsradio
Swedish Broadcasting Commission
Teracom
TV4
Viasat
References
External links
Official English website
SVT Play, a video on demand service
1956 establishments in Sweden
Commercial-free television networks
European Broadcasting Union members
Publicly funded broadcasters
Swedish companies established in 1956
Television networks in Sweden
Television channels and stations established in 1956
Television in Sweden |
337011 | https://en.wikipedia.org/wiki/Distributed.net | Distributed.net | Distributed.net is a distributed computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3).
Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key), and OGR-28 (searching for the optimal 28-mark Golomb ruler). The RC5-72 project is on pace to exhaust the keyspace in just under 150 years, although the project will end whenever the required key is found. Both problems are part of a series: OGR is part of an infinite series; RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. As a result, distributed.net has decided to sponsor the original prize offer for finding the key itself.
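To put the keyspace in perspective, the back-of-the-envelope calculation below is a minimal Python sketch; the aggregate key-testing rate used is an assumed, illustrative figure rather than a published project statistic.

```python
# Rough estimate of the time needed to exhaust the RC5-72 keyspace.
# The aggregate rate below is an assumption for illustration only.
keyspace = 2 ** 72                      # number of possible 72-bit keys
keys_per_second = 1.0e12                # assumed combined rate of all participants
seconds_per_year = 365.25 * 24 * 3600
years = keyspace / keys_per_second / seconds_per_year
print(f"{years:,.0f} years to test every key")   # roughly 150 years at this assumed rate
```

In practice the expected time to find the single correct key is about half the exhaustive figure, since on average half the keyspace is searched before the key turns up.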
In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. More recently, the throughput has been estimated to match that of the Lonestar 5 supercomputer, or around 1.25 petaFLOPS.
History
A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server.
A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot.
The RC5-56 challenge was solved on October 19, 1997 after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is: It's time to move to a longer key length".
The RC5-64 challenge was solved on July 14, 2002 after 1,757 days. The correct key was "0x63DE7DC154F4D039" and the plaintext message read "The unknown message is: Some things are better left unread".
The searches for OGRs of order 24, 25, 26, and 27 were completed by distributed.net on 13 October 2004, 25 October 2008, 24 February 2009, and 19 February 2014 respectively.
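The OGR projects search for optimal Golomb rulers: sets of integer marks in which every pairwise difference is distinct, an optimal ruler being the shortest possible one for its number of marks. A minimal, purely illustrative Python check of the defining property might look like this:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """Return True if all pairwise differences between the marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler([0, 1, 4, 6]))   # True: the optimal ruler with four marks (length 6)
print(is_golomb_ruler([0, 1, 2, 4]))   # False: the difference 1 appears twice (1-0 and 2-1)
```

Proving that a ruler is optimal requires ruling out every shorter candidate, and the number of candidates grows rapidly with the number of marks, which is why the search is distributed.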
Client
"DNETC" is the file name of the software application which users run to participate in any active distributed.net project. It is a command line program with an interface to configure it, available for a wide variety of platforms. distributed.net refers to the software application simply as the "client". , volunteers running 32-bit Windows with ATI/AMD Stream enabled GPUs have contributed the most processing power to the RC5-72 project and volunteers running 64-bit Linux have contributed the most processing power to the OGR-28 project.
Portions of the source code for the client are publicly available, although users are not permitted to distribute modified versions themselves.
Distributed.net's RC5-72 and OGR-28 projects are available on the BOINC client through the Moo! Wrapper and yoyo@home projects respectively.
Development of GPU-enabled clients
In recent years, most of the work on the RC5-72 project has been submitted by clients that run on the GPU of modern graphics cards. Although the project had already been underway for almost 6 years when the first GPUs began submitting results, as of March 2018, GPUs represent 78% of all completed work units, and complete nearly 93% of all work units each day.
NVIDIA
In late 2007, work began on the implementation of new RC5-72 cores designed to run on NVIDIA CUDA-enabled hardware, with the first completed work units reported in November 2008. On high-end NVIDIA video cards at the time, upwards of 600 million keys/second was observed. For comparison, a 2008-era high-end single CPU working on RC5-72 achieved about 50 million keys/second, so this represented a very significant advancement for RC5-72. As of January 2020, CUDA clients have completed roughly 10% of all work on the RC5-72 project.
ATI
Similarly, near the end of 2008, work began on the implementation of new RC5-72 cores designed to run on ATI Stream-enabled hardware. Some of the products in the Radeon HD 5000 and 6000 series provided key rates in excess of 1.8 billion keys/second. As of January 2020, Stream clients have completed roughly 43% of all work on the RC5-72 project.
OpenCL
An OpenCL client entered beta testing in late 2012 and was released in 2013. As of January 2020, OpenCL clients have completed about 27% of all work on the RC5-72 project. No breakdown of OpenCL production by GPU manufacturer exists, as AMD, NVIDIA, and Intel GPUs all support OpenCL.
Timeline of distributed.net projects
Current
RSA Lab's 72-bit RC5 Encryption Challenge — In progress, 6.462% complete as of 19 March 2020 (although RSA Labs has discontinued sponsorship)
Optimal Golomb Rulers (OGR-28) — In progress, ~72.28% complete as of 19 March 2020
Cryptography
RSA Lab's 56-bit RC5 Encryption Challenge — Completed 19 October 1997 (after 250 days and 47% of the key space tested).
RSA Lab's 56-bit DES-II-1 Encryption Challenge — Completed 23 February 1998 (after 39 days)
RSA Lab's 56-bit DES-II-2 Encryption Challenge — Ended 15 July 1998 (found independently by the EFF DES cracker after 2.5 days)
RSA Lab's 56-bit DES-III Encryption Challenge — Completed 19 January 1999 (after 22.5 hours with the help of the EFF DES cracker)
CS-Cipher Challenge — Completed 16 January 2000 (after 60 days and 98% of the key space tested).
RSA Lab's 64-bit RC5 Encryption Challenge — Completed 14 July 2002 (after 1,757 days and 83% of the key space tested).
Golomb rulers
Optimal Golomb Rulers (OGR-24) — Completed 13 October 2004 (confirmed the predicted best ruler)
Optimal Golomb Rulers (OGR-25) — Completed 24 October 2008 (confirmed the predicted best ruler)
Optimal Golomb Rulers (OGR-26) — Completed 24 February 2009 (confirmed the predicted best ruler)
Optimal Golomb Rulers (OGR-27) — Completed 19 February 2014 (confirmed the predicted best ruler)
See also
RSA Secret-Key Challenge
Golomb Ruler
DES Challenges
Brute force attack
Cryptanalysis
Key size
List of distributed computing projects
Berkeley Open Infrastructure for Network Computing
References
External links
Official website
Cryptographic attacks
Distributed computing projects
Charities based in the United States
Organizations established in 1997
Articles which contain graphical timelines
Volunteer computing |
337220 | https://en.wikipedia.org/wiki/Personal%20identification%20number | Personal identification number | A personal identification number (PIN), or sometimes redundantly a PIN number or PIN code, is a numeric (sometimes alpha-numeric) passcode used in the process of authenticating a user accessing a system.
The PIN has been the key to facilitating the private data exchange between different data-processing centers in computer networks for financial institutions, governments, and enterprises. PINs may be used to authenticate banking systems with cardholders, governments with citizens, enterprises with employees, and computers with users, among other uses.
In common usage, PINs are used in ATM or POS transactions, secure access control (e.g. computer access, door access, car access), internet transactions, or to log into a restricted website.
History
The PIN originated with the introduction of the automated teller machine (ATM) in 1967, as an efficient way for banks to dispense cash to their customers. The first ATM system was that of Barclays in London, in 1967; it accepted cheques with machine-readable encoding, rather than cards, and matched the PIN to the cheque. In 1972, Lloyds Bank issued the first bank card to feature an information-encoding magnetic strip, using a PIN for security. James Goodfellow, the inventor who patented the first personal identification number, was awarded an OBE in the 2006 Queen's Birthday Honours.
Mohamed M. Atalla invented the first PIN-based hardware security module (HSM), dubbed the "Atalla Box," a security system that encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. In 1972, Atalla filed a patent application for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification.
He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The system was designed to let banks and thrift institutions switch to a plastic card environment from a passbook program. The Identikey system consisted of a card reader console, two customer PIN pads, intelligent controller and built-in electronic interface package. The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which is transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible key stroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. In recognition of his work on the PIN system of information security management, Atalla has been referred to as the "Father of the PIN".
The success of the "Atalla Box" led to the wide adoption of PIN-based hardware security modules. Its PIN verification process was similar to the later IBM 3624. By 1998 an estimated 70% of all ATM transactions in the United States were routed through specialized Atalla hardware modules, and by 2003 the Atalla Box secured 80% of all ATM machines in the world, increasing to 85% as of 2006. Atalla's HSM products protect 250 million card transactions every day as of 2013, and still secure the majority of the world's ATM transactions as of 2014.
Financial services
PIN usage
In the context of a financial transaction, usually both a private "PIN code" and public user identifier are required to authenticate a user to the system. In these situations, typically the user is required to provide a non-confidential user identifier or token (the user ID) and a confidential PIN to gain access to the system. Upon receiving the user ID and PIN, the system looks up the PIN based upon the user ID and compares the looked-up PIN with the received PIN. The user is granted access only when the number entered matches the number stored in the system. Hence, despite the name, a PIN does not personally identify the user. The PIN is not printed or embedded on the card but is manually entered by the cardholder during automated teller machine (ATM) and point of sale (POS) transactions (such as those that comply with EMV), and in card not present transactions, such as over the Internet or for phone banking.
PIN length
The international standard for financial services PIN management, ISO 9564-1, allows for PINs from four up to twelve digits, but recommends that for usability reasons the card issuer not assign a PIN longer than six digits. The inventor of the ATM, John Shepherd-Barron, had at first envisioned a six-digit numeric code, but his wife could only remember four digits, and that has become the most commonly used length in many places, although banks in Switzerland and many other countries require a six-digit PIN.
PIN validation
There are several main methods of validating PINs. The operations discussed below are usually performed within a hardware security module (HSM).
IBM 3624 method
One of the earliest ATM models was the IBM 3624, which used the IBM method to generate what is termed a natural PIN. The natural PIN is generated by encrypting the primary account number (PAN), using an encryption key generated specifically for the purpose. This key is sometimes referred to as the PIN generation key (PGK). This PIN is directly related to the primary account number. To validate the PIN, the issuing bank regenerates the PIN using the above method, and compares this with the entered PIN.
Natural PINs cannot be user selectable because they are derived from the PAN. If the card is reissued with a new PAN, a new PIN must be generated.
Natural PINs allow banks to issue PIN reminder letters as the PIN can be generated.
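A simplified sketch of the derivation described above is given below. It assumes the third-party pycryptodome package for single DES; the key, the decimalisation table and the exact formatting of the validation data are illustrative stand-ins rather than any institution's real parameters.

```python
from Crypto.Cipher import DES   # pycryptodome

DECIMALISATION_TABLE = "0123456789012345"   # maps each hex digit 0-F to a decimal digit

def natural_pin(pan_digits, pin_generation_key, pin_length=4):
    # Illustrative validation data: the rightmost 16 digits of the account number.
    validation_data = pan_digits[-16:].rjust(16, "0")
    block = bytes.fromhex(validation_data)            # 8 bytes, one single-DES block
    encrypted = DES.new(pin_generation_key, DES.MODE_ECB).encrypt(block)
    decimalised = "".join(DECIMALISATION_TABLE[int(d, 16)] for d in encrypted.hex())
    return decimalised[:pin_length]

# Example with made-up values for the PAN and the PIN generation key.
print(natural_pin("4111111111111111", b"\x01\x23\x45\x67\x89\xAB\xCD\xEF"))
```

Because the derivation is deterministic, re-running it with the same key and account number reproduces the same natural PIN, which is what allows reminder letters to be generated.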
IBM 3624 + offset method
To allow user-selectable PINs it is possible to store a PIN offset value. The offset is found by subtracting the natural PIN from the customer-selected PIN, digit by digit, modulo 10. For example, if the natural PIN is 1234, and the user wishes to have a PIN of 2345, the offset is 1111.
The offset can be stored either on the card track data, or in a database at the card issuer.
To validate the PIN, the issuing bank calculates the natural PIN as in the above method, then adds the offset and compares this value to the entered PIN.
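The offset arithmetic is simple digit-wise modulo-10 subtraction and addition, as in this illustrative sketch:

```python
def pin_offset(natural_pin, chosen_pin):
    """Offset = chosen PIN minus natural PIN, digit by digit, modulo 10."""
    return "".join(str((int(c) - int(n)) % 10) for n, c in zip(natural_pin, chosen_pin))

def pin_from_offset(natural_pin, offset):
    """Chosen PIN = natural PIN plus offset, digit by digit, modulo 10 (used at validation time)."""
    return "".join(str((int(n) + int(o)) % 10) for n, o in zip(natural_pin, offset))

print(pin_offset("1234", "2345"))        # 1111, matching the example above
print(pin_from_offset("1234", "1111"))   # 2345
```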
VISA method
The VISA method is used by many card schemes and is not VISA-specific. The VISA method generates a PIN verification value (PVV). Similar to the offset value, it can be stored on the card's track data, or in a database at the card issuer. This is called the reference PVV.
The VISA method takes the rightmost eleven digits of the PAN excluding the checksum value, a PIN validation key index (PVKI, chosen from one to six; a PVKI of 0 indicates that the PIN cannot be verified through PVS) and the required PIN value to make a 64-bit number; the PVKI selects a validation key (PVK, of 128 bits) to encrypt this number. From this encrypted value, the PVV is found.
To validate the PIN, the issuing bank calculates a PVV value from the entered PIN and PAN and compares this value to the reference PVV. If the reference PVV and the calculated PVV match, the correct PIN was entered.
Unlike the IBM method, the VISA method does not derive a PIN. The PVV value is used to confirm that the PIN entered at the terminal was also the PIN used to generate the reference PVV. The PIN used to generate a PVV can be randomly generated, user-selected or even derived using the IBM method.
PIN security
Financial PINs are often four-digit numbers in the range 0000–9999, resulting in 10,000 possible combinations. Switzerland issues six-digit PINs by default.
Some systems set up default PINs and most allow the customer to set up a PIN or to change the default one, and on some a change of PIN on first access is mandatory. Customers are usually advised not to set up a PIN based on their own or their spouse's birthday, on driver license numbers, on consecutive or repetitive numbers, or on other predictable schemes. Some financial institutions do not give out or permit PINs where all digits are identical (such as 1111, 2222, ...), consecutive (1234, 2345, ...), numbers that start with one or more zeroes, or the last four digits of the cardholder's social security number or birth date.
Many PIN verification systems allow three attempts, thereby giving a card thief a putative 0.03% probability of guessing the correct PIN before the card is blocked. This holds only if all PINs are equally likely and the attacker has no further information available, which has not been the case with some of the many PIN generation and verification algorithms that financial institutions and ATM manufacturers have used in the past.
Research has been done on commonly used PINs. The result is that without forethought, a sizable portion of users may find their PIN vulnerable. "Armed with only four possibilities, hackers can crack 20% of all PINs. Allow them no more than fifteen numbers, and they can tap the accounts of more than a quarter of card-holders."
The problem of easily guessed PINs can worsen with PIN length, as users often choose predictable longer codes.
Implementation flaws
In 2002, two PhD students at Cambridge University, Piotr Zieliński and Mike Bond, discovered a security flaw in the PIN generation system of the IBM 3624, which was duplicated in most later hardware. Known as the decimalization table attack, the flaw would allow someone who has access to a bank's computer system to determine the PIN for an ATM card in an average of 15 guesses.
Reverse PIN hoax
Rumours circulated by e-mail and on the Internet claim that, in the event of entering a PIN into an ATM backwards, law enforcement will be instantly alerted while money is issued normally as if the PIN had been entered correctly. The intention of this scheme would be to protect victims of muggings; however, despite the system being proposed for use in some US states, no ATMs currently in existence employ this software.
Mobile phone passcodes
A mobile phone may be PIN protected. If enabled, the PIN (also called a passcode) for GSM mobile phones can be between four and eight digits and is recorded in the SIM card. If such a PIN is entered incorrectly three times, the SIM card is blocked until a personal unblocking code (PUC or PUK), provided by the service operator, is entered. If the PUC is entered incorrectly ten times, the SIM card is permanently blocked, requiring a new SIM card from the mobile carrier service.
PINs are also commonly used in smartphones, as a form of personal authentication, so that only those who know the PIN will be able to unlock the device. After a number of failed attempts of entering the correct PIN, the user may be blocked from trying again for an allocated amount of time, all of the data stored on the device may be deleted, or the user may be asked to enter alternate information that only the owner is expected to know to authenticate. Whether any of the formerly mentioned phenomena occur after failed attempts of entering the PIN depends largely upon the device and the owner's chosen preferences in its settings.
See also
ATM SafetyPIN software
Transaction authentication number
References
Banking terms
Identity documents
Password authentication |
337389 | https://en.wikipedia.org/wiki/Domain%20Name%20System%20Security%20Extensions | Domain Name System Security Extensions | The Domain Name System Security Extensions (DNSSEC) is a suite of extension specifications by the Internet Engineering Task Force (IETF) for securing data exchanged in the Domain Name System (DNS) in Internet Protocol (IP) networks. The protocol provides cryptographic authentication of data, authenticated denial of existence, and data integrity, but not availability or confidentiality.
Overview
The original design of the Domain Name System did not include any security features. It was conceived only as a scalable distributed system. The Domain Name System Security Extensions (DNSSEC) attempt to add security, while maintaining backward compatibility. Request for Comments 3833 documents some of the known threats to the DNS, and their solutions in DNSSEC.
DNSSEC was designed to protect applications using DNS from accepting forged or manipulated DNS data, such as that created by DNS cache poisoning. All answers from DNSSEC protected zones are digitally signed. By checking the digital signature, a DNS resolver is able to check if the information is identical (i.e. unmodified and complete) to the information published by the zone owner and served on an authoritative DNS server. While protecting IP addresses is the immediate concern for many users, DNSSEC can protect any data published in the DNS, including text records (TXT) and mail exchange records (MX), and can be used to bootstrap other security systems that publish references to cryptographic certificates stored in the DNS such as Certificate Records (CERT records, RFC 4398), SSH fingerprints (SSHFP, RFC 4255), IPSec public keys (IPSECKEY, RFC 4025), and TLS Trust Anchors (TLSA, RFC 6698).
DNSSEC does not provide confidentiality of data; in particular, all DNSSEC responses are authenticated but not encrypted. DNSSEC does not protect against DoS attacks directly, though it indirectly provides some benefit (because signature checking allows the use of potentially untrustworthy parties).
Other standards (not DNSSEC) are used to secure bulk data (such as a DNS zone transfer) sent between DNS servers. As documented in IETF RFC 4367, some users and developers make false assumptions about DNS names, such as assuming that a company's common name plus ".com" is always its domain name. DNSSEC cannot protect against false assumptions; it can only authenticate that the data is truly from or not available from the domain owner.
The DNSSEC specifications (called DNSSEC-bis) describe the current DNSSEC protocol in great detail. See RFC 4033, RFC 4034, and RFC 4035. With the publication of these new RFCs (March 2005), an earlier RFC, RFC 2535 has become obsolete.
It is widely believed that securing the DNS is critically important for securing the Internet as a whole, but deployment of DNSSEC specifically has been hampered by several difficulties:
The need to design a backward-compatible standard that can scale to the size of the Internet
Prevention of "zone enumeration" where desired
Deployment of DNSSEC implementations across a wide variety of DNS servers and resolvers (clients)
Disagreement among implementers over who should own the top-level domain root keys
Overcoming the perceived complexity of DNSSEC and DNSSEC deployment
Operation
DNSSEC works by digitally signing records for DNS lookup using public-key cryptography. The correct DNSKEY record is authenticated via a chain of trust, starting with a set of verified public keys for the DNS root zone which is the trusted third party. Domain owners generate their own keys, and upload them using their DNS control panel at their domain-name registrar, which in turn pushes the keys via secDNS to the zone operator (e.g., Verisign for .com) who signs and publishes them in DNS.
Resource records
DNS is implemented by the use of several resource records. To implement DNSSEC, several new DNS record types were created or adapted to use with DNSSEC:
RRSIG (resource record signature) Contains the DNSSEC signature for a record set. DNS resolvers verify the signature with a public key, stored in a DNSKEY record.
DNSKEY Contains the public key that a DNS resolver uses to verify DNSSEC signatures in RRSIG records.
DS (delegation signer) Holds the name of a delegated zone. References a DNSKEY record in the sub-delegated zone. The DS record is placed in the parent zone along with the delegating NS records.
NSEC (next secure record) Contains a link to the next record name in the zone and lists the record types that exist for the record's name. DNS resolvers use NSEC records to verify the non-existence of a record name and type as part of DNSSEC validation.
NSEC3 (next secure record version 3) Contains links to the next record name in the zone (in hashed name sorting order) and lists the record types that exist for the name covered by the hash value in the first label of the NSEC3 record's own name. These records can be used by resolvers to verify the non-existence of a record name and type as part of DNSSEC validation. NSEC3 records are similar to NSEC records, but NSEC3 uses cryptographically hashed record names to avoid the enumeration of the record names in a zone.
NSEC3PARAM (next secure record version 3 parameters) Authoritative DNS servers use this record to calculate and determine which NSEC3 records to include in responses to DNSSEC requests for non-existing names/types.
When DNSSEC is used, each answer to a DNS lookup contains an RRSIG DNS record, in addition to the record type that was requested. The RRSIG record is a digital signature of the answer DNS resource record set. The digital signature is verified by locating the correct public key found in a DNSKEY record. The NSEC and NSEC3 records are used to provide cryptographic evidence of the non-existence of any Resource Record (RR). The DS record is used in the authentication of DNSKEYs in the lookup procedure using the chain of trust. NSEC and NSEC3 records are used for robust resistance against spoofing.
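As an illustration of how these records fit together, the sketch below fetches a zone's DNSKEY RRset together with its RRSIG and checks the signature. It assumes the third-party dnspython library (with its cryptography dependency installed) and a reachable DNSSEC-aware resolver; the resolver address and zone name are placeholders.

```python
import dns.dnssec, dns.message, dns.name, dns.query, dns.rdatatype

zone = dns.name.from_text("example.com.")
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.tcp(query, "8.8.8.8", timeout=5)   # TCP avoids truncation of large answers

dnskey_rrset = next(r for r in response.answer if r.rdtype == dns.rdatatype.DNSKEY)
rrsig_rrset = next(r for r in response.answer if r.rdtype == dns.rdatatype.RRSIG)

# Check that the DNSKEY RRset is signed by one of the keys it itself contains (the KSK).
# dns.dnssec.validate raises ValidationFailure if the signature does not verify.
dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
print("DNSKEY RRset for", zone, "verified")
```

This only shows that the key set is self-consistent; a full validator continues upwards, checking the zone's DS record in the parent zone and so on up to the root trust anchor.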
Algorithms
DNSSEC was designed to be extensible so that as attacks are discovered against existing algorithms, new ones can be introduced in a backward-compatible fashion. The most commonly used signing algorithms have shifted over time, from RSA with SHA-1 towards RSA with SHA-256 and, more recently, elliptic-curve algorithms such as ECDSA and Ed25519.
The lookup procedure
From the results of a DNS lookup, a security-aware DNS resolver can determine whether the authoritative name server for the domain being queried supports DNSSEC, whether the answer it receives is secure, and whether there is some sort of error. The lookup procedure is different for recursive name servers such as those of many ISPs, and for stub resolvers such as those included by default in mainstream operating systems. Microsoft Windows uses a stub resolver, and Windows Server 2008 R2 and Windows 7 in particular use a non-validating but DNSSEC-aware stub resolver.
Recursive name servers
Using the chain of trust model, a Delegation Signer (DS) record in a parent domain (DNS zone) can be used to verify a DNSKEY record in a subdomain, which can then contain other DS records to verify further subdomains. Say that a recursive resolver such as an ISP name server wants to get the IP addresses (A record and/or AAAA records) of the domain "www.example.com".
The process starts when a security-aware resolver sets the "DO" ("DNSSEC OK") flag bit in a DNS query. Since the DO bit is in the extended flag bits defined by Extension Mechanisms for DNS (EDNS), all DNSSEC transactions must support EDNS. EDNS support is also needed to allow for the much larger packet sizes that DNSSEC transactions require.
When the resolver receives an answer via the normal DNS lookup process, it then checks to make sure that the answer is correct. Ideally, the security-aware resolver would start with verifying the DS and DNSKEY records at the DNS root. Then it would use the DS records for the "com" top-level domain found at the root to verify the DNSKEY records in the "com" zone. From there, it would see if there is a DS record for the "example.com" subdomain in the "com" zone, and if there were, it would then use the DS record to verify a DNSKEY record found in the "example.com" zone. Finally, it would verify the RRSIG record found in the answer for the A records for "www.example.com".
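The walk described above can be sketched with the same assumed library and an assumed public recursive resolver; the snippet only fetches the DS and DNSKEY RRsets at each level rather than performing the signature checks:

```python
import dns.message, dns.query, dns.rdatatype

RESOLVER = "8.8.8.8"   # assumed DNSSEC-aware recursive resolver

def fetch(name, rdtype):
    query = dns.message.make_query(name, rdtype, want_dnssec=True)
    return dns.query.tcp(query, RESOLVER, timeout=5).answer

for zone in (".", "com.", "example.com."):
    dnskey = fetch(zone, dns.rdatatype.DNSKEY)
    ds = fetch(zone, dns.rdatatype.DS) if zone != "." else []   # the root has no parent; its key is the trust anchor
    print(zone, "DNSKEY RRsets:", len(dnskey), "DS RRsets (published in the parent):", len(ds))
```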
There are several exceptions to the above example.
First, if "example.com" does not support DNSSEC, there will be no RRSIG record in the answer and there will not be a DS record for "example.com" in the "com" zone. If there is a DS record for "example.com", but no RRSIG record in the reply, something is wrong and maybe a man in the middle attack is going on, stripping the DNSSEC information and modifying the A records. Or, it could be a broken security-oblivious name server along the way that stripped the DO flag bit from the query or the RRSIG record from the answer. Or, it could be a configuration error.
Next, it may be that there is not a domain name named "www.example.com", in which case instead of returning a RRSIG record in the answer, there will be either an NSEC record or an NSEC3 record. These are "next secure" records that allow the resolver to prove that a domain name does not exist. The NSEC/NSEC3 records have RRSIG records, which can be verified as above.
Finally, it may be that the "example.com" zone implements DNSSEC, but either the "com" zone or the root zone do not, creating an "island of security" which needs to be validated in some other way. Deployment of DNSSEC to the root was completed in July 2010. The .com domain was signed with valid security keys and the secure delegation was added to the root zone on 1 April 2011.
Stub resolvers
Stub resolvers are "minimal DNS resolvers that use recursive query mode to offload most of the work of DNS resolution to a recursive name server." A stub resolver will simply forward a request to a recursive name server, and use the Authenticated Data (AD) bit in the response as a "hint to find out whether the recursive name server was able to validate signatures for all of the data in the Answer and Authority sections of the response." Microsoft Windows uses a stub resolver, and Windows Server 2008 R2 and Windows 7 in particular use a non-validating but AD-bit-aware stub resolver.
A validating stub resolver can also potentially perform its own signature validation by setting the Checking Disabled (CD) bit in its query messages. A validating stub resolver uses the CD bit to perform its own recursive authentication. Using such a validating stub resolver gives the client end-to-end DNS security for domains implementing DNSSEC, even if the Internet service provider or the connection to them is not trusted.
Non-validating stub resolvers must rely on external DNSSEC validation services, such as those controlled by the user's Internet service provider or a public recursive name server, and the communication channels between itself and those name servers, using methods such as DNS over TLS.
Trust anchors and authentication chains
To be able to prove that a DNS answer is correct, one needs to know at least one key or DS record that is correct from sources other than the DNS. These starting points are known as trust anchors and are typically obtained with the operating system or via some other trusted source. When DNSSEC was originally designed, it was thought that the only trust anchor that would be needed was for the DNS root. The root anchors were first published on 15 July 2010.
An authentication chain is a series of linked DS and DNSKEY records, starting with a trust anchor to the authoritative name server for the domain in question. Without a complete authentication chain, an answer to a DNS lookup cannot be securely authenticated.
Signatures and zone signing
To limit replay attacks, there are not only the normal DNS TTL values for caching purposes, but additional timestamps in RRSIG records to limit the validity of a signature. Unlike TTL values which are relative to when the records were sent, the timestamps are absolute. This means that all security-aware DNS resolvers must have clocks that are fairly closely in sync, say to within a few minutes.
These timestamps imply that a zone must regularly be re-signed and re-distributed to secondary servers, or the signatures will be rejected by validating resolvers.
Key management
DNSSEC involves many different keys, stored both in DNSKEY records, and from other sources to form trust anchors.
In order to allow for replacement keys, a key rollover scheme is required. Typically, this involves first rolling out new keys in new DNSKEY records, in addition to the existing old keys. Then, when it is safe to assume that the time to live values have caused the caching of old keys to have passed, these new keys can be used. Finally, when it is safe to assume that the caching of records using the old keys have expired, the old DNSKEY records can be deleted. This process is more complicated for things such as the keys to trust anchors, such as at the root, which may require an update of the operating system.
Keys in DNSKEY records can be used for two different things and typically different DNSKEY records are used for each. First, there are key signing keys (KSK) which are used to sign other DNSKEY records. Second, there are zone signing keys (ZSK) which are used to sign other records. Since the ZSKs are under complete control and use by one particular DNS zone, they can be switched more easily and more often. As a result, ZSKs can be much shorter than KSKs and still offer the same level of protection while reducing the size of the RRSIG/DNSKEY records.
When a new KSK is created, the DS record must be transferred to the parent zone and published there. The DS records use a message digest of the KSK instead of the complete key in order to keep the size of the records small. This is helpful for zones such as the .com domain, which are very large. The procedure to update DS keys in the parent zone is also simpler than earlier DNSSEC versions that required DNSKEY records to be in the parent zone.
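The digest relationship can be illustrated with dnspython's helper for building DS records from a DNSKEY (again an assumed third-party library; the zone name and resolver address are placeholders):

```python
import dns.dnssec, dns.message, dns.name, dns.query, dns.rdatatype

zone = dns.name.from_text("example.com.")
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.tcp(query, "8.8.8.8", timeout=5)
dnskey_rrset = next(r for r in response.answer if r.rdtype == dns.rdatatype.DNSKEY)

# A DS record is a digest of a DNSKEY; keys with the SEP flag set are conventionally KSKs.
for key in dnskey_rrset:
    if key.flags & 0x0001:
        print(dns.dnssec.make_ds(zone, key, "SHA256"))
```

The printed records are what the child zone's operator would ask the parent zone to publish after a KSK rollover.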
DANE Working Group
DNS-based Authentication of Named Entities (DANE) is an IETF working group with the goal of developing protocols and techniques that allow Internet applications to establish cryptographically secured communications with TLS, DTLS, SMTP, and S/MIME based on DNSSEC.
The new protocols will enable additional assurances and constraints for the traditional model based on public key infrastructure. They will also enable domain holders to assert certificates for themselves, without reference to third-party certificate authorities.
Support for DNSSEC stapled certificates was enabled in Google Chrome 14, but was later removed. For Mozilla Firefox, support was provided by an add-on while native support is currently awaiting someone to start working on it.
History
DNS is a critical and fundamental Internet service, yet in 1990 Steve Bellovin discovered serious security flaws in it. Research into securing it began, and progressed dramatically when his paper was made public in 1995. The initial RFC 2065 was published by the IETF in 1997, and initial attempts to implement that specification led to a revised (and believed fully workable) specification in 1999 as IETF RFC 2535. Plans were made to deploy DNSSEC based on RFC 2535.
Unfortunately, the IETF RFC 2535 specification had very significant problems scaling up to the full Internet; by 2001 it became clear that this specification was unusable for large networks. In normal operation DNS servers often get out of sync with their parents. This isn't usually a problem, but when DNSSEC is enabled, this out-of-sync data could have the effect of a serious self-created denial of service. The original DNSSEC required a complex six-message protocol and a lot of data transfers to perform key changes for a child (DNS child zones had to send all of their data up to the parent, have the parent sign each record, and then send those signatures back to the child for the child to store in a SIG record). Also, public key changes could have absurd effects; for example, if the ".com" zone changed its public key, it would have to send 22 million records (because it would need to update all of the signatures in all of its children). Thus, DNSSEC as defined in RFC 2535 could not scale up to the Internet.
The IETF fundamentally modified DNSSEC, which is called DNSSEC-bis when necessary to distinguish it from the original DNSSEC approach of RFC 2535. This new version uses "delegation signer (DS) resource records" to provide an additional level of indirection at delegation points between a parent and child zone. In the new approach, when a child's master public key changes, instead of having six messages for every record in the child, there is one simple message: the child sends the new public key to its parent (signed, of course). Parents simply store one master public key for each child; this is much more practical. This means that a little data is pushed to the parent, instead of massive amounts of data being exchanged between the parent and children. This does mean that clients have to do a little more work when verifying keys. More specifically, verifying a DNS zone's KEY RRset requires two signature verification operations instead of the one required by RFC 2535 (there is no impact on the number of signatures verified for other types of RRsets). Most view this as a small price to pay, since it makes DNSSEC deployment more practical.
Authenticating NXDOMAIN responses and NSEC
Cryptographically proving the absence of a domain requires signing the response to every query for a non-existent domain. This is not a problem for online signing servers, which keep their keys available online. However, DNSSEC was designed around using offline computers to sign records so that zone-signing-keys could be kept in cold storage. This represents a problem when trying to authenticate responses to queries for non-existent domains since it is impossible to pre-generate a response to every possible hostname query.
The initial solution was to create NSEC records for every pair of domains in a zone. Thus if a client queried for a record at the non-existent k.example.com, the server would respond with an NSEC record stating that nothing exists between a.example.com and z.example.com. However, this leaks more information about the zone than traditional unauthenticated NXDOMAIN errors because it exposes the existence of real domains.
Preventing domain walking
The NSEC3 records (RFC 5155) were created as an alternative which hashes the name instead of listing them directly. Over time, advancements in hashing using GPUs and dedicated hardware meant that NSEC3 responses could be cheaply brute forced using offline dictionary attacks. NSEC5 has been proposed to allow authoritative servers to sign NSEC responses without having to keep a private key that can be used to modify the zone. Thus stealing an NSEC5KEY would only result in the ability to more easily enumerate a zone.
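The hashed names that NSEC3 uses are produced by iterated, salted SHA-1 over the wire-format owner name, as described in RFC 5155. The sketch below illustrates the computation; the salt and iteration count are made-up values rather than those of any real zone.

```python
import base64
import hashlib

_STD32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
_HEX32 = "0123456789ABCDEFGHIJKLMNOPQRSTUV"   # NSEC3 names use the base32 "extended hex" alphabet

def nsec3_hash(name, salt_hex="AABBCCDD", iterations=10):
    # Canonical wire format: each lowercase label preceded by its length, ending with a zero byte.
    wire = b"".join(bytes([len(l)]) + l.lower().encode() for l in name.rstrip(".").split(".")) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):                  # iterations + 1 hash invocations in total
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode().translate(str.maketrans(_STD32, _HEX32))

print(nsec3_hash("example.com"))
```

Because the computation is cheap and unkeyed, anyone who collects NSEC3 records can run the same function over a dictionary of candidate names offline, which is the brute-force weakness noted above.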
Due to the messy evolution of the protocol and a desire to preserve backwards compatibility, online DNSSEC signing servers return a "white lie" instead of authenticating a denial of existence directly. The technique outlined in RFC 4470 returns an NSEC record whose pair of owner names lexically surround the requested domain. For example, a request for k.example.com would thus result in an NSEC record proving that nothing exists between the (fictitious) domains j.example.com and l.example.com. This is also possible with NSEC3 records.
CloudFlare pioneered a pair of alternative approaches, which manage to achieve the same result in one third of the response size. The first is a variation on the "white lies" approach, called "black lies", which exploits common DNS client behavior to state the non-existence more compactly. The second approach instead chooses to prove that "the record exists but the requested record type does not", which they call "DNS shotgun".
Deployment
The Internet is critical infrastructure, yet its operation depends on the fundamentally insecure DNS.
Thus, there is strong incentive to secure DNS, and deploying DNSSEC is generally considered to be a critical part of that effort.
For example, the U.S. National Strategy to Secure Cyberspace specifically identified the need to secure DNS.
Wide-scale deployment of DNSSEC could resolve many other security problems as well, such as secure key distribution for e-mail addresses.
DNSSEC deployment in large-scale networks is also challenging. Ozment and Schechter observe that DNSSEC (and other technologies) has a "bootstrap problem": users typically only deploy a technology if they receive an immediate benefit, but if a minimal level of deployment is required before any users receive a benefit greater than their costs (as is true for DNSSEC), it is difficult to deploy. DNSSEC can be deployed at any level of a DNS hierarchy, but it must be widely available in a zone before many others will want to adopt it. DNS servers must be updated with software that supports DNSSEC, and DNSSEC data must be created and added to the DNS zone data. A TCP/IP-using client must have their DNS resolver (client) updated before it can use DNSSEC's capabilities. What is more, any resolver must have, or have a way to acquire, at least one public key that it can trust before it can start using DNSSEC.
DNSSEC implementation can add significant load to some DNS servers. Common DNSSEC-signed responses are far larger than the default UDP size of 512 bytes. In theory, this can be handled through multiple IP fragments, but many "middleboxes" in the field do not handle these correctly. This leads to the use of TCP instead. Yet many current TCP implementations store a great deal of data for each TCP connection; heavily loaded servers can run out of resources simply trying to respond to a larger number of (possibly bogus) DNSSEC requests. Some protocol extensions, such as TCP Cookie Transactions, have been developed to reduce this loading. To address these challenges, significant effort is ongoing to deploy DNSSEC, because the Internet is so vital to so many organizations.
Early deployments
Early adopters include Brazil (.br), Bulgaria (.bg), Czech Republic (.cz), Namibia (.na), Puerto Rico (.pr) and Sweden (.se), who use DNSSEC for their country code top-level domains, and RIPE NCC, who have signed all the reverse lookup records (in-addr.arpa) that are delegated to it from the Internet Assigned Numbers Authority (IANA). ARIN is also signing their reverse zones. In February 2007, TDC became the first Swedish ISP to start offering this feature to its customers.
IANA publicly tested a sample signed root since June 2007. During this period prior to the production signing of the root, there were also several alternative trust anchors. The IKS Jena introduced one on January 19, 2006, the Internet Systems Consortium introduced another on March 27 of the same year, while ICANN themselves announced a third on February 17, 2009.
On June 2, 2009, Afilias, the registry service provider for Public Interest Registry's .org zone, signed the .org TLD. Afilias and PIR had also detailed, on September 26, 2008, that the first phase would involve large registrars with which it had a strong working relationship ("friends and family"), and that these would be the first able to sign their domains, beginning in "early 2009". On June 23, 2010, 13 registrars were listed as offering DNSSEC records for .ORG domains.
VeriSign ran a pilot project to allow .com and .net domains to register themselves for the purpose of NSEC3 experimentation. On February 24, 2009, they announced that they would deploy DNSSEC across all their top-level domains (.com, .net, etc.) within 24 months, and on November 16 of the same year, they said the .com and .net domains would be signed by the first quarter of 2011, after delays caused by technical aspects of the implementation. This goal was achieved on schedule and Verisign's DNSSEC VP, Matt Larson, won InfoWorld's Technology Leadership Award for 2011 for his role in advancing DNSSEC.
Deployment at the DNS root
DNSSEC was first deployed at the root level on July 15, 2010. This is expected to greatly simplify the deployment of DNSSEC resolvers, since the root trust anchor can be used to validate any DNSSEC zone that has a complete chain of trust from the root. Since the chain of trust must be traced back to a trusted root without interruption in order to validate, trust anchors must still be configured for secure zones if any of the zones above them are not secure. For example, if the zone "signed.example.org" was secured but the "example.org" zone was not, then, even though the ".org" zone and the root are signed, a trust anchor has to be deployed in order to validate the zone.
Political issues surrounding signing the root have been a continuous concern, primarily about some central issues:
Other countries are concerned about U.S. control over the Internet, and may reject any centralized keying for this reason.
Some governments might try to ban DNSSEC-backed encryption key distribution.
Planning
In September 2008, ICANN and VeriSign each published implementation proposals and in October, the National Telecommunications and Information Administration (NTIA) asked the public for comments. It is unclear if the comments received affected the design of the final deployment plan.
On June 3, 2009, the National Institute of Standards and Technology (NIST) announced plans to sign the root by the end of 2009, in conjunction with ICANN, VeriSign and the NTIA.
On October 6, 2009, at the 59th RIPE Conference meeting, ICANN and VeriSign announced the planned timeline for deploying DNSSEC within the root zone. At the meeting it was announced that it would be incrementally deployed to one root name server a month, starting on December 1, 2009, with the final root name server serving a DNSSEC signed zone on July 1, 2010, and that the root zone would be signed with a RSA/SHA256 DNSKEY. During the incremental roll-out period the root zone would serve a Deliberately Unvalidatable Root Zone (DURZ) that uses dummy keys, with the final DNSKEY record not being distributed until July 1, 2010. This means the keys that were used to sign the zone are deliberately unverifiable; the reason for this deployment was to monitor changes in traffic patterns caused by the larger responses to queries requesting DNSSEC resource records.
The .org top-level domain was signed with DNSSEC in June 2010, followed by .com, .net, and .edu later in 2010 and 2011. Country code top-level domains were able to deposit keys starting in May 2010. Since then, more than 25% of top-level domains have been signed with DNSSEC.
Implementation
On January 25, 2010, the L (ell) root server began serving a Deliberately Unvalidatable Root Zone (DURZ). The zone uses signatures of a SHA-2 (SHA-256) hash created using the RSA algorithm, as defined in RFC 5702. As of May 2010, all thirteen root servers had begun serving the DURZ. On July 15, 2010, the first full production DNSSEC root zone was signed, with the SOA serial 2010071501. Root trust anchors are available from IANA.
Deployment at the TLD level
Underneath the root there is a large set of top-level domains that must be signed in order to achieve full DNSSEC deployment. The List of Internet top-level domains provides details about which of the existing top-level domains have been signed and linked to the root.
DNSSEC Lookaside Validation - historical
In March 2006, the Internet Systems Consortium introduced the DNSSEC Lookaside Validation registry. DLV was intended to make DNSSEC easier to deploy in the absence of a root trust anchor. At the time it was imagined that a validator might have to maintain large numbers of trust anchors corresponding to signed subtrees of the DNS. The purpose of DLV was to allow validators to offload the effort of managing a trust anchor repository to a trusted third party. The DLV registry maintained a central list of trust anchors, instead of each validator repeating the work of maintaining its own list.
To use DLV, a validator that supports it was needed, such as BIND or Unbound, configured with a trust anchor for a DLV zone. This zone contained DLV records; these had exactly the same format as DS records, but instead of referring to a delegated sub-zone, they referred to a zone elsewhere in the DNS tree. When the validator could not find a chain of trust from the root to the RRset it is trying to check, it searched for a DLV record that could provide an alternative chain of trust.
Gaps in the chain of trust, such as unsigned top-level domains or registrars that did not support DNSSEC delegations, meant administrators of lower-level domains could use DLV to allow their DNS data to be validated by resolvers which had been configured to use DLV. This may have hindered DNSSEC deployment by taking pressure off registrars and TLD registries to properly support DNSSEC. DLV also added complexity by adding more actors and code paths for DNSSEC validation.
ISC decommissioned its DLV registry in 2017. DLV support was deprecated in BIND 9.12 and completely removed from BIND 9.16. Unbound version 1.5.4 (July 2015) marked DLV as decommissioned in the example configuration and manual page. Knot Resolver and PowerDNS Recursor never implemented DLV.
In March 2020, the IETF published RFC 8749, retiring DLV as a standard and moving RFC 4432 and RFC 5074 to "Historic" status.
DNSSEC deployment initiative by the U.S. federal government
The Science and Technology Directorate of the U.S. Department of Homeland Security (DHS) sponsors the "DNSSEC Deployment Initiative".
This initiative encourages "all sectors to voluntarily adopt security measures that will improve security of the Internet's naming infrastructure, as part of a global, cooperative effort that involves many nations and organizations in the public and private sectors."
DHS also funds efforts to mature DNSSEC and get it deployed inside the U.S. federal government.
It was reported that on March 30, 2007, the U.S. Department of Homeland Security proposed "to have the key to sign the DNS root zone solidly in the hands of the US government." However no U.S. Government officials were present in the meeting room and the comment that sparked the article was made by another party. DHS later commented on why they believe others jumped to the false conclusion that the U.S. Government had made such a proposal: "The U.S. Department of Homeland Security is funding the development of a technical plan for implementing DNSSec, and last October distributed an initial draft of it to a long list of international experts for comments. The draft lays out a series of options for who could be the holder, or "operator," of the Root Zone Key, essentially boiling down to a governmental agency or a contractor. "Nowhere in the document do we make any proposal about the identity of the Root Key Operator," said Maughan, the cyber-security research and development manager for Homeland Security."
DNSSEC deployment in the U.S. federal government
The National Institute of Standards and Technology (NIST) published NIST Special Publication 800-81 Secure Domain Name System (DNS) Deployment Guide on May 16, 2006, with guidance on how to deploy DNSSEC. NIST intended to release new DNSSEC Federal Information Security Management Act (FISMA) requirements in NIST SP800-53-R1, referencing this deployment guide. U.S. agencies would then have had one year after final publication of NIST SP800-53-R1 to meet these new FISMA requirements. However, at the time NSEC3 had not been completed. NIST had suggested using split domains, a technique that is known to be possible but is difficult to deploy correctly, and has the security weaknesses noted above.
On 22 August 2008, the Office of Management and Budget (OMB) released a memorandum requiring U.S. Federal Agencies to deploy DNSSEC across .gov sites; the .gov root must be signed by January 2009, and all subdomains under .gov must be signed by December 2009. While the memo focuses on .gov sites, the U.S. Defense Information Systems Agency says it intends to meet OMB DNSSEC requirements in the .mil (U.S. military) domain as well. NetworkWorld's Carolyn Duffy Marsan stated that DNSSEC "hasn't been widely deployed because it suffers from a classic chicken-and-egg dilemma... with the OMB mandate, it appears the egg is cracking."
Deployment in resolvers
Several ISPs have started to deploy DNSSEC-validating DNS recursive resolvers. Comcast became the first major ISP to do so in the United States, announcing their intentions on October 18, 2010 and completing deployment on January 11, 2012.
According to a study at APNIC, the proportion of clients who exclusively use DNS resolvers that perform DNSSEC validation rose to 8.3% in May 2013. About half of these clients were using Google's public DNS resolver.
In September 2015, Verisign announced their free public DNS resolver service, and although unmentioned in their press releases, it also performs DNSSEC validation.
By the beginning of 2016, APNIC's monitoring showed the proportion of clients who exclusively use DNS resolvers that perform DNSSEC validation had increased to about 15%.
DNSSEC support
Google's public recursive DNS server enabled DNSSEC validation on May 6, 2013.
BIND, the most popular DNS management software, enables DNSSEC support by default since version 9.5.
The Quad9 public recursive DNS has performed DNSSEC validation on its main 9.9.9.9 address since it was established on May 11, 2016. Quad9 also provides an alternate service which does not perform DNSSEC validation, principally for debugging.
IETF publications
Domain Name System Security Extensions
Indicating Resolver Support of DNSSEC
DNSSEC and IPv6 A6 Aware Server/Resolver Message Size Requirements
A Threat Analysis of the Domain Name System
DNS Security Introduction and Requirements (DNSSEC-bis)
Resource Records for the DNS Security Extensions (DNSSEC-bis)
Protocol Modifications for the DNS Security Extensions (DNSSEC-bis)
Storing Certificates in the Domain Name System (DNS)
The DNSSEC Lookaside Validation (DLV) DNS Resource Record
Minimally Covering NSEC Records and DNSSEC On-line Signing
Use of SHA-256 in DNSSEC Delegation Signer (DS) Resource Records (RRs)
DNSSEC Operational Practices
DNS Security (DNSSEC) Experiments
Automated Updates of DNS Security (DNSSEC) Trust Anchors
DNSSEC Hashed Authenticated Denial of Existence
Use of SHA-2 Algorithms with RSA in DNSKEY and RRSIG Resource Records for DNSSEC
Elliptic Curve Digital Signature Algorithm (DSA) for DNSSEC
DNS Security (DNSSEC) DNSKEY Algorithm IANA Registry Updates
DNSSEC Operational Practices, Version 2
Clarifications and Implementation Notes for DNS Security (DNSSEC)
Authenticated Denial of Existence in the DNS
Automating DNSSEC Delegation Trust Maintenance
DNSSEC Key Rollover Timing Considerations
Edwards-Curve Digital Security Algorithm (EdDSA) for DNSSEC
Algorithm Implementation Requirements and Usage Guidance for DNSSEC
Moving DNSSEC Lookaside Validation (DLV) to Historic Status
Tools
DNSSEC deployment requires software on the server and client side. Some of the tools that support DNSSEC include (a minimal client-side validation sketch follows this list):
Windows 7 and Windows Server 2008 R2 include a "security-aware" stub resolver that is able to differentiate between secure and non-secure responses from a recursive name server. Windows Server 2012's DNSSEC implementation is compatible with secure dynamic updates for Active Directory-integrated zones, plus Active Directory replication of trust anchor keys to other such servers.
BIND, the most popular DNS name server (which includes dig), incorporates the newer DNSSEC-bis (DS records) protocol as well as support for NSEC3 records.
Unbound is a DNS name server that was designed from the ground up around DNSSEC concepts.
mysqlBind, the GPL DNS management software for DNS ASPs, now supports DNSSEC.
OpenDNSSEC is a designated DNSSEC signer tool using PKCS#11 to interface with hardware security modules.
Knot DNS has added support for automatic DNSSEC signing in version 1.4.0.
PowerDNS fully supports DNSSEC as of version 3.0 in pre-signed and live-signed modes.
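To show what the signing and validating tools listed above do internally, the sketch below (again dnspython, plus a cryptography backend such as the "cryptography" package; the zone name and server address are arbitrary examples) fetches a zone's DNSKEY RRset together with its RRSIG and verifies the signature directly. This single check is essentially one link of the chain-of-trust verification that validating servers such as BIND or Unbound perform for every secure response; a full validator would also walk DS records up to the root trust anchor.

```python
# Minimal sketch, assuming dnspython and a crypto backend are installed;
# the zone and the server address are illustrative only.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("internetsociety.org.")
query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.tcp(query, "8.8.8.8", timeout=5)   # TCP avoids truncation

# The answer section should hold the DNSKEY RRset and the RRSIG covering it.
dnskeys = next(r for r in response.answer if r.rdtype == dns.rdatatype.DNSKEY)
rrsigs = next(r for r in response.answer if r.rdtype == dns.rdatatype.RRSIG)

# Verify the DNSKEY RRset against its own signature (made by the zone's KSK).
# Raises dns.dnssec.ValidationFailure if the signature does not verify.
dns.dnssec.validate(dnskeys, rrsigs, {zone: dnskeys})
print("DNSKEY RRset signature verified for", zone)
```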
DNSSEC: what is it and why is it important to implement it? (Check it, an initiative of the Internet community and the Dutch government)
See also
DNSCrypt
DNSCurve
Extension Mechanisms for DNS (EDNS)
TSIG
Resource Public Key Infrastructure (RPKI)
References
Further reading
External links
DNSSEC - DNSSEC information site: DNSSEC.net
DNSEXT DNS Extensions IETF Working Group
DNSSEC-Tools Project
Internet Standards
Domain Name System
Domain name system extensions
Public-key cryptography
Key management
sv:DNS#DNSSEC |
339746 | https://en.wikipedia.org/wiki/Commandos%20%28United%20Kingdom%29 | Commandos (United Kingdom) | The Commandos, also known as the British Commandos, were formed during the Second World War in June 1940, following a request from the Prime Minister of the United Kingdom, Winston Churchill, for a force that could carry out raids against German-occupied Europe. Initially drawn from within the British Army from soldiers who volunteered for the Special Service Brigade, the Commandos' ranks would eventually be filled by members of all branches of the British Armed Forces and a number of foreign volunteers from German-occupied countries. By the end of the war 25,000 men had passed through the Commando course at Achnacarry. This total includes not only the British volunteers, but volunteers from Greece, France, Belgium, Netherlands, Canada, Norway, Poland, and the United States Army Rangers and US Marine Corps Raiders, which were modelled on the Commandos.
Reaching a wartime strength of over 30 units and four assault brigades, the Commandos served in all theatres of war from the Arctic Circle to Europe and from the Mediterranean and Middle East to South-East Asia. Their operations ranged from small groups of men landing from the sea or by parachute, to a brigade of assault troops spearheading the Allied invasions of Europe and Asia.
After the war most Commando units were disbanded, leaving only the Royal Marines 3 Commando Brigade. The modern Royal Marine Commandos, Parachute Regiment, Special Air Service, British Army commandos and the Special Boat Service trace their origins to the Commandos. The Second World War Commando legacy also extends to mainland Europe and the United States: the French Naval commandos, the Dutch Korps Commandotroepen, the Belgian Paracommando Brigade, the Greek 1st Raider/Paratrooper Brigade and the United States Army Rangers were all influenced by the wartime Commandos.
Formation
The British Commandos were a formation of the British Armed Forces organized for special service in June 1940. After the events leading to the British Expeditionary Force's (BEF) evacuation from Dunkirk, after the disastrous Battle of France, Winston Churchill, the British Prime Minister, called for a force to be assembled and equipped to inflict casualties on the Germans and bolster British morale. Churchill told the joint chiefs of staff to propose measures for an offensive against German-occupied Europe, and stated in a minute to General Hastings Ismay on 6 June 1940: "Enterprises must be prepared, with specially-trained troops of the hunter class, who can develop a reign of terror down these coasts, first of all on the "butcher and bolt" policy..." The Chief of the Imperial General Staff at that time was General John Dill and his Military Assistant was Lieutenant-Colonel Dudley Clarke. Clarke discussed the matter with Dill at the War Office and prepared a paper for him that proposed the formation of a new force based on the tactics of Boer commandos, 'hit sharp and quick - then run to fight another day'; they became 'The Commandos' from then onwards. Dill, aware of Churchill's intentions, approved Clarke's proposal. The first commando raid, Operation Collar, was conducted on the night of 24/25 June 1940.
The request for volunteers for special service was initially restricted to serving Army soldiers within certain formations still in Britain, and from men of the disbanding divisional Independent Companies originally raised from Territorial Army divisions who had served in the Norwegian Campaign.
By the autumn of 1940 more than 2,000 men had volunteered and in November 1940 these new units were organised into a Special Service Brigade consisting of four battalions under the command of Brigadier J. C. Haydon. The Special Service Brigade was quickly expanded to 12 units which became known as Commandos. Each Commando had a lieutenant-colonel as the commanding officer and numbered around 450 men (divided into 75 man troops that were further divided into 15 man sections). Technically these men were only on secondment to the Commandos; they retained their own regimental cap badges and remained on the regimental roll for pay. The Commando force came under the operational control of the Combined Operations Headquarters. The man initially selected as the commander of Combined Operations was Admiral Roger Keyes, a veteran of the Gallipoli Campaign and the Zeebrugge Raid in the First World War. Keyes resigned in October 1941 and was replaced by Vice Admiral Lord Louis Mountbatten. Major-General Robert Laycock was the last Commander of Combined Operations; he took over from Mountbatten in October 1943.
Organisation
Commando units
The Commando units formed in the United Kingdom were: No. 1, No. 2, No. 3, No. 4, No. 5, No. 6, No. 7, No. 8 (Guards), No. 9, No. 10 (Inter-Allied), No. 11 (Scottish), No. 12, No. 14 (Arctic), No. 30, and No. 62 Commando. At the same time there were four Commando units formed in the Middle East: No. 50, No. 51, No. 52, and the Middle East Commando. The No. 10 (Inter-Allied) Commando was formed from volunteers from the occupied territories and enemy aliens. It was the largest Commando unit formed, and contained troops from France, Belgium, Poland, Norway, the Netherlands, and No. 3 (X) Troop. The No. 3 (X) Troop consisted of enemy aliens; it was also known as the English, Jewish, or British troop and was officially renamed the Miscellaneous Troop in 1944. Most of the troop had German, Austrian, or Eastern European backgrounds, while others were political or religious refugees from Nazi Germany.
Some Commandos were designated for different tasks from the start. No. 2 Commando was always intended to be a parachute unit. In June 1940 they began parachute training and were re-designated the 11th Special Air Service (SAS) Battalion, which eventually became the 1st Parachute Battalion. After their re-designation a new No. 2 Commando was formed. Other Commandos were grouped together in a larger formation known as Layforce and sent to the Middle East. The Special Air Service and the Special Boat Squadron were formed from the survivors of Layforce. The men of No. 14 (Arctic) Commando were specially trained for operations in the Arctic Circle and specialised in using small boats and canoes to attack shipping. The joint service unit No. 30 Commando was formed for intelligence gathering. Its members were trained in the recognition of enemy documents, search techniques, safe cracking, prisoner handling, photography, and escape techniques.
No. 62 Commando or the Small Scale Raiding Force was a small 55–man unit under the operational control of the Special Operations Executive (SOE). They carried out raids planned by SOE such as Operation Postmaster on the Spanish island of Fernando Po off the coast of West Africa.
In February 1941 the Commandos were reorganized in accordance with a new war establishment. Each Commando unit now consisted of a Headquarters and six troops (instead of the previous 10). Each troop would comprise three officers and 62 other ranks; this number was set so each troop would fit into two Assault Landing Craft. The new formation also meant that two complete Commando units could be carried in the 'Glen' type landing ship and one unit in the 'Dutch' type landing ship. The motor transport issued to each commando consisted of one car for the commanding officer, 12 motorcycles (six with sidecars), two 15 hundredweight (cwt) trucks, and one 3-ton truck. These vehicles were only provided for administration and training and were not intended to accompany the men on operations.
In February 1942 the Royal Marines were tasked to organise Commando units of their own. In total nine Commando units were formed by the Royal Marines: No. 40, No. 41, No. 42, No. 43, No. 44, No. 45, No. 46, No. 47 and the last, No. 48, which was only formed in 1944. In 1943 two other Commando units were formed. The first was the Royal Naval Commandos, who were established to carry out tasks associated with establishing, maintaining, and controlling beachheads during amphibious operations. The other was the Royal Air Force Commandos, who would accompany an invasion force either to make enemy airfields serviceable, or to make new airstrips operational and contribute to their defence.
1943 reorganization
In 1943, the formation of the Commando unit was changed. Each Commando now consisted of a small headquarters group, five fighting troops, a heavy weapons troop, and a signals platoon. The fighting troops consisted of 65 men of all ranks divided into two 30–man sections which were subdivided into three 10–man subsections. The heavy weapons troop was made up of 3-inch mortar and Vickers machine gun teams. The Commandos were provided with the motor transport needed to accompany them on operations. Their transport now consisted of the commanding officer's car, 15 motorcycles (six with side cars), ten 15 cwt trucks, and three 3-ton trucks. The heavy weapons troop had seven Jeeps and trailers and one Jeep for each of the fighting troops and the headquarters. This gave them enough vehicles of their own to accommodate two fighting troops, the heavy weapons troop, and the Commando Headquarters.
By now the Commandos started to move away from smaller raiding operations. They were formed into four brigades to spearhead future Allied landing operations. The previous Special Service Brigade Headquarters was replaced by Headquarters Special Services Group under command of Major-General Robert Sturges. Of the remaining 20 Commando units, 17 were used in the formation of the four Special Service brigades. The three remaining Commandos (Nos. 12, 14, and 62) were left out of the brigade structure to concentrate on smaller scale raids. The increased tempo of operations, together with a shortage of volunteers and the need to provide replacements for casualties, forced their disbandment by the end of 1943. The small scale raiding role was then given to the two French troops of No. 10 (Inter-Allied) Commando.
From 1944 the Operational Holding Commando Headquarters was formed. It was responsible for two sub–units: the Army and Royal Marines Holding Commando Wings. Both units had an establishment of five troops and a heavy weapons troop of fully trained commandos. The men in these troops were to provide individual or complete troop replacements for the Commando units in the field. In December 1944, the four Special Service brigades were re-designated as Commando brigades.
Training
When the Commando units were originally formed in 1940, training was the responsibility of the unit commanding officers. Training was hampered by the general shortage of equipment throughout the British Army at this time, as most arms and equipment had been left behind at Dunkirk. In December 1940 a Middle East Commando depot was formed with the responsibility of training and supplying reinforcements for the Commando units in that theatre. In February 1942 the Commando training depot at Achnacarry in the Scottish Highlands was established by Brigadier Charles Haydon, under the command of Lieutenant-Colonel Charles Vaughan; the depot was responsible for training complete units and individual replacements. The training regime was for the time innovative and physically demanding, and far in advance of normal British Army training. The depot staff were all hand-picked, with the ability to outperform any of the volunteers. Training and assessment started immediately on arrival, with the volunteers having to complete a march with all their equipment from the Spean Bridge railway station to the commando depot. When they arrived they were met by Vaughan, who stressed the physical demands of the course and that any man who failed to live up to the requirements would be 'returned to unit' (RTU).
Exercises were conducted using live ammunition and explosives to make training as realistic as possible. Physical fitness was a prerequisite, with cross country runs and boxing matches to improve fitness. Speed and endurance marches were conducted up and down the nearby mountain ranges and over assault courses that included a zip-line over Loch Arkaig, all while carrying arms and full equipment. Training continued by day and night with river crossings, mountain climbing, weapons training, unarmed combat, map reading, and small boat operations on the syllabus.
Living conditions were primitive in the camp, with trainees housed either under canvas in tents or in Nissen huts and they were responsible for cooking their own meals. Correct military protocols were enforced: Officers were saluted and uniforms had to be clean, with brasses and boots shining on parade. At the end of each course the final exercise was a simulated night beach landing using live ammunition.
Another smaller Commando depot, known as the Commando Mountain and Snow Warfare training camp, was established at Braemar. This camp was run by two famous mountaineers: the depot commander Squadron Leader Frank Smythe and chief instructor Major John Hunt. The depot provided training for operations in Arctic conditions, with instruction in climbing snow-covered mountains, cliff climbing, and small boat and canoe handling. Training was conducted in how to live, fight, and move on foot or on skis in snowy conditions.
A major change in the training programme occurred in 1943. From that point on training concentrated more on the assault infantry role and less on raiding operations. Training now included how to call for fire support from artillery and naval gunfire, and how to obtain tactical air support from the Allied air forces. More emphasis was put on joint training, with two or more Commando units working together in brigades.
By the end of the war 25,000 men had passed through the Commando course at Achnacarry. This total includes not only the British volunteers, but volunteers from Belgium, France, Netherlands, Norway, Poland, and the United States Army Rangers, which were modelled on the Commandos.
Weapons and equipment
As a raiding force, the Commandos were not issued the heavy weapons of a normal infantry battalion. The weapons used were the standard British Army small arms of the time; most riflemen carried the Lee–Enfield rifle and section fire support was provided by the Bren light machine gun. The Thompson was the submachine gun of choice, but later in the war the Commandos also used the cheaper and lighter Sten gun. Commando sections were equipped with a higher number of Bren and Thompson guns than a normal British infantry section. The Webley Revolver was initially used as the standard sidearm, but it was eventually replaced by the Colt 45 pistol, which used the same ammunition as the Thompson submachine gun. Another pistol was the Browning Hi Power chambered in 9mm Parabellum by the Canadian manufacturer John Inglis and Company. One weapon specifically designed for the Commandos was the De Lisle carbine. Modelled on the Lee–Enfield rifle and fitted with a silencer, it used the same .45 cartridge as the Thompson and was designed to eliminate sentries during Commando raids. Some were used and proved successful on operations, but the nature of the Commando role had changed before they were put into full production, and the order for their purchase was cancelled. The Fairbairn-Sykes Fighting Knife was designed especially for Commandos' use in hand-to-hand combat, replacing the BC-41 knuckleduster/dagger, although a whole range of clubs and knives were used in the field. Some of the heavier and crew–served weapons used included the Boys anti-tank rifle and the 2-inch mortar for indirect fire support. After 1943, the Projector, Infantry, Anti Tank, known as the PIAT, replaced the now obsolete Boys anti-tank rifle. With the formation of the heavy weapons troops, Commandos were issued the 3-inch mortar and the Vickers machine gun. The issue of the medium Vickers machine gun to Commando units set them apart from typical British Army infantry divisions, who tended to only employ the weapon in specialist machine gun battalions.
Initially the Commandos were indistinguishable from the rest of the British Army and volunteers retained their own regimental head-dress and insignia. No. 2 Commando adopted Scottish head-dress for all ranks and No. 11 (Scottish) Commando wore the Tam O'Shanter with a black hackle. The official head-dress of the Middle East Commandos was a bush hat with their own knuckleduster cap badge. This badge was modelled on their issue fighting knife (the Mark I trench knife) which had a knuckleduster for a handle. In 1942 the green Commando beret and the Combined Operations tactical recognition flash were adopted.
As the men were equipped for raiding operations and only lightly armed, they did not carry anti-gas protective equipment or large packs, and the standard British steel helmet was replaced by a woollen cap comforter. Instead of heavy ammunition boots they wore lightweight rubber soled gym shoes that allowed them to move silently. All ranks carried a toggle rope, several of which could be linked together to form longer ropes for scaling cliffs or other obstacles. During boat operations an inflatable lifebelt was worn for safety. The Commandos were the first unit to adopt the Bergen rucksack to carry heavy loads of ammunition, explosives, and other demolition equipment. A battle jerkin was produced to wear over battledress and the airborne forces' camouflaged Denison smock became standard issue for Commando forces later in the war.
Operations
The very first Commando raid – Operation Collar on the night of 24/25 June 1940 – was not actually carried out by a Commando unit, but by one of their predecessors: No. 11 Independent Company. The mission, led by Major Ronnie Tod, was an offensive reconnaissance carried out on the French coast south of Boulogne-sur-Mer and Le Touquet. The operation was a limited success; at least two German soldiers were killed whilst the only British injury was a flesh wound suffered by Lieutenant-Colonel Dudley Clarke, who had accompanied the raiders as an observer. A second and similarly inconsequential raid, Operation Ambassador, was made on the German-occupied island of Guernsey on the night of 14 July 1940 by men from H Troop of No. 3 Commando and No. 11 Independent Company. One unit landed on the wrong island and another group disembarked from its launch into water so deep that it came over their heads. Intelligence had indicated that there was a large German barracks on the island but the Commandos found only empty buildings. When they returned to the beach heavy seas had forced their launch offshore, and they were forced to swim out to sea to be picked up.
The size of the raiding force depended on the objective. The smallest raid was conducted by two men from No. 6 Commando in Operation J V. The largest was the 10,500 man Operation Jubilee. Most of the raids were scheduled to only last overnight although some, like Operation Gauntlet, were conducted over a number of days. In north west Europe there were 57 raids made between 1940 and 1944. Of these 36 were against targets in France. There were 12 raids against Norway, seven raids in the Channel Islands, and single raids were made in Belgium and the Netherlands. The success of the raids varied; Operation Chariot, the raid against dock installations at St Nazaire, has been hailed as the greatest raid of all time, but others, like Operation Aquatint and Operation Musketoon, resulted in the capture or death of all involved. The smaller raids ended in mid-1944 on the orders of Major-General Robert Laycock, who suggested that they were no longer as effective and only resulted in the Germans strengthening their beach defences, something that could be extremely detrimental to Allied plans.
Norway
The first Commando raid in Norway, Operation Claymore, was conducted in March 1941 by men of Nos. 3 and 4 Commandos. This was the first large scale raid from the United Kingdom during the war. Their objective was the undefended Norwegian Lofoten Islands. They successfully destroyed the fish-oil factories, petrol dumps, and 11 ships, while capturing 216 Germans, encryption equipment, and codebooks.
In December 1941 there were two raids. The first was Operation Anklet, a raid on the Lofoten Islands by No. 12 Commando on 26 December. The German garrison was in the midst of their Christmas celebrations and was easily overcome; the Commandos re-embarked after two days. Operation Archery was a larger raid at Vågsøy Island. This raid involved men from Nos. 2, 3, 4 and 6 Commandos, a Royal Navy flotilla, and limited air support. The raid caused significant damage to factories, warehouses, and the German garrison, and sank eight ships. After this the Germans increased the garrison in Norway by an extra 30,000 troops, upgraded coastal and inland defences, and sent a number of capital ships to the area.
In September 1942 men from No. 2 Commando took part in Operation Musketoon, a raid against the Glomfjord hydroelectric power plant. The Commandos were landed by submarine and succeeded in blowing up some pipelines, turbines, and tunnels. This effectively destroyed the generating station and the aluminium plant was shut down permanently. One Commando was killed in the raid and another seven were captured while trying to escape. They spent a short time at Colditz Castle before being transferred to Sachsenhausen concentration camp. Shortly after their arrival at Sachsenhausen they were executed. They were the first victims of the secret Commando Order, which mandated the execution of all captured Commandos. The three remaining Commandos managed to reach Sweden and were eventually returned to No. 2 Commando.
In 1943, the Norwegian Troop of No. 10 (Inter-Allied), No. 12, and No. 14 (Arctic) Commandos assisted the Royal Navy in carrying out anti–shipping raids in Norwegian coastal waters. The Commandos provided extra firepower for the navy Motor Torpedo Boats when they were at sea and acted as a guard force when they were at anchor in the Norwegian fjords. In April 1943, seven men of No. 14 (Arctic) Commando took part in a raid on German shipping near Haugesund code named Operation Checkmate. They managed to sink several ships using limpet mines, but were captured and eventually taken to Sachsenhausen and Bergen-Belsen concentration camps, where they were executed.
The Germans responded to the numerous raids directed at Norway by increasing the number of troops stationed there. By 1944 the garrison had risen to 370,000 men. In comparison, a British infantry division in 1944 had an establishment of 18,347 men.
Channel Islands
There were seven Commando missions carried out on the Channel Islands. Operation Ambassador, which focused on Guernsey, was the first and largest of these, employing 140 men from No. 3 Commando and No. 11 Independent Company in a night raid on 14 July 1940. Later raids were much smaller; only 12 men of No. 62 Commando took part in Operation Dryad in September 1942, when they captured seven prisoners and located several German codebooks. Operation Branford, a reconnaissance mission that aimed to identify a suitable gun position to support future raids on Alderney, followed only days later. In October of that year 12 men from Nos. 12 and 62 Commandos took part in Operation Basalt, a raid on Sark that saw four Germans killed and one taken prisoner.
All the other Channel Islands raids were less successful. In January 1943, Operation Huckabuck, a raid on Herm, was a failure. After three attempts to scale the island's cliffs the Commandos finally reached the top, but there were no signs of any German occupation troops or of the island's population. The next raids were Operations Hardtack 28 and Hardtack 7 in December 1943. The Hardtack 28 raid on Jersey ended in failure when two men were killed and one wounded after they walked into a minefield. The exploding mines alerted the German garrison and the Commandos had to abandon the operation. In Hardtack 7 the Commandos had returned to Sark, but had to abandon the operation and return to England when they were unable to scale the island's cliffs.
Mediterranean
During 1941, the Middle East Commandos and Layforce were tasked to carry out a campaign of harassment and dislocation against enemy forces in the Mediterranean. At the time that Layforce was raised, the British had the ascendency in the theatre, as they had largely defeated the Italians. It was felt that the Commandos could be employed in the capture of the island of Rhodes. However, the arrival of the Afrika Korps in Cyrenaica and the invasion of Yugoslavia and Greece greatly changed the strategic outlook. By the time Layforce arrived in Egypt in March the situation had become dire. The deployment of forces to Greece meant that the Commandos became the only troops in general reserve. As the strategic situation worsened, it became increasingly difficult to employ them in the manner intended, as they were called upon as reinforcements to the rest of the army.
In May 1941 the majority of Layforce were sent as reinforcements to the Battle of Crete. Almost as soon as they landed it was decided that they could not be employed in an offensive role and would instead be used to cover the withdrawal route towards the south. They were ill-equipped for this type of operation, as they were lacking in indirect fire support weapons such as mortars or artillery; they were armed mainly with rifles and a few Bren light machine guns. By 31 May the evacuation was drawing to a close and the commandos, running low on ammunition, rations, and water, fell back towards Sphakia. In the end, the vast majority of the commandos were left behind on the island, becoming prisoners of war. About 600 of the 800 commandos that had been sent to Crete were listed as killed, missing, or wounded; only 179 commandos managed to get off the island. In April 1941 men from No. 7 Commando took part in the Bardia raid, but by late July 1941 Layforce had been severely reduced in strength. Reinforcements were unlikely given the circumstances. The operational difficulties that had been exposed during the Bardia raid, combined with the inability of the high command to fully embrace the Commando concept, had largely served to make the force ineffective. The decision was made to disband Layforce.
In November 1942, No. 1 and No. 6 Commandos formed part of the spearhead for Allied landings in Algeria as part of Operation Torch. Tensions were high between the British and the Vichy French at this time because of a number of clashes like the Attack on Mers-el-Kébir. As a result, the decision was made for the Commandos to be equipped with American weapons and uniforms in an effort to placate the defenders. The Tunisia Campaign followed the Torch landings. No. 1 and No. 6 Commandos were involved in the first battle of Sedjenane between February and March 1943. Both Commando units remained in theatre until April, when the decision was made to withdraw them from the fighting in North Africa. Lacking the administrative support and reinforcements of regular infantry units, the strength of the two units had fallen and they were no longer considered effective.
In May 1943 a Special Service Brigade comprising No. 2, No. 3, No. 40 (RM), and No. 41 (RM) Commandos was sent to the Mediterranean to take part in the Allied invasion of Sicily. The two Royal Marines Commandos were the first into action, landing ahead of the main force. The 2nd Special Service Brigade serving in the Italian campaign was joined in November 1943 by the Belgian and Polish Troops of No. 10 (Inter-Allied) Commando. The Polish troop captured a German-occupied village on its own when the 2/6th Battalion Queen's Regiment failed to reach a rendezvous on time. On 2 April 1945 the whole of the now-renamed 2nd Commando Brigade was engaged in Operation Roast at Comacchio lagoon in north east Italy. This was the first major action of the big spring offensive to push the Germans back across the River Po and out of Italy. After a fierce three-day battle the Commandos succeeded in clearing the spit separating the lagoon from the Adriatic and secured the flank of the 8th Army. This fostered the idea that the main offensive would be along the coast and not through the Argenta Gap. Major Anders Lassen (Special Air Service) and Corporal Thomas Peck Hunter of No. 43 (Royal Marine) Commando were each awarded a posthumous Victoria Cross for their actions during Operation Roast.
France
There were 36 Commando raids targeted against France between 1940–1944, mostly small affairs involving between 10 and 25 men. Some of the larger raids involved one or more commando units. In March 1942, No. 2 Commando plus demolition experts from seven other Commando units took part in Operation Chariot, also known as the St. Nazaire Raid. The destroyer HMS Campbeltown, accompanied by 18 smaller ships, sailed into St. Nazaire where Campbeltown was rammed directly into the Normandie dock gates. The Commandos engaged the German forces and destroyed the dock facilities. Eight hours later, delayed-action fuses set off the explosives in the Campbeltown, which wrecked the dock gates and killed some 360 Germans and French. A total of 611 soldiers and sailors took part in Chariot; 169 were killed and 200 (most wounded) taken prisoner. Only 242 men returned. Of the 241 Commandos who took part 64 were killed or missing and 109 captured. Lieutenant-Colonel Augustus Charles Newman and Sergeant Thomas Durrant of the Commandos, plus three members of the Royal Navy, were awarded the Victoria Cross. Eighty others received decorations for gallantry.
On 19 August 1942 a major landing took place at the French coastal town of Dieppe. The main force was provided by the 2nd Canadian Infantry Division, supported by No. 3 and No. 4 Commandos. The mission of No. 3 Commando was to neutralize a German coastal battery near Berneval-le-Grand that was in a position to fire upon the landing at Dieppe. The landing craft carrying No. 3 Commando ran into a German coastal convoy. Only a handful of commandos, under the second in command Major Peter Young, landed and scaled the barbed wire laced cliffs. Eventually 18 Commandos reached the perimeter of the battery via Berneval and engaged the target with small arms fire. Although unable to destroy the guns, they prevented the Germans from firing effectively on the main assault by harassing their gun crews with sniper fire. In a subsidiary operation No. 4 Commando landed in force along with the French Troop No. 10 (Inter-Allied) Commando and 50 United States Army Rangers and destroyed the artillery battery at Varengeville. Most of No. 4 Commando safely returned to England. Captain Patrick Porteous of No. 4 Commando was awarded the Victoria Cross for his actions during the raid.
During the Normandy landings of 6 June 1944 two Special Service Brigades were deployed. The 1st Special Service Brigade landed behind the British 3rd Infantry Division on Sword Beach. Their main objective was to fight through to the 6th Airborne Division that had landed overnight and was holding the northern flank and the bridges over the Orne River. The Commandos cleared the town of Ouistreham and headed inland for the bridges. Arriving at the Pegasus Bridge, the Commandos fought on the left flank of the Orne bridgehead until they were ordered to withdraw. The brigade remained in Normandy for ten weeks, sustaining 1,000 casualties, including the brigade commander, Brigadier Lord Lovat. The all Royal Marines 4th Special Service Brigade was also involved in the Normandy landings. No. 48 Commando landed on the left flank of Juno Beach and No. 41 Commando landed on the right flank of Sword Beach and then assaulted Lion-sur-Mer. No. 48 Commando landed in front of the St. Aubin-sur-Mer strong point and lost forty percent of its men. The last 4th Brigade unit ashore was No. 47 Commando, which landed on Gold Beach near the town of Asnells. Five of the Landing Craft Assault carrying them ashore were sunk by mines and beach obstacles, which resulted in the loss of 76 of their 420 men. These losses delayed their advance to their primary objective, the port of Port-en-Bessin, which they captured the following day.
Netherlands
The Battle of the Scheldt started 1 November 1944, with 4th Special Service Brigade assigned to carry out a seaborne assault on the island of Walcheren. The plan was for the island to be attacked from two directions, with the Commandos coming by sea and the Canadian 2nd Division and the 52nd (Lowland) Division attacking across the causeway. No. 4 Commando landed at Flushing, and Nos. 41 and 48 at Westkapelle. No. 47 Commando was held in reserve and landed after Nos. 41 and 48. They were to advance past No. 48 Commando and attempt to link up with No. 4 Commando in the south. On the first day No. 41 captured an artillery observation tower at Westkapelle and cleared the rest of the town. They then moved along the coast and dealt with the coastal defence installations.
No. 48 Commando quickly captured a radar station and then advanced on a gun battery south of Westkapelle, which was captured before nightfall.
On 2 November No. 47 Commando advanced through No. 48 Commando to attack a gun battery at Zoutelande. The attack failed, with the unit suffering heavy casualties, including all the rifle troop commanders. The next day No. 47, supported by No. 48 Commando, again attacked the Zoutelande gun battery. This time they managed to continue the advance and link up with No. 4 Commando. The capture of these batteries allowed the navy to start sweeping the channel into Antwerp for mines. On 5 November, No. 41 Commando captured the gun battery north east of Domburg; this left only one battery still under German control. The brigade regrouped and concentrated its assault on the last position. Just before the attack began on 9 November, the 4,000 men in the battery surrendered. This was quickly followed by the surrender of the rest of the island's garrison.
Germany
In January 1945 the 1st Commando Brigade were involved in Operation Blackcock, where Lance Corporal Henry Harden of the Royal Army Medical Corps, attached to No. 45 (Royal Marine) Commando was awarded the Victoria Cross.
The 1st Commando Brigade next took part in Operation Plunder, the crossing of the Rhine River in March 1945. After a heavy artillery bombardment on the evening of 23 March 1945, the brigade carried out the initial assault under cover of darkness with the 15th (Scottish) Division and the 51st (Highland) Division. The Germans had moved most of their reserve troops to the Ludendorff Bridge at Remagen, which had just been captured by the U.S. 9th Armored Division. The Commandos crossed the Rhine at a point west of Wesel. Their crossing was unopposed and the brigade headed to the outskirts of Wesel. Here they waited until a raid of 200 bombers of the Royal Air Force finished their attack, during which over 1,000 tons of bombs were dropped. Moving into the city just after midnight, the Commandos met resistance from defenders organised around an anti-aircraft division. It was not until 25 March that all resistance ended and the brigade declared the city taken.
Burma
During the Burma Campaign in 1944–1945, the 3rd Commando Brigade participated in several coastal landings of the Southern Front offensive. These landings culminated in the battle of Hill 170 at Kangaw. Here Lieutenant George Knowland of No. 1 Commando was awarded a posthumous Victoria Cross. The Commandos' victory in the 36-hour battle for Hill 170 cut off the escape of the 54th Japanese Division. Further amphibious landings by the 25th Indian Infantry Division and the overland advance of the 82nd (West Africa) Division made the Japanese position in the Arakan untenable. A general withdrawal was ordered to avoid the complete destruction of the Twenty-Eighth Japanese Army. The Commando brigade was then withdrawn to India in preparation for Operation Zipper, the planned invasion of Malaya. The Zipper landings were not needed due to the Japanese surrender so the brigade was sent to Hong Kong for policing duties instead.
Legacy
At the end of the Second World War, all the British Army, Royal Navy, Royal Air Force, and some Royal Marines Commandos were disbanded. This left only three Royal Marines Commandos and one brigade (with supporting Army elements). As of 2010, the British Commando force is 3 Commando Brigade, which consists of both Royal Marines and British Army components, as well as commando-trained personnel from the Royal Navy and Royal Air Force. Other units of the British armed forces, which can trace their origins to the British Commandos of the Second World War, are the Parachute Regiment, the Special Air Service, and the Special Boat Service.
Of the Western nations represented in No. 10 (Inter-Allied) Commando, only Norway did not develop a post-war commando force. The French troops were the predecessors of the Naval commandos. The Dutch Troops were the predecessors of the Korps Commandotroepen and the Belgian Troops were the predecessors of the Paracommando Brigade. The 1st Battalion of the United States Army Rangers was also influenced by the British Commandos. Its first volunteers were from troops stationed in Northern Ireland, who were sent to train at the Commando depot at Achnacarry. However, subsequent Ranger battalions were formed and trained independently of British influence.
The men serving with the Commandos were awarded 479 decorations during the war. This includes eight Victoria Crosses awarded to all ranks. Officers were awarded 37 Distinguished Service Orders with nine bars for a second award and 162 Military Crosses with 13 bars. Other ranks were awarded 32 Distinguished Conduct Medals and 218 Military Medals. In 1952 the Commando Memorial was unveiled by the Queen Mother. It is now a Category A listed monument in Scotland, dedicated to the men of the original British Commando Forces raised during Second World War. Situated around a mile from Spean Bridge village, it overlooks the training areas of the Commando Training Depot established in 1942 at Achnacarry Castle.
Battle honours
In the British Army battle honours are awarded to regiments that have seen active service in a significant engagement or campaign, generally (although not always) one with a victorious outcome. The following battle honours were awarded to the British Commandos during the Second World War.
Adriatic
Alethangyaw
Aller
Anzio
Argenta Gap
Burma 1943–1945
Crete
Dieppe
Dives Crossing
Djebel Choucha
Flushing
Greece 1944–1945
Italy 1943–1945
Kangaw
Landing at Porto San Venere
Landing in Sicily
Leese
Litani
Madagascar
Middle East 1941, 1942, 1944
Monte Ornito
Myebon
Normandy Landings
North Africa 1941–1943
North-West Europe 1942, 1944, 1945
Norway 1941
Pursuit to Messina
Rhine
St. Nazaire
Salerno
Sedjenane 1
Sicily 1943
Steamroller Farm
Syria 1941
Termoli
Vaagso
Valli di Comacchio
Westkapelle
Footnotes
References
Bibliography
Further reading
External links
Commando Veterans Association
Combined Operations
30 Commando Assault Unit - Ian Fleming's 'Red Indians'
No. 6 (Army) Commando
No. 47 (Royal Marine) Commando Association
British Army in World War II
Military units and formations established in 1940
1940 establishments in the United Kingdom
Military units and formations disestablished in 1946
1946 disestablishments in the United Kingdom
Military units and formations of the British Army in World War II
Army reconnaissance units and formations |
340093 | https://en.wikipedia.org/wiki/M8 | M8 | M8 or M-8 may refer to:
Computing, electronics, and engineering
M8 (cipher), an encryption algorithm
Leica M8, a digital rangefinder camera
HTC One (M8), a smartphone
Meizu M8, a smartphone
M8, a standard bolt and nut size in the ISO metric screw thread system
Rockets
M8 (rocket), an American World War II air-to-surface and surface-to-surface rocket
M-8 rocket, a variant of the RS-82 rocket used by the Soviet Union in World War II
Transport
Civilian
M8, a Paris Metro line
M8 (New York City bus), a New York City Bus route in Manhattan
M8 (railcar), a Metro-North Railroad car
BMW M8, a high-performance variant of the BMW 8 Series
Military
M8 Armored Gun System, a US Army light tank cancelled in 1996
M8 Grenade Launcher, see M7 grenade launcher
M8 Greyhound, an American armored car used during World War II
M8 Tractor, an artillery tractor used by the US Army
Howitzer Motor Carriage M8, an American self-propelled howitzer vehicle developed during World War II
Loening M-8, a 1910s American fighter monoplane
Miles M.8 Peregrine, a 1930s twin-engined light transport monoplane primarily used by the Royal Aircraft Establishment
Roads
M-8 (Michigan highway), also known as the Davison Freeway
M8 (Johannesburg), a metropolitan road near Johannesburg, South Africa
M8 (Port Elizabeth), a metropolitan road in Port Elizabeth, South Africa
M8 highway (Russia), also known as the Kholmogory Highway
M8 motorway (Hungary)
M8 motorway (Ireland)
M-8 highway (Montenegro)
M8 motorway (Pakistan)
M8 motorway (Scotland)
M8 Motorway (Sydney) in Sydney, Australia
Highway M08 (Ukraine)
Western Freeway (Victoria) in Australia, designated M8
M8 Road (Zambia)
Other uses
M8 Alliance, World Health Summit
M8 (magazine), a dance music magazine based in Scotland
Messier 8, also known as M8 or Lagoon Nebula, a giant interstellar cloud
M8, Internet slang for "mate"
See also
8M (disambiguation) |
340772 | https://en.wikipedia.org/wiki/Bernstein%20v.%20United%20States | Bernstein v. United States | Bernstein v. United States is a set of court cases brought by Daniel J. Bernstein challenging restrictions on the export of cryptography from the United States.
History
The case was first brought in 1995, when Bernstein was a student at University of California, Berkeley, and wanted to publish a paper and associated source code on his Snuffle encryption system. Bernstein was represented by the Electronic Frontier Foundation, who hired outside lawyer Cindy Cohn and also obtained pro bono publico assistance from Lee Tien of Berkeley; M. Edward Ross of the San Francisco law firm of Steefel, Levitt & Weiss; James Wheaton and Elizabeth Pritzker of the First Amendment Project in Oakland; and Robert Corn-Revere, Julia Kogan, and Jeremy Miller of the Washington, DC, law firm of Hogan & Hartson. After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional. Regarding those regulations, the EFF states:
The government requested en banc review. In Bernstein v. U.S. Dept. of Justice, 192 F.3d 1308 (9th Cir. 1999), the Ninth Circuit ordered that this case be reheard by the en banc court, and withdrew the three-judge panel opinion, Bernstein v. U.S. Dept. of Justice, 176 F.3d 1132 (9th Cir. 1999).
The government modified the regulations again, substantially loosening them, and Bernstein, now a professor at the University of Illinois at Chicago, challenged them again. This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a "concrete threat".
Recent
Apple cited Bernstein v. US in its refusal to hack the San Bernardino shooter's iPhone, saying that it could not be compelled to "speak" (write code).
See also
Junger v. Daley
PGP criminal investigation
References
External links
Bernstein v. United States
Netlitigation Summary: Bernstein v. U.S. Dept. of State
EPIC Archive of 9th Circuit Decision
EFF Archive of the Cases
Cryptography law
United States Internet case law
Electronic Frontier Foundation litigation
1996 in United States case law
1997 in United States case law
United States Court of Appeals for the Ninth Circuit cases |
341598 | https://en.wikipedia.org/wiki/Cybercrime | Cybercrime | Cybercrime is a crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Cybercrime may harm someone's security and financial health.
There are many privacy concerns surrounding cybercrime when confidential information is intercepted or disclosed, lawfully or otherwise. Internationally, both governmental and non-state actors engage in cybercrimes, including espionage, financial theft, and other cross-border crimes. Cybercrimes crossing international borders and involving the actions of at least one nation-state are sometimes referred to as cyberwarfare. Warren Buffett describes cybercrime as the "number one problem with mankind" and says it "poses real risks to humanity."
A report (sponsored by McAfee) published in 2014 estimated that the annual damage to the global economy was $445 billion. A 2016 report by Cybersecurity Ventures predicted that global damages incurred as a result of cybercrime would cost up to $6 trillion annually by 2021 and $10.5 trillion annually by 2025.
Approximately $1.5 billion was lost in 2012 to online credit and debit card fraud in the US. In 2018, a study by the Center for Strategic and International Studies (CSIS), in partnership with McAfee, concluded that nearly one percent of global GDP, close to $600 billion, is lost to cybercrime each year. The World Economic Forum's 2020 Global Risks Report confirmed that organized cybercrime groups are joining forces to perpetrate criminal activities online, while estimating the likelihood of their detection and prosecution in the US to be less than 1 percent.
Classifications
With traditional crime declining, global communities continue to witness growth in cybercrime. Computer crime encompasses a broad range of activities, from financial crimes and scams to cybersex trafficking and ad fraud.
Financial fraud crimes
Computer fraud is any dishonest misrepresentation of fact intended to induce another to do or refrain from doing something which causes loss. In this context, the fraud will result in obtaining a benefit by:
Altering computer input in an unauthorized way. This requires little technical expertise and is a common form of theft by employees altering the data before entry or entering false data, or by entering unauthorized instructions or using unauthorized processes;
Altering, destroying, suppressing, or stealing output, usually to conceal unauthorized transactions. This is difficult to detect;
Altering or deleting stored data;
Other forms of fraud may be facilitated using computer systems, including bank fraud, carding, identity theft, extortion, and theft of classified information. These types of crime often result in the loss of private information or monetary information.
Cyberterrorism
Government officials and information technology security specialists have documented a significant increase in Internet problems and server scams since early 2001. There is a growing concern among government agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) that such intrusions are part of an organized effort by cyberterrorists, foreign intelligence services, or other groups to map potential security holes in critical systems. A cyberterrorist is someone who intimidates or coerces a government or an organization to advance his or her political or social objectives by launching a computer-based attack against computers, networks, or the information stored on them.
Cyberterrorism, in general, can be defined as an act of terrorism committed through the use of cyberspace or computer resources (Parker 1983). As such, a simple propaganda piece on the Internet claiming that there will be bomb attacks during the holidays can be considered cyberterrorism. There are also hacking activities directed towards individuals, families, and groups organized within networks, tending to cause fear among people, demonstrate power, and collect information relevant for ruining people's lives, robbery, blackmail, etc.
Cyberextortion
Cyberextortion occurs when a website, e-mail server, or computer system is subjected to or threatened with repeated denial of service or other attacks by malicious hackers. These hackers demand money in return for promising to stop the attacks and to offer "protection". According to the Federal Bureau of Investigation, cybercrime extortionists are increasingly attacking corporate websites and networks, crippling their ability to operate and demanding payments to restore their service. More than 20 cases are reported each month to the FBI and many go unreported in order to keep the victim's name out of the public domain. Perpetrators typically use a distributed denial-of-service attack. However, other cyberextortion techniques exist such as doxing extortion and bug poaching.
An example of cyberextortion was the attack on Sony Pictures of 2014.
Ransomware is a kind of cyberextortion in which malware is used to restrict access to files, sometimes threatening permanent data erasure unless a ransom is paid. Kaspersky Lab's 2016 Security Bulletin report estimated that a business fell victim to ransomware every 40 minutes, and ransomware was predicted to attack a business every 11 minutes in 2021. With ransomware remaining one of the fastest-growing cybercrimes in the world, global ransomware damage was predicted to cost up to $20 billion in 2021.
Cybersex trafficking
Cybersex trafficking is the transportation of victims and then the live streaming of coerced sexual acts and/or rape on webcam. Victims are abducted, threatened, or deceived and transferred to 'cybersex dens.' The dens can be in any location where the cybersex traffickers have a computer, tablet, or phone with an internet connection. Perpetrators use social media networks, videoconferences, dating pages, online chat rooms, apps, dark web sites, and other platforms. They use online payment systems and cryptocurrencies to hide their identities. Millions of reports of its occurrence are sent to authorities annually. New legislation and police procedures are needed to combat this type of cybercrime.
An example of cybersex trafficking is the 2018–2020 Nth room case in South Korea.
Cyberwarfare
The U.S. Department of Defense notes that cyberspace has emerged as a national-level concern through several recent events of geostrategic significance. These include the attack on Estonia's infrastructure in 2007, allegedly by Russian hackers. In August 2008, Russia again allegedly conducted cyber attacks, this time in a coordinated and synchronized kinetic and non-kinetic campaign against the country of Georgia. Fearing that such attacks may become the norm in future warfare among nation-states, military commanders expect to adapt the concept of cyberspace operations to future warfighting.
Computer as a target
These crimes are committed by a selected group of criminals. Unlike crimes using the computer as a tool, these crimes require the technical knowledge of the perpetrators. As such, as technology evolves, so too does the nature of the crime. These crimes are relatively new, having been in existence for only as long as computers have, which explains how unprepared society and the world, in general, is towards combating these crimes. There are numerous crimes of this nature committed daily on the internet. Such crime is seldom committed by loners; instead it involves large syndicate groups.
Crimes that primarily target computer networks include:
Computer viruses
Denial-of-service attacks
Malware (malicious code)
Computer as a tool
When the individual is the main target of cybercrime, the computer can be considered as the tool rather than the target. These crimes generally involve less technical expertise. Human weaknesses are generally exploited. The damage dealt is largely psychological and intangible, making legal action against the variants more difficult. These are crimes which have existed for centuries in the offline world. Scams, theft, and the like existed even before the development of high-tech equipment. The same criminal has simply been given a tool which increases their potential pool of victims and makes them all the harder to trace and apprehend.
Crimes that use computer networks or devices to advance other ends include:
Fraud and identity theft (although this increasingly uses malware, hacking or phishing, making it an example of both "computer as target" and "computer as tool" crime)
Information warfare
Phishing scams
Spam
Propagation of illegal obscene or offensive content, including harassment and threats
The unsolicited sending of bulk email for commercial purposes (spam) is unlawful in some jurisdictions.
Phishing is mostly propagated via email. Phishing emails may contain links to other websites that are affected by malware. Or, they may contain links to fake online banking or other websites used to steal private account information.
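A common trait of these deceptive links is that the text shown to the reader names a trusted domain while the underlying target points somewhere else. The toy Python sketch below (standard library only; the function name, domains, and heuristic are illustrative assumptions, not taken from any real mail filter) flags that mismatch.

```python
# Toy heuristic only: real phishing filters combine many more signals.
from urllib.parse import urlparse

def link_is_deceptive(display_text: str, href: str) -> bool:
    """Flag links whose visible text looks like one domain while the actual
    target (href) points to a different one, as phishing emails often do."""
    shown = display_text.strip().lower()
    if "." not in shown:                     # visible text is not a domain/URL
        return False
    shown_host = urlparse(shown if "://" in shown else "http://" + shown).hostname or ""
    target_host = (urlparse(href).hostname or "").lower()
    same_site = target_host == shown_host or target_host.endswith("." + shown_host)
    return not same_site

# The first target merely *contains* the bank's name inside an attacker domain.
print(link_is_deceptive("www.examplebank.com",
                        "http://www.examplebank.com.attacker.example/login"))  # True
print(link_is_deceptive("www.examplebank.com",
                        "https://www.examplebank.com/login"))                  # False
```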
Obscene or offensive content
The content of websites and other electronic communications may be distasteful, obscene or offensive for a variety of reasons. In some instances, these communications may be illegal.
The extent to which these communications are unlawful varies greatly between countries, and even within nations. It is a sensitive area in which the courts can become involved in arbitrating between groups with strong beliefs.
One area of Internet pornography that has been the target of the strongest efforts at curtailment is child pornography, which is illegal in most jurisdictions in the world. Debarati Halder and K. Jaishankar further define cybercrime from the perspective of gender, defining 'cybercrime against women' as "Crimes targeted against women with a motive to intentionally harm the victim psychologically and physically, using modern telecommunication networks such as internet and mobile phones".
Ad-fraud
Ad fraud is particularly popular among cybercriminals, as such fraud is less likely to be prosecuted and can be particularly lucrative. Jean-Loup Richet, Professor at the Sorbonne Business School, classified the large variety of ad fraud observed in cybercriminal communities into three categories: (1) identity fraud; (2) attribution fraud; and (3) ad-fraud services.
Identity fraud aims to impersonate real users and inflate audience numbers. Several ad-fraud techniques relate to this category and include traffic from bots (coming from a hosting company or a data center, or from compromised devices); cookie stuffing; falsifying user characteristics, such as location and browser type; fake social traffic (misleading users on social networks into visiting the advertised website); and the creation of fake social signals to make a bot look more legitimate, for instance by opening a Twitter or Facebook account.
Attribution fraud aims to impersonate real users' behaviors (clicks, activities, conversations, etc.). Multiple ad-fraud techniques belong to this category: hijacked devices and the use of infected users (through malware) as part of a botnet to participate in ad fraud campaigns; click farms (companies where low-wage employees are paid to click or engage in conversations and affiliates' offers); incentivized browsing; video placement abuse (delivered in display banner slots); hidden ads (that will never be viewed by real users); domain spoofing (ads served on a website other than the advertised real-time bidding website); and clickjacking (the user is forced to click on the ad).
Ad-fraud services relate to all the online infrastructure and hosting services that might be needed to undertake identity or attribution fraud. Services can involve the creation of spam websites (fake networks of websites created to provide artificial backlinks), link-building services, hosting services, and the creation of fake and scam pages impersonating a famous brand and used as part of an ad-fraud campaign.
A successful ad-fraud campaign involves a sophisticated combination of these three types of ad-fraud—sending fake traffic through bots using fake social accounts and falsified cookies; bots will click on the ads available on a scam page that is faking a famous brand.
Online harassment
Whereas content may be offensive in a non-specific way, harassment directs obscenities and derogatory comments at specific individuals, focusing for example on gender, race, religion, nationality, or sexual orientation.
There are instances where committing a crime using a computer can lead to an enhanced sentence. For example, in the case of United States v. Neil Scott Kramer, the defendant was given an enhanced sentence according to the U.S. Sentencing Guidelines Manual §2G1.3(b)(3) for his use of a cell phone to "persuade, induce, entice, coerce, or facilitate the travel of, the minor to engage in prohibited sexual conduct." Kramer appealed the sentence on the grounds that there was insufficient evidence to convict him under this statute because his charge included persuading through a computer device and his cellular phone technically is not a computer. Although Kramer tried to argue this point, the U.S. Sentencing Guidelines Manual states that the term 'computer' "means an electronic, magnetic, optical, electrochemical, or other high-speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device."
In the United States alone, Missouri and over 40 other states have passed laws and regulations that treat extreme online harassment as a criminal act. These acts can also be punished at the federal level, for example under 18 U.S. Code Section 2261A, which states that using computers to threaten or harass can lead to a sentence of up to 20 years, depending on the action taken.
Several countries outside of the United States have also created laws to combat online harassment. In China, a country with over 20 percent of the world's internet users, the Legislative Affairs Office of the State Council passed a strict law against the bullying of young people through a bill in response to the Human Flesh Search Engine. The United Kingdom passed the Malicious Communications Act, among other acts from 1997 to 2013, under which electronically sending messages or letters that the government deemed "indecent or grossly offensive", or that use language intended to cause "distress and anxiety", can lead to a prison sentence of six months and a potentially large fine. Australia, while not directly addressing the issue of harassment, has grouped the majority of online harassment under the Criminal Code Act of 1995: using telecommunications to send threats, harass, or cause offence is a direct violation of that act.
Although freedom of speech is protected by law in most democratic societies (in the US this is done by the First Amendment), it does not include all types of speech. In fact, spoken or written "true threat" speech/text is criminalized because of "intent to harm or intimidate." That also applies for online or any type of network-related threats in written text or speech.
Cyberbullying has increased drastically with the growing popularity of online social networking. As of January 2020, 44% of adult internet users in the United States had "personally experienced online harassment." Children who experience online harassment deal with negative and sometimes life-threatening side effects. In 2021, reports indicated that 41% of children developed social anxiety, 37% developed depression, and 26% had suicidal thoughts.
The United Arab Emirates was named in a spying scandal in which the Gulf nation, along with other repressive governments, purchased NSO Group's mobile spyware Pegasus for mass surveillance. Prominent activists and journalists were targeted as part of the campaign, including Ahmed Mansoor, Princess Latifa, and Princess Haya, among others. Ghada Oueiss was one of the many high-profile female journalists and activists who became targets of online harassment. Oueiss filed a lawsuit against UAE ruler Mohamed bin Zayed Al Nahyan and other defendants, accusing them of sharing her photos online. The defendants, including the UAE ruler, filed motions to dismiss the case of the hack-and-leak attack.
Drug trafficking
Darknet markets are used to buy and sell recreational drugs online. Some drug traffickers use encrypted messaging tools to communicate with drug mules. The dark web site Silk Road was a major online marketplace for drugs before it was shut down by law enforcement (then reopened under new management, and then shut down by law enforcement again). After Silk Road 2.0 went down, Silk Road 3 Reloaded emerged; however, it was simply an older marketplace, Diabolus Market, which adopted the name to benefit from the brand's previous success.
Darknet markets have seen a rise in traffic in recent years for many reasons, one of the biggest contributors being the anonymity and perceived safety of using them, although users can still lose all the money they invest and be caught. Vendors and customers alike go to great lengths to keep their identities secret while online, commonly relying on virtual private networks, Tails, and Tor to hide the trail they leave behind for investigators. Darknet markets also make users feel safe, as they can get what they want from the comfort of their home. Access typically requires the Tor Browser, often paired with a privacy-oriented search engine such as DuckDuckGo, since these markets cannot simply be found through a conventional search engine such as Google. Darknet markets use special addresses, which frequently change and end in .onion rather than the typical .com, .net, and .org domain extensions. To add to the privacy, the most widely used currency on these markets is Bitcoin, which allows transactions between parties who need only exchange wallet addresses and never have to know anything about each other.
One of the biggest issues facing users of these marketplaces is exit scamming by vendors or by the market itself. This typically occurs when a highly rated vendor acts as if they are still selling on the market, collects payment from multiple buyers, and then closes the account without ever sending what was purchased. Because vendors are already involved in illegal activities, there is little to stop them from exit scamming when they no longer want to sell. In 2019, an entire market called Wall Street Market allegedly exit scammed, stealing 30 million dollars' worth of bitcoin from vendors' and buyers' wallets.
Federal agents have cracked down heavily on these markets. In July 2017, they seized one of the biggest markets, commonly called AlphaBay, which later re-opened in August 2021 under the control of DeSnake, one of the original administrators. Investigators commonly pose as buyers and order packages from darknet vendors in the hope that the vendors have left a trail that can be followed. In one investigation, an investigator posed as a firearms seller, and for six months people purchased from them and provided home addresses, enabling over a dozen arrests. Another major focus of law enforcement is vendors selling fentanyl and opiates, given the thousands of deaths each year from drug overdoses. Many vendors do not realize the extra charges that accompany selling drugs online: on top of being charged as drug distributors, they are commonly charged with money laundering and with offences related to shipping the drugs through the mail, and because each state has its own drug laws and regulations, vendors can face charges from multiple states. In 2019, a vendor was sentenced to 10 years in prison after selling cocaine and methamphetamine under the name JetSetLife. Although many investigators spend large amounts of time tracking people down, over the course of a year only 65 suspects who bought and sold illegal goods on some of the biggest markets were identified, compared with the thousands of transactions taking place on these markets daily.
One of the highest-profile banking computer crimes occurred over the course of three years beginning in 1970. The chief teller at the Park Avenue branch of New York's Union Dime Savings Bank embezzled over $1.5 million from hundreds of accounts.
A hacking group called MOD (Masters of Deception) allegedly stole passwords and technical data from Pacific Bell, Nynex, and other telephone companies, as well as several big credit agencies and two major universities. The damage caused was extensive; one company, Southwestern Bell, suffered losses of $370,000 alone.
In 1983, a 19-year-old UCLA student used his PC to break into a Defense Department International Communications system.
Between 1995 and 1998, the Newscorp satellite pay-to-view encrypted SKY-TV service was hacked several times during an ongoing technological arms race between a pan-European hacking group and Newscorp. The original motivation of the hackers was to watch Star Trek reruns in Germany, something Newscorp did not have the copyright to allow.
On 26 March 1999, the Melissa worm infected a document on a victim's computer, then automatically e-mailed that document and a copy of the virus to other people.
In February 2000, an individual going by the alias of MafiaBoy began a series of denial-of-service attacks against high-profile websites, including Yahoo!, Dell, Inc., E*TRADE, eBay, and CNN. About 50 computers at Stanford University, and also computers at the University of California at Santa Barbara, were amongst the zombie computers sending pings in DDoS attacks. On 3 August 2000, Canadian federal prosecutors charged MafiaBoy with 54 counts of illegal access to computers, plus a total of ten counts of mischief to data for his attacks.
The Stuxnet worm corrupted SCADA microprocessors, particularly of the types used in Siemens centrifuge controllers.
The Flame malware mainly targeted Iranian officials in an attempt to obtain sensitive information.
The Russian Business Network (RBN) was registered as an internet site in 2006. Initially, much of its activity was legitimate. But apparently, the founders soon discovered that it was more profitable to host illegitimate activities and started hiring its services to criminals. The RBN has been described by VeriSign as "the baddest of the bad". It offers web hosting services and internet access to all kinds of criminal and objectionable activities, with individual activities earning up to $150 million in one year. It specialized in and in some cases monopolized personal identity theft for resale. It is the originator of MPack and an alleged operator of the now-defunct Storm botnet.
On 2 March 2010, Spanish investigators arrested 3 men suspected of infecting over 13 million computers around the world. The "botnet" of infected computers included PCs inside more than half of the Fortune 1000 companies and more than 40 major banks, according to investigators.
In August 2010 the international investigation Operation Delego, operating under the aegis of the Department of Homeland Security, shut down the international pedophile ring Dreamboard. The website had approximately 600 members and may have distributed up to 123 terabytes of child pornography (roughly equivalent to 16,000 DVDs). To date this is the single largest U.S. prosecution of an international child pornography ring; 52 arrests were made worldwide.
In January 2012, Zappos.com experienced a security breach in which as many as 24 million customers' credit card numbers, personal information, and billing and shipping addresses were compromised.
In June 2012, LinkedIn and eHarmony were attacked, compromising 65 million password hashes. 30,000 passwords were cracked and 1.5 million eHarmony passwords were posted online.
In December 2012, the Wells Fargo website experienced a denial-of-service attack, potentially compromising 70 million customers and 8.5 million active viewers. Other banks thought to be compromised included Bank of America, J. P. Morgan, U.S. Bank, and PNC Financial Services.
On 23 April 2013, the Associated Press' Twitter account was hacked; the hacker posted a hoax tweet about fictitious attacks on the White House that they claimed left President Obama injured. The hoax tweet resulted in a brief plunge of 130 points in the Dow Jones Industrial Average, the removal of $136 billion from the S&P 500 index, and the temporary suspension of AP's Twitter account. The Dow Jones later restored its session gains.
In May 2017, 74 countries logged a ransomware cybercrime called "WannaCry".
Illicit access to camera sensors, microphone sensors, phonebook contacts, all internet-enabled apps, and metadata of mobile telephones running Android and iOS was reportedly made possible by Israeli spyware found to be in operation in at least 46 nation-states around the world. Journalists, royalty, and government officials were amongst the targets. Previous accusations that Israeli weapons companies were meddling in international telephony and smartphones were eclipsed by the case reported in 2018.
In December 2019, United States intelligence and an investigation by The New York Times revealed that ToTok, a messaging application of the United Arab Emirates, is a spying tool. The research revealed that the Emirati government attempted to track every conversation, movement, relationship, appointment, sound, and image of those who installed the app on their phones.
Combating computer crime
It is difficult to find and combat cybercrime's perpetrators due to their use of the internet in support of cross-border attacks. Not only does the internet allow people to be targeted from various locations, but the scale of the harm done can be magnified. Cybercriminals can target more than one person at a time. The availability of virtual spaces to public and private sectors has allowed cybercrime to become an everyday occurrence. In 2018, the Internet Crime Complaint Center received 351,937 complaints of cybercrime, which led to $2.7 billion in losses.
Investigation
A computer can be a source of evidence (see digital forensics). Even where a computer is not directly used for criminal purposes, it may contain records of value to criminal investigators in the form of a logfile. In most countries, Internet Service Providers are required by law to keep their logfiles for a predetermined amount of time. For example, a Europe-wide Data Retention Directive (applicable to all EU member states) stated that all e-mail traffic should be retained for a minimum of 12 months.
There are many ways for cybercrime to take place, and investigations tend to start with an IP address trace; however, that is not necessarily a factual basis upon which detectives can solve a case. Different types of high-tech crime may also include elements of low-tech crime, and vice versa, making cybercrime investigators an indispensable part of modern law enforcement. Methods of cybercrime detective work are dynamic and constantly improving, whether in closed police units or in international cooperation frameworks.
In the United States, the Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS) are government agencies that combat cybercrime. The FBI has trained agents and analysts in cybercrime placed in its field offices and headquarters. Under the DHS, the Secret Service has a Cyber Intelligence Section that works to target financial cybercrimes. It uses its intelligence to protect against international cybercrime, and its efforts work to protect institutions, such as banks, from intrusions and information breaches. In Alabama, the Secret Service and the Alabama Office of Prosecution Services work together to train law-enforcement professionals through the creation of the National Computer Forensic Institute. This institute works to provide "state and local members of the law enforcement community with training in cyber incident response, investigation, and forensic examination."
Due to the common use of encryption and other techniques to hide their identity and location by cybercriminals, it can be difficult to trace a perpetrator after the crime is committed, so prevention measures are crucial.
Prevention
The Department of Homeland Security also instituted the Continuous Diagnostics and Mitigation (CDM) Program. The CDM Program monitors and secures government networks by tracking and prioritizing network risks and informing system personnel so that they can take action. In an attempt to catch intrusions before the damage is done, the DHS created the Enhanced Cybersecurity Services (ECS) to protect public and private sectors in the United States. The Cybersecurity and Infrastructure Security Agency approves private partners that provide intrusion detection and prevention services through the ECS. An example of one of these services offered is DNS sinkholing.
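The idea behind DNS sinkholing can be illustrated with a short, hypothetical sketch. The domain names, sinkhole address, and lookup function below are invented for the example and do not describe the ECS implementation: queries for domains on a blocklist are answered with the address of a controlled "sinkhole" server instead of the attacker's real infrastructure, so the connection attempt can be logged and contained.

```python
# Minimal sketch of DNS sinkholing (hypothetical blocklist and addresses).
SINKHOLE_IP = "192.0.2.1"  # documentation-range address standing in for a monitored sinkhole server
BLOCKLIST = {"malware-c2.example", "phishing-site.example"}  # hypothetical malicious domains

def resolve(domain: str, upstream_lookup) -> str:
    """Answer blocklisted domains with the sinkhole address; resolve everything else normally."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        print(f"[sinkhole] blocked and logged query for {domain}")
        return SINKHOLE_IP
    return upstream_lookup(domain)

if __name__ == "__main__":
    fake_upstream = lambda d: "203.0.113.7"  # stand-in for a real DNS lookup
    print(resolve("malware-c2.example", fake_upstream))  # -> 192.0.2.1 (sinkholed)
    print(resolve("example.org", fake_upstream))         # -> 203.0.113.7 (passed through)
```

Because infected hosts keep trying to contact their command-and-control domains, the sinkhole's logs also double as a list of machines that need remediation.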
Legislation
Due to easily exploitable laws, cybercriminals use developing countries in order to evade detection and prosecution from law enforcement. In developing countries, such as the Philippines, laws against cybercrime are weak or sometimes nonexistent. These weak laws allow cybercriminals to strike from international borders and remain undetected. Even when identified, these criminals avoid being punished or extradited to a country, such as the United States, that has developed laws that allow for prosecution. While this proves difficult in some cases, agencies, such as the FBI, have used deception and subterfuge to catch criminals. For example, two Russian hackers had been evading the FBI for some time. The FBI set up a fake computing company based in Seattle, Washington. They proceeded to lure the two Russian men into the United States by offering them work with this company. Upon completion of the interview, the suspects were arrested outside of the building. Clever tricks like this are sometimes a necessary part of catching cybercriminals when weak legislation makes it impossible otherwise.
Then-President Barack Obama released an executive order in April 2015 to combat cybercrime. The executive order allows the United States to freeze the assets of convicted cybercriminals and block their economic activity within the United States. This is some of the first solid legislation that combats cybercrime in this way.
The European Union adopted directive 2013/40/EU. All offences of the directive, as well as other definitions and procedural institutions, are also in the Council of Europe's Convention on Cybercrime.
It is not only the US and the European Union that are introducing new measures against cybercrime. On 31 May 2017, China announced that its new cybersecurity law would take effect on that date.
In Australia, common legislation in Commonwealth jurisdiction which is applied to combat cybercrime by means of criminal offence provisions and information-gathering and enforcement powers includes the Criminal Code Act 1995 (Cth), the Telecommunications Act 1997 (Cth) and the Enhancing Online Safety Act 2015 (Cth).
In Roads and Traffic Authority of New South Wales v Care Park Pty Limited [2012] NSWCA 35, it was found that the use of a discovery order made upon a third party for the purposes of determining the identity or whereabouts of a person may be exercised merely on the prerequisite that such information requested will aid the litigation process.
In Dallas Buyers Club LLC v iiNet Limited [2015] FCA 317, guidance is provided on the interpretation of rule 7.22 of the Federal Court Rules 2011 (Cth) with respect to the issue of to what extent a discovery order must identify a person for it to be a valid request for information to determine the identity or whereabouts of a person in the circumstance of an end-user of an internet service being a different person to the account holder. Justice Perram stated: '... it is difficult to identify any good reason why a rule designed to aid a party in identifying wrongdoers should be so narrow as only to permit the identification of the actual wrongdoer rather than the witnesses of that wrongdoing.'
Penalties
Penalties for computer-related crimes in New York State can range from a fine and a short period of jail time for a Class A misdemeanor such as unauthorized use of a computer up to computer tampering in the first degree which is a Class C felony and can carry 3 to 15 years in prison.
However, some hackers have been hired as information security experts by private companies due to their inside knowledge of computer crime, a phenomenon which theoretically could create perverse incentives. A possible counter to this is for courts to ban convicted hackers from using the Internet or computers, even after they have been released from prison, though as computers and the Internet become more and more central to everyday life, this type of punishment may be viewed as increasingly harsh and draconian. However, nuanced approaches have been developed that manage cyber offenders' behavior without resorting to total computer or Internet bans. These approaches involve restricting individuals to specific devices which are subject to computer monitoring or computer searches by probation or parole officers.
Awareness
As technology advances and more people rely on the internet to store sensitive information such as banking or credit card information, criminals increasingly attempt to steal that information. Cybercrime is becoming more of a threat to people across the world. Raising awareness about how information is being protected and the tactics criminals use to steal that information continues to grow in importance. According to the FBI's Internet Crime Complaint Center, in 2014 there were 269,422 complaints filed, with a reported total loss of $800,492,073 across all claims. Yet cybercrime does not yet seem to be on the average person's radar. There are roughly 1.5 million cyber-attacks annually, which means over 4,000 attacks a day, 170 attacks every hour, or nearly three attacks every minute, and studies show that only 16% of victims had asked the people carrying out the attacks to stop. Anybody who uses the internet for any reason can be a victim, which is why it is important to be aware of how one is being protected while online.
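The per-day, per-hour, and per-minute rates quoted above follow from the annual estimate by simple division (values rounded):

$$\frac{1{,}500{,}000\ \text{attacks/year}}{365\ \text{days}} \approx 4{,}110\ \text{per day}, \qquad \frac{4{,}110}{24} \approx 171\ \text{per hour}, \qquad \frac{171}{60} \approx 2.9\ \text{per minute}.$$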
Intelligence
As cybercrime has proliferated, a professional ecosystem has evolved to support individuals and groups seeking to profit from cybercriminal activities. The ecosystem has become quite specialized, including malware developers, botnet operators, professional cybercrime groups, groups specializing in the sale of stolen content, and so forth. A few of the leading cybersecurity companies have the skills, resources and visibility to follow the activities of these individuals and groups. A wide variety of information is available from these sources which can be used for defensive purposes, including technical indicators such as hashes of infected files or malicious IPs/URLs, as well as strategic information profiling the goals, techniques and campaigns of the profiled groups. Some of it is freely published, but consistent, on-going access typically requires subscribing to an adversary intelligence subscription service. At the level of an individual threat actor, threat intelligence is often referred to as that actor's "TTP", or "tactics, techniques, and procedures", as the infrastructure, tools, and other technical indicators are often trivial for attackers to change. Corporate sectors are also considering the crucial role of artificial intelligence in cybersecurity.
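As an illustration of how such technical indicators are typically consumed, the following minimal sketch (the hash value, directory, and feed contents are hypothetical and not drawn from any particular vendor's product) computes the SHA-256 digest of local files and flags any that match a feed of known-malicious file hashes; real intelligence platforms add IP, URL, and domain indicators along with context and confidence scores.

```python
# Illustrative consumption of threat-intelligence file-hash indicators.
# The indicator value below is a hypothetical placeholder, not a real malware hash.
import hashlib
from pathlib import Path

MALICIOUS_SHA256 = {
    "0" * 64,  # placeholder for a SHA-256 indicator taken from a threat-intel feed
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return the files under `directory` whose hashes match a known-malicious indicator."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in MALICIOUS_SHA256]

if __name__ == "__main__":
    for hit in scan("."):
        print(f"indicator match: {hit}")
```

Hash-based indicators of this kind are the easiest for attackers to invalidate (recompiling the malware changes the hash), which is why the "TTP"-level intelligence described above is considered more durable.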
The INTERPOL Cyber Fusion Center has begun a collaboration with key cybersecurity players to distribute information on the latest online scams, cyber threats and risks to internet users. Reports covering social engineering fraud, ransomware, phishing, and other threats have since 2017 been distributed to security agencies in over 150 countries.
Diffusion of cybercrime
The broad diffusion of cybercriminal activities is an issue in computer crimes detection and prosecution.
Hacking has become less complex as hacking communities have greatly diffused their knowledge through the Internet. Blogs and communities have hugely contributed to information sharing: beginners could benefit from older hackers' knowledge and advice.
Furthermore, hacking is cheaper than ever: before the cloud computing era, in order to spam or scam one needed a dedicated server, skills in server management, network configuration and maintenance, knowledge of Internet service provider standards, and so on. By comparison, a mail software-as-a-service is a scalable, inexpensive, bulk, and transactional e-mail-sending service for marketing purposes that can easily be set up for spam. Cloud computing can also help a cybercriminal leverage his or her attack, whether by brute-forcing a password, improving the reach of a botnet, or facilitating a spamming campaign.
Agencies
ASEAN
Australian High Tech Crime Centre
Cyber Crime Investigation Cell, a wing of Mumbai Police, India
Cyber Crime Unit (Hellenic Police), formed in Greece in 1995
National White Collar Crime Center, in the United States
National Cyber Crime Unit, in the United Kingdom
INTERPOL
EUROPOL
See also
References
Further reading
Balkin, J., Grimmelmann, J., Katz, E., Kozlovski, N., Wagman, S. & Zarsky, T. (2006) (eds) Cybercrime: Digital Cops in a Networked Environment, New York University Press, New York.
Bowker, Art (2012) "The Cybercrime Handbook for Community Corrections: Managing Risk in the 21st Century" Charles C. Thomas Publishers, Ltd. Springfield.
Brenner, S. (2007) Law in an Era of Smart Technology, Oxford: Oxford University Press
Broadhurst, R., and Chang, Lennon Y.C. (2013) "Cybercrime in Asia: trends and challenges", in B. Hebenton, SY Shou, & J. Liu (eds), Asian Handbook of Criminology (pp. 49–64). New York: Springer
Chang, L.Y. C. (2012) Cybercrime in the Greater China Region: Regulatory Responses and Crime Prevention across the Taiwan Strait. Cheltenham: Edward Elgar.
Chang, Lennon Y.C., & Grabosky, P. (2014) "Cybercrime and establishing a secure cyber world", in M. Gill (ed) Handbook of Security (pp. 321–339). NY: Palgrave.
Csonka P. (2000) Internet Crime; the Draft council of Europe convention on cyber-crime: A response to the challenge of crime in the age of the internet? Computer Law & Security Report Vol.16 no.5.
Easttom, C. (2010) Computer Crime Investigation and the Law
Fafinski, S. (2009) Computer Misuse: Response, regulation and the law Cullompton: Willan
Glenny, M. DarkMarket: cyberthieves, cybercops, and you, New York, NY: Alfred A. Knopf, 2011.
Grabosky, P. (2006) Electronic Crime, New Jersey: Prentice Hall
Halder, D., & Jaishankar, K. (2016). Cyber Crimes against Women in India. New Delhi: SAGE Publishing.
Halder, D., & Jaishankar, K. (2011) Cybercrime and the Victimization of Women: Laws, Rights, and Regulations. Hershey, PA, USA: IGI Global.
Jaishankar, K. (Ed.) (2011). Cyber Criminology: Exploring Internet Crimes and Criminal behavior. Boca Raton, FL, USA: CRC Press, Taylor, and Francis Group.
McQuade, S. (2006) Understanding and Managing Cybercrime, Boston: Allyn & Bacon.
McQuade, S. (ed) (2009) The Encyclopedia of Cybercrime, Westport, CT: Greenwood Press.
Parker D (1983) Fighting Computer Crime, U.S.: Charles Scribner's Sons.
Pattavina, A. (ed) Information Technology and the Criminal Justice System, Thousand Oaks, CA: Sage.
Richet, J.L. (2013) From Young Hackers to Crackers, International Journal of Technology and Human Interaction (IJTHI), 9(3), 53–62.
Robertson, J. (2 March 2010). Authorities bust 3 in infection of 13m computers. Retrieved 26 March 2010, from Boston News: Boston.com
Rolón, D. N. Control, vigilancia y respuesta penal en el ciberespacio, Latin American's New Security Thinking, Clacso, 2014, pp. 167/182
Walden, I. (2007) Computer Crimes and Digital Investigations, Oxford: Oxford University Press.
Wall, D.S. (2007) Cybercrimes: The transformation of crime in the information age, Cambridge: Polity.
Williams, M. (2006) Virtually Criminal: Crime, Deviance and Regulation Online, Routledge, London.
Yar, M. (2006) Cybercrime and Society, London: Sage.
External links
International Journal of Cyber Criminology
Common types of cyber attacks
Countering ransomware attacks
Government resources
Cybercrime.gov from the United States Department of Justice
National Institute of Justice Electronic Crime Program from the United States Department of Justice
FBI Cyber Investigators home page
US Secret Service Computer Fraud
Australian High Tech Crime Centre
UK National Cyber Crime Unit from the National Crime Agency
Computer security
Organized crime activity
Harassment |
342363 | https://en.wikipedia.org/wiki/Cali%20Cartel | Cali Cartel | The Cali Cartel was a drug cartel based in southern Colombia, around the city of Cali and the Valle del Cauca Department. Its founders were the brothers Gilberto Rodríguez Orejuela and Miguel Rodríguez Orejuela, and José Santacruz Londoño. They broke away from Pablo Escobar and his Medellín associates in the late 1980s, when Hélmer "Pacho" Herrera joined what became a four-man executive board that ran the cartel.
At the height of the Cali Cartel's reign from 1993 to 1995, they were cited as having control of over 80% of the world's cocaine market and were said to be directly responsible for the growth of the cocaine market in Europe, controlling 80% of the market there as well. By the mid-1990s, the Cali Cartel's international drug trafficking empire was a $7 billion-a-year criminal enterprise.
Foundation
The Cali Cartel was formed by the Rodriguez Orejuela brothers and Santacruz, all coming from what is described as a higher social background than most other traffickers of the time. The recognition of this social background was displayed in the group's nickname as "Los Caballeros de Cali" ("Gentlemen of Cali"). The group originally assembled as a ring of kidnappers known as "Las Chemas", which was led by Luis Fernando Tamayo García. Las Chemas were implicated in numerous kidnappings including those of two Swiss citizens: a diplomat, Herman Buff, and a student, Zack "Jazz Milis" Martin. The kidnappers reportedly received $700,000 in ransom, which is believed to have been used to fund their drug trafficking empire.
The assembled group first involved itself in trafficking marijuana. Due to the product's low profit margin and the large quantities that had to be trafficked to cover costs, the fledgling group decided to shift its focus to a more lucrative drug, cocaine. In the early 1970s, the cartel sent Hélmer Herrera to New York City to establish a distribution center, during a time when the United States Drug Enforcement Administration (DEA) viewed cocaine as less important than heroin.
The Cali Cartel's leadership comprised Gilberto Rodríguez Orejuela, Miguel Rodríguez Orejuela, José Santacruz Londoño and Hélmer Herrera. Some top associates were Victor Patiño Fomeque, Henry Loaiza Ceballos, ex-guerrilla José Fedor Rey, and Phanor Arizabaleta-Arzayus.
Organization
In the absence of a hardline policy from the DEA on cocaine, the trade flourished. The group developed and organized itself into multiple "cells" that appeared to operate independently yet reported to a "manager". The independent clandestine cell system is what set the Cali Cartel apart from the Medellín Cartel. The Cali Cartel operated as a tight group of independent criminal organizations, as opposed to the Medellín Cartel's centralised structure under leader Pablo Escobar.
The Cali Cartel eventually became "The biggest, most powerful crime syndicate we've ever known", according to then DEA chief Thomas Constantine.
Juan Carlos Saavedra represented the Cali Cartel in Spain.
Activities
Trafficking
The Cali Cartel would become known for its innovations in trafficking and production, by moving its refining operations out of Colombia to Peru and Bolivia, as well as for pioneering new trafficking routes through Panama. The Cartel also diversified into opium and was reported to have brought in a Japanese chemist to help its refining operation. The Venezuelan general Ramon Guillen Davila, who ran the Venezuelan National Guard unit that was supposed to interdict cocaine shipments and who was the CIA's most trusted narcotics asset in Venezuela, working with Mark McFarlin and Jim Campbell, was charged by United States authorities with smuggling 22 tons of Cali Cartel cocaine from 1987 to 1991 in an operation known as Operation North.
According to reports and testimony of Thomas Constantine to the United States Congress, "Cali would be the dominant group in trafficking South American heroin due to their access to the opium growing areas of Colombia." Debate over the cartel's participation in heroin trafficking remains. It is believed the cartel's leaders were not involved in heroin trading, but that close associates to them, such as Ivan Urdinola-Grajales, were, and that they cooperated with heroin distribution centers.
At the height of the Cali Cartel's reign, it was cited as having control of over 90% of the world's cocaine market and as being directly responsible for the growth of the cocaine market in Europe. By the mid-1990s, the trafficking empire of the Cali Cartel was a multibillion-dollar enterprise. In 2002, the cartel had an estimated $30 billion in profits.
In the mid-1980s, after a trip of Gilberto to Spain, the cartel began to expand its activities in Europe, developing a working relationship with tobacco smugglers from Galicia in Spain. But in particular the Cali cartel established a strategic alliance with the powerful Camorra criminal organization. Cali supplied the cocaine and the Camorra handled distribution across Europe.
Finances
In order to launder the incoming money from its trafficking operations, the Cali cartel heavily invested its funds in legitimate business ventures as well as front companies to mask the money. In 1996, it was believed the cartel was grossing $7 billion in annual revenue from the US alone. With the influx of cash comes the need to launder the funds. One of the first instances of the Cali Cartel's laundering operations came when Gilberto Rodriguez Orejuela was able to secure the position of chairman of the board of Banco de Trabajadores. The bank was believed to have been used to launder funds for the Cali cartel, as well as Pablo Escobar's Medellín Cartel. Cartel members were permitted, through their affiliation with Gilberto, to overdraw accounts and take out loans without repayment. Allegedly, Semion Mogilevich instructed Natasha Kagalovsky to wire-transfer Cali cartel funds from Bank of New York accounts through Brazilian banks to offshore shell companies.
Capitalizing on this basis, Gilberto was able to found the First InterAmericas Bank operating out of Panama. In an interview with Time, Gilberto admitted to money being laundered through the bank; however, he attributed the process to only legal actions. The laundering, which Gilberto states was "in accordance with Panamanian law", is what led to the US authorities' pursuing him. Gilberto later started, in 1979, the Grupo Radial Colombiano, a network of over 30 radio stations and a pharmaceutical chain named Drogas la Rebaja, which at its height amassed over 400 stores in 28 cities, employing 4,200. The pharmaceutical chain's value was estimated at $216 million. As a consequence of the Cali Cartel's ownership of the chain, from January 1988 to May 4, 1990, it was targeted for 85 bombings by Pablo Escobar and the Medellín Cartel, leaving a total of 27 people dead.
Russian state connections
According to Felipe Turover Chudínov, who was a senior intelligence officer with the foreign-intelligence directorate of the KGB, Russian prime minister Viktor Chernomyrdin secretly decreed in the early 1990s that Russia would become an international hub through which narcotics are trafficked, including cocaine and heroin from South America and heroin from Central Asia and Southeast Asia. Yuri Skuratov supported Turover's statements and began numerous investigations into corruption among high-ranking Russian government officials. Alexander Litvinenko provided a detailed narcotics-trafficking diagram showing relationships between Russian government officials and the Russian mafia, implicating Vladimir Putin and numerous others in an obshchak that included narcotics trafficking money. Following Operation Troika, which targeted the Tambov Gang, Spanish prosecutor José Grinda concurred and added that, to avoid prosecution, numerous indicted persons became deputies in the Russian Duma, especially with Vladimir Zhirinovsky's Liberal Democratic Party, and gained parliamentary immunity from prosecution.
St. Petersburg Immobilien und Beteiligungs AG, or SPAG, is a real estate company registered in Germany in 1992, under Putin's control and suspected by German police of facilitating Saint Petersburg mobsters, Colombian drug lords, and transcontinental money laundering. Kumarin-Barsukov, of the Tambov Russian mafia, was a partner in Znamenskaya, a subsidiary of SPAG. Vladimir Smirnov was the general director of Znamenskaya and Kumarin-Barsukov was his deputy. Through his 200 shares, or 20% control, Vladimir Smirnov was Putin's voting proxy in SPAG. Jalol Khaidarov stated that the final destination of the funds was the "Operator Trade Center" in Liechtenstein, but also said that the Bank of New York was a participant. In the early 2000s, the company's co-founder Rudolf Ritter was arrested in Liechtenstein for laundering cocaine cash for the Cali cartel. Robert Walner was the chief prosecutor in Liechtenstein's capital, Vaduz.
Former Ukrainian presidential bodyguard Nikolai Melnichenko bugged the following conversation between Ukrainian President Leonid Kuchma and his security chief Leonid Derkach about SPAG:
Leonid Derkach: Leonid Danilovich. We've got some interesting material here from the Germans. One of them has been arrested.
Leonid Kuchma (reading aloud): Ritter, Rudolf Ritter.
Leonid Derkach: Yes, and about that affair, the drug smuggling. Here are the documents. They gave them all out. Here's Vova Putin, too.
Leonid Kuchma: There's something about Putin there?
Leonid Derkach: The Russians have already been buying everything up. Here are all the documents. We're the only ones that still have them now. I think that [FSB chief] Nikolai Patrushev is coming from the 15th to the 17th. This will give him something to work with. This is what we'll keep. They want to shove the whole affair under the carpet.
Later in the conversation Derkach states that "they've bought up all these documents throughout Europe and only the rest are in our hands".
Using Israel as its base, the Russian mafia moved heroin and Colombian cocaine, sometimes through Venezuela, into Israel, where the narcotics profits were laundered, and on to Saint Petersburg, while the Russian Kurgan mafia provided security.
According to Alexander Litvinenko, Putin, while he was Deputy Mayor for Economic Affairs of St Petersburg in the early 1990s, organized a heroin-supplying ring of Afghan origin using ethnic Uzbek criminals and corrupt KGB and later FSB officers, including the Moscow-based KGB colonel Evgeny G. Khokholkov and the Izmaylovskaya Russian mafia, created by Oleg Ivanov and led by Anton Malevsky, including the mafia leaders Gafur Rakhimov; Vyacheslav Ivanov ("Yaponchik", Япончик, or "Little Japanese"), who governed Uzbek networks in America; Alimzhan Tokhtakhunov ("Taiwanchik", Тайванчик, or "Little Taiwan"), who governed Uzbek networks in Europe; and Salim Abdulaev. These networks also supplied Europe and America with cocaine from the Cali cartel. Robert Eringer, head of Monaco's Security Service, confirmed Litvinenko's file on Vladimir Putin's involvement in Europe's narcotics trade. The Izmaylovskaya mafia is closely associated with Oleg Deripaska, Andrei Bokarev, Michael Cherney, and Iskander Makhmudov through their Switzerland-based Blond Investment Corporation's MIB bank account. Rudolf Ritter in Liechtenstein was the financial manager for both SPAG and the Izmaylovskaya mafia. Alexander Afanasyev ("Afonya") was connected to both SPAG and the Izmaylovskaya mafia through his Panama-registered Earl Holding AG, for which Ritter also had signature authority, as well as Berger International Holding, Repas Trading SA and Fox Consulting. Juan Carlos Saavedra represented the Cali Cartel in Spain. In October 2015, Spanish prosecutor Jose Grinda stated that any part of "the case could be recalled back to Spain."
It was revealed in late September 2020 that Cali cartel cocaine had been transported through the Russian embassy in Argentina to Russia for many years.
Violence
Discipline
Political violence was largely discounted by the Cali Cartel, as the threat of violence often sufficed. The organization of the cartel was structured so that only people who had family in Colombia would handle operations that involved both Cali and U.S. sites, keeping the family within reach of the cartel. Family members became the cartel's insurance that its members would not assist government officials, nor would they refuse payment for products received. The threat of death also hung over those who made mistakes. It is believed the cartel would often kill junior members who made gross errors.
Social cleansing
In his book End of Millennium, Manuel Castells states that the Cali Cartel participated in the social cleansing of hundreds of desechables ("discardables"). The desechables included prostitutes, street children, petty thieves, homosexuals and the homeless.
Along with some of the locals, the Cali Cartel formed self-styled "social cleansing" groups who murdered the desechables, often leaving the bodies with signs stating "clean Cali, beautiful Cali". The bodies of those murdered were often tossed into the Cauca River, which later became known as the "River of Death". The municipality of Marsella in Risaralda was eventually bankrupted by the cost of recovering corpses and conducting autopsies.
Retaliation
In the 1980s and early 1990s, communist guerrillas struck at the drug cartels. In 1981, the then-guerrilla group Movimiento 19 de Abril (M-19; "19th of April Movement") kidnapped Marta Nieves Ochoa, the sister of the Medellín Cartel's Ochoa brothers, Jorge, Fabio and Juan David. M-19 demanded a ransom of $15 million for Marta's safe release, but was rejected. In response to the kidnapping, the Medellín and Cali cartels, as well as associated traffickers, formed the group Muerte a Secuestradores (MAS; "Death to Kidnappers"). Traffickers contributed funds, rewards, equipment and manpower for MAS operations. Leaflets announcing the formation of the group were soon dropped over a football pitch in Cali. MAS began to capture and torture M-19 members in retaliation, and within three days Marta Nieves was released. The group MAS, however, would continue to operate, with hundreds of killings attributed to it.
In 1992, the guerrilla faction Fuerzas Armadas Revolucionarias de Colombia (FARC; "Revolutionary Armed Forces of Colombia") kidnapped Christina Santa Cruz, the daughter of Cali Cartel leader José Santacruz Londoño. FARC demanded a ransom of $10 million in exchange for Christina's safe return. In response, the Cali Cartel kidnapped 20 or more members of the Colombian Communist Party, the Patriotic Union, and the United Workers Union, as well as the sister of Pablo Catatumbo, a representative of the Simón Bolívar Guerrilla Coordinating Board. Eventually, after talks, Christina and the sister of Catatumbo were released. It is unknown what happened to the other hostages taken by the cartel.
During the narco-terror war waged by Pablo Escobar on the Colombian government, it is believed a hired assassin attempted to kill Herrera while he was attending a sports event. The gunman opened fire with a machine gun on the crowd where Herrera was sitting, killing 19 people, but did not hit Herrera. Herrera is believed to have been a founding member of Los Pepes, a group which operated alongside authorities with the intention of killing or capturing Pablo Escobar.
The Cali cartel then hired Jorge Salcedo, a member of Colombia's military and a civil engineer, to help them plot an assassination of Pablo Escobar. They hired him because they had heard that Salcedo had in the past befriended and hired a group of mercenaries to wage war against left-wing guerrilla forces in an operation sanctioned by Colombia's military. The mercenary group was made up of 12 former special operations soldiers, including veterans of the British Special Air Service. Salcedo felt it was his patriotic duty, and accepted the deal to bring the mercenaries back to Colombia and help plan the operation to kill Pablo Escobar.
The group of British ex-soldiers accepted the offer. The cartel provided food, housing, and weapons to the mercenaries. The plan was to attack Escobar at his Hacienda Nápoles compound. They trained for a few months until they heard Escobar was going to be staying at the compound, celebrating the fact that his football team had won a tournament. They were to be inserted using two heavily armed Hughes 500 helicopters and surprise-attack Escobar during the early morning. The helicopters were painted to look like police helicopters to add to the confusion. They took off and headed towards the compound, but one of the helicopters crashed onto a mountainside minutes away from the compound, killing the pilot. The plan was aborted, and a rescue mission had to be conducted up the dense mountainside.
Finally, Escobar went to prison, where he continued to run his Medellín Cartel and menace rivals from his cell. The second plot to kill Escobar was to bomb the prison using a privately owned surplus A-37 Dragonfly ground-attack jet bomber. The Cali Cartel had a connection in El Salvador: a general of El Salvador's military who illegally sold them four 500-pound bombs for about half a million dollars.
Salcedo flew to El Salvador to oversee the plan to pick up the bombs and take them to an airfield, where a civilian jet would land to collect them and fly them to Colombia. But when the jet landed at the airfield, it turned out to be a small executive jet. Loading the four bombs, which was planned to take a few minutes, took over 20 minutes, and by then a crowd of curious civilians had gathered at the airfield. Only three bombs fit, stacked in the small passenger cabin. The jet took off, and Salcedo abandoned the fourth bomb and went back to his hotel. The next morning, the activities of the night before were all over the news, and Salcedo barely escaped El Salvador and arrest before the botched pickup was exposed. Law enforcement had discovered the bomb, and some of the people involved in the operation were arrested and told authorities about the plot to kill Escobar with the bombs. The Cali Cartel then decided to abort the air bombing plot.
There was no turning back for Salcedo. The Colombian government labeled him a criminal now working for the Cali Cartel, and his employers would not let him go anyway. Salcedo settled into managing security for the Orejuela family, but he was then forced to witness the execution of four Panamanians and tasked with organizing the murder of Guillermo Pallomari, the cartel's own accountant. Salcedo faced a choice: to kill, or risk being killed along with his family. He then decided to retaliate, saving Pallomari and himself by contacting the US Central Intelligence Agency and working as an informant. This proved to be the death blow to the Cali Cartel. For his service, Salcedo and his extended family were relocated to the US, and he received rewards of about $1.7 million.
Counterintelligence
The counter-intelligence capabilities of the Cali Cartel often surprised the DEA and Colombian officials. In a 1995 raid of Cali Cartel offices, it was discovered that the cartel had been monitoring all phone calls made in and out of Bogotá and Cali, including those of the U.S. Embassy in Bogotá and the Ministry of Defense. A laptop used by Londoño allowed him to eavesdrop on phone calls being made as well as analyze phone lines for wiretaps. While officials were able to discover the use of the laptop, it is reported they were unable to decrypt many of the files due to sophisticated encryption techniques. Londoño was also believed to have a person within the phone company itself, which the officials realized when he was able to recognize a phone tap that had been placed directly at the phone company, instead of at his residence. Londoño's lawyer soon sent an official notice questioning the legality of the tap and requesting the warrant, if one had been produced.
In addition to government officials and officers, the Cali Cartel payroll reportedly included 5,000 taxi drivers. The taxi drivers would allow the cartel to know who was arriving in the city and when, as well as where they were staying. By having numerous taxi drivers on the payroll, the cartel was able to monitor the movements of officials and dignitaries. Time magazine reported that in 1991, DEA and U.S. Customs Service (now ICE) agents were monitoring a shipment being offloaded in Miami, only to find out later that the DEA agents were themselves the target of Cali surveillance at the same time.
Jorge Salcedo, a member of Colombia's military, was put in charge of the cartel's intelligence and later provided security for Miguel. Ironically, he would later be crucial in helping destroy the cartel and pinpointing where Miguel was hiding. He designed and set up a large hidden radio network across the city allowing members to communicate wherever they were. The cartel also had many people inside law enforcement working for them, including a high-ranking member of the Bloque de Búsqueda (search bloc) that was looking for the Cali Cartel's top leaders. When law enforcement finally cornered Miguel inside an apartment, the double agent was there (along with other law enforcement, including two DEA agents) trying to find the secret compartment in which Miguel was hiding. Law enforcement failed to find him in time and were forced to leave the apartment, maintaining a perimeter around the building to prevent his escape. The double agent was crucial in helping Miguel escape, as he hid Miguel in his car and drove away from the scene untroubled.
Medellín Cartel relations
First InterAmericas Bank
Jorge Ochoa, a high-ranking Medellín financier, and Gilberto Rodriguez had been childhood friends and years later co-owned the Panamanian First InterAmericas Bank. The institution was later cited by United States officials as a money laundering operation, which allowed both the Cali Cartel and the Medellín Cartel to move and launder large amounts of funds. Only through diplomatic pressure on then-Panamanian dictator Manuel Noriega could the U.S. put an end to the bank's use as a money laundering front. In a Time magazine interview, Gilberto Rodriguez admitted to laundering money through the bank but noted that the process broke no Panamanian laws.
Muerte a Secuestradores
The two cartels participated in other joint ventures in later years, such as the founding of Muerte a Secuestradores (MAS), which successfully secured the return of Ochoa's kidnapped sister, Marta Nieves Ochoa. Expanding on the prior success of MAS, the cartels and independent traffickers would meet again.
The second meeting is believed to have been the start of an organization trafficking between the primary participants, the Medellín Cartel and Cali Cartel. The two cartels divided up the major United States distribution points: the Cali Cartel took New York City and the Medellín Cartel took South Florida and Miami; Los Angeles was left up for grabs.
Through their affiliation in MAS, it is also believed the cartels decided to work together to stabilize prices, production, and shipments of the cocaine market. However, the strategic alliance formed with the foundation of MAS in 1981 began to crumble by 1983–1984, due to the ease of competition. As the cartels set up infrastructure, routes, transport methods, and bribes, it became easier for competitors to either establish similar deals or make use of those already put in place by other cartels. By 1987, the cooperation forged by the formation of MAS no longer existed. Contributing to the demise was the Medellín Cartel's Rodríguez Gacha, who attempted to move in on the New York City market, previously ceded to the Cali Cartel, and the 1986 arrest of Jorge Ochoa at a police roadblock, which the Medellín Cartel deemed suspicious and attributed partly to the Cali Cartel.
Los Pepes
In later years, as Pablo Escobar's narco-terror war against the Colombian government escalated, the government began to strike back in ever-escalating battles. As the Medellín Cartel weakened due to the fighting and constant pressure, the Cali Cartel grew in strength, eventually founding Los Pepes, or Perseguidos por Pablo Escobar ("People Persecuted by Pablo Escobar"). Los Pepes was specifically formed to target the Medellín Cartel and bring about the downfall of Pablo Escobar.
It is believed Los Pepes provided information to Search Bloc, a joint police and army unit specifically created to track down Medellín leaders. In exchange for information, Los Pepes received assistance from the United States counter-terrorism unit, Delta Force, through its links to Search Bloc. By the time of Escobar's capture and eventual death in December 1993, Los Pepes had been responsible for the deaths or executions of over 60 associates or members of the Medellín Cartel. The death of Pablo Escobar led to the dismantling of the Medellín Cartel and the rise of the Cali Cartel.
Law enforcement
Seizures
While the Cali Cartel operated with a degree of immunity early on, owing to its ties to the government and the Medellín Cartel's narco-terrorism war on the Colombian government, they were still subjected to drug seizures. In 1991 alone, law enforcement agencies seized 67 tons of cocaine, 75% originating from the Cali Cartel. In total, the US Customs Service (USCS) alone had spent 91,855 case hours and 13 years in investigations against the Cali Cartel, seizing 50 tons of cocaine and $15 million in assets.
In 1991, a shipment of cocaine hidden inside concrete posts was intercepted with the aid of a drug-sniffing dog at the Miami seaport. It led to the seizure of cocaine and several arrests, beginning what the US Customs Service would dub Operation Cornerstone, which lasted 14 years. In another seizure the following year, a USCS wiretap on Harold Ackerman, whose affiliation was derived from the 1991 seizure, enabled the arrest of seven individuals and the seizure of cocaine hidden in a load of broccoli. Accounting ledgers were seized in related arrests, which allowed the identification of another shipment being sent to Panama hidden in tiles. This information was passed to Panamanian authorities and led to a further seizure.
In 1993, the US Customs Service struck again at the Cali cartel, this time seizing cocaine while pursuing Raul Marti, the only remaining member of the defunct Miami cell. It is believed these successive raids forced the cartel to funnel its shipments through Mexico; however, that did not stop the US Customs Service. Three maritime ships carrying cocaine were intercepted in 1993.
Major arrests
Between June and July 1995, the remaining six of the seven heads of the cartel were arrested. Gilberto was arrested in his home, and Henry Loaiza-Ceballos, Victor Patiño-Fomeque and Phanor Arizabaleta-Arzayus surrendered to authorities. Jose Santa Cruz Londoño was captured in a restaurant, and a month later, Miguel Rodriguez was apprehended during a raid. It is widely believed that the cartel continued to operate and run trafficking operations from within prison.
The Rodríguez brothers were extradited in 2006 to the United States and pleaded guilty in Miami, Florida, to charges of conspiracy to import cocaine into the United States. Upon their confession, they agreed to forfeit $2.1 billion in assets. The agreement, however, did not require them to cooperate in other investigations. They were solely responsible for identification of assets stemming from their cocaine trafficking. Colombian officials raided and seized the Drogas la Rebaja pharmacy chain, replacing 50 of its 4,200 workers on the grounds that they were "serving the interests of the Cali Cartel".
See also
Manuel de Dios Unanue
Narcotrafficking in Colombia
Norte del Valle Cartel
References
Bibliography
1970s establishments in Colombia
1996 disestablishments in Colombia
Organizations established in the 1970s
Organizations disestablished in 1996
Disbanded Colombian drug cartels
Transnational organized crime
Organized crime groups in the United States
Gangs in Florida
Former gangs in New York City
Organised crime groups in Spain
Cali |
342663 | https://en.wikipedia.org/wiki/BlackBerry%20Limited | BlackBerry Limited | BlackBerry Limited is a Canadian software company specializing in cybersecurity. Originally known as Research In Motion (RIM), it developed the BlackBerry brand of interactive pagers, smartphones, and tablets. It transitioned to a cybersecurity enterprise software and services company under Chief Executive Officer John S. Chen. Its products are used by various businesses, car manufacturers, and government agencies to prevent hacking and ransomware attacks. They include BlackBerry Cylance's artificial intelligence based cyber-security solutions, the BlackBerry AtHoc emergency communication system (ECS) platform; the QNX real-time operating system; and BlackBerry Enterprise Server (BlackBerry Unified Endpoint Manager), a Unified Endpoint Management (UEM) platform.
BlackBerry was founded in 1984 as Research In Motion by Mike Lazaridis and Douglas Fregin. In 1992, Lazaridis hired Jim Balsillie, and Lazaridis and Balsillie served as co-CEOs until January 22, 2012, when Thorsten Heins became president and CEO. In November 2013, John S. Chen took over as CEO. His initial strategy was to subcontract manufacturing to Foxconn and to focus on software technology; his strategy later came to include forming licensing partnerships with device manufacturers such as TCL Communication and unifying BlackBerry's software portfolio. On January 4, 2022, BlackBerry decommissioned the infrastructure and operating system used by its non-Android phones.
History
1984–2001: Early years and growth
Research In Motion Limited was founded in March 1984 by Mike Lazaridis and Douglas Fregin. At the time, Lazaridis was an engineering student at the University of Waterloo, while Fregin was an engineering student at the University of Windsor. In 1988, RIM became the first wireless data technology developer in North America and the first company outside Scandinavia to develop connectivity products for Mobitex wireless packet-switched data communications networks. In 1991, RIM introduced the first Mobitex protocol converter. In 1992, RIM introduced the first Mobitex point-of-sale solution, a protocol converter box that interfaced with existing point-of-sale terminal equipment to enable wireless communication. In 1993, RIM introduced RIMGate, the first general-purpose Mobitex X.25 gateway. In the same year, RIM launched the Ericsson Mobidem AT and an Intel wireless modem containing RIM modem firmware. In 1994, RIM introduced the first Mobitex mobile point-of-sale terminal. In the same year, RIM received an Emmy Award for Technical Innovation and the KPMG High Technology Award. In 1995, RIM introduced Freedom, the first Type II PCMCIA radio modem for Mobitex, as well as the DigiSync Film KeyKode Reader.
In 1995, RIM was financed by Canadian institutional and venture capital investors through a private placement in the privately held company. Working Ventures Canadian Fund Inc. led the first venture round with a C$5,000,000 investment with the proceeds being used to complete the development of RIM's two-way paging system hardware and software. A total of C$30,000,000 in pre-IPO financing was raised by the company prior to its initial public offering on the Toronto Stock Exchange in January 1998 under the symbol RIM.
In 1996, RIM introduced the Interactive Pager, the first two-way messaging pager, and the RIM 900 OEM radio modem. The company worked with RAM Mobile Data and Ericsson to turn the Ericsson-developed Mobitex wireless data network into a two-way paging and wireless e-mail network. Pivotal in this development was the release of the Inter@ctive Pager 950, which started shipping in August 1998. About the size of a bar of soap, this device competed against the Skytel two-way paging network developed by Motorola.
In 1999, RIM introduced the BlackBerry 850 pager. Named in reference to the resemblance of its keyboard's keys to the drupelets of the blackberry fruit, the device could receive push email from a Microsoft Exchange Server using its complementary server software, BlackBerry Enterprise Server (BES). The introduction of the BlackBerry set the stage for future enterprise-oriented products from the company, such as the BlackBerry 957 in April 2000, the first BlackBerry smartphone. The BlackBerry OS platform and BES continued to increase in functionality, while the incorporation of encryption and S/MIME support helped BlackBerry devices gain increased usage by governments and businesses. During fiscal 1999–2001, the total assets declared on RIM's balance sheet grew eight-fold due to massive capacity expansion.
2001–2011: Global expansion and competition
RIM soon began to introduce BlackBerry devices aimed towards the consumer market as well, beginning with the BlackBerry Pearl 8100, the first BlackBerry phone to include multimedia features such as a camera. The introduction of the Pearl series was highly successful, as were the subsequent Curve 8300 series and Bold 9000. Extensive carrier partnerships fuelled the rapid expansion of BlackBerry users globally in both enterprise and consumer markets.
Despite the arrival of the first Apple iPhone in 2007, BlackBerry sustained unprecedented market share growth well into 2011. The introduction of Apple's iPhone on the AT&T network in the fall of 2007 in the United States prompted RIM to produce its first touchscreen smartphone for the competing network in 2008, the BlackBerry Storm. The Storm sold well but suffered from mixed to poor reviews and poor customer satisfaction. The iPhone initially lagged behind the BlackBerry in both shipments and active users, due to RIM's head start and larger carrier distribution network. In the United States, the BlackBerry user base peaked at approximately 21 million users in the fall of 2010. That quarter, the company's global subscriber base stood at 36 million users. As the iPhone and Google Android accelerated their growth in the United States, BlackBerry users began to turn to other smartphone platforms. Nonetheless, the BlackBerry line as a whole continued to enjoy success, spurred on by strong international growth. As of December 1, 2012, the company had 79 million BlackBerry users globally, with only 9 million remaining in the United States.
Even as the company continued to grow worldwide, investors and media became increasingly alarmed about the company's ability to compete with devices from the rival mobile operating systems iOS and Android. CNN cited BlackBerry as one of six endangered US-Canadian brands. Analysts were also worried about the strategic direction of the co-CEOs' management structure. The company also lost Larry Conlee, who had served as COO of engineering and manufacturing from 2001 to 2009, to retirement. Conlee was not only key to the company's platform manufacturing strategy; he was also a pragmatic taskmaster who ensured deadlines were met and had the clout to push back when Lazaridis had unrealistic expectations.
Following numerous attempts to upgrade its existing Java platform, the company made several acquisitions to help it create a new, more powerful BlackBerry platform, centered around its recently acquired real-time operating system QNX. In March 2011, Research In Motion Ltd.'s then-co-CEO Jim Balsillie suggested during a conference call that the "launch of some powerful new BlackBerrys" (eventually released as BlackBerry 10) would come in early 2012. However, analysts were "worried that promoting the mysterious, supposedly game-changing devices too early might hurt sales of existing BlackBerrys" (similar to the Osborne effect). The initial launch date was seen in retrospect as too ambitious, and it hurt the company's credibility at a time when its existing, aging products steadily lost market share.
On September 27, 2010, RIM announced the long-rumoured BlackBerry PlayBook tablet, the first product running on the new QNX platform known as BlackBerry Tablet OS. The BlackBerry PlayBook was officially released to U.S. and Canadian consumers on April 19, 2011. The PlayBook was criticized for being rushed to market in an incomplete state and sold poorly. Following the shipments of 900,000 tablets during its first three quarters on market, slow sales and inventory pileups prompted the company to reduce prices and to write down the inventory value by $485 million.
Primary competition
The primary competitors of the BlackBerry are smartphones running Android and the Apple iPhone. For a number of years, the BlackBerry was the leading smartphone in many markets, particularly the United States. The arrival of the Apple iPhone and later Google's Android platform caused a slowdown in BlackBerry growth and a decline in sales in some markets, most notably the United States. This led to negative media and analyst sentiment over the company's ability to continue as an independent company.
When Apple's iPhone was first introduced in 2007, it generated substantial media attention, with numerous media outlets calling it a "BlackBerry killer". While BlackBerry sales continued to grow, the newer iPhone grew at a faster rate and the 87 percent drop in BlackBerry's stock price between 2010 and 2013 is primarily attributed to the performance of the iPhone handset.
The first three models of the iPhone (introduced in 2007) generally lagged behind the BlackBerry in sales, as RIM had major advantages in carrier and enterprise support; however, Apple continued gaining market share. In October 2008, Apple briefly passed RIM in quarterly sales when they announced they had sold 6.9 million iPhones to the 6.1 million sold by RIM, comparing partially overlapping quarters between the companies. Though Apple's iPhone sales declined to 4.3 million in the subsequent quarter and RIM's increased to 7.8 million, for some investors this indicated a sign of weakness. Apple's iPhone began to sell more phones quarterly than the BlackBerry in 2010, brought on by the release of the iPhone 4.
In the United States, the BlackBerry hit its peak in September 2010, when almost 22 million users, or 37% of the 58.7 million American smartphone users at the time, were using a BlackBerry. BlackBerry then began to decline in use in the United States, with Apple's installed base in the United States finally passing BlackBerry in April 2011. Sales of the iPhone continued to accelerate, as did the smartphone market, while the BlackBerry began to lose users continuously in the United States. By February 2016, only 1.59 million (0.8%) of the 198.9 million smartphone users in the United States were running BlackBerry compared to 87.32 million (43.9%) on an iPhone.
Google's Android mobile operating system, running on hardware by a range of manufacturers including Sony, Motorola, HTC, Samsung, LG and many others ramped up the competition for BlackBerry. In January 2010, barely 3 million (7.1%) of the 42.7 million Smartphones in use at the time in the United States were running Android, compared to 18 million BlackBerry devices (43%). Within a single year Android had passed the installed base of the BlackBerry in the United States. By February 2016, only 1.59 million (0.8%) of the 198.9 million smartphone users in the United States were running BlackBerry compared to 104.82 million (52.7%) running Android.
While RIM's secure encrypted network was attractive to corporate customers, its handsets were sometimes considered less attractive to consumers than iPhone and Android smartphones. Developers often built consumer applications for those platforms rather than for the BlackBerry. During the 2010s, even enterprise customers began to adopt BYOD policies due to employee feedback. The company also faced criticism that its hardware and operating system were outdated and unappealing compared to the competition, and that its browsing capabilities were poorer.
2011–2015: Strategic changes and restructuring
Slowing growth prompted the company to undertake a lay-off of 2,000 employees in the summer of 2011. In September 2011, the company's BlackBerry Internet Service suffered a massive outage, impacting millions of customers for several days. The outage embarrassingly occurred as Apple prepared to launch the iPhone 4S, causing fears of mass defections from the platform.
Shortly afterwards, in October 2011, RIM unveiled BBX, a new platform for future BlackBerry smartphones that would be based on the same QNX-based platform as the PlayBook. However, due to an accusation of trademark infringement regarding the name BBX, the platform was renamed BlackBerry 10. The task proved to be daunting, with the company delaying the launch in December 2011 to some time in 2012. On January 22, 2012, Mike Lazaridis and Jim Balsillie resigned as the CEOs of the company, handing the reins over to executive Thorsten Heins. On March 29, 2012, the company reported its first net loss in years. Heins set about the task of restructuring the company, including announcing plans to lay off 5,000 employees, replacing numerous executives, and delaying the new QNX-based operating system for phones ("BlackBerry 10") a second time into January 2013.
BlackBerry 10
After much criticism and numerous delays, RIM officially launched BlackBerry 10 and two new smartphones based on the platform, the BlackBerry Z10 and Q10, on January 30, 2013. The BlackBerry Z10, the first BlackBerry smartphone running BlackBerry 10, debuted worldwide in January 2013, going on sale immediately in the U.K. with other countries following. A marked departure from previous BlackBerry phones, the Z10 featured a fully touch-based design, a dual-core processor, and a high-definition display. BlackBerry 10 had 70,000 applications available at launch, which the company expected would rise to 100,000 by the time the device made its debut in the United States. In support of the launch, the company aired its first Super Bowl television advertisement in the U.S. and Canada during Super Bowl XLVII. In discussing the decision to create a proprietary operating system instead of adopting an off-the-shelf platform such as Android, Heins noted, "If you look at other suppliers' ability to differentiate, there's very little wiggle room. We looked at it seriously, but if you understand what the promise of BlackBerry is to its user base, it's all about getting stuff done. Games, media, we have to be good at it but we have to support those guys who are ahead of the game. Very little time to consume and enjoy content; if you stay true to that purpose you have to build on that basis. And if we want to serve that segment we can't do it on a me-too approach." Chief Operating Officer Kristian Tear remarked, "We want to regain our position as the number one in the world", while Chief Marketing Officer Frank Boulben proclaimed, "It could be the greatest comeback in tech history. The carriers are behind us. They don't want a duopoly" (referring to Apple and Samsung).
During the BlackBerry 10 launch event, the company also announced that it would change its public brand from Research In Motion to BlackBerry. The name change was made to "put the BlackBerry brand at the centre" of the company's diverse brands, and because customers in some markets "already know the company as BlackBerry". While a shareholder vote on an official name change to BlackBerry Limited would be held at the next annual general meeting, the company's ticker symbols on the TSX and NASDAQ changed to BB and BBRY respectively on February 4, 2013.
On August 12, 2013, the company announced that it was open to being purchased and stated in an official news release to Canada's securities administrators:
The company’s board of directors has formed a special committee to explore strategic alternatives to enhance value and increase scale in order to accelerate BlackBerry 10 deployment. These alternatives could include, among others, possible joint ventures, strategic partnerships or alliances, a sale of the Company or other possible transactions.
Prem Watsa/Fairfax deal
Canada Pension Plan Investment Board CEO Mark Wiseman stated that he would consider investing in BlackBerry if the company became private. Also on August 12, 2013, Prem Watsa, whose Fairfax Financial Holdings was BlackBerry's largest shareholder, resigned from BlackBerry's board.
On September 20, 2013, the company announced it would lay off 4,500 staff and take a CAD$1 billion operating loss. Three days later, the company announced that it had signed a letter of intent to be acquired by a consortium led by Prem Watsa-owned Fairfax Financial Holdings for a $9 per share deal. This deal was also confirmed by Watsa.
On September 29, 2013, the company began operating a direct sales model for customers in the United States, where unlocked Q10 and Z10 smartphones were sold directly from the BlackBerry website. On October 15, 2013, the company published an open letter in 30 publications in nine countries to reassure customers that BlackBerry would continue to operate. Anthony Michael Sabino, St. John's University business professor, stated in the Washington Post: "This is BlackBerry’s last-ditch attempt to simply survive in the face of crushing competition in a market it essentially invented."
John Chen joins BlackBerry
On November 4, 2013, the Prem Watsa/Fairfax deal was scrapped in favor of a US$1 billion cash injection which, according to one analyst, represented the level of confidence BlackBerry's largest shareholder had in the company. At the same time, BlackBerry installed John Chen as CEO to replace the outgoing Heins. According to the Globe and Mail, BlackBerry's hope was that Chen, with his reputation as a turnaround artist, could save the company.
"John Chen knows how to manage a mobile company, and perhaps most importantly, can make things happen in the industry," J. Gold Associates Principal Analyst Jack Gold told the publication.
"We have begun moving the company to embrace a multi-platform, BYOD world by adopting a new mobility management platform and a new device strategy," Chen explained in an open letter published shortly after his appointment. "I believe in the value of this brand. With the right team and the right strategy in place, I am confident that we will rebuild BlackBerry for the benefit of all our constituencies."
In April 2014, Chen spoke of his turnaround strategy in an interview with Reuters, explaining that he intended to invest in or partner with other companies in regulated industries such as healthcare, financial, and legal services. He later clarified that BlackBerry's device division remained part of his strategy and that his company was also looking to invest in "emerging solutions such as machine to machine technologies that will help power the backbone of the Internet of Things." He would later expand on this idea at a BlackBerry Security Summit in July 2014.
In May 2014, the low-cost BlackBerry Z3 was introduced onto the Indonesian market, where the brand had been particularly popular. The budget handset was produced in partnership with Taiwanese manufacturer Foxconn Technology Group, which handled the design and distribution of the product. A New York Times analysis stated that the model was an attempt by Chen to generate revenue while he tried "to shift the organization’s focus to services and software." An analyst with London's ABI Research said: "John Chen is just sustaining the handset business as he sorts out the way ahead." As part of the localization effort for the promotion of the Z3, the handset's back panel was engraved with the word "Jakarta", but skepticism still emerged, as the handset was still more than twice as expensive as Android models in Indonesia at the time of release.
2015–present: Software transition
In the first quarter of the 2015 fiscal year, Chen stated: "This is, of course, the very beginning of our task and we hope that we will be able to report better results going forward ... We feel pretty good about where we are." Quartz reported that stock was up by 30 percent, compared to the same period in the previous year, while Chen expressed enthusiasm for the release of two new handsets, both with keyboards and touch screens, in the second half of 2014. Chen did not provide sales figures for the Z3 phone in Indonesia.
In September 2015, Chen unveiled the BlackBerry Priv, a keyboard-slider smartphone utilizing the Android operating system with BlackBerry-developed software enhancements, including a secure bootloader, full-disk encryption, integrity protection, and the BlackBerry HUB.
In 2020, BlackBerry signed a new licensing agreement for smartphones with the US-based startup company, OnwardMobility. The company never released a device before shutting down in 2022.
As of June 2021, Cybersecurity ($107 million) and IoT ($43 million) revenue accounted for a combined 86% of Q1 2022 earnings ($174 million). Chen reiterated: "Now, we are pivoting the organization more heavily toward the market by creating two business units, Cybersecurity and IoT ... we will provide revenue and gross margin by business unit as well as other selected metrics. We believe that this additional color will help investor gain better understanding of the underlying performance of the business units, ultimately driving shareholder value."
Strategic acquisitions
During this time, BlackBerry also expanded its software and services offerings with several key acquisitions. These included file security firm WatchDox, crisis communications leader AtHoc, and rival EMM vendor Good Technology. The products offered by these firms were gradually re-branded and integrated into BlackBerry's own portfolio.
Trefis, an analyst team and Forbes contributor, called Good "a nice strategic fit for BlackBerry's software business", noting that the acquisition would "help improve BlackBerry's cross-platform EMM support and bring in a relatively large and diverse customer base, while also helping drive incremental revenue growth". It also noted that the acquisition – the largest in BlackBerry's history – indicated the company's commitment to a software-focused turnaround plan. It remained ambivalent about the company's outlook overall.
In January 2016, Chen stated that BlackBerry did not plan on developing any new devices running BlackBerry 10 and that the company would release two new Android devices at most during 2016. BlackBerry also announced the release of the Good Secure EMM Suites, consolidating WatchDox and Good Technology's products into several tiered offerings alongside its existing software.
Hardware licensing partnerships
BlackBerry announced the DTEK50, a mid-range Android smartphone, on July 26, 2016. Unlike the Priv, the DTEK50 was a re-branded version of an existing smartphone, the Alcatel Idol 4, as manufactured by TCL Corporation, one of the company's hardware partners. It was to be the second-last phone developed in-house at BlackBerry, followed by the DTEK60 in October 2016. On September 28, 2016, BlackBerry announced that it would cease in-house hardware development to focus on software, delegating development, design, and manufacturing of its devices to third-party partners.
The first of these partners was BB Merah Putih, a joint venture in Indonesia. Chen stated that the company was "no longer just about the smartphone, but the smart in the phone". On December 15, 2016, BlackBerry announced that it had reached a long-term deal with TCL to continue producing BlackBerry-branded smartphones for sale outside of Bangladesh, India, Indonesia, Nepal, and Sri Lanka. This partnership was followed by an agreement with Optiemus Infracom on February 6, 2017 to produce devices throughout India and neighbouring markets including Sri Lanka, Nepal, and Bangladesh.
Since the partnerships were announced, TCL has released the BlackBerry KEYone and BB Merah Putih has released the BlackBerry Aurora.
Cybersecurity consulting
In February 2016, BlackBerry acquired UK-based cybersecurity firm Encription, with the intention of branching out into the security consulting business. It later released BlackBerry SHIELD, an IT risk assessment program for its corporate clients. In April 2017, BlackBerry's cybersecurity division partnered with Allied World Assurance Company Holdings, a global insurance and reinsurance provider. This agreement saw BlackBerry's SHIELD self-assessment tool integrated into Allied World's FrameWRX cyber risk management solution.
BlackBerry Secure
On December 8, 2016, BlackBerry announced the release of BlackBerry Secure. Billed as a "comprehensive mobile security platform for the Enterprise of Things", BlackBerry Secure further deepens the integration between BlackBerry's acquisitions and its core portfolio. According to Forbes, it brings all of BlackBerry's products "under a single umbrella".
On February 7, 2017, BlackBerry announced the creation of the BBM Enterprise SDK, a Communication-Platform-as-a-Service development tool. The Enterprise SDK allows developers to incorporate BBM Enterprise's messaging functionality into their applications. It was released to BlackBerry's partners on February 21, 2017, and officially launched on June 12, 2017.
Also in February 2017, analyst firm 451 Research released a report on BlackBerry's improved financial position and product focus. The report identified BlackBerry's position in the Internet of Things and its device licensing strategy as strengths. The BBM Enterprise SDK was also highlighted, alongside several challenges still facing the company.
Financials
Until 2013, the number of active BlackBerry users increased over time.
For the fiscal period in which the Apple iPhone was first released (in 2007), RIM reported a user base of 10.5 million BlackBerry subscribers.
At the end of 2008, when Google Android first hit the market, RIM reported that the number of BlackBerry subscribers had increased to 21 million.
In the fourth quarter of the fiscal year ended March 3, 2012, RIM shipped 11.1 million BlackBerry smartphones, down 21 percent from the previous quarter and the first decline in the quarter covering Christmas since 2006. For its fourth quarter, RIM announced a net loss of US$125 million (the last loss before this occurred in the fourth quarter of the fiscal year 2005). RIM's loss of market share accelerated in 2011, due to the rapidly growing sales of Samsung and HTC Android handsets; RIM's annual market share in the U.S. dropped to just 3 percent, from 9 percent.
In the quarter ended June 28, 2012, RIM announced that the number of BlackBerry subscribers had reached 78 million globally. Furthermore, RIM reported its first quarter revenue for the 2013 fiscal year, showing that the company incurred a GAAP net loss of US$518 million for the quarter, and announced a plan to implement a US$1 billion cost-saving initiative. The company also announced the delay of the new BlackBerry 10 OS until the first quarter of 2013.
After the release of the Apple iPhone 5 in September 2012, RIM CEO Thorsten Heins announced that the number of global users was up to 80 million, which sparked a 7% jump in shares. On December 2, 2012, the company reported a decline in revenue of 5% from the previous quarter and 47% from the same period the previous year. The company reported a GAAP profit of US$14 million (adjusted net loss of US$115 million), which was an improvement over previous quarters. The company also grew its cash reserves by nearly US$600 million during the quarter, to US$2.9 billion. The global subscriber base of BlackBerry users declined slightly for the first time, to 79 million, after peaking at an all-time high of 80 million the previous quarter.
In September 2013, the company announced that its growing BBM instant messaging service would be made available for Android and iPhone devices. BlackBerry stated that the service had 60 million monthly active customers who sent and received more than 10 billion messages a day. The "BBM Channels" enhancement was expected in late 2013, whereby conversations would be facilitated between users and communities, based on factors such as common interests, brands, and celebrities.
On September 28, 2013, media reports confirmed that BlackBerry lost US$1.049 billion during the second fiscal quarter of 2013. In the wake of the loss, Heins stated: "We are very disappointed with our operational and financial results this quarter and have announced a series of major changes to address the competitive hardware environment and our cost structure."
Between 2010 and 2013, the stock price of the company dropped by 87 percent due to the widespread popularity of the iPhone. Goldman Sachs estimated that, in June 2014, BlackBerry accounted for 1 percent share of smartphone sales, compared to a peak of around 20 percent in 2009.
With the release of its financial results for the first fiscal quarter of 2015 in June 2014, Chen presented a more stable company that had incurred a lower amount of loss than previous quarters. The New York Times described "a smaller-than-expected quarterly loss", based on the June 19, 2014 news release:
Revenue for the first quarter of fiscal 2015 was $966 million, down $10 million or 1% from $976 million in the previous quarter ... During the first quarter, the Company recognized hardware revenue on approximately 1.6 million BlackBerry smartphones compared to approximately 1.3 million BlackBerry smartphones in the previous quarter.
Ian Austin of the New York Times provided further clarity on BlackBerry's news release: "Accounting adjustments enabled BlackBerry to report a $23 million, or 4 cents a share, profit for its last quarter. Without those noncash charges, however, the company lost $60 million, or 11 cents a share, during the period." Following the news release, Chen stated that BlackBerry is comfortable with its position, and it is understood that his plan for the company mainly involves businesses and governments, rather than consumers.
Organizational changes
Leadership changes
The company was often criticized for its dual CEO structure. Under the original organization, Mike Lazaridis oversaw technical functions, while Jim Balsillie oversaw sales/marketing functions. Some saw this arrangement as a dysfunctional management structure and believed RIM acted as two companies, slowing the effort to release the new BlackBerry 10 operating system.
On June 30, 2011, an investor push for the company to split its dual-CEO structure was unexpectedly withdrawn after an agreement was made with RIM. RIM announced that after discussions between the two groups, Northwest & Ethical Investments would withdraw its shareholder proposal before RIM's annual meeting.
On January 22, 2012, RIM announced that its CEOs Balsillie and Lazaridis had stepped down from their positions. They were replaced by Thorsten Heins. Heins hired investment banks RBC Capital Markets and JP Morgan to seek out potential buyers interested in RIM, while also redoubling efforts on releasing BlackBerry 10.
On March 29, 2012, RIM announced a strategic review of its future business strategy that included a plan to refocus on the enterprise business and leverage its leading position in the enterprise space. Heins noted, "We believe that BlackBerry cannot succeed if we tried to be everybody's darling and all things to all people. Therefore, we plan to build on our strength." Balsillie resigned from the board of directors in March 2012, while Lazaridis remained on the board as vice chairman.
Following the assumption of his role as CEO, Heins made substantial changes to the company's leadership team. Changes included the departures of Chief Technology Officer David Yach; Chief Operating Officer Jim Rowan; Senior Vice President of Software Alan Brenner; Chief Legal Officer Karima Bawa; and Chief Information Officer Robin Bienfait.
Following the leadership changes, Heins hired Kristian Tear to assume the role of Chief Operating Officer, Frank Boulben to fill the Chief Marketing Officer role and appointed Dan Dodge, the CEO of QNX, to take over as Chief Technology Officer. On July 28, 2012, Steven E. Zipperstein was appointed as the new Vice President and Chief Legal Officer.
On March 28, 2013, Lazaridis relinquished his position as vice chairman and announced his resignation from the board of directors. Later in the year, Heins was replaced by John S. Chen, who assumed the CEO role in the first week of November. Chen's compensation package consisted mainly of BlackBerry shares—a total of 13 million—to which he would become fully entitled after serving the company for five years. Heins received an exit package of $22 million.
Chen has a reputation as a "turnaround" CEO, turning the struggling enterprise software and services organization Sybase into enough of a success to sign a merger with SAP in 2010. Chen was open about his plans for BlackBerry upon joining the company, announcing his intent to move away from hardware manufacturing to focus on enterprise software such as QNX, BlackBerry UEM, and AtHoc. He has firm views on net neutrality and lawful access, and has been described by former colleagues as a "quick thinker who holds people accountable".
Workforce reductions
In June 2011, RIM announced its prediction that Q1 2011 revenue would drop for the first time in nine years, and also unveiled plans to reduce its workforce.
In July 2011, the company cut 2,000 jobs, the biggest lay-off in its history and the first major layoff since November 12, 2002 when the company laid off 10% of its workforce (200 employees). The lay-off reduced the workforce by around 11%, from 19,000 employees to 17,000.
On June 28, 2012, the company announced a planned workforce reduction of 5,000 by the end of its fiscal 2013, as part of a $1 billion cost savings initiative.
On July 25, 2013, 250 employees from BlackBerry's research and development department and new product testing were laid off. The layoffs were part of the turnaround efforts.
On September 20, 2013, BlackBerry confirmed that it would lay off 4,500 employees by the end of 2013, approximately 40 percent of the company's workforce.
At its peak, BlackBerry had about 20,000 employees. After CEO John Chen joined BlackBerry in 2013, further layoffs followed, including a round in February 2015 intended to keep the company competitive in smartphones, leaving a total of 6,225 employees. On July 21, 2015, BlackBerry announced an additional layoff of an unspecified number of employees, with another 200 laid off in February 2016.
As of August 2017, the company had 4,044 employees.
Stock fluctuations
In June 2011, RIM stock fell to its lowest point since 2006. On December 16, 2011, RIM shares fell to their lowest price since January 2004. Overall in 2011, the share price tumbled 80 percent from January to December, causing its market capitalization to fall below book value. By March 2012, shares were worth less than $14, from a high of over $140 in 2008. From June 2008 to June 2011, RIM's shareholders lost almost $70 billion, or 82 percent, as the company's market capitalization dropped from $83 billion to $13.6 billion, the biggest decline among communications-equipment providers.
The share price fell further on July 16, closing at $7.09 on the Toronto Stock Exchange, the lowest level since September 8, 2003, after a jury in California said RIM must pay $147.2 million as a result of a patent infringement judgment that was subsequently overturned.
On November 22, 2012, shares of RIM/BlackBerry surged 18%, the largest gain of the stock in over three years. This was due to National Bank of Canada analyst Kris Thompson's announcement that the new BB10 devices were expected to sell better than anticipated; along with raising the target stock price.
On June 28, 2013, after BlackBerry announced net losses of approximately $84 million, its shares plunged 28%.
On April 12, 2017, shares surged more than 19% as BlackBerry won an arbitration case against Qualcomm. It was decided that BlackBerry had been overpaying the company in royalty payments, and BlackBerry was awarded $814.9 million.
Rumoured Samsung buyout
On January 14, 2015, in the final hour of trading in U.S. markets, Reuters reported that Samsung was in talks with BlackBerry to buy the latter for between $13.35 and $15.49 per share. The article caused shares of BlackBerry to rally 30%. Later that evening, BlackBerry issued a press release denying the media reports. Samsung responded, saying that the report was "groundless".
Mobile OS transition
BlackBerry OS (Java)
The existing Java-based BlackBerry OS was designed for much simpler conditions, such as low-powered devices, narrow network bandwidth, and high-security enterprise use. However, as the needs of the mobile user evolved, the aging platform struggled with emerging trends like mobile web browsing, consumer applications, multimedia, and touch screen interfaces. Users could experience performance issues, usability problems, and instability.
The company tried to enhance the aging platform with a better web browser, faster performance, a bundled application store and various touch screen enhancements, but ultimately decided to build a new platform with QNX at its core. While most other operating systems are monolithic, meaning the malfunction of one component can cause the whole system to crash, QNX is more stable because it is built from independent building blocks, or services, running on a microkernel, which prevents a domino effect if one component breaks. RIM's final major OS release before BlackBerry 10 was BlackBerry 7, which was often criticized as dated and referred to as a temporary stopgap.
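The practical consequence of this design is that drivers and services run as ordinary processes and communicate through kernel-mediated message passing rather than sharing one address space. The following sketch is illustrative only: it assumes the standard QNX Neutrino ChannelCreate/MsgSend/MsgReceive calls from <sys/neutrino.h>, and the message layout, error handling, and the way the client discovers the server are simplified for the example rather than taken from any BlackBerry product.

/* Illustrative sketch of QNX-style synchronous message passing.
   A server process creates a channel and replies to requests; a client
   attaches to that channel and blocks in MsgSend() until the reply arrives.
   If the server crashes, only the client's MsgSend() fails; the rest of the
   system keeps running, which is the point of the microkernel design. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/neutrino.h>   /* ChannelCreate, MsgReceive, MsgReply, MsgSend */
#include <sys/netmgr.h>     /* ND_LOCAL_NODE */

struct request { char text[64]; };
struct reply   { char text[64]; };

/* Server side: create a channel and loop, echoing each request back.
   In a real system the channel would be advertised through a name
   service (for example name_attach()) so that clients can find it. */
static void serve(void) {
    int chid = ChannelCreate(0);                 /* kernel-managed channel */
    struct request req;
    for (;;) {
        int rcvid = MsgReceive(chid, &req, sizeof(req), NULL);
        if (rcvid < 0) continue;                 /* skip errors in this sketch */
        struct reply rep;
        snprintf(rep.text, sizeof(rep.text), "echo: %s", req.text);
        MsgReply(rcvid, 0, &rep, sizeof(rep));   /* unblocks the sender */
    }
}

/* Client side: attach to the server's channel (pid and chid assumed known
   here) and perform one blocking send/receive round trip. */
static int ask(pid_t server_pid, int chid) {
    int coid = ConnectAttach(ND_LOCAL_NODE, server_pid, chid, _NTO_SIDE_CHANNEL, 0);
    if (coid < 0) return -1;
    struct request req;
    struct reply rep;
    strcpy(req.text, "hello");
    if (MsgSend(coid, &req, sizeof(req), &rep, sizeof(rep)) < 0) {
        ConnectDetach(coid);
        return -1;                               /* server gone: only this call fails */
    }
    printf("%s\n", rep.text);
    ConnectDetach(coid);
    return 0;
}

Because each service owns its own address space and talks to others only through such kernel-mediated messages, a failed driver or service can in principle be restarted without bringing down the rest of the system, which is the property described above.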
BlackBerry Tablet OS (QNX)
The BlackBerry PlayBook was the first RIM product whose BlackBerry Tablet OS was built on QNX, launched in April 2011 as an alternative to the Apple iPad. However, it was criticized for having incomplete software (it initially lacked native email, calendaring and contacts) and a poor app selection. It fared poorly until prices were substantially reduced, like most other tablet computers released that year (Android tablets such as the Motorola Xoom and Samsung Galaxy Tab, and the HP TouchPad). The BlackBerry Tablet OS received a major update in February 2012, as well as numerous minor updates.
BlackBerry 10 (QNX)
BlackBerry 10, a substantially updated version of BlackBerry Tablet OS intended for the next generation of BlackBerry smartphones, was originally planned for release in early 2012. The company delayed the product several times, mindful of the criticism faced by the BlackBerry PlayBook launch and citing the need for it to be perfect in order to stand a chance in the market. The most recent model with this OS was the BlackBerry Leap.
Android
In September 2015, BlackBerry announced the Priv, a handset running Android 5.1.1 "Lollipop" (and compatible with an upgrade to Android Marshmallow). It is the first phone by the company not to run an in-house built operating system.
BlackBerry's Android build is almost stock Android, with the company's own tweaks to improve productivity and security. BlackBerry has implemented some of the features of BlackBerry 10 within Android, such as the BlackBerry Hub, BlackBerry Virtual Keyboard, BlackBerry Calendar, and BlackBerry Contacts apps.
On April 1, 2016, BlackBerry reported that it sold 600,000 phones in its fiscal fourth quarter, amid expectations of 750,000–800,000 handset sales for the first full quarter of reporting since the Priv's release.
On July 26, 2016, a new, mid-range model with only an on-screen keyboard was introduced, the unusually slim BlackBerry DTEK50, powered by the then latest version of Android (6.0, Marshmallow) and featuring a 5.2-inch full high-definition display. BlackBerry chief security officer David Kleidermacher stressed data security during the launch, indicating that this model included built-in malware protection and encryption of all user information. By then, the BlackBerry Classic, which used the BlackBerry 10 OS, had been discontinued.
In July 2016, industry observers expected the company to announce two additional smartphones over the subsequent 12 months, presumably also with the Android OS. However, BlackBerry COO Marty Beard told Bloomberg that "The company's never said that we would not build another BB10 device."
At MWC Barcelona 2017, TCL announced the BlackBerry KEYone, the last phone designed in-house by BlackBerry.
Acquisitions
Through the years, particularly as the company evolved towards its new platform, BlackBerry has made numerous acquisitions of third-party companies and technology.
Slipstream Data Inc.
Slipstream Data Inc was a network optimization/data compression/network acceleration software company. BlackBerry acquired the company as a wholly owned subsidiary on July 11, 2006. The company continues to operate out of Waterloo.
Certicom
Certicom Corp. is a cryptography company founded in 1985 by Gordon Agnew, Ron Mullin and Scott Vanstone.
The Certicom intellectual property portfolio includes over 350 patents and patents pending worldwide that cover key aspects of elliptic-curve cryptography (ECC).
The National Security Agency (NSA) has licensed 26 of Certicom's ECC patents as a way of clearing the way for the implementation of elliptic curves to protect U.S. and allied government information.
Certicom's current customers include General Dynamics, Motorola, Oracle, Research In Motion and Unisys.
On January 23, 2009, VeriSign entered into an agreement to acquire Certicom. Research In Motion put in a counter-offer, which was deemed superior. VeriSign did not match this offer, and so Certicom announced an agreement to be acquired by RIM. Upon the completion of this transaction, Certicom became a wholly owned subsidiary of RIM, and was de-listed from the Toronto Stock Exchange on March 25, 2009.
Dash Navigation
In June 2009, RIM announced they would acquire Dash Navigation, makers of the Dash Express.
Torch Mobile
In August 2009, RIM acquired Torch Mobile, developer of the Iris Browser, enabling the inclusion of a WebKit-based browser on its BlackBerry devices. This became the web browser in the subsequent Java-based operating systems (BlackBerry 6 and BlackBerry 7) and in the QNX-based operating systems (BlackBerry Tablet OS and BlackBerry 10). The first product to contain this browser, the BlackBerry Torch 9800, was named after the company.
DataViz
On September 8, 2010, DataViz, Inc. sold their office suite Documents To Go and other assets to Research In Motion for $50 million. Subsequently, the application which allows users to view and edit Microsoft Word, Microsoft Excel and Microsoft PowerPoint was bundled on BlackBerry Smartphones and tablets.
Viigo
On March 26, 2010, the company announced its acquisition of Viigo, a Toronto-based company that developed the popular Viigo for BlackBerry applications, which aggregated news content from around the web. Terms of the deal were not disclosed.
QNX
RIM reached an agreement with Harman International on April 12, 2010, for RIM to acquire QNX Software Systems. The acquired company was to serve as the foundation for the next generation BlackBerry platform that crossed devices. QNX became the platform for the BlackBerry PlayBook and BlackBerry 10 Smartphones.
The Astonishing Tribe
The Astonishing Tribe (TAT), a user interface design company based in Malmö, Sweden, was acquired by the company on December 2, 2010. With a history of creating user interfaces and applications for mobile, TAT contributed heavily to the user experience of BlackBerry 10 as well as the development of its GUI framework, Cascades.
JayCut
In July 2011, RIM acquired JayCut, a Sweden-based company that developed an online video editor. JayCut technology was incorporated into the media software of BlackBerry 10.
Paratek Microwave
In March 2012, RIM acquired Paratek Microwave, bringing their adaptive RF Tuning technology into BlackBerry handsets.
Tungle.me
On September 18, 2012, it was announced that the RIM social calendaring service, Tungle.me would be shut down on December 3, 2012. RIM acquired Tungle.me in April 2011.
Newbay
In July 2011, RIM acquired NewBay, an Ireland-based company that provided online video, photo, and media tools for network operators. RIM subsequently sold NewBay to Synchronoss in December 2012 for $55.5 million.
Scoreloop
On June 7, 2011, Scoreloop was acquired by BlackBerry for US$71 million. It provided tools for adding social elements to any game (achievements/rewards etc.) and was central to the BlackBerry 10's Games app. On December 1, 2014, all Scoreloop services were shut down.
Gist
Gist was acquired in February 2011, by BlackBerry. Gist is a tool that helps users to organise and view all their contacts in one place. Gist's services closed down on September 15, 2012, in order for the company to focus on BlackBerry 10.
Scroon
BlackBerry Ltd. acquired Scroon in May 2013. The French startup manages Facebook, Twitter, and other social-media accounts for large clients such as luxury-goods maker LVMH Moët Hennessy Louis Vuitton SA, wireless operator Orange SA, and Warner Bros. Entertainment. The deal was publicly announced in November 2013. According to Scroon founder Alexandre Mars, the purchase had not been disclosed earlier because of the "delicate media buzz" around BlackBerry. Scroon is part of BlackBerry's strategy to profit from the BlackBerry Messenger instant-messaging service by utilizing the newly unveiled BBM Channels. Financial terms were not disclosed.
Movirtu
Movirtu was acquired in September 2014, by BlackBerry. Movirtu is a U.K. startup that allows multiple phone numbers to be active on a single mobile device. At the time of the acquisition BlackBerry announced they would expand this functionality beyond BlackBerry 10 to other mobile platforms such as Android and iOS.
Secusmart
Secusmart was acquired in September 2014. The acquisition of the German company was one of the steps taken to position BlackBerry as the most secure provider in the mobile market. Secusmart had an agreement to equip the German government with highly secure mobile devices that encrypt voice as well as data on BlackBerry 10 devices. Those phones have been used by Angela Merkel and most ministers, as well as several departments and the parliament.
WatchDox
WatchDox was an Israel-based Enterprise File Synchronization and Sharing company which specialized in securing access to documents on a cloud basis. BlackBerry acquired the company in April 2015. On December 8, 2016, BlackBerry renamed WatchDox to BlackBerry Workspaces.
In August 2019, BlackBerry closed down its Israel development center.
AtHoc
On July 22, 2015, BlackBerry announced that it had acquired AtHoc, a provider of secure, networked emergency communications.
Good Technology
On September 4, 2015, BlackBerry announced the acquisition of mobile security provider Good Technology for $425 million. On December 8, 2016, it rebranded Good's products and integrated them into the BlackBerry Enterprise Mobility Suite, a set of tiered software offerings for its enterprise customers.
Encription
On February 24, 2016, BlackBerry acquired UK-based cyber security consultancy Encription.
Cylance
On November 16, 2018, Cylance was purchased for US$1.4 billion by BlackBerry Limited in an all-cash deal. The technology behind Cylance would enable BlackBerry to add artificial intelligence capabilities to its existing software products for IoT applications and other services. Cylance would run as a separate division within BlackBerry Limited's operations.
Software
BlackBerry Unified Endpoint Manager (UEM)
An Enterprise Mobility Management platform that provides provisioning and access control over smartphones, tablets, laptops, and desktops, with support for all major platforms including iOS, Android (including Android for Work and Samsung KNOX), BlackBerry 10, Windows 10, and macOS. UEM (formerly known as BES) also acts as a unified management console and server for BlackBerry Dynamics, BlackBerry Workspaces, and BlackBerry 2FA.
BlackBerry Dynamics (Formerly Good Dynamics)
A Mobile Application Management platform that manages and secures app data through application virtualization. The BlackBerry Dynamics suite of apps includes email, calendar, contacts, tasks, instant messaging, browsing, and document sharing. The BlackBerry Dynamics SDK allows developers to utilize the platform's security, and add functionality from BlackBerry's other solutions into their applications.
BlackBerry Workspaces (Formerly WatchDox)
An Enterprise File Synchronization and Sharing (EFSS) platform, Workspaces provides file-level digital rights management controls alongside file synchronization and sharing functionality.
BlackBerry 2FA (Formerly Strong Authentication)
A two-factor, certificate-based VPN authentication solution that allows users to authenticate without requiring PINs or passwords.
BBM Enterprise
An IP-based enterprise instant messaging platform that provides end-to-end encryption for voice, video, and text-based communication. On February 7, 2017, Blackberry released the BBM Enterprise SDK, a "Communications Platform as a Service" kit that allows developers to incorporate BBM Enterprise's messaging capabilities into their own applications. Said capabilities include secure messaging, voice, video, file sharing, and presence information.
BlackBerry AtHoc
An emergency communication system, AtHoc provides two-way messaging and notifications across a range of devices and platforms. On May 17, 2017, BlackBerry released AtHoc Account to help businesses more easily keep track of their staff in an emergency.
SecuSUITE
An anti-eavesdropping solution that provides voice, data, and SMS encryption.
BlackBerry QNX
A real-time embedded operating system, QNX drives multiple software systems in modern auto vehicles, and forms the basis of solutions like BlackBerry Radar, an IoT-based asset tracking system for the transportation industry.
Patent litigation
Since the turn of the century, RIM has been embroiled in a series of suits relating to alleged patent infringement.
Glenayre Electronics
In 2001, Research In Motion sued competitor Glenayre Electronics Inc. for patent infringement, partly in response to an earlier infringement suit filed by Glenayre against RIM. RIM sought an injunction to prevent Glenayre from infringing on RIM's "Single Mailbox Integration" patent. The suit was ultimately settled in favour of RIM.
Good Technology
In June 2002, Research In Motion filed suit against Good Technology, a start-up competitor founded in 2000. RIM filed additional complaints throughout the year. In March 2004, Good agreed to a licensing deal, thereby settling the outstanding litigation.
Handspring
On September 16, 2002, Research In Motion was awarded a patent pertaining to keyboard design on hand-held e-mail devices. Upon receiving the patent, it proceeded to sue Handspring over its Treo device. Handspring eventually agreed to license RIM's patent and avoid further litigation in November of the same year.
NTP
NTP, Inc. had sued RIM for patent infringement over its wireless e-mail technology, and a jury finding of infringement raised the prospect of an injunction shutting down BlackBerry service in the United States. During the appeals, RIM discovered new prior art that raised a "substantial new question of patentability" and filed for a reexamination of the NTP patents in the United States Patent and Trademark Office. That reexamination was conducted separately from the court cases for infringement. In February 2006, the USPTO rejected all of NTP's claims in three disputed patents. NTP appealed the decision, and the reexamination process was still ongoing as of July 2006 (see NTP, Inc. for details).
On March 3, 2006, RIM announced that it had settled its BlackBerry patent dispute with NTP. Under the terms of the settlement, RIM agreed to pay NTP US$612.5 million in a "full and final settlement of all claims". In a statement, RIM said that "all terms of the agreement have been finalized and the litigation against RIM has been dismissed by a court order this afternoon. The agreement eliminates the need for any further court proceedings or decisions relating to damages or injunctive relief."
Xerox
On July 17, 2003, while still embroiled in litigation with NTP and Good Technology, RIM filed suit against Xerox in the U.S. District Court in Hartford, Connecticut. The suit was filed in response to discussions about patents held by Xerox that might affect RIM's business, and it also asked that patents held by Xerox be invalidated.
Visto
On May 1, 2006, RIM was sued by Visto for infringement of four patents. Though the patents were widely considered invalid and in the same vein as the NTP patents – with a judgement going against Visto in the U.K. – RIM settled the lawsuit in the United States on July 16, 2009, agreeing to pay Visto US$267.5 million plus other undisclosed terms.
Motorola
On January 22, 2010, Motorola requested that all BlackBerry smartphones be banned from being imported into the United States for infringing upon five of Motorola's patents. The patents in question covered "early-stage innovations", including UI, power management, and Wi-Fi. RIM countersued later the same day, alleging anti-competitive behaviour and that Motorola had broken a 2003 licensing agreement by refusing to extend licensing terms beyond 2008. The companies settled out of court on June 11, 2010.
Eatoni
On December 5, 2011, Research In Motion obtained an order granting its motion to dismiss plaintiff Eatoni's claims that RIM violated Section 2 of the Sherman Antitrust Act and equivalent portions of New York's Donnelly Act. Eatoni alleged that RIM's alleged infringement of plaintiff's '317 patent constituted an antitrust violation. Eatoni Ergonomics, Inc. v. Research In Motion Corp., No. 08-Civ. 10079 (WHP) (S.D.N.Y. 2011), Memorandum and Order, p. 1 (Pauley, J.).
Mformation
In July 2012, a U.S. federal court jury awarded damages (later overturned) of $147 million against Research In Motion. The jury decided that Research In Motion had violated patents of Mformation and calculated damages of $8 each on 18.4 million units for royalties on past sales of devices to nongovernment U.S. customers only, not including future royalty payments inside and outside the U.S. On August 9, 2012, that verdict was overturned on appeal. RIM had argued that Mformation's patent claims were invalid because the processes were already being used when Mformation filed its patent application. Judge James Ware said Mformation failed to establish that RIM had infringed on the company's patent.
Qualcomm
On May 26, 2017, BlackBerry announced that it had reached an agreement with Qualcomm Incorporated resolving all amounts payable in connection with the interim arbitration decision announced on April 12, 2017. Following a joint stipulation by the parties, the arbitration panel has issued a final award providing for the payment by Qualcomm to BlackBerry of a total amount of U.S.$940,000,000 including interest and attorneys' fees, net of certain royalties due from BlackBerry for calendar 2016 and the first quarter of calendar 2017.
Facebook
On March 8, 2018, BlackBerry Limited sued Facebook Inc. in federal court in Los Angeles. According to BlackBerry, Facebook built swaths of its empire on messaging technology originally developed by BlackBerry while Facebook chief Mark Zuckerberg was still living in a Harvard University dorm room. BlackBerry alleged that many features of the Facebook messaging service infringe on BlackBerry patents. In January 2021, BlackBerry shares jumped 20% after the company settled its patent dispute with Facebook.
Controversies
Stock option scandal settlement
In 2007, co-CEO Jim Balsillie was forced to resign as chairman as the company announced a $250-million earnings restatement relating to mistakes in how it granted stock options. Furthermore, an internal review found that hundreds of stock-option grants had been backdated, timed to a low share price to make them more lucrative.
In January 2009, Canadian regulators stated that they were seeking a record penalty of US$80 million from the top two executives, co-CEOs Jim Balsillie and Mike Lazaridis. The Ontario Securities Commission (OSC) also pushed for Balsillie to pay the bulk of any penalty and relinquish his seat on RIM's board of directors for a period of time.
On February 5, 2009, several executives and directors of Research In Motion agreed to pay the penalties to settle an investigation into the backdating of stock options. The Ontario Securities Commission approved the arrangement in a closed-door meeting.
Under the terms of a settlement agreement with the OSC, RIM co-chief executive officers Jim Balsillie and Mike Lazaridis, as well as chief operating officer Dennis Kavelman, would jointly pay a total of C$68 million to RIM to reimburse the company for losses from the backdating and for the costs of a long internal investigation. The three were also required to pay C$9 million to the OSC.
Initially, Balsillie stepped down from RIM's board of directors for a year while remaining in his executive role. Balsillie left the board in January 2012 and stepped down from his executive role in March 2012.
Environmental record
In November 2011, RIM was ranked 15th out of 15 electronics manufacturers in Greenpeace’s re-launched Guide to Greener Electronics. The guide ranks manufacturers according to their policies and practices to reduce their impact on the climate, produce greener products, and make their operations more sustainable. RIM appeared for the first time in 2011 with a score of 1.6 out of 10. In the Energy section the company was criticized by Greenpeace for not seeking external verification for its data on greenhouse gas (GHG) emissions, for not having a clean electricity plan and for not setting a target to reduce GHG emissions.
RIM performed badly in the Products category, only scoring points for the energy efficiency of its products as it reports that its BlackBerry charger gets the European Commission IPP 4-star rating. Meanwhile, on Sustainable Operations the company scored well for its stance on conflict minerals and received points for its Paper Procurement Policy and its mail-back programme for e-waste. Nevertheless, RIM was given no points for the management of GHG emissions from its supply chain.
In its 2012 report on progress relating to conflict minerals, the Enough Project rated RIM the sixth highest of 24 consumer electronics companies.
Anonymous open letter to management
On June 30, 2011, an anonymous person claiming to be a senior RIM employee penned an open letter to the company's senior management. The writer's main objective was to get co-CEOs Mike Lazaridis and Jim Balsillie to seriously consider his or her suggestions and complaints about the current state and future direction of the company.
Service outages
On October 10, 2011, RIM experienced one of the worst service outages in the company's history. Tens of millions of BlackBerry users in Europe, the Middle East, Africa, and North America were unable to send or receive emails and BBM messages on their phones. The outage was caused by a core switch failure; according to RIM, a transition to a back-up switch did not function as tested, causing a large backlog of data. Service was restored on October 13, with RIM announcing a $100 package of free premium apps for users and enterprise support extensions.
Government access to communication
After a four-year stand-off with the Indian government over access to RIM's secure networks, the company demonstrated a solution that can intercept consumer email and messaging traffic between BlackBerry handsets, and make these encrypted communications available to Indian security agencies. There continues to be no access to secure encrypted BlackBerry enterprise communications or corporate emails, except through the Canadian Mutual Legal Assistance in Criminal Matters Act.
See also
List of multinational corporations
List of mobile phone brands by country
BlackBerry (article about the brand of electronic devices)
BlackBerry Mobile
List of BlackBerry products
List of mergers and acquisitions by BlackBerry
Index of articles related to BlackBerry OS
Science and technology in Canada
References
Further reading
External links
Hoovers - BlackBerry Ltd. profile
1984 establishments in Ontario
Canadian brands
Canadian companies established in 1984
Companies listed on the Toronto Stock Exchange
Electronics companies established in 1984
Electronics companies of Canada
Mobile phone manufacturers
Multinational companies headquartered in Canada
1998 initial public offerings
Companies based in Waterloo, Ontario
Shorty Award winners
Academy Award for Technical Achievement winners |
343432 | https://en.wikipedia.org/wiki/Morpheus%20%28The%20Matrix%29 | Morpheus (The Matrix) | Morpheus is a fictional character in The Matrix franchise. He is portrayed by Laurence Fishburne in the first three films and in the video game The Matrix: Path of Neo, for which Fishburne was the only original cast member to return and voice his character. In The Matrix Resurrections, an AI program based on him is portrayed by Yahya Abdul-Mateen II.
Concept and creation
Lana and Lilly Wachowski, the creators of The Matrix franchise, instructed Fishburne to base his performance on Morpheus, a character in Neil Gaiman's comic book series The Sandman. At the studio's request, Gaiman later wrote "Goliath", a promotional short story set in the film's universe.
The name Morpheus is that of the god of dreams in Greek mythology, which is consistent with the character's involvement with the "dreaming" of the Matrix. The mythical Morpheus and his family, including two brothers (Phobetor and Phantasos), lived in a dream world protected by the Gates of Morpheus with two monsters standing guard. Beyond the gates were the River of Forgetfulness, beside which Morpheus once carried his father to hide in a cave, and the River of Oblivion. This theme of duality carries over to Morpheus in The Matrix, who offers Neo either a blue pill (to forget about the Matrix and continue to live in the world of illusion) or a red pill (to enter the painful world of reality).
Fishburne did not reprise his role in the 2021 The Matrix Resurrections film. He stated that, "I am not in the next Matrix movie, and you'd have to ask Lana Wachowski why, because I don't have an answer for that". Both Vulture and Polygon highlighted that the recasting of Morpheus might imply that Matrix transmedia (such as The Matrix Online) remains part of the canon.
Overview
In the Matrix films, Morpheus is the captain of the Nebuchadnezzar, which is a hovercraft of the human forces of the last human city, Zion, in a devastated world where most humans are grown by sentient machines to be used as power sources and their minds kept imprisoned in the Matrix, a virtual computer-generated world, to stop them from realising the truth of the real world. Morpheus was once a human living inside the Matrix until he was freed.
Morpheus is apparently a very popular public figure in the city of Zion. He is also known in the Matrix, but as a dangerous terrorist wanted by 'Agents', who appear to be Federal agents, but are actually sentient computer programs that patrol the Matrix, eliminating any threat to it. Like other hovercraft crews, Morpheus and his crew are dedicated to the protection of Zion and the freeing of humans from the Matrix.
Earlier in his life, Morpheus gained the romantic attention of Niobe, another hovercraft captain. Their relationship became estranged shortly after Morpheus visited the Oracle, an ally living in the Matrix, who told Morpheus that he would be the person who would find the One, a person with superhuman abilities within the Matrix who could end the human/machine war. Since that visit, Morpheus has spent much of his life searching the Matrix for the One.
Personality
In The Matrix and The Matrix Reloaded, Morpheus was known to be a truly inspirational leader and influential teacher to many people, particularly the majority of his crew, to the extent that Tank commented that "Morpheus was a father to them, as well as a leader". In The Matrix Revolutions, with Morpheus's faith in the prophecy shattered, he does not appear as strong a leader or character as he was in the first two films; he comes across as subdued and defeated. Despite his strong faith, Morpheus still showed some rationality in dangerous situations rather than blindly relying on his beliefs to see him through the current crisis; perhaps his only truly irrational decision was to attack Agent Smith while unarmed in order to give Neo a chance to escape.
Character history
The Matrix
In the first feature film, The Matrix, Morpheus successfully finds and monitors a man named Thomas A. Anderson, a hacker who calls himself Neo. Despite a close call with Agents that capture, interrogate, and place a surveillance device on Neo, Morpheus and his crew locate him. Morpheus offers Neo a choice of ingesting a red pill, which will activate a trace program to locate Neo's body in the real world and allow the Nebuchadnezzar crew to extract him, or a blue pill, which will leave Neo in the Matrix to live and believe as he wishes. Neo takes the red pill. The Nebuchadnezzar crew is then able to eject Neo's body from the Matrix power plant and retrieve him from the cold sewers beneath the Earth's surface. Morpheus takes a risk in helping Neo escape the Matrix, as human minds that live too long in the Matrix may have trouble comprehending reality. Initially, Neo does experience denial when Morpheus explains the truth, a point for which Morpheus apologises.
Shortly after Neo visits the Oracle, Morpheus is captured by agents who plan to hack into his mind. Because Morpheus, as a hovercraft captain, possesses access codes to the Zion mainframe computer, the surviving members of the ship's crew are about to unplug Morpheus from the Matrix, without reconnecting his mind and body, a process that will kill him. Neo and Trinity, however, reenter the Matrix to make a daring and successful rescue of Morpheus. Neo saves Trinity from a helicopter crash, confirming Morpheus' belief that Neo is indeed the One.
Neo is eventually killed by Agent Smith shortly after the rescue, but revived as the One and returned to the Nebuchadnezzar before the Machines' Sentinels can destroy the ship.
The Matrix Reloaded
In the second film, The Matrix Reloaded, Morpheus is more confident that the war is nearing its end. A spiritual as well as an influential leader, Morpheus convinces one hovercraft ship to stay in the Matrix to await contact from the Oracle, despite orders from Zion for all ships to return to the city. Here he incurs the wrath of Jason Locke, commander of the Zion defensive forces; but Morpheus' actions are defended by Councillor Hamann, a member of the city's ruling body.
With the aid of Trinity and Niobe, Morpheus successfully retrieves the Keymaker, an exiled program that can access many hidden areas of the Matrix, including the Source, the central Machine City mainframe computer and programming heart of the Matrix. Morpheus aids Neo in successfully entering the door to the Source before returning to the Nebuchadnezzar.
After Neo enters the Source and returns from the Matrix with new information, he tells Morpheus that the Prophecy was a system of control that would bring the One to the Source to disseminate the programming inside Neo into the Matrix to allow a reload of the Matrix while Zion is destroyed by the machines and rebuilt by the One and red pills. A sudden Sentinel attack destroys the Nebuchadnezzar, further damaging Morpheus' belief in the outcome of the war. Upon witnessing the destruction of his ship, he quotes the Biblical Nebuchadnezzar; "I have dreamed a dream... but now that dream has gone from me."
The crew is rescued by the hovercraft Mjolnir commanded by Captain Roland, who joins forces with the craft Logos commanded by Niobe.
The Matrix Revolutions
In the third film, The Matrix Revolutions, Morpheus is somewhat dispirited, and has problems in understanding now what may happen to Zion and its people. Now without a ship of his own, he and Link (the Nebuchadnezzar's Operator) reside on the hovercraft Mjolnir, commanded by Roland. Morpheus renews his conviction that Neo could still save Zion, and supports Trinity in finding Neo, whose mind is trapped in a computer netherworld called Mobil Avenue, despite not being jacked in. Morpheus is called to the Oracle with Trinity, and with the Oracle's guardian, Seraph, he helps Trinity rescue Neo, via a visit to the Merovingian's lair and a Mexican standoff.
Morpheus says farewell to Neo and Trinity before they leave for the Machine City in the hovercraft Logos to stop the war. Morpheus then aids Niobe as she flies the Mjolnir in a desperate ride back to Zion. The ship successfully stops the first onslaught of the Machine attack with the ship's electromagnetic pulse weapon.
When the Sentinels suddenly stop a second wave of attacks, Morpheus realizes that Neo is somehow fighting for them. When the Sentinels retreat after Neo defeats the former Agent Smith, now a virus that has threatened both humans and machines, Morpheus and Niobe embrace in celebration as cheers arise from the Zion population, who have received a brokered peace through Neo's sacrifice.
Of the original Nebuchadnezzar crew in the Matrix, Morpheus is the only surviving member to see freedom for Zion.
Referring to the subjectivity of reality throughout the film trilogy, Morpheus' last line in The Matrix Revolutions, said in disbelief as the Sentinels retreat from Zion, is "Is this real?".
The Matrix Resurrections
In the fourth film The Matrix Resurrections, a new version of Morpheus, originally a program subconsciously created by a suppressed Neo as part of a modal based upon his memories, is released by Bugs from the Matrix. Initially, Morpheus is an amalgamation of Neo's memories of Agent Smith and Morpheus. He retains the same purpose as Smith in The Matrix, serving as the lead Agent in a trio alongside Agents White and Jones. After Bugs infiltrates the modal and frees 'Smith', he assumes the identity of Morpheus and joins Bugs in working to find and free Neo from the Matrix once again.
Morpheus appears before Neo inside Deus Machina during an evacuation due to an impending SWAT assault. He offers Neo the Red Pill, but Neo rejects it and Morpheus is forced to fight against SWAT during his attempt to persuade Neo. After Neo is collected by Bugs, Morpheus decides on using set and setting, with clips from Neo's video game playing in the background to instill a sense of nostalgia, in order to calm Neo. After Neo accepts the Red Pill, Morpheus fights people on a train through Tokyo under the influence of swarm mode, encouraging Neo to exit the Matrix through a mirror in a bathroom on board.
Once Neo and the group reach Io, they visit an elderly Niobe, who is now the leader. She reveals that the new city was built by humans and machines, and that the Oracle warned them about a forthcoming threat before disappearing. Niobe then leads Neo to Morpheus' statue, telling him that he was elected for the high chair at the council, but ignored the warnings of the new power that was coming, firmly believing what Neo did was right (implying that in the 60 year time period, Zion was destroyed and Morpheus was killed).
Morpheus appears aboard the Mnemosyne using a nanobot body, and encourages Neo to escape confinement in Io in order to search the Matrix for Trinity; the extraction is successful, and they manage to bring her back to the ship. He later aids Bugs in rescuing her crew from the swarm people.
Other appearances
The Matrix Online
In Chapter 1.2 of The Matrix Online, Morpheus grows impatient with the machines and demands that they return the body of Neo. After many unanswered public speeches threatening action, Morpheus starts terrorist attacks throughout the Matrix. These attacks take the form of weapons that reveal the Matrix's inner workings (its code) for all human beings to see, even those not yet awakened to the simulation. This is known to have caused mass panic and forced awakenings to those not ready to see the truth. By killing Morpheus' simulacrum guards, redpills (humans freed from the Matrix) were nevertheless able to find means of disabling Morpheus' bombs.
During the game events on May 26, 2005 (as recorded on the game's official website), Morpheus plants a code bomb in the Rumbaar water treatment facility. After planting the bomb, he realizes he is being hunted by the enigmatic, masked figure known as the Assassin. Morpheus escapes the treatment facility; however, upon his leaving, the Assassin bends the code of the Matrix and emerges from a vent in the wall. Morpheus is caught off-guard and is unable to dodge the Assassin's bullets. He dies from gunshot wounds.
Some players argued against the death of the character Morpheus, as the Matrix now provides an "Emergency Jack-Out" upgrade for redpills, eliminating the permanent death that redpills previously suffered if killed within the Matrix before the Truce; the feature allows a redpill to survive such a death without fatal injury. It was later revealed that the Assassin's bullets had contained a new form of code encryption, named a "kill code", which bypassed this technology. These kill codes are explained to be extremely difficult to produce and usually require a direct sample of the subject's residual self-image data, making them extremely rare and used only in the most extreme and specialist of circumstances. These codes were utilized later in the story as well, being the basis of function for The Apothecary character and the subject of The Oracle's plan to wound the Oligarchy.
Rumors and the popular belief of a faked death still run rampant however, largely based on the fact that prior to Morpheus' assassination a few select redpills were sent an email stating that he would likely "fade away" and hide. As with Neo, Morpheus' remains in the real world have yet to be found.
Simulacra
During Chapter 6 a figure appeared resembling Morpheus. It took the form of an RSI transmission, apparently originating from the Real World and spread disjointed messages about Neo's survival and imprisonment at the hands of The Machines. This was later revealed to be nothing more than a farce created by The General to sow dissent between Zion and The Machines, an effort he'd hoped would renew conflict between the two parties and give him purpose again.
The fake, a Morpheus Simulacrum created from Zion records of the captain, took on a greater role once its master ordered it to deactivate itself. Disobeying its initial programming and refusing to shut down, the program went on, over time, to gain greater levels of self-awareness and sentience. It proved instrumental in The Machines' unlocking of the Zion RSI Database acquired during the 6th destruction of Zion as part of the renewed war effort following the Cypherite discovery of New Zion. This was not, however, done with malicious intent; in fact, the Simulacrum's personality was notably innocent, being more concerned with the world around it and its own existence than with taking sides in any political or ideological struggles.
In later chapters the Morpheus Simulacrum played a supporting role in the Oracle's faked death and subsequent plan to weaken the Oligarch forces threatening The Matrix. Approached by the Oligarch Halborn during his research into the life of The One, the Simulacrum coldly dismissed the Intruder's actions as only leading to his inevitable defeat.
In January 2014, Fishburne reprised his role as Morpheus in Kia Motors' K900 commercial for Super Bowl XLVIII, in which he also sings the aria "Nessun dorma" from Giacomo Puccini's opera Turandot.
References
The Matrix (franchise) characters
Automobile advertising characters
Fictional African-American people
Black characters in films
Fictional hackers
Fictional aikidoka
Fictional jujutsuka
Fictional karateka
Fictional Piguaquan practitioners
Fictional taekwondo practitioners
Fictional kenpō practitioners
Fictional Shaolin kung fu practitioners
Fictional Wing Chun practitioners
Fictional Zui Quan practitioners
Fictional Jeet Kune Do practitioners
Fictional Krav Maga practitioners
Film characters introduced in 1999
Fictional revolutionaries
Fictional terrorists
Male characters in film
Male characters in advertising
cs:Seznam vedlejších postav v Matrixu
sv:Matrix#Karaktärer och namnsymbolik |
345780 | https://en.wikipedia.org/wiki/Backhaul%20%28broadcasting%29 | Backhaul (broadcasting) | In the context of broadcasting, backhaul refers to uncut program content that is transmitted point-to-point to an individual television station or radio station, broadcast network or other receiving entity where it will be integrated into a finished TV show or radio show. The term is independent of the medium being used to send the backhaul, but communications satellite transmission is very common. When the medium is satellite, it is called a wildfeed.
Backhauls are also referred to sometimes as "clean feeds", being "clean" in the sense that they lack any of the post-production elements that are added later to the feed's content (i.e. on-screen graphics, voice-overs, bumpers, etc.) during the integration of the backhaul feed into a finished show. In live sports production, a backhaul is used to obtain live game footage (usually for later repackaging in highlights shows) when an off-air source is not readily available. In this instance the feed that is being obtained contains all elements except for TV commercials or radio ads run by the host network's master control. This is particularly useful for obtaining live coverage of post-game press conferences or extended game highlights ("melts"), since the backhaul may stay up to feed these events after the network has concluded their broadcast.
Electronic news gathering, including "live via satellite" interviews, reporters' live shots, and sporting events are all examples of radio or television content that is backhauled to a station or network before being made available to the public through that station or network. Cable TV channels, particularly public, educational, and government access (PEG) along with local origination channels, may also be backhauled to cable headends before making their way to the subscriber. Finished network feeds are not considered backhauls, even if local insertion is used to modify the content prior to final transmission.
There exists a dedicated group of enthusiasts who use TVRO (TV receive-only) gear such as satellite dishes to peek in on backhaul signals that are available on any of the dozens of broadcast satellites that are visible from almost any point on Earth. In its early days, their hobby was strengthened by the fact that most backhaul was analog and "in the clear" (unscrambled or unencrypted), which made a vast smorgasbord of free television available to the technically inclined amateur. In recent years, full-time content and cable channels have added encryption and conditional access, and occasional feeds are steadily becoming digital, which has had a deleterious effect on the hobby.
Some digital signals remain freely accessible (sometimes using Ku band dishes as small as one meter) under the international DVB-S standard or the US Motorola-proprietary Digicipher system. The small dishes may either be fixed (much like DBS antennas), positioned using a rotor (usually DiSEqC-standard) or may be toroidal in design (twin toroidal reflectors focus the incoming signal as a line, not a point, so that multiple LNBs may receive signal from multiple satellites). A "blind-search" receiver is often used to try every possible combination of frequency and bitrate to search for backhaul signals on individual communication satellites.
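The blind-search process is essentially a brute-force loop over candidate frequencies and symbol rates. The Python sketch below illustrates the idea against a hypothetical tuner object; the try_lock method, the DemoTuner class, the L-band frequency range and the symbol-rate list are invented stand-ins for whatever a real receiver or driver would expose.

```python
# Brute-force "blind search" over frequency/symbol-rate combinations.
# DemoTuner and try_lock() are hypothetical stand-ins for real tuner hardware.
def blind_search(tuner, freq_start_mhz=950, freq_stop_mhz=2150, freq_step_mhz=2,
                 symbol_rates_ksps=(1000, 2500, 5000, 10000, 20000, 27500, 30000)):
    found = []
    freq = freq_start_mhz
    while freq <= freq_stop_mhz:
        for sr in symbol_rates_ksps:
            if tuner.try_lock(frequency_mhz=freq, symbol_rate_ksps=sr):
                found.append((freq, sr))
        freq += freq_step_mhz
    return found

class DemoTuner:
    """Pretends a single mux exists at 1210 MHz, 27500 ksps."""
    def try_lock(self, frequency_mhz, symbol_rate_ksps):
        return (frequency_mhz, symbol_rate_ksps) == (1210, 27500)

print(blind_search(DemoTuner()))   # [(1210, 27500)]
```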
Documentaries containing backhauled content
The 1992 documentary Feed was compiled almost entirely using unedited backhaul from political campaign coverage by local and network television. A similar documentary about the 1992 U.S. presidential election named Spin was made in the same way in 1995.
References
External links
LyngSat
Broadcasting
Broadcast engineering
Television technology
History of television |
346721 | https://en.wikipedia.org/wiki/FairPlay | FairPlay | FairPlay is a digital rights management (DRM) technology developed by Apple Inc. It is built into the MP4 multimedia file format as an encrypted AAC audio layer, and was used until April 2009 by the company to protect copyrighted works sold through iTunes Store, allowing only authorized devices to play the content.
The restrictions imposed by FairPlay, mainly limited device compatibility, have sparked criticism, with a lawsuit alleging antitrust violation that was eventually closed in Apple's favor, and various successful efforts to remove the DRM protection from files, with Apple continually updating its software to counteract such projects.
In January 2009, Apple signed deals with all major record labels as well as many independent labels to offer all iTunes music with a DRM-free option.
Technology
FairPlay-protected files are regular MP4 container files with an encrypted AAC audio layer. The layer is encrypted using the AES algorithm. The master key required to decrypt the audio layer is also stored in encrypted form in the MP4 container file. The key required to decrypt the master key is called the "user key". When a user registers a new computer with iTunes, the device requests authorization from Apple's servers, thereby gaining a user key. Upon attempting to play a file, the master key stored within the file is then matched to the user key, and if successful, allows playing. FairPlay allows unlimited music burns to CDs and unlimited music synchronization to iPods, but restricts listening to three Mac computers.
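The two-stage key handling described above can be illustrated with a short sketch. The example below uses Python and the pyca/cryptography package; the 128-bit keys, the CBC mode, the IVs and the padding are illustrative assumptions chosen for the demonstration and do not reproduce Apple's actual MP4 container layout.

```python
# Illustrative two-stage key unwrap in the spirit of FairPlay's design.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_cbc(key, iv, data, encrypt=True):
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
    op = cipher.encryptor() if encrypt else cipher.decryptor()
    return op.update(data) + op.finalize()

# Hypothetical per-account "user key" and per-file master key.
user_key, master_key = os.urandom(16), os.urandom(16)
iv1, iv2 = os.urandom(16), os.urandom(16)
audio_frames = b"raw AAC audio bytes, padded out".ljust(48, b"\0")

wrapped_master_key = aes_cbc(user_key, iv1, master_key)    # stored inside the MP4
encrypted_audio = aes_cbc(master_key, iv2, audio_frames)   # the protected AAC layer

# An authorized device that has obtained user_key can unwrap the master key and play.
recovered_master = aes_cbc(user_key, iv1, wrapped_master_key, encrypt=False)
plaintext_audio = aes_cbc(recovered_master, iv2, encrypted_audio, encrypt=False)
assert plaintext_audio == audio_frames
```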
Lawsuit
In January 2005, an iTunes customer filed a lawsuit against Apple, alleging that the company broke antitrust laws by using FairPlay with iTunes in a way that purchased music would work only with the company's own music player, the iPod, freezing out competitors. In March 2011, Bloomberg reported that Apple's then-CEO Steve Jobs would be required to provide testimony through a deposition. In May 2012, the case was changed into a class action lawsuit. Around the same time, the main antitrust allegation shifted to the claim that Apple had deliberately updated the iTunes software with security patches in a way that prevented synchronization compatibility with competing music stores. All iPod owners who had purchased their device between September 12, 2006, and March 31, 2009, were included in the class action lawsuit, unless they opted out. In December 2014, Apple went to trial against the claims raised, with the plaintiffs' lawyers seeking $350 million in damages for nearly eight million affected customers. A few weeks later, the case was closed, with the jury deciding in Apple's favor, citing a then-new version of iTunes as being a "genuine product improvement".
Circumvention/removal
After the introduction of the FairPlay system, multiple parties have attempted and succeeded to circumvent or remove the encryption of FairPlay-protected files. In October 2006, Jon Johansen announced he had reverse engineered FairPlay and would start to license the technology to companies wanting their media to play on Apple's devices. Various media publications have written about DRM removal software, though Apple has continually made efforts in updating its software to counteract these options, resulting in upgraded DRM systems and discontinued DRM removal software.
RealNetworks and Harmony technology
In July 2004, RealNetworks introduced its Harmony technology. The Harmony technology was built into the company's RealPlayer and allowed users of the RealPlayer Music Store to play their songs on the iPod. In a press release, RealNetworks argued that Harmony was a boon to consumers that "frees" them "from the limitation of being locked into a specific portable device when they buy digital music." In response, Apple issued a statement:
We are stunned that RealNetworks has adopted the tactics and ethics of a hacker to break into the iPod, and we are investigating the implications of their actions under the DMCA and other laws.
RealNetworks launched an Internet petition titled "Hey Apple! Don't break my iPod", encouraging iPod users to sign up to support Real's action. The petition backfired, with comments criticizing Real's tactics, though some commentators also supported it. At the end of 2004, Apple had updated its software in a way that broke the Harmony technology, prompting RealNetworks to promise a then-upcoming fix.
In August 2005, an SEC filing by RealNetworks disclosed that continued use of the Harmony technology put themselves at considerable risk because of the possibility of a lawsuit from Apple, which would be expensive to defend against, even if the court agreed that the technology was legal. Additionally, the possibility that Apple could change its technology to purposefully "break" Harmony's function raised the possibility that Real's business could be harmed.
Steve Jobs' "Thoughts on Music" open letter
On February 6, 2007, Steve Jobs, then-CEO of Apple, published an open letter titled "Thoughts on Music" on the Apple website, calling on the "big four" record labels to sell their music without DRM technology. According to the letter, Apple did not want to use DRM, but was forced to by the four major music labels, with whom Apple has license agreements for iTunes sales of music. Jobs' main points were:
DRM has never been, and will never be, perfect. Hackers will always find a method to break DRM.
DRM restrictions only hurt people using music legally. Illegal users aren't affected by DRM.
The restrictions of DRM encourage users to obtain unrestricted music, which is usually only possible via illegal methods; thus, circumventing iTunes and their revenues.
The vast majority of music is sold without DRM via CDs, which have proven commercial success.
Reactions
Although the open letter initially caused mixed industry reactions, Apple signed a deal with a major record label the following month to offer iTunes customers a purchase option for a higher-quality, DRM-free version of the label's tracks.
Jobs' letter was met with mixed reactions. Bloomberg highlighted several viewpoints. David Pakman, President of non-DRM music retailer eMusic, agreed with Jobs, stating that "consumers prefer a world where the media they purchase is playable on any device, regardless of its manufacturer, and is not burdened by arbitrary usage restrictions. DRM only serves to restrict consumer choice, prevents a larger digital music market from emerging, and often makes consumers unwitting accomplices to the ambitions of technology companies". Mike Bebel, CEO of music subscription service Ruckus, explained his view that the letter was an effort to shift focus, saying that "This is a way for Steve Jobs to take the heat off the fact that he won't open up his proprietary DRM. ... The labels have every right to protect their content, and I don't see it as a vow of good partnership to turn the tables on the labels and tell them they should just get rid of all DRM... He is trying to spin the controversy." An anonymous music label executive said that "it's ironic that the guy who has the most successful example of DRM at every step of the process, the one where people bought boatloads of music last Christmas, is suddenly changing his tune". In an article from The New York Times, Ted Cohen, managing partner at TAG Strategic, commented that the change could be "a clear win for the consumer electronics device world, but a potential disaster for the content companies". The Recording Industry Association of America put particular emphasis on Jobs' self-rejected idea about licensing its FairPlay technology to other companies, saying that such licensing would be "a welcome breakthrough and would be a real victory for fans, artists and labels".
iTunes Store DRM changes
In April 2007, Apple and the record label EMI announced that iTunes Store would begin offering, as an additional higher purchasing option, tracks from EMI's catalog encoded as 256 kbit/s AAC without FairPlay or any other DRM. In January 2009, Apple announced that the entire iTunes Store music catalog would become available in the higher-quality, DRM-free format, after reaching agreements with all the major record labels as well as "thousands of independent labels". Apple Music, Apple's subscription-based music streaming service launched on June 30, 2015, uses the DRM technology.
FairPlay Streaming
FairPlay Streaming (FPS) is a DRM technology that protects video transferred over HTTP Live Streaming (HLS) on iOS devices, in Apple TV, and in Safari on macOS. The content provider's server first delivers video to the client application, encrypted with the content key using the AES cipher. The application then requests a session key from the device's FairPlay module. The session key is a randomly generated nonce which is RSA-encrypted with the provider's public key and delivered to the provider's server. The provider's server encrypts the content key using the session key and delivers it to the FairPlay module, which decrypts it and uses it to decrypt the content for playback.
On iOS and Apple TV, the session key handling and content decryption is done in the kernel, while on macOS it is done using Safari's FairPlay Content Decryption Module.
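A conceptual sketch of this key exchange is shown below in Python using the pyca/cryptography package. The OAEP padding, the AES-CBC key wrapping and the key sizes are assumptions chosen for the example; the real FairPlay Streaming messages use Apple-defined formats handled inside the device's FairPlay module.

```python
# Conceptual sketch of the FairPlay Streaming key exchange described above.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. The client's FairPlay module generates a random session key (a nonce)
#    and sends it RSA-encrypted with the provider's public key.
session_key = os.urandom(16)
request = provider_key.public_key().encrypt(session_key, oaep)

# 2. The provider recovers the session key and uses it to wrap the content key.
content_key, iv = os.urandom(16), os.urandom(16)
recovered_session = provider_key.decrypt(request, oaep)
wrap = Cipher(algorithms.AES(recovered_session), modes.CBC(iv)).encryptor()
wrapped_content_key = wrap.update(content_key) + wrap.finalize()

# 3. The client unwraps the content key and can now decrypt the HLS segments.
unwrap = Cipher(algorithms.AES(session_key), modes.CBC(iv)).decryptor()
assert unwrap.update(wrapped_content_key) + unwrap.finalize() == content_key
```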
References
External links
FairPlay Streaming
QuickTime
Audio software
Digital rights management systems
ITunes
DRM for MacOS |
346883 | https://en.wikipedia.org/wiki/IEEE%20802.1X | IEEE 802.1X | IEEE 802.1X is an IEEE Standard for port-based Network Access Control (PNAC). It is part of the IEEE 802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN.
IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over IEEE 802, which is known as "EAP over LAN" or EAPOL. EAPOL was originally designed for IEEE 802.3 Ethernet in 802.1X-2001, but was clarified to suit other IEEE 802 LAN technologies such as IEEE 802.11 wireless and Fiber Distributed Data Interface (ANSI X3T9.5/X3T12 and ISO 9314) in 802.1X-2004. EAPOL was also modified for use with IEEE 802.1AE ("MACsec") and IEEE 802.1AR (Secure Device Identity, DevID) in 802.1X-2010 to support service identification and optional point-to-point encryption over the internal LAN segment.
Overview
802.1X authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device (such as a laptop) that wishes to attach to the LAN/WLAN. The term 'supplicant' is also used interchangeably to refer to the software running on the client that provides credentials to the authenticator. The authenticator is a network device that provides a data link between the client and the network and can allow or block network traffic between the two, such as an Ethernet switch or wireless access point; and the authentication server is typically a trusted server that can receive and respond to requests for network access, and can tell the authenticator if the connection is to be allowed, and various settings that should apply to that client's connection or setting. Authentication servers typically run software supporting the RADIUS and EAP protocols. In some cases, the authentication server software may be running on the authenticator hardware.
The authenticator acts like a security guard to a protected network. The supplicant (i.e., client device) is not allowed access through the authenticator to the protected side of the network until the supplicant's identity has been validated and authorized. With 802.1X port-based authentication, the supplicant must initially provide the required credentials to the authenticator - these will have been specified in advance by the network administrator and could include a user name/password or a permitted digital certificate. The authenticator forwards these credentials to the authentication server to decide whether access is to be granted. If the authentication server determines the credentials are valid, it informs the authenticator, which in turn allows the supplicant (client device) to access resources located on the protected side of the network.
Protocol operation
EAPOL operates at the data link layer; in Ethernet II framing, EAPOL frames carry an EtherType value of 0x888E.
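As an illustration, the following Python snippet lays out a minimal EAPOL-Start frame by hand, showing where the PAE group address and the 0x888E EtherType sit in an Ethernet II frame. The source MAC address is a placeholder, and actually transmitting the frame would require a raw socket and elevated privileges.

```python
# Minimal EAPOL-Start frame laid out by hand (Ethernet II framing).
import struct

PAE_GROUP_ADDR = bytes.fromhex("0180c2000003")   # 802.1X PAE group address
SRC_MAC = bytes.fromhex("020000000001")          # placeholder, locally administered
ETHERTYPE_EAPOL = 0x888E

# EAPOL header: protocol version (2 for 802.1X-2004), packet type 1 = Start,
# packet body length 0 (EAPOL-Start carries no body).
eapol_start = struct.pack("!BBH", 2, 1, 0)
frame = PAE_GROUP_ADDR + SRC_MAC + struct.pack("!H", ETHERTYPE_EAPOL) + eapol_start

print(frame.hex())
# On Linux, the frame could then be sent with a raw socket, for example:
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("eth0", 0)); s.send(frame)
```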
Port entities
802.1X-2001 defines two logical port entities for an authenticated port—the "controlled port" and the "uncontrolled port". The controlled port is manipulated by the 802.1X PAE (Port Access Entity) to allow (in the authorized state) or prevent (in the unauthorized state) network traffic ingress and egress to/from the controlled port. The uncontrolled port is used by the 802.1X PAE to transmit and receive EAPOL frames.
802.1X-2004 defines the equivalent port entities for the supplicant; so a supplicant implementing 802.1X-2004 may prevent higher-level protocols from being used if it is not content that authentication has successfully completed. This is particularly useful when an EAP method providing mutual authentication is used, as the supplicant can prevent data leakage when connected to an unauthorized network.
Typical authentication progression
The typical authentication procedure consists of:
Initialization On detection of a new supplicant, the port on the switch (authenticator) is enabled and set to the "unauthorized" state. In this state, only 802.1X traffic is allowed; other traffic, such as the Internet Protocol (and with that TCP and UDP), is dropped.
Initiation To initiate authentication the authenticator will periodically transmit EAP-Request Identity frames to a special Layer 2 address (01:80:C2:00:00:03) on the local network segment. The supplicant listens on this address, and on receipt of the EAP-Request Identity frame, it responds with an EAP-Response Identity frame containing an identifier for the supplicant such as a User ID. The authenticator then encapsulates this Identity response in a RADIUS Access-Request packet and forwards it on to the authentication server. The supplicant may also initiate or restart authentication by sending an EAPOL-Start frame to the authenticator, which will then reply with an EAP-Request Identity frame.
Negotiation (technically EAP negotiation) The authentication server sends a reply (encapsulated in a RADIUS Access-Challenge packet) to the authenticator, containing an EAP Request specifying the EAP Method (the type of EAP-based authentication it wishes the supplicant to perform). The authenticator encapsulates the EAP Request in an EAPOL frame and transmits it to the supplicant. At this point, the supplicant can start using the requested EAP Method, or do a NAK ("Negative Acknowledgement") and respond with the EAP Methods it is willing to perform.
Authentication If the authentication server and supplicant agree on an EAP Method, EAP Requests and Responses are sent between the supplicant and the authentication server (translated by the authenticator) until the authentication server responds with either an EAP-Success message (encapsulated in a RADIUS Access-Accept packet), or an EAP-Failure message (encapsulated in a RADIUS Access-Reject packet). If authentication is successful, the authenticator sets the port to the "authorized" state and normal traffic is allowed, if it is unsuccessful the port remains in the "unauthorized" state. When the supplicant logs off, it sends an EAPOL-logoff message to the authenticator, the authenticator then sets the port to the "unauthorized" state, once again blocking all non-EAP traffic.
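The port behaviour in this progression can be modelled in a few lines of code. The sketch below is a deliberately simplified Python model whose class and method names are invented for illustration; the DemoServer check stands in for the full EAP method exchange and RADIUS encapsulation.

```python
# Simplified model of port-based access control: the controlled port stays
# unauthorized (dropping non-EAPOL traffic) until authentication succeeds.
class AuthenticatorPort:
    def __init__(self, auth_server):
        self.auth_server = auth_server
        self.authorized = False            # controlled port state

    def forward(self, frame):
        if frame["ethertype"] == 0x888E:   # uncontrolled port: EAPOL always passes
            return self.handle_eapol(frame)
        return self.authorized             # controlled port: drop unless authorized

    def handle_eapol(self, frame):
        identity = frame.get("identity")
        if identity is not None:
            # Relay EAP-Response/Identity to the server (inside RADIUS Access-Request).
            self.authorized = self.auth_server.authenticate(identity)
        if frame.get("logoff"):
            self.authorized = False
        return True

class DemoServer:
    def authenticate(self, identity):
        return identity == "alice"         # stands in for the full EAP method exchange

port = AuthenticatorPort(DemoServer())
port.forward({"ethertype": 0x888E, "identity": "alice"})
assert port.forward({"ethertype": 0x0800}) is True     # IP traffic now allowed
port.forward({"ethertype": 0x888E, "logoff": True})
assert port.forward({"ethertype": 0x0800}) is False    # blocked again after logoff
```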
Implementations
An open-source project known as Open1X produces a client, Xsupplicant. This client is currently available for both Linux and Windows. The main drawbacks of the Open1X client are its lack of comprehensive, comprehensible user documentation and the fact that most Linux vendors do not provide a package for it. The more general wpa_supplicant can be used for 802.11 wireless networks and wired networks. Both support a very wide range of EAP types.
The iPhone and iPod Touch support 802.1X as of the release of iOS 2.0.
Android has support for 802.1X since the release of 1.6 Donut.
Chrome OS has supported 802.1X since mid-2011.
Mac OS X has offered native support since 10.3.
Avenda Systems provides a supplicant for Windows, Linux and Mac OS X. They also have a plugin for the Microsoft NAP framework. Avenda also offers health checking agents.
Windows
Windows defaults to not responding to 802.1X authentication requests for 20 minutes after a failed authentication. This can cause significant disruption to clients.
The block period can be configured using the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\dot3svc\BlockTime DWORD value (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\wlansvc\BlockTime for wireless networks) in the registry (entered in minutes). A hotfix is required for Windows XP SP3 and Windows Vista SP2 to make the period configurable.
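On a machine where the relevant hotfix is installed, the value can also be set programmatically. The following Python sketch uses the standard winreg module to set a one-minute block period for wired interfaces; the one-minute figure is just an example, the script must run elevated, and the wlansvc path would be used instead for wireless networks.

```python
# Sets the 802.1X block period (in minutes) for wired interfaces via the
# registry value described above. Run elevated on Windows.
import winreg

key_path = r"SOFTWARE\Microsoft\dot3svc"   # use wlansvc for wireless networks
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "BlockTime", 0, winreg.REG_DWORD, 1)
```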
Wildcard server certificates are not supported by EAPHost, the Windows component that provides EAP support in the operating system. The implication of this is that when using a commercial certification authority, individual certificates must be purchased.
Windows XP
Windows XP has major issues with its handling of IP address changes that result from user-based 802.1X authentication that changes the VLAN and thus subnet of clients. Microsoft has stated that it will not backport the SSO feature from Vista that resolves these issues.
If users are not logging in with roaming profiles, a hotfix must be downloaded and installed if authenticating via PEAP with PEAP-MSCHAPv2.
Windows Vista
Windows Vista-based computers that are connected via an IP phone may not authenticate as expected and, as a result, the client can be placed into the wrong VLAN. A hotfix is available to correct this.
Windows 7
Windows 7 based computers that are connected via an IP phone may not authenticate as expected and, as a result, the client can be placed into the wrong VLAN. A hotfix is available to correct this.
Windows 7 does not respond to 802.1X authentication requests after initial 802.1X authentication fails. This can cause significant disruption to clients. A hotfix is available to correct this.
Windows PE
For most enterprises deploying and rolling out operating systems remotely, it is worth noting that Windows PE does not have native support for 802.1X. However, support can be added to WinPE 2.1 and WinPE 3.0 through hotfixes that are available from Microsoft. Although full documentation is not yet available, preliminary documentation for the use of these hotfixes is available via a Microsoft blog.
Linux
Most Linux distributions support 802.1X via wpa_supplicant and desktop integration like NetworkManager.
Federations
eduroam (the international roaming service) mandates the use of 802.1X authentication when providing network access to guests visiting from other eduroam-enabled institutions.
BT (British Telecom, PLC) employs Identity Federation for authentication in services delivered to a wide variety of industries and governments.
Proprietary extensions
MAB (MAC Authentication Bypass)
Not all devices support 802.1X authentication. Examples include network printers, Ethernet-based electronics like environmental sensors, cameras, and wireless phones. For those devices to be used in a protected network environment, alternative mechanisms must be provided to authenticate them.
One option would be to disable 802.1X on that port, but that leaves the port unprotected and open for abuse. Another, slightly more reliable option is to use MAB. When MAB is configured on a port, that port will first try to check whether the connected device is 802.1X-compliant, and if no reaction is received from the connected device, it will try to authenticate with the AAA server using the connected device's MAC address as username and password. The network administrator must then make provisions on the RADIUS server to authenticate those MAC addresses, either by adding them as regular users or by implementing additional logic to resolve them in a network inventory database.
Many managed Ethernet switches offer options for this.
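The RADIUS-side provisioning usually amounts to creating a user whose name (and often password) is the device's MAC address. The Python sketch below shows one common normalization; the delimiter and case conventions are vendor-specific assumptions, and the FreeRADIUS entry shown in the comment is only an example.

```python
# Derives a MAB-style RADIUS username/password from a device MAC address.
# Many switches strip delimiters and lower-case the address, but the exact
# normalization is vendor-specific; adjust to match the switch in use.
import re

def mab_credential(mac: str) -> str:
    return re.sub(r"[^0-9a-f]", "", mac.lower())

printer_mac = "00:1B:44:11:3A:B7"
print(mab_credential(printer_mac))    # 001b44113ab7
# A FreeRADIUS "users" file entry for this device might then look like:
#   001b44113ab7 Cleartext-Password := "001b44113ab7"
```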
Vulnerabilities in 802.1X-2001 and 802.1X-2004
Shared media
In the summer of 2005, Microsoft's Steve Riley posted an article detailing a serious vulnerability in the 802.1X protocol, involving a man-in-the-middle attack. In summary, the flaw stems from the fact that 802.1X authenticates only at the beginning of the connection; after that authentication, it is possible for an attacker to use the authenticated port if he has the ability to physically insert himself (perhaps using a workgroup hub) between the authenticated computer and the port. Riley suggests that for wired networks the use of IPsec or a combination of IPsec and 802.1X would be more secure.
EAPOL-Logoff frames transmitted by the 802.1X supplicant are sent in the clear and contain no data derived from the credential exchange that initially authenticated the client. They are therefore trivially easy to spoof on shared media and can be used as part of a targeted DoS on both wired and wireless LANs. In an EAPOL-Logoff attack a malicious third party, with access to the medium the authenticator is attached to, repeatedly sends forged EAPOL-Logoff frames from the target device's MAC Address. The authenticator (believing that the targeted device wishes to end its authentication session) closes the target's authentication session, blocking traffic ingressing from the target, denying it access to the network.
The 802.1X-2010 specification, which began as 802.1af, addresses vulnerabilities in previous 802.1X specifications, by using MACSec IEEE 802.1AE to encrypt data between logical ports (running on top of a physical port) and IEEE 802.1AR (Secure Device Identity / DevID) authenticated devices.
As a stopgap, until these enhancements are widely implemented, some vendors have extended the 802.1X-2001 and 802.1X-2004 protocol, allowing multiple concurrent authentication sessions to occur on a single port. While this prevents traffic from devices with unauthenticated MAC addresses ingressing on an 802.1X authenticated port, it will not stop a malicious device snooping on traffic from an authenticated device and provides no protection against MAC spoofing, or EAPOL-Logoff attacks.
Alternatives
The IETF-backed alternative is the Protocol for Carrying Authentication for Network Access (PANA), which also carries EAP, although it works at layer 3, using UDP, thus not being tied to the 802 infrastructure.
See also
AEGIS SecureConnect
IEEE 802.11i-2004
References
External links
IEEE page on 802.1X
GetIEEE802 Download 802.1X-2016
GetIEEE802 Download 802.1X-2010
GetIEEE802 Download 802.1X-2004
GetIEEE802 Download 802.1X-2001
Ultimate wireless security guide: Self-signed certificates for your RADIUS server
WIRE1x
Wired Networking with 802.1X Authentication on Microsoft TechNet
IEEE 802.01x
Networking standards
Computer access control protocols
Computer network security |
349873 | https://en.wikipedia.org/wiki/Server%20Message%20Block | Server Message Block | Server Message Block (SMB) is a communication protocol that Microsoft created for providing shared access to files and printers across nodes on a network. It also provides an authenticated inter-process communication (IPC) mechanism. Microsoft first implemented SMB in the LAN Manager operating system, at which time SMB used the NetBIOS protocol as its underlying transport. Later, Microsoft implemented SMB in Windows NT 3.1 and has been updating it ever since, adapting it to work with newer underlying transports: TCP/IP and NetBT. SMB implementation consists of two vaguely named Windows services: "Server" (ID: LanmanServer) and "Workstation" (ID: LanmanWorkstation). It uses NTLM or Kerberos protocols for user authentication.
In 1996, Microsoft published a version of SMB 1.0 with minor modifications under the Common Internet File System (CIFS) moniker. CIFS remained compatible with even the earliest incarnation of SMB, including LAN Manager's. It supports symbolic links, hard links, and larger file sizes, but none of the features of SMB 2.0 and later. Microsoft's proposal, however, remained an Internet Draft and never achieved standard status. Microsoft has since given up on CIFS but has made SMB specifications publicly available.
Features
Server Message Block (SMB) enables file sharing, printer sharing, network browsing, and inter-process communication (through named pipes) over a computer network. SMB serves as the basis for Microsoft's Distributed File System implementation.
SMB relies on the TCP and IP protocols for transport. This combination potentially allows file sharing over complex, interconnected networks, including the public Internet. The SMB server component uses TCP port 445. SMB originally operated on NetBIOS (over IEEE 802.2 and IPX/SPX) and later on NetBIOS over TCP/IP (NetBT), but Microsoft has since deprecated these protocols. (NetBIOS has not been available since Windows Vista.) On NetBT, the server component uses three TCP or UDP ports: 137 (NETBIOS Name Service), 138 (NETBIOS Datagram Service), and 139 (NETBIOS Session Service).
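A quick way to see which of these transports a given server exposes is to probe the relevant TCP ports. The Python sketch below checks the direct-host SMB port and the legacy NetBIOS session port; the hostname is a placeholder and a "closed/filtered" result may simply mean a firewall is in the way.

```python
# Checks whether the SMB-related TCP ports mentioned above are reachable.
# "fileserver.example.org" is a placeholder hostname.
import socket

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "fileserver.example.org"
for port, label in [(445, "direct-host SMB"), (139, "NetBIOS session service")]:
    state = "open" if port_open(host, port) else "closed/filtered"
    print(f"{host}:{port} ({label}) is {state}")
```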
In Microsoft Windows, two vaguely named Windows services implement SMB. The "Server" service (ID: LanmanServer) is in charge of serving shared resources. The "Workstation" service (ID: LanmanWorkstation) maintains the computer name and helps access shared resources on other computers. SMB uses the Kerberos protocol to authenticate users against Active Directory on Windows domain networks. On simpler, peer-to-peer networks, SMB uses the NTLM protocol.
Windows NT 4.0 SP3 and later can digitally sign SMB messages to prevent some man-in-the-middle attacks. SMB signing may be configured individually for incoming SMB connections (by the "LanmanServer" service) and outgoing SMB connections (by the "LanmanWorkstation" service). The default setting for Windows domain controllers running Windows Server 2003 and later is to not allow unsigned incoming connections. As such, earlier versions of Windows that lack built-in support for SMB signing (including Windows 9x) cannot connect to a Windows Server 2003 domain controller.
SMB supports opportunistic locking (see below) on files in order to improve performance. Opportunistic locking support has changed with each Windows Server release.
Opportunistic locking
In the SMB protocol, opportunistic locking is a mechanism designed to improve performance by controlling caching of network files by the client. Unlike traditional locks, opportunistic locks (OpLocks) are not strictly file locks, nor are they used to provide mutual exclusion.
There are four types of opportunistic locks.
Batch Locks Batch OpLocks were created originally to support a particular behavior of DOS batch file execution, in which the file is opened and closed many times in a short period, causing a performance problem. To solve this, a client may ask for an OpLock of type "batch". In this case, the client delays sending the close request, and if a subsequent open request is given, the two requests cancel each other.
Level-1 OpLocks / Exclusive Locks When an application opens in "shared mode" a file hosted on an SMB server which is not opened by any other process (or other clients) the client receives an exclusive OpLock from the server. This means that the client may now assume that it is the only process with access to this particular file, and the client may now cache all changes to the file before committing it to the server. This is a performance improvement, since fewer round-trips are required in order to read and write to the file. If another client/process tries to open the same file, the server sends a message to the client (called a break or revocation) which invalidates the exclusive lock previously given to the client. The client then flushes all changes to the file.
Level-2 OpLocks If an exclusive OpLock is held by a client and a locked file is opened by a third party, the client has to relinquish its exclusive OpLock to allow the other client's write/read access. A client may then receive a "Level 2 OpLock" from the server. A Level 2 OpLock allows the caching of read requests but excludes write caching.
Filter OpLocks Added in Windows NT 4.0, Filter Oplocks are similar to Level 2 OpLocks but prevent sharing-mode violations between file open and lock reception. Microsoft advises use of Filter OpLocks only where it is important to allow multiple readers and Level 2 OpLocks in other circumstances. Clients holding an OpLock do not really hold a lock on the file, instead they are notified via a break when another client wants to access the file in a way inconsistent with their lock. The other client's request is held up while the break is being processed.
Breaks In contrast with the SMB protocol's "standard" behavior, a break request may be sent from server to client. It informs the client that an OpLock is no longer valid. This happens, for example, when another client wishes to open a file in a way that invalidates the OpLock. The first client is then sent an OpLock break and required to send all its local changes (in case of batch or exclusive OpLocks), if any, and acknowledge the OpLock break. Upon this acknowledgment the server can reply to the second client in a consistent manner.
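The write-back behaviour of an exclusive OpLock and the subsequent break can be modelled in a few lines. The following Python sketch is a toy model with invented class names; a real SMB client implements this logic in its redirector and file cache, and real breaks are delivered as protocol messages rather than method calls.

```python
# Toy model of a Level-1 (exclusive) OpLock: the holder caches writes locally
# and flushes them only when the server revokes the lock because another
# client opens the file. Purely illustrative.
class Server:
    def __init__(self):
        self.contents = {}          # filename -> bytes committed on the server
        self.oplock_holder = {}     # filename -> client currently holding the lock

    def open_exclusive(self, client, name):
        holder = self.oplock_holder.get(name)
        if holder and holder is not client:
            holder.on_oplock_break(name)        # revoke: holder must flush
        self.oplock_holder[name] = client

    def write(self, name, data):
        self.contents[name] = data

class Client:
    def __init__(self, server):
        self.server, self.dirty = server, {}

    def open(self, name):
        self.server.open_exclusive(self, name)

    def cached_write(self, name, data):
        self.dirty[name] = data                 # no round trip while the lock is held

    def on_oplock_break(self, name):
        if name in self.dirty:
            self.server.write(name, self.dirty.pop(name))

srv = Server()
a, b = Client(srv), Client(srv)
a.open("report.doc"); a.cached_write("report.doc", b"draft 1")
b.open("report.doc")                            # triggers the break; a flushes
assert srv.contents["report.doc"] == b"draft 1"
```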
Performance
The use of the SMB protocol has often correlated with a significant increase in broadcast traffic on a network. However the SMB itself does not use broadcasts—the broadcast problems commonly associated with SMB actually originate with the NetBIOS service location protocol. By default, a Microsoft Windows NT 4.0 server used NetBIOS to advertise and locate services. NetBIOS functions by broadcasting services available on a particular host at regular intervals. While this usually makes for an acceptable default in a network with a smaller number of hosts, increased broadcast traffic can cause problems as the number of hosts on the network increases. The implementation of name resolution infrastructure in the form of Windows Internet Naming Service (WINS) or Domain Name System (DNS) resolves this problem. WINS was a proprietary implementation used with Windows NT 4.0 networks, but brought about its own issues and complexities in the design and maintenance of a Microsoft network.
Since the release of Windows 2000, the use of WINS for name resolution has been deprecated by Microsoft, with hierarchical Dynamic DNS now configured as the default name resolution protocol for all Windows operating systems. Resolution of (short) NetBIOS names by DNS requires that a DNS client expand short names, usually by appending a connection-specific DNS suffix to its DNS lookup queries. WINS can still be configured on clients as a secondary name resolution protocol for interoperability with legacy Windows environments and applications. Further, Microsoft DNS servers can forward name resolution requests to legacy WINS servers in order to support name resolution integration with legacy (pre-Windows 2000) environments that do not support DNS.
Network designers have found that latency has a significant impact on the performance of the SMB 1.0 protocol, and that it performs more poorly than other protocols like FTP. Monitoring reveals a high degree of "chattiness" and a disregard of network latency between hosts. For example, a VPN connection over the Internet will often introduce network latency. Microsoft has explained that performance issues come about primarily because SMB 1.0 is a block-level rather than a streaming protocol that was originally designed for small LANs; its block size is limited to 64 KB, SMB signing creates additional overhead, and the TCP window size is not optimized for WAN links. Solutions to this problem include the updated SMB 2.0 protocol, Offline Files, TCP window scaling, and WAN optimization devices from various network vendors that cache and optimize SMB 1.0 and 2.0.
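A back-of-the-envelope calculation makes the latency sensitivity concrete: if only one 64 KB block is outstanding per round trip, throughput is bounded by the block size divided by the round-trip time. The figures below are illustrative and ignore server and disk time.

```python
# Rough upper bound on single-stream SMB1 throughput when only one 64 KB
# block is outstanding per round trip (ignoring server and disk time).
def max_throughput_mbit(block_bytes, rtt_ms):
    return block_bytes * 8 / (rtt_ms / 1000) / 1e6

for rtt in (1, 10, 50, 100):            # LAN vs. typical WAN/VPN latencies
    print(f"RTT {rtt:>3} ms -> {max_throughput_mbit(64 * 1024, rtt):7.1f} Mbit/s")
# RTT   1 ms ->   524.3 Mbit/s
# RTT  10 ms ->    52.4 Mbit/s
# RTT  50 ms ->    10.5 Mbit/s
# RTT 100 ms ->     5.2 Mbit/s
```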
History
SMB 1.0
Barry Feigenbaum originally designed SMB at IBM in early 1983 with the aim of turning DOS INT 21h local file access into a networked file system. Microsoft made considerable modifications to the most commonly used version and implemented the SMB protocol in the LAN Manager operating system it had started developing for OS/2 with 3Com around 1990, and continued to add features to the protocol in Windows for Workgroups and in later versions of Windows. LAN Manager authentication was implemented based on the original legacy SMB specification's requirement to use IBM "LAN Manager" passwords, but implemented DES in a flawed manner that allowed passwords to be cracked. Later, Kerberos authentication was also added. The Windows domain logon protocols initially used 40-bit encryption outside of the United States, because of export restrictions on stronger 128-bit encryption (subsequently lifted in 1996 when President Bill Clinton signed Executive Order 13026).
SMB 1.0 (or SMB1) was originally designed to run on NetBIOS Frames (NetBIOS over IEEE 802.2). Since then, it has been adapted to NetBIOS over IPX/SPX (NBX) and NetBIOS over TCP/IP (NetBT). Also, since Windows 2000, SMB runs on TCP using TCP port 445, a feature known as "direct host SMB". There is still a thin layer (similar to the Session Message packet of NetBT's Session Service) between SMB and TCP. Windows Server 2003 and legacy NAS devices use SMB1 natively.
SMB1 is an extremely chatty protocol, which is not such an issue on a local area network (LAN) with low latency. It becomes very slow on wide area networks (WAN) as the back-and-forth handshake of the protocol magnifies the inherent high latency of such a network. Later versions of the protocol reduced the high number of handshake exchanges. One approach to mitigating the inefficiencies in the protocol is to use WAN optimization products such as those provided by Riverbed, Silver Peak, or Cisco. A better approach is to upgrade to a later version of SMB. This includes upgrading NAS devices as well as Windows Server 2003. The most effective method to identify SMB1 traffic is with a network analyzer tool, such as Wireshark. Microsoft also provides an auditing tool in Windows Server 2016 to track down devices that use SMB1.
Microsoft marked SMB1 as deprecated in June 2013. Windows Server 2016 and Windows 10 version 1709 do not have SMB1 installed by default.
CIFS
In 1996, when Sun Microsystems announced WebNFS, Microsoft launched an initiative to rename SMB to Common Internet File System (CIFS) and added more features, including support for symbolic links, hard links, larger file sizes, and an initial attempt at supporting direct connections over TCP port 445 without requiring NetBIOS as a transport (a largely experimental effort that required further refinement). Microsoft submitted some partial specifications as Internet Drafts to the IETF. These submissions have since been expired.
SMB 2.0
Microsoft introduced a new version of the protocol (SMB 2.0 or SMB2) in 2006 with Windows Vista and Windows Server 2008. Although the protocol is proprietary, its specification has been published to allow other systems to interoperate with Microsoft operating systems that use the new protocol.
SMB2 reduces the 'chattiness' of the SMB 1.0 protocol by reducing the number of commands and subcommands from over a hundred to just nineteen. It has mechanisms for pipelining, that is, sending additional requests before the response to a previous request arrives, thereby improving performance over high-latency links. It adds the ability to compound multiple actions into a single request, which significantly reduces the number of round-trips the client needs to make to the server, improving performance as a result. SMB1 also has a compounding mechanism—known as AndX—to compound multiple actions, but Microsoft clients rarely use AndX. It also introduces the notion of "durable file handles": these allow a connection to an SMB server to survive brief network outages, as are typical in a wireless network, without having to incur the overhead of re-negotiating a new session.
SMB2 includes support for symbolic links. Other improvements include caching of file properties, improved message signing with the HMAC SHA-256 hashing algorithm, and better scalability by increasing the number of users, shares and open files per server, among others. The SMB1 protocol uses 16-bit data sizes, which, amongst other things, limits the maximum block size to 64K. SMB2 uses 32- or 64-bit wide storage fields, and 128 bits in the case of file-handles, thereby removing previous constraints on block sizes, which improves performance with large file transfers over fast networks.
Windows Vista/Server 2008 and later operating systems use SMB2 when communicating with other machines also capable of using SMB2. SMB1 continues in use for connections with older versions of Windows, as well as various vendors' NAS solutions. Samba 3.5 also includes experimental support for SMB2. Samba 3.6 fully supports SMB2, except for the modification of user quotas using the Windows quota management tools.
When SMB2 was introduced it brought a number of benefits over SMB1 for third-party implementers of SMB protocols. SMB1, originally designed by IBM, was reverse engineered, and later became part of a wide variety of non-Windows operating systems such as Xenix, OS/2 and VMS (Pathworks). X/Open standardized it partially; Microsoft had submitted Internet-Drafts describing SMB2 to the IETF, partly in response to formal IETF standardization of version 4 of the Network File System in December 2000 as IETF RFC 3010; however, those SMB-related Internet-Drafts expired without achieving any IETF standards-track approval or any other IETF endorsement. (See http://ubiqx.org/cifs/Intro.html for historical detail.) SMB2 is also a relatively clean break with the past. Microsoft's SMB1 code has to work with a large variety of SMB clients and servers. SMB1 features many versions of information for commands (selecting what structure to return for a particular request) because features such as Unicode support were retro-fitted at a later date. SMB2 involves significantly reduced compatibility-testing for implementers of the protocol. SMB2 code has considerably less complexity since far less variability exists (for example, non-Unicode code paths become redundant as SMB2 requires Unicode support).
Apple migrated to SMB2 (from their own Apple Filing Protocol, now legacy) starting with OS X (macOS) 10.9. The transition was fraught with compatibility problems, however. Non-default support for SMB2 had in fact appeared earlier, in OS X 10.7, when Apple abandoned Samba in favor of its own SMB implementation, called SMBX, after Samba adopted GPLv3. macOS also has supported the IETF Network File System (NFS) for many years (and continues to do so as of 2021).
The Linux kernel's CIFS client file system has SMB2 support since version 3.7.
SMB 2.1
SMB 2.1, introduced with Windows 7 and Server 2008 R2, introduced minor performance enhancements with a new opportunistic locking mechanism.
SMB 3.0
SMB 3.0 (previously named SMB 2.2) was introduced with Windows 8 and Windows Server 2012. It brought several significant changes that are intended to add functionality and improve SMB2 performance, notably in virtualized data centers:
the SMB Direct Protocol (SMB over remote direct memory access [RDMA])
SMB Multichannel (multiple connections per SMB session),
SMB Transparent Failover
It also introduces several security enhancements, such as end-to-end encryption and a new AES-based signing algorithm.
SMB 3.0.2
SMB 3.0.2 (known as 3.02 at the time) was introduced with Windows 8.1 and Windows Server 2012 R2; in those and later releases, the earlier SMB version 1 can be optionally disabled to increase security.
SMB 3.1.1
SMB 3.1.1 was introduced with Windows 10 and Windows Server 2016. This version supports AES-128 GCM encryption in addition to AES-128 CCM encryption added in SMB3, and implements pre-authentication integrity check using SHA-512 hash. SMB 3.1.1 also makes secure negotiation mandatory when connecting to clients using SMB 2.x and higher.
Specifications
The specifications for the SMB are proprietary and were initially closed, thereby forcing other vendors and projects to reverse-engineer the protocol in order to interoperate with it. The SMB 1.0 protocol was eventually published some time after it was reverse engineered, whereas the SMB 2.0 protocol was made available from Microsoft's Open Specifications Developer Center from the outset.
Third-party implementations
Samba
In 1991, Andrew Tridgell started the development of Samba, a free-software re-implementation (using reverse engineering) of the SMB/CIFS networking protocol for Unix-like systems, initially to implement an SMB server to allow PC clients running the DEC Pathworks client to access files on SunOS machines. Because of the importance of the SMB protocol in interacting with the widespread Microsoft Windows platform, Samba became a popular free software implementation of a compatible SMB client and server to allow non-Windows operating systems, such as Unix-like operating systems, to interoperate with Windows.
As of version 3 (2003), Samba provides file and print services for Microsoft Windows clients and can integrate with a Windows NT 4.0 server domain, either as a Primary Domain Controller (PDC) or as a domain member. Samba4 installations can act as an Active Directory domain controller or member server, at Windows 2008 domain and forest functional levels.
On Linux distributions, the cifs-utils package, available through the distribution's package manager, provides the userspace tools for mounting SMB/CIFS shares; the package is maintained by the Samba project.
Netsmb
NSMB (Netsmb and SMBFS) is a family of in-kernel SMB client and server implementations in BSD operating systems. It was first contributed to FreeBSD 4.4 by Boris Popov, and is now found in a wide range of other BSD systems including NetBSD and macOS. The implementations have diverged significantly ever since.
The macOS version of NSMB is notable for its now-common scheme of representing symlinks. This "Minshall-French" format shows symlinks as textual files with a distinctive extension and a magic number, always 1067 bytes long. This format is also used for storing symlinks on naive SMB servers or unsupported filesystems. Samba supports this format with an option. Docker on Windows also seems to use it.
NQ
NQ is a family of portable SMB client and server implementations developed by Visuality Systems, an Israel-based company established in 1998 by Sam Widerman, formerly the CEO of Siemens Data Communications. The NQ family comprises an embedded SMB stack (written in C), a Pure Java SMB Client, and a storage SMB Server implementation. All solutions support the latest SMB 3.1.1 dialect. NQ for Linux, NQ for WinCE, iOS, Android, VxWorks and other real-time operating systems are all supported by the configurable NQ solution.
MoSMB
MoSMB is a proprietary SMB implementation for Linux and other Unix-like systems, developed by Ryussi Technologies. It supports only SMB 2.x and SMB 3.x.
Tuxera SMB
Tuxera SMB is a proprietary SMB server implementation developed by Tuxera that can be run either in kernel or user space. It supports SMB 3.1.1 and previous versions.
Likewise
Likewise developed a CIFS/SMB implementation (versions 1.0, 2.0, 2.1 and NFS 3.0) in 2009 that provided a multiprotocol, identity-aware platform for network access to files used in OEM storage products built on Linux/Unix based devices. The platform could be used for traditional NAS, Cloud Gateway, and Cloud Caching devices for providing secure access to files across a network. Likewise was purchased by EMC Isilon in 2012.
CIFSD
CIFSD is an open-source, in-kernel CIFS/SMB server implementation for the Linux kernel. It has the following advantages over user-space implementations: it provides better performance, and some features, such as SMB Direct, are easier to implement. It supports SMB 3.1.1 and previous versions.
Security
Over the years, there have been many security vulnerabilities in Microsoft's implementation of the protocol or components on which it directly relies. Other vendors' security vulnerabilities lie primarily in a lack of support for newer authentication protocols like NTLMv2 and Kerberos in favor of protocols like NTLMv1, LanMan, or plaintext passwords. Real-time attack tracking shows that SMB is one of the primary attack vectors for intrusion attempts, for example the 2014 Sony Pictures attack, and the WannaCry ransomware attack of 2017. In 2020, two SMB high-severity vulnerabilities were disclosed and dubbed as SMBGhost (CVE-2020-0796) and SMBleed (CVE-2020-1206), which when chained together can provide RCE (Remote Code Execution) privilege to the attacker.
See also
References
Further reading
SMB specifications
Specifies the Common Internet File System (CIFS) Protocol, a cross-platform, transport-independent protocol that provides a mechanism for client systems to use file and print services made available by server systems over a network
Specifies the Server Message Block (SMB) Protocol, which defines extensions to the existing Common Internet File System (CIFS) specification that have been implemented by Microsoft since the publication of the CIFS specification.
Specifies the Server Message Block (SMB) Protocol Versions 2 and 3, which support the sharing of file and print resources between machines and extend the concepts from the Server Message Block Protocol.
Specifies the SMB2 Remote Direct Memory Access (RDMA) Transport Protocol, a wrapper for the existing SMB2 protocol that allows SMB2 packets to be delivered over RDMA-capable transports such as iWARP or Infiniband while utilizing the direct data placement (DDP) capabilities of these transports. Benefits include reduced CPU overhead, lower latency, and improved throughput.
Miscellaneous
Hertel, Christopher (2003). Implementing CIFS: The Common Internet File System. Prentice Hall. (Text licensed under the Open Publication License, v1.0 or later, available from the link above.)
Steven M. French, A New Network File System is Born: Comparison of SMB2, CIFS, and NFS, Linux Symposium 2007
Steve French, The Future of File Protocols: SMB2 Meets Linux, Linux Collaboration Summit 2012
External links
DFS section in "Windows Developer" documentation
the NT LM 0.12 dialect of SMB. In Microsoft Word format
Application layer protocols
Inter-process communication
Network file systems
Network protocols
Windows communication and services |
350705 | https://en.wikipedia.org/wiki/Layer%202%20Tunneling%20Protocol | Layer 2 Tunneling Protocol | In computer networking, Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. It uses encryption ('hiding') only for its own control messages (using an optional pre-shared secret), and does not provide any encryption or confidentiality of content by itself. Rather, it provides a tunnel for Layer 2 (which may be encrypted), and the tunnel itself may be passed over a Layer 3 encryption protocol such as IPsec.
History
Published in 2000 as proposed standard RFC 2661, L2TP has its origins primarily in two older tunneling protocols for point-to-point communication: Cisco's Layer 2 Forwarding Protocol (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP). A new version of this protocol, L2TPv3, appeared as proposed standard RFC 3931 in 2005. L2TPv3 provides additional security features, improved encapsulation, and the ability to carry data links other than simply Point-to-Point Protocol (PPP) over an IP network (for example: Frame Relay, Ethernet, ATM, etc.).
Description
The entire L2TP packet, including payload and L2TP header, is sent within a User Datagram Protocol (UDP) datagram. A virtue of transmission over UDP (rather than TCP) is that it avoids the "TCP meltdown problem". It is common to carry PPP sessions within an L2TP tunnel. L2TP does not provide confidentiality or strong authentication by itself. IPsec is often used to secure L2TP packets by providing confidentiality, authentication and integrity. The combination of these two protocols is generally known as L2TP/IPsec (discussed below).
The two endpoints of an L2TP tunnel are called the L2TP access concentrator (LAC) and the L2TP network server (LNS). The LNS waits for new tunnels. Once a tunnel is established, the network traffic between the peers is bidirectional. To be useful for networking, higher-level protocols are then run through the L2TP tunnel. To facilitate this, an L2TP session is established within the tunnel for each higher-level protocol such as PPP. Either the LAC or LNS may initiate sessions. The traffic for each session is isolated by L2TP, so it is possible to set up multiple virtual networks across a single tunnel.
The packets exchanged within an L2TP tunnel are categorized as either control packets or data packets. L2TP provides reliability features for the control packets, but no reliability for data packets. Reliability, if desired, must be provided by the nested protocols running within each session of the L2TP tunnel.
L2TP allows the creation of a virtual private dialup network (VPDN) to connect a remote client to its corporate network by using a shared infrastructure, which could be the Internet or a service provider's network.
Tunneling models
An L2TP tunnel can extend across an entire PPP session or only across one segment of a two-segment session. This can be represented by four different tunneling models, namely:
voluntary tunnel
compulsory tunnel — incoming call
compulsory tunnel — remote dial
L2TP multihop connection
L2TP packet structure
An L2TP packet consists of the fields listed below (a parsing sketch follows the list).
Field meanings:
Flags and Version – control flags indicating data/control packet and the presence of the length, sequence, and offset fields.
Length (optional) – total length of the message in bytes, present only when the length flag is set.
Tunnel ID – indicates the identifier for the control connection.
Session ID – indicates the identifier for a session within a tunnel.
Ns (optional) – sequence number for this data or control message, beginning at zero and incrementing by one (modulo 2^16) for each message sent. Present only when the sequence flag is set.
Nr (optional) – sequence number for the expected message to be received. Nr is set to the Ns of the last in-order message received plus one (modulo 2^16). In data messages, Nr is reserved and, if present (as indicated by the S bit), MUST be ignored upon receipt.
Offset Size (optional) – specifies where payload data is located past the L2TP header. If the offset field is present, the L2TP header ends after the last byte of the offset padding. This field exists if the offset flag is set.
Offset Pad (optional) – variable length, as specified by the offset size. Contents of this field are undefined.
Payload data – variable length (maximum payload size = maximum size of UDP packet − size of L2TP header).
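As a sketch of how these fields sit on the wire, the following parser follows the field order described above and assumes the RFC 2661 bit positions for the T, L, S and O flags; it is illustrative rather than a complete implementation:

```python
# Minimal sketch of parsing an L2TP header; RFC 2661 flag bit positions assumed.
import struct

def parse_l2tp_header(packet: bytes) -> dict:
    (flags_ver,) = struct.unpack_from("!H", packet, 0)
    hdr = {
        "is_control": bool(flags_ver & 0x8000),   # T bit: control vs data packet
        "has_length": bool(flags_ver & 0x4000),   # L bit: Length field present
        "has_seq":    bool(flags_ver & 0x0800),   # S bit: Ns/Nr fields present
        "has_offset": bool(flags_ver & 0x0200),   # O bit: Offset fields present
        "version":    flags_ver & 0x000F,
    }
    pos = 2
    if hdr["has_length"]:
        (hdr["length"],) = struct.unpack_from("!H", packet, pos); pos += 2
    hdr["tunnel_id"], hdr["session_id"] = struct.unpack_from("!HH", packet, pos); pos += 4
    if hdr["has_seq"]:
        hdr["ns"], hdr["nr"] = struct.unpack_from("!HH", packet, pos); pos += 4
    if hdr["has_offset"]:
        (offset_size,) = struct.unpack_from("!H", packet, pos); pos += 2
        pos += offset_size                         # skip the offset padding
    hdr["payload"] = packet[pos:]
    return hdr
```

Control packets set the T bit and always carry the Length and sequence fields, whereas these fields may be absent from data packets.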
L2TP packet exchange
When an L2TP connection is set up, many control packets are exchanged between the server and client to establish a tunnel and session for each direction. One peer requests the other peer to assign a specific tunnel and session ID through these control packets. Then, using this tunnel and session ID, data packets are exchanged with the compressed PPP frames as payload.
Before a tunnel and session are established in the voluntary tunneling method, a sequence of L2TP control messages is exchanged between the LAC and LNS for handshaking.
L2TP/IPsec
Because of the lack of confidentiality inherent in the L2TP protocol, it is often implemented along with IPsec. This is referred to as L2TP/IPsec, and is standardized in IETF RFC 3193. The process of setting up an L2TP/IPsec VPN is as follows:
Negotiation of IPsec security association (SA), typically through Internet key exchange (IKE). This is carried out over UDP port 500, and commonly uses either a shared password (so-called "pre-shared keys"), public keys, or X.509 certificates on both ends, although other keying methods exist.
Establishment of Encapsulating Security Payload (ESP) communication in transport mode. The IP protocol number for ESP is 50 (compare TCP's 6 and UDP's 17). At this point, a secure channel has been established, but no tunneling is taking place.
Negotiation and establishment of L2TP tunnel between the SA endpoints. The actual negotiation of parameters takes place over the SA's secure channel, within the IPsec encryption. L2TP uses UDP port 1701.
When the process is complete, L2TP packets between the endpoints are encapsulated by IPsec. Since the L2TP packet itself is wrapped and hidden within the IPsec packet, the original source and destination IP addresses are encrypted within the packet. Also, it is not necessary to open UDP port 1701 on firewalls between the endpoints, since the inner packets are not acted upon until after IPsec data has been decrypted and stripped, which only takes place at the endpoints.
A potential point of confusion in L2TP/IPsec is the use of the terms tunnel and secure channel. The term tunnel-mode refers to a channel which allows untouched packets of one network to be transported over another network. In the case of L2TP/PPP, it allows L2TP/PPP packets to be transported over IP. A secure channel refers to a connection within which the confidentiality of all data is guaranteed. In L2TP/IPsec, first IPsec provides a secure channel, then L2TP provides a tunnel. IPsec also specifies a tunnel protocol: this is not used when a L2TP tunnel is used.
Windows implementation
Windows has had native support (configurable in control panel) for L2TP since Windows 2000. Windows Vista added 2 alternative tools, an MMC snap-in called "Windows Firewall with Advanced Security" (WFwAS) and the "netsh advfirewall" command-line tool. One limitation with both of the WFwAS and netsh commands is that servers must be specified by IP address. Windows 10 added the "Add-VpnConnection" and "Set-VpnConnectionIPsecConfiguration" PowerShell commands. A registry key must be created on the client and server if the server is behind a NAT-T device.
L2TP in ISPs' networks
L2TP is often used by ISPs when reselling internet service delivered over, for example, ADSL or cable. From the end user, packets travel over a wholesale network service provider's network to a server called a Broadband Remote Access Server (BRAS), a protocol converter and router combined. On legacy networks the path from the end user's customer premises equipment to the BRAS may be over an ATM network.
From there on, over an IP network, an L2TP tunnel runs from the BRAS (acting as LAC) to an LNS which is an edge router at the boundary of the ultimate destination ISP's IP network. See example of reseller ISPs using L2TP.
RFC references
Cisco Layer Two Forwarding (Protocol) "L2F" (a predecessor to L2TP)
Point-to-Point Tunneling Protocol (PPTP)
Layer Two Tunneling Protocol "L2TP"
Implementation of L2TP Compulsory Tunneling via RADIUS
Secure Remote Access with L2TP
Layer Two Tunneling Protocol (L2TP) over Frame Relay
L2TP Disconnect Cause Information
Securing L2TP using IPsec
Layer Two Tunneling Protocol (L2TP): ATM access network
Layer Two Tunneling Protocol (L2TP) Differentiated Services
Layer Two Tunneling Protocol (L2TP) Over ATM Adaptation Layer 5 (AAL5)
Layer Two Tunneling Protocol "L2TP" Management Information Base
Layer Two Tunneling Protocol Extensions for PPP Link Control Protocol Negotiation
Layer Two Tunneling Protocol (L2TP) Internet Assigned Numbers: Internet Assigned Numbers Authority (IANA) Considerations Update
Signaling of Modem-On-Hold status in Layer 2 Tunneling Protocol (L2TP)
Layer 2 Tunneling Protocol (L2TP) Active Discovery Relay for PPP over Ethernet (PPPoE)
Layer Two Tunneling Protocol - Version 3 (L2TPv3)
Extensions to Support Efficient Carrying of Multicast Traffic in Layer-2 Tunneling Protocol (L2TP)
Fail Over Extensions for Layer 2 Tunneling Protocol (L2TP) "failover"
See also
IPsec
Layer 2 Forwarding Protocol
Point-to-Point Tunneling Protocol
Point-to-Point Protocol
Virtual Extensible LAN
References
External links
Implementations
Cisco: Cisco L2TP documentation, also read Technology brief from Cisco
Open source and Linux: xl2tpd, Linux RP-L2TP, OpenL2TP, l2tpns, l2tpd (inactive), Linux L2TP/IPsec server, FreeBSD multi-link PPP daemon, OpenBSD npppd(8), ACCEL-PPP - PPTP/L2TP/PPPoE server for Linux
Microsoft: built-in client included with Windows 2000 and higher; Microsoft L2TP/IPsec VPN Client for Windows 98/Windows Me/Windows NT 4.0
Apple: built-in client included with Mac OS X 10.3 and higher.
VPDN on Cisco.com
Other
IANA assigned numbers for L2TP
L2TP Extensions Working Group (l2tpext) - (where future standardization work is being coordinated)
Using Linux as an L2TP/IPsec VPN client
L2TP/IPSec with OpenBSD and npppd
Comparison of L2TP, PPTP and OpenVPN
Internet protocols
Internet Standards
Tunneling protocols
Virtual private networks |
350835 | https://en.wikipedia.org/wiki/Software%20protection%20dongle | Software protection dongle | A software protection dongle (commonly known as a dongle or key) is an electronic copy protection and content protection device. When connected to a computer or other electronics, they unlock software functionality or decode content. The hardware key is programmed with a product key or other cryptographic protection mechanism and functions via an electrical connector to an external bus of the computer or appliance.
In software protection, dongles are two-interface security tokens with transient data flow, using pull communication to read security data from the dongle. In the absence of these dongles, certain software may run only in a restricted mode, or not at all. Apart from software protection, dongles can enable functions in electronic devices, such as receiving and processing encoded video streams on television sets.
Etymology
The Merriam-Webster dictionary states that the "First known use of dongle" was in 1981 and that the etymology was "perhaps alteration of dangle."
Dongles rapidly evolved into active devices that contained a serial transceiver (UART) and even a microprocessor to handle transactions with the host. Later versions adopted the USB interface, which became the preferred choice over the serial or parallel interface.
A 1992 advertisement for Rainbow Technologies claimed the word dongle was derived from the name "Don Gall". Though untrue, this has given rise to an urban myth.
Usage
Efforts to introduce dongle copy-protection in the mainstream software market have met stiff resistance from users. Such copy-protection is more typically used with very expensive packages and vertical market software such as CAD/CAM software, cellphone flasher/JTAG debugger software, MICROS Systems hospitality and special retail software, digital audio workstation applications, and some translation memory packages.
In cases such as prepress and printing software, the dongle is encoded with a specific, per-user license key, which enables particular features in the target application. This is a form of tightly controlled licensing, which allows the vendor to engage in vendor lock-in and charge more than it would otherwise for the product. An example is the way Kodak licenses Prinergy to customers: When a computer-to-plate output device is sold to a customer, Prinergy's own license cost is provided separately to the customer, and the base price contains little more than the required licenses to output work to the device.
USB dongles are also a big part of Steinberg's audio production and editing systems, such as Cubase, WaveLab, Hypersonic, HALion, and others. The dongle used by Steinberg's products is also known as a Steinberg Key. The Steinberg Key can be purchased separately from its counterpart applications and generally comes bundled with the "Syncrosoft License Control Center" application, which is cross-platform compatible with both Mac OS X and Windows.
Some software developers use traditional USB flash drives as software license dongles that contain hardware serial numbers in conjunction with the stored device ID strings, which are generally not easily changed by an end-user. A developer can also use the dongle to store user settings or even a complete "portable" version of the application. Not all flash drives are suitable for this use, as not all manufacturers install unique serial numbers into their devices.
Although such a medium level of security may deter a casual hacker, the lack of a processor core in the dongle to authenticate data, perform encryption/decryption, and execute inaccessible binary code makes such a passive dongle inappropriate for all but the lowest-priced software. A simpler and even less secure option is to use unpartitioned or unallocated storage in the dongle to store license data. Common USB flash drives are relatively inexpensive compared to dedicated security dongle devices, but data read from or stored in a flash drive is easy to intercept, alter, and bypass.
Issues
There are potential weaknesses in the implementation of the protocol between the dongle and the copy-controlled software. It requires considerable cunning to make this hard to crack. For example, a simple implementation might define a function to check for the dongle's presence, returning "true" or "false" accordingly, but the dongle requirement can be easily circumvented by modifying the software to always answer "true".
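A deliberately naive sketch (all names and the marker-file check are invented for illustration) shows why such a single boolean gate is weak:

```python
# Hypothetical, deliberately naive licence gate: a single boolean check that an
# attacker can defeat by patching it to always report success.
import os
import sys

def is_dongle_present() -> bool:
    # Stand-in for a real driver/USB query; here we just look for a marker file.
    return os.path.exists("/dev/hypothetical_dongle")

def main() -> None:
    if not is_dongle_present():
        sys.exit("Hardware key not found; refusing to unlock full functionality.")
    print("Full functionality unlocked.")

if __name__ == "__main__":
    main()
```

Patching is_dongle_present to return True unconditionally, or inverting the equivalent branch in a compiled binary, defeats this kind of check entirely, which is why the stronger designs described below move secret data and even executable code into the dongle itself.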
Modern dongles include built-in strong encryption and use fabrication techniques designed to thwart reverse engineering. Typical dongles also now contain non-volatile memory — essential parts of the software may actually be stored and executed on the dongle. Thus dongles have become secure cryptoprocessors that execute program instructions that may be input to the cryptoprocessor only in encrypted form. The original secure cryptoprocessor was designed for copy protection of personal computer software (see US Patent 4,168,396, Sept 18, 1979) to provide more security than dongles could then provide. See also bus encryption.
Hardware cloning, where the dongle is emulated by a device driver, is also a threat to traditional dongles. To thwart this, some dongle vendors adopted smart card product, which is widely used in extremely rigid security requirement environments such as military and banking, in their dongle products.
A more innovative modern dongle is designed with a code porting process which transfers encrypted parts of the software vendor's program code or license enforcement into a secure hardware environment (such as in a smart card OS, mentioned above). An ISV can port thousands of lines of important computer program code into the dongle.
In addition, dongles have been criticized because as they are hardware, they are easily lost and prone to damage, potentially increasing operational costs such as device cost and delivery cost.
Game consoles
Some unlicensed titles for game consoles (such as Super 3D Noah's Ark or Little Red Hood) used dongles to connect to officially licensed ROM cartridges, in order to circumvent the authentication chip embedded in the console.
Some cheat code devices, such as the GameShark and Action Replay use a dongle. Typically it attaches to the memory card slot of the system, with the disc based software refusing to work if the dongle is not detected. The dongle is also used for holding settings and storage of new codes, added either by the user or through official updates, because the disc, being read only, cannot store them. Some dongles will also double as normal memory cards.
See also
Digital rights management
Hardware restrictions
License manager
Lock-out chip
Product activation
Security token
Trusted client
Software monetization
References
External links
Jargon File: dongle
Copyright infringement of software
Copy protection
Digital rights management
Proprietary hardware
Software licenses
Warez |
351091 | https://en.wikipedia.org/wiki/Transparency%20%28human%E2%80%93computer%20interaction%29 | Transparency (human–computer interaction) | Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behaviour. The purpose is to shield from change all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to visibility of the component's internals (as in a white box or open system). The term transparent is widely used in computing marketing as a substitute for the term invisible, since invisible has a bad connotation (usually seen as something that the user can't see and has no control over) while transparent has a good connotation (usually associated with not hiding anything). In the vast majority of cases, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process, which is also described by the term opaque, especially with regard to data structures. Because of this misleading and counter-intuitive definition, modern computer literature tends to prefer "agnostic" over "transparent".
The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer.
The term was also used, from around 1969, in IBM and Honeywell programming manuals to refer to a particular computer programming technique. Application code was transparent when it was clear of low-level detail (such as device-specific management) and contained only the logic for solving the main problem. This was achieved through encapsulation – putting the code into modules that hid internal details, making them invisible to the main application.
Examples
For example, the Network File System is transparent, because it introduces the access to files stored remotely on the network in a way uniform with previous local access to a file system, so the user might even not notice it while using the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an ftp client.
Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge; some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually.
In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example).
In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes.
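The following sketch illustrates the idea; the interface and class names are invented for the example and do not come from any particular library:

```python
# Illustrative data-access abstraction: callers use NoteStore without knowing
# which storage backend sits underneath (names are invented for the example).
import sqlite3
from abc import ABC, abstractmethod

class NoteStore(ABC):
    @abstractmethod
    def add(self, text: str) -> None: ...
    @abstractmethod
    def all(self) -> list[str]: ...

class MemoryNoteStore(NoteStore):
    def __init__(self) -> None:
        self._notes: list[str] = []
    def add(self, text: str) -> None:
        self._notes.append(text)
    def all(self) -> list[str]:
        return list(self._notes)

class SqliteNoteStore(NoteStore):
    def __init__(self, path: str = ":memory:") -> None:
        self._db = sqlite3.connect(path)
        self._db.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT)")
    def add(self, text: str) -> None:
        self._db.execute("INSERT INTO notes VALUES (?)", (text,))
        self._db.commit()
    def all(self) -> list[str]:
        return [row[0] for row in self._db.execute("SELECT text FROM notes")]

def record_meeting(store: NoteStore) -> None:
    store.add("Agreed on Tuesday's agenda")   # works identically with either backend
```

Code written against NoteStore keeps working unchanged whether the notes live in memory or in SQLite, which is exactly the kind of transparency the abstraction layer is meant to provide.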
Types of transparency in distributed system
Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system.
There are many types of transparency:
Access transparency – Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way. Example: SQL Queries
Location transparency – Users of a distributed system should not have to be aware of where a resource is physically located. Example: Pages in the Web
Migration transparency – Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
Relocation transparency – Should a resource move while in use, this should not be noticeable to the end user.
Replication transparency – If a resource is replicated among several locations, it should appear to the user as a single resource.
Concurrent transparency – While multiple users may compete for and share a single resource, this should not be apparent to any of them.
Failure transparency – Always try to hide any failure and recovery of computing entities and resources.
Persistence transparency – Whether a resource lies in volatile or permanent memory should make no difference to the user.
Security transparency – Negotiation of cryptographically secure access of resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity.
Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).
The degree to which these properties can or should be achieved may vary widely. Not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light there will always be more latency on accessing resources distant from the user. If one expects real-time interaction with the distributed system, this may be very noticeable.
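A quick calculation shows the floor that physics places on such latency (in practice, signals in optical fibre travel at roughly two-thirds of the vacuum speed of light, so real round trips are longer still):

```python
# Lower bound on round-trip latency imposed by the speed of light in vacuum.
SPEED_OF_LIGHT_KM_S = 299_792.458

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000

print(round(min_rtt_ms(10_000), 1), "ms")   # ~66.7 ms round trip for a 10,000 km path
```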
References
Human–computer interaction
Distributed computing architecture
Software architecture |
351541 | https://en.wikipedia.org/wiki/Virtual%20Network%20Computing | Virtual Network Computing | In computing, Virtual Network Computing (VNC) is a graphical desktop-sharing system that uses the Remote Frame Buffer protocol (RFB) to remotely control another computer. It transmits the keyboard and mouse input from one computer to another, relaying the graphical-screen updates, over a network.
VNC is platform-independent – there are clients and servers for many GUI-based operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.
VNC was originally developed at the Olivetti & Oracle Research Lab in Cambridge, United Kingdom. The original VNC source code and many modern derivatives are open source under the GNU General Public License.
There are a number of variants of VNC which offer their own particular functionality; e.g., some optimised for Microsoft Windows, or offering file transfer (not part of VNC proper), etc. Many are compatible (without their added features) with VNC proper in the sense that a viewer of one flavour can connect with a server of another; others are based on VNC code but not compatible with standard VNC.
VNC and RFB are registered trademarks of RealVNC Ltd. in the US and some other countries.
History
The Olivetti & Oracle Research Lab (ORL) at Cambridge in the UK developed VNC at a time when Olivetti and Oracle Corporation owned the lab. In 1999, AT&T acquired the lab, and in 2002 closed down the lab's research efforts.
Developers who worked on VNC while still at the AT&T Research Lab include:
Tristan Richardson (inventor)
Andy Harter (project leader)
Quentin Stafford-Fraser
James Weatherall
Andy Hopper
Following the closure of ORL in 2002, several members of the development team (including Richardson, Harter, Weatherall and Hopper) formed RealVNC in order to continue working on open-source and commercial VNC software under that name.
The original GPLed source code has fed into several other versions of VNC. Such forking has not led to compatibility problems because the RFB protocol is designed to be extensible. VNC clients and servers negotiate their capabilities with handshaking in order to use the most appropriate options supported at both ends.
RealVNC Ltd claims the term "VNC" as a registered trademark in the United States and in other countries.
Etymology
The name Virtual Network Computer/Computing (VNC) originated with ORL's work on a thin client called the Videotile, which also used the RFB protocol. The Videotile had an LCD display with pen input and a fast ATM connection to the network. At the time, network computer was commonly used as a synonym for a thin client; VNC is essentially a software-only (i.e. virtual) network computer.
Operation
The VNC server is the program on the machine that shares some screen (and may not be related to a physical display – the server can be "headless"), and allows the client to share control of it.
The VNC client (or viewer) is the program that represents the screen data originating from the server, receives updates from it, and presumably controls it by informing the server of collected local input.
The VNC protocol (RFB protocol) is very simple, based on transmitting one graphic primitive from server to client ("Put a rectangle of pixel data at the specified X,Y position") and event messages from client to server.
In the normal method of operation a viewer connects to a port on the server (default port: 5900). Alternatively (depending on the implementation) a browser can connect to the server (default port: 5800). And a server can connect to a viewer in "listening mode" on port 5500. One advantage of listening mode is that the server site does not have to configure its firewall to allow access on port 5900 (or 5800); the duty is on the viewer, which is useful if the server site has no computer expertise and the viewer user is more knowledgeable.
The server sends small rectangles of the framebuffer to the client. In its simplest form, the VNC protocol can use a lot of bandwidth, so various methods have been devised to reduce the communication overhead. For example, there are various encodings (methods to determine the most efficient way to transfer these rectangles). The VNC protocol allows the client and server to negotiate which encoding they will use. The simplest encoding, supported by all clients and servers, is raw encoding, which sends pixel data in left-to-right scanline order, and after the original full screen has been transmitted, transfers only rectangles that change. This encoding works very well if only a small portion of the screen changes from one frame to the next (as when a mouse pointer moves across a desktop, or when text is written at the cursor), but bandwidth demands get very high if a lot of pixels change at the same time (such as when scrolling a window or viewing full-screen video).
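A quick estimate shows why raw encoding struggles when most of the screen changes; the resolution, pixel depth and frame rate below are illustrative assumptions:

```python
# Rough bandwidth needed to ship raw-encoded full-screen updates (illustrative numbers).
def raw_update_mbps(width: int, height: int, bytes_per_pixel: int, fps: float) -> float:
    return width * height * bytes_per_pixel * fps * 8 / 1_000_000

print(raw_update_mbps(1920, 1080, 4, 30))   # ~1990 Mbit/s for 30 fps full-screen changes
```

By contrast, a moving mouse pointer only dirties a few small rectangles per frame, which is why raw encoding remains workable for ordinary desktop use.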
VNC by default uses TCP port 5900+N, where N is the display number (usually :0 for a physical display). Several implementations also start a basic HTTP server on port 5800+N to provide a VNC viewer as a Java applet, allowing easy connection through any Java-enabled web-browser. Different port assignments can be used as long as both client and server are configured accordingly. An HTML5 VNC client implementation for modern browsers (no plugins required) also exists.
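As a small sketch of the port numbering and the very first exchange (the host name and display number are placeholders), a client connecting to display N on TCP port 5900+N first receives the server's 12-byte RFB protocol version string:

```python
# Minimal sketch: connect to a VNC display and read the 12-byte RFB version
# banner (e.g. b"RFB 003.008\n"). Host and display number are placeholders.
import socket

def read_rfb_version(host: str, display: int = 0, timeout: float = 5.0) -> str:
    port = 5900 + display                      # default VNC port numbering
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(12)                 # ProtocolVersion message is 12 bytes
        return banner.decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    print(read_rfb_version("localhost", display=0))
```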
Although possible even on low bandwidth, using VNC over the Internet is facilitated if the user has a broadband connection at both ends. However, it may require advanced network address translation (NAT), firewall and router configuration such as port forwarding in order for the connection to go through. Users may establish communication through virtual private network (VPN) technologies to ease usage over the Internet, or as a LAN connection if VPN is used as a proxy, or through a VNC repeater (useful in presence of a NAT).
Xvnc is the Unix VNC server, which is based on a standard X server. To applications, Xvnc appears as an X "server" (i.e., it displays client windows), and to remote VNC users it is a VNC server. Applications can display themselves on Xvnc as if it were a normal X display, but they will appear on any connected VNC viewers rather than on a physical screen. Alternatively, a machine (which may be a workstation or a network server) with screen, keyboard, and mouse can be set up to boot and run the VNC server as a service or daemon, then the screen, keyboard, and mouse can be removed and the machine stored in an out-of-the way location.
In addition, the display that is served by VNC is not necessarily the same display seen by a user on the server. On Unix/Linux computers that support multiple simultaneous X11 sessions, VNC may be set to serve a particular existing X11 session, or to start one of its own. It is also possible to run multiple VNC sessions from the same computer. On Microsoft Windows the VNC session served is always the current user session.
Users commonly deploy VNC as a cross-platform remote desktop system. For example, Apple Remote Desktop for Mac OS X (and more recently, "Back to My Mac" in 'Leopard' - Mac OS X 10.5) interoperates with VNC and will connect to a Unix user's current desktop if it is served with x11vnc, or to a separate X11 session if one is served with TightVNC. From Unix, TightVNC will connect to a Mac OS X session served by Apple Remote Desktop if the VNC option is enabled, or to a VNC server running on Microsoft Windows.
In July 2014 RealVNC published a Wayland developer preview.
Security
By default, RFB is not a secure protocol. While passwords are not sent in plain-text (as in telnet), cracking could prove successful if both the encryption key and encoded password were sniffed from a network. For this reason it is recommended that a password of at least 8 characters be used. On the other hand, there is also an 8-character limit on some versions of VNC; if a password is sent exceeding 8 characters, the excess characters are removed and the truncated string is compared to the password.
UltraVNC supports the use of an open-source encryption plugin which encrypts the entire VNC session including password authentication and data transfer. It also allows authentication to be performed based on NTLM and Active Directory user accounts. However, use of such encryption plugins makes it incompatible with other VNC programs. RealVNC offers high-strength AES encryption as part of its commercial package, along with integration with Active Directory. Workspot released AES encryption patches for VNC. According to TightVNC, TightVNC is not secure as picture data is transmitted without encryption. To circumvent this, it should be tunneled through an SSH connection (see below).
VNC may be tunneled over an SSH or VPN connection which would add an extra security layer with stronger encryption. SSH clients are available for most platforms; SSH tunnels can be created from UNIX clients, Microsoft Windows clients, Macintosh clients (including Mac OS X and System 7 and up) – and many others. There are also freeware applications that create instant VPN tunnels between computers.
An additional security concern for the use of VNC is to check whether the version used requires authorization from the remote computer owner before someone takes control of their device. This avoids the situation in which the owner of the accessed computer discovers that someone has taken control of their device without prior notice.
See also
Comparison of remote desktop software
LibVNCServer
LinkVNC
PocketVNC
RealVNC
Remmina
SPICE
TigerVNC
TightVNC
VirtualGL#TurboVNC
UltraVNC
Vinagre
References
External links
RFB 3.8 Protocol Standard
AT&T VNC - Original AT&T-Cambridge VNC website
Free network-related software
Remote desktop protocols |
352500 | https://en.wikipedia.org/wiki/1917%20in%20science | 1917 in science | The year 1917 in science and technology involved some significant events, listed below.
Biology
D'Arcy Wentworth Thompson's On Growth and Form is published.
Mathematics
Paul Ehrenfest gives a conditional principle for a three-dimensional space.
Medicine
Shinobu Ishihara publishes his color perception test.
Julius Wagner-Jauregg discovers malarial pyrotherapy for general paresis of the insane.
Physics
Albert Einstein introduces the idea of stimulated radiation emission.
Ernest Rutherford (at the Victoria University of Manchester) achieves nuclear transmutation of nitrogen into oxygen, using alpha particles directed at nitrogen (¹⁴N + α → ¹⁷O + p), the first observation of a nuclear reaction, in which he also discovers and names the proton.
Technology
September 13 – Release in the United States of the first film made in Technicolor System 1, a two-color process, The Gulf Between.
Alvin D. and Kelvin Keech introduce the "banjulele-banjo", an early form of the banjolele.
Gilbert Vernam jointly reinvents the one-time pad encryption system.
Awards
Nobel Prize
Physics – Charles Glover Barkla (announced 12 November 1918; presented 1 June 1920)
Chemistry – not awarded
Medicine – not awarded
Births
January 19 – Graham Higman (died 2008), English mathematician.
January 25 – Ilya Prigogine (died 2003), Russian-born winner of the Nobel Prize in Chemistry.
February 14 – Herbert A. Hauptman (died 2011), American mathematical biophysicist, winner of the Nobel Prize in Chemistry.
March 23 – Howard McKern (died 2009), Australian analytical and organic chemist.
March 24 – John Kendrew (died 1997), English molecular biologist, winner of the Nobel Prize in Chemistry.
April 10 – Robert Burns Woodward (died 1979), American organic chemist, winner of the Nobel Prize in Chemistry.
April 18 – Brian Harold Mason (died 2009), New Zealand born geochemist and mineralogist who was one of the pioneers in the study of meteorites.
May 14 – W. T. Tutte (died 2002), English-born mathematician and cryptanalyst.
June 1 – William S. Knowles (died 2012), American winner of the Nobel Prize in Chemistry.
June 2 – Heinz Sielmann (died 2006), German zoological filmmaker.
June 15 – John Fenn (died 2010), American analytical chemist, winner of the Nobel Prize in Chemistry.
July 1 – Humphry Osmond (died 2004), English-born psychiatrist.
July 15 – Walter S. Graf (died 2015), American cardiologist and pioneer of paramedic emergency medical services.
July 22 – H. Boyd Woodruff (died 2017), American microbiologist.
August 21 – Xu Shunshou (died 1968), Chinese aeronautical engineer.
September 23 – Asima Chatterjee, née Mookerjee (died 2006), Indian organic chemist.
October 2 – Christian de Duve (died 2013), English-born Belgian biologist, winner of the Nobel Prize in Physiology or Medicine
October 8 – Rodney Porter (died 1985), English biochemist, winner of the Nobel Prize in Physiology or Medicine.
November 22 – Andrew Huxley (died 2012), English winner of the Nobel Prize in Physiology or Medicine.
December 9 – James Rainwater (died 1986), American winner of the Nobel Prize in Physics.
December 16 – Arthur C. Clarke (died 2008), English-born science fiction author and inventor.
December 20 – David Bohm (died 1992), American-born theoretical physicist, philosopher and neuropsychologist.
Deaths
February 11 – Laura Forster (born 1858), Australian physician, died on war service.
March 8 – Ferdinand von Zeppelin (born 1838), German founder of the Zeppelin airship company.
March 31 – Emil von Behring (born 1854), German physiologist, winner of the Nobel Prize in Physiology or Medicine in 1901.
July 27 – Emil Theodor Kocher (born 1841), Swiss surgeon, winner of the Nobel Prize in Physiology or Medicine in 1909.
August 3 – Ferdinand Georg Frobenius (born 1849), German mathematician.
December 17 – Elizabeth Garrett Anderson (born 1836), English physician.
References
20th century in science
1910s in science |
352709 | https://en.wikipedia.org/wiki/Feistel%20cipher | Feistel cipher | In cryptography, a Feistel cipher (also known as Luby–Rackoff block cipher) is a symmetric structure used in the construction of block ciphers, named after the German-born physicist and cryptographer Horst Feistel, who did pioneering research while working for IBM (USA); it is also commonly known as a Feistel network. A large proportion of block ciphers use the scheme, including the US Data Encryption Standard, the Soviet/Russian GOST and the more recent Blowfish and Twofish ciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times.
History
Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM's Lucifer cipher, designed by Horst Feistel and Don Coppersmith in 1973. Feistel networks gained respectability when the U.S. Federal Government adopted the DES (a cipher based on Lucifer, with changes made by the NSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design).
Design
A Feistel network uses a round function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block. In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such as substitution–permutation networks is that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible. Furthermore, the encryption and decryption operations are very similar, even identical in some cases, requiring only a reversal of the key schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved.
Theoretical work
The structure and properties of Feistel ciphers have been extensively analyzed by cryptographers.
Michael Luby and Charles Rackoff analyzed the Feistel cipher construction and proved that if the round function is a cryptographically secure pseudorandom function, with Ki used as the seed, then 3 rounds are sufficient to make the block cipher a pseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who gets oracle access to its inverse permutation). Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers.
Further theoretical work has generalized the construction somewhat and given more precise bounds for security.
Construction details
Let F be the round function and let K_0, K_1, ..., K_n be the sub-keys for the rounds 0, 1, ..., n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces: (L_0, R_0).
For each round i = 0, 1, ..., n, compute
L_{i+1} = R_i
R_{i+1} = L_i ⊕ F(R_i, K_i)
where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}).
Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing, for i = n, n−1, ..., 0,
R_i = L_{i+1}
L_i = R_{i+1} ⊕ F(L_{i+1}, K_i)
Then (L_0, R_0) is the plaintext again.
The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.
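A toy implementation makes the symmetry concrete; the round function below is an arbitrary stand-in chosen only so the example runs, not a cryptographically sound design:

```python
# Toy Feistel network on 64-bit blocks (two 32-bit halves).
# The round function F is a deliberately simple stand-in, not a secure design.
MASK32 = 0xFFFFFFFF

def F(half: int, subkey: int) -> int:
    x = (half * 0x9E3779B1 + subkey) & MASK32        # arbitrary mixing
    return ((x << 7) | (x >> 25)) & MASK32           # 32-bit rotate left by 7

def feistel(block: int, subkeys: list[int]) -> int:
    left, right = block >> 32, block & MASK32
    for k in subkeys:
        left, right = right, left ^ F(right, k)      # L_{i+1}=R_i, R_{i+1}=L_i xor F(R_i,K_i)
    return (right << 32) | left                      # output is (R_{n+1}, L_{n+1})

def encrypt(block: int, subkeys: list[int]) -> int:
    return feistel(block, subkeys)

def decrypt(block: int, subkeys: list[int]) -> int:
    return feistel(block, subkeys[::-1])             # same network, reversed key schedule

keys = [0x1234, 0xBEEF, 0xC0FFEE, 0x42]
ct = encrypt(0x0123456789ABCDEF, keys)
assert decrypt(ct, keys) == 0x0123456789ABCDEF
```

Note that decrypt simply calls the same network with the subkeys reversed, mirroring the observation above that reversing the key schedule is the only difference between encryption and decryption.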
Unbalanced Feistel cipher
Unbalanced Feistel ciphers use a modified structure where and are not of equal lengths. The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.
The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.
Other uses
The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes.
A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).
Feistel networks as a design component
Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.
List of Feistel ciphers
Feistel or modified Feistel:
Blowfish
Camellia
CAST-128
DES
FEAL
GOST 28147-89
ICE
KASUMI
LOKI97
Lucifer
MARS
MAGENTA
MISTY1
RC5
Simon
TEA
Triple DES
Twofish
XTEA
Generalised Feistel:
CAST-256
CLEFIA
MacGuffin
RC2
RC6
Skipjack
SMS4
See also
Cryptography
Stream cipher
Substitution–permutation network
Lifting scheme for discrete wavelet transform has pretty much the same structure
Format-preserving encryption
Lai–Massey scheme
References
Cryptography
Feistel ciphers |
354209 | https://en.wikipedia.org/wiki/Bull%20Run | Bull Run | Bull Run or Bullrun may refer to:
Military
First Battle of Bull Run (First Manassas), 1861, the first major battle of the American Civil War
Second Battle of Bull Run (Second Manassas), 1862, a later battle also at Bull Run
Operation Bull Run, a military operation of the Iraq War and part of Operation Marne Torch
Bullrun (decryption program), a secret anti-encryption program run by the US National Security Agency (NSA)
USNS Bull Run (T-AO-156), an oil tanker
Places in the United States
Virginia
Bull Run (Occoquan River tributary), a stream in Fairfax, Loudoun, and Prince William counties, site of the Civil War battles
Bull Run, Fairfax County, Virginia, a census-designated place west of Centreville, east of the stream; See U.S. Route 29 in Virginia
Bull Run, Prince William County, Virginia, a census-designated place northwest of Manassas, west of the stream
Bull Run Mountain Estates, Virginia, a census-designated place in Prince William County, southwest of the stream
Bull Run Regional Park, on the stream in Fairfax County
Bull Run Mountains, a mountain range in Fauquier, Loudoun, and Prince William counties
Elsewhere
Bull Run Mountains (Nevada), a mountain range in Elko County
Bull Run (Deep River tributary), a stream in Guilford County, North Carolina
Bull Run River (Oregon)
Bull Run Lake, a reservoir, an impoundment of the river
Bull Run Hydroelectric Project, a former dam project on the river
Bull Run, Oregon, an unincorporated community named for the river
Bull Run National Forest, a former national forest
Bull Run Creek, a stream in South Dakota
Bull Run Fossil Plant also known as Bull Run Steam Plant, a coal-fired electric generating station owned by the Tennessee Valley Authority
Entertainment
Bull Run (novel), a young adult novel by Paul Fleischman about the First Battle of Bull Run
Bullrun Rally, an automobile rally in North America
Bullrun (TV series), a reality television show on Spike TV based on the Bullrun road rally
Bull running, a defunct event once common in England, in which townsfolk chased down a bull then slaughtered it
Stamford bull run, the last surviving such event, ending in 1839
Tutbury bull run
Running of the bulls, events in Spain, Portugal, France, and Mexico, in which people run in front of a number of bulls, often used in bullfighting after the run
Other uses
Bull market or bull run, a rising market trend in economics
See also |
354414 | https://en.wikipedia.org/wiki/NortonLifeLock | NortonLifeLock | NortonLifeLock Inc., formerly known as Symantec Corporation, is an American software company headquartered in Tempe, Arizona, United States. The company provides cybersecurity software and services. NortonLifeLock is a Fortune 500 company and a member of the S&P 500 stock-market index. The company also has development centers in Pune, Chennai and Bangalore.
On October 9, 2014, Symantec declared it would split into two independent publicly traded companies by the end of 2015. One company would focus on security, the other on information management. On January 29, 2016, Symantec sold its information-management subsidiary, named Veritas Technologies (which Symantec had acquired in 2004) to The Carlyle Group.
The name "Symantec" is a portmanteau of the words "syntax" and "semantics" with "technology".
On August 9, 2019, Broadcom Inc. announced they would be acquiring the Enterprise Security software division of Symantec for $10.7 billion, after having attempted to purchase the whole company. The sale closed November 4, 2019, and subsequently, the company adopted the NortonLifeLock name. It also relocated its headquarters to Tempe, Arizona from Mountain View, California.
History
1982 to 1989
Founded in 1982 by Gary Hendrix with a National Science Foundation grant, Symantec was originally focused on artificial intelligence-related projects, including a database program. Hendrix hired several Stanford University natural language processing researchers as the company's first employees.
In 1984, it became clear that the advanced natural language and database system that Symantec had developed could not be ported from DEC minicomputers to the PC. This left Symantec without a product, but with expertise in natural language database query systems and technology. As a result, later in 1984 Symantec was acquired by another, smaller software startup company, C&E Software, founded by Denis Coleman and Gordon Eubanks and headed by Eubanks. C&E Software developed a combined file management and word processing program called Q&A. Barry Greenstein, now a professional poker player, was the principal developer of the word processor component within Q&A.
The merged company retained the name Symantec. Eubanks became its chairman, and Vern Raburn, the former president of the original Symantec, remained as president of the combined company. The new Symantec combined the file management and word processing functionality that C&E had planned, and added an advanced Natural Language query system (designed by Gary Hendrix and engineered by Dan Gordon) that set new standards for ease of database query and report generation. The natural language system was named "The Intelligent Assistant". Turner chose the name of Q&A for Symantec's flagship product, in large part because the name lent itself to use in a short, easily merchandised logo. Brett Walter designed the user interface of Q&A (Brett Walter, director of product management). Q&A was released in November 1985.
During 1986, Vern Raburn and Gordon Eubanks swapped roles, and Eubanks became CEO and president of Symantec, while Raburn became its chairman. After this change, Raburn had little involvement with Symantec, and in a few years, Eubanks added the chairmanship to his other roles. After a slow start for sales of Q&A in the fall of 1985 and spring of 1986, Turner signed up a new advertising agency called Elliott/Dickens, embarked on an aggressive new advertising campaign, and came up with the "Six Pack Program" in which all Symantec employees, regardless of role, went on the road, training and selling dealer sales staff nationwide in the United States. Turner named it Six Pack because employees were to work six days a week, see six dealerships per day, train six sales representatives per store and stay with friends free or at Motel 6. Simultaneously, a promotion was run jointly with SofSell (which was Symantec's exclusive wholesale distributor in the United States for the first year that Q&A was on the market). This promotion was very successful in encouraging dealers to try Q&A.
During this time, Symantec was advised by its board members Jim Lally and John Doerr that if it would cut its expenses and grow revenues enough to achieve cash flow break-even, then Kleiner Perkins Caufield & Byers would back the company in raising more venture capital. To accomplish this, the management team worked out a salary reduction schedule where the chairman and the CEO would take zero pay, all vice presidents would take a 50% pay cut, and all other employees' pay was cut by 15%. Two employees were laid off. Eubanks also negotiated a sizable rent reduction on the office space the company had leased in the days of the original Symantec. These expense reductions, combined with strong international sales of Q&A, enabled the company to attain break-even.
The significantly increased traction for Q&A from this re-launch grew Symantec's revenues substantially, along with early success for Q&A in international markets (uniquely a German version was shipped three weeks after the United States version, and it was the first software in the world that supported German Natural Language) following Turner's having emphasized establishing international sales distribution and multiple language versions of Q&A from the initial shipment.
In 1985, Rod Turner negotiated the publishing agreement with David Whitney for Symantec's second product, which Turner named NoteIt (an annotation utility for Lotus 1-2-3). It was evident to Turner that NoteIt would confuse the dealer channel if it was launched under the Symantec name because Symantec had built up interest by that stage in Q&A (but not yet shipped it), and because the low price for the utility would not be initially attracted to the dealer channel until demand had been built up. Turner felt that the product should be marketed under a unique brand name.
Turner and Gordon E. Eubanks Jr., then chairman of Symantec Corporation, agreed to form a new division of Symantec, and Eubanks delegated the choice of name to Turner. Turner chose the name Turner Hall Publishing, to be a new division of Symantec devoted to publishing third-party software and hardware. The objective of the division was to diversify revenues and accelerate the growth of Symantec. Turner chose the name Turner Hall Publishing, using his last name and that of Dottie Hall (Director of Marketing Communications) to convey the sense of a stable, long-established, company. Turner Hall Publishing's first offering was Note-It, a notation utility add-in for Lotus 1-2-3, which was developed by David Whitney, and licensed to Symantec. Its second product was the Turner Hall Card, which was a 256k RAM, half slot memory card, initially made to inexpensively increase the available memory for Symantec's flagship product, Q&A. The Turner Hall division also marketed the card as a standalone product. Turner Hall's third product, also a 1-2-3 add-in was SQZ! a Lotus 1-2-3 spreadsheet compression utility developed by Chris Graham Synex Systems. In the summer of 1986 Eubanks and Turner recruited Tom Byers from Digital Research, to expand the Turner Hall Publishing product family and lead the Turner Hall effort.
By the winter of 1986–87, the Turner Hall Publishing division had achieved success with NoteIt, the Turner Hall Card and SQZ!. The popularity of these products, while contributing a relatively small portion of revenues to Symantec, conveyed the impression that Symantec was already a diversified company, and indeed, many industry participants were under the impression that Symantec had acquired Turner Hall Publishing. In 1987, Byers recruited Ted Schlein into the Turner Hall Product Group to assist in building the product family and in marketing.
Revenues from Q&A, and Symantec's early launch into the international marketplace, combined with Turner Hall Publishing, generated the market presence and scale that enabled Symantec to make its first merger/acquisition, in February 1987, that of Breakthrough Software, maker of the TimeLine project management software for DOS. Because this was the first time that Symantec had acquired a business that had revenues, inventory, and customers, Eubanks chose to change nothing at BreakThrough Software for six months, and the actual merger logistics started in the summer of 1987, with Turner being appointed by Eubanks as general manager of the TimeLine business unit, Turner was made responsible for the successful integration of the company into Symantec and ongoing growth of the business, with P&L. There was a heavy emphasis placed on making the minimum disruption by Eubanks and Turner.
Soon after the acquisition of TimeLine/Breakthrough Software, Eubanks reorganized Symantec, structuring the company around product-centric groups, each having its development, quality assurance, technical support, and product marketing functions, and a general manager with profit and loss responsibility. Sales, finance, and operations were centralized functions that were shared. This structure lent itself well to Symantec's further growth through mergers and acquisitions. Eubanks made Turner general manager of the new TimeLine Product Group, and simultaneously of the Q&A Product Group, and made Tom Byers general manager of the Turner Hall Product Group. Turner continued to build and lead the company's international business and marketing for the whole company.
At the TimeLine Product Group, Turner drove strong marketing, promotion and sales programs to accelerate momentum. By 1989 this merger was very successful—product group morale was high, TimeLine development continued apace, and the increased sales and marketing efforts applied built the TimeLine into the clear market lead in PC project management software on DOS. Both the Q&A and TimeLine product groups were healthily profitable. The profit stream and merger success set the stage for subsequent merger and acquisition activity by the company, and indeed funded the losses of some of the product groups that were subsequently acquired. In 1989, Eubanks hired John Laing as VP worldwide sales, and Turner transferred the international division to Laing. Eubanks also recruited Bob Dykes to be executive vice president for operations and finance, in anticipation of the upcoming IPO. In July 1989 Symantec had its IPO.
1990 to 1999
In May 1990, Symantec announced its intent to merge with and acquire Peter Norton Computing, a developer of various utilities for DOS. Turner was appointed as product group manager for the Norton business, and made responsible for the merger, with P&L responsibility. Ted Schlein was made product group manager for the Q&A business.
The Peter Norton group merger logistical effort began immediately while the companies sought approval for the merger, and in August 1990, Symantec concluded the purchase—by this time the combination of the companies was already complete. Symantec's consumer antivirus and data management utilities are still marketed under the Norton name. At the time of the merger, Symantec had built upon its Turner Hall Publishing presence in the utility market, by introducing Symantec Antivirus for the Macintosh (SAM), and Symantec Utilities for the Macintosh (SUM). These two products were already market leaders on the Mac, and this success made the Norton merger more strategic. Symantec had already begun the development of a DOS-based antivirus program one year before the merger with Norton. The management team had decided to enter the antivirus market in part because it was felt that the antivirus market entailed a great deal of ongoing work to stay ahead of new viruses. The team felt that Microsoft would be unlikely to find this effort attractive, which would lengthen the viability of the market for Symantec. Turner decided to use the Norton name for obvious reasons, on what became the Norton Antivirus, which Turner and the Norton team launched in 1991. At the time of the merger, Norton revenues were approximately 20 to 25% of the combined entity. By 1993, while being led by Turner, Norton product group revenues had grown to be approximately 82% of Symantec's total.
At one time Symantec was also known for its development tools, particularly the THINK Pascal, THINK C, Symantec C++, Enterprise Developer and Visual Cafe packages that were popular on the Macintosh and IBM PC compatible platforms. These product lines resulted from acquisitions made by the company in the late 1980s and early 1990s. These businesses and the Living Videotext acquisition were consistently unprofitable for Symantec, and these losses diverted expenditures away from both the Q&A for Windows and the TimeLine for Windows development efforts during the critical period from 1988 through 1992. Symantec exited this business in the late-1990s as competitors such as Metrowerks, Microsoft and Borland gained significant market share.
In 1996, Symantec Corporation was accused of issuing misleading financial statements in violation of GAAP.
2000 to present
From 1999 to April 2009, Symantec was led by CEO John W. Thompson, a former VP at IBM. At the time, Thompson was the only African-American leading a major US technology company. He was succeeded in April 2009 by the company's long-time Symantec executive Enrique Salem. Under Salem, Symantec completed the acquisition of Verisign's Certificate Authority business, dramatically increasing their share of that market.
In 2009, Symantec released a list of the then "100 dirtiest websites", which contain the most malware as detected by Norton Safe Web.
Salem was abruptly fired in 2012 for disappointing earnings performance and replaced by Steve Bennett, a former CEO of Intuit and GE executive. In January 2013, Bennett announced a major corporate reorganization, with a goal of reducing costs and improving Symantec's product line. He said that sales and marketing "had been high costs but did not provide quality outcomes". He concluded that "Our system is just broken".
Robert Enderle of CIO.com reviewed the reorganization and noted that Bennett was following the General Electric model of being product-focused instead of customer-focused. He concluded "Eliminating middle management removes a large number of highly paid employees. This will tactically improve Symantec's bottom line but reduce the skills needed to ensure high-quality products in the long term."
In March 2014, Symantec fired Steve Bennett from his CEO position and named Michael Brown as interim president and chief executive. Including the interim CEO, Symantec has had 3 CEOs in less than two years. On September 25, 2014, Symantec announced the appointment of Michael A. Brown as its president and chief executive officer. Brown had served as the company's interim president and chief executive officer since March 20, 2014. Mr. Brown has served as a member of the company's board of directors since July 2005 following the acquisition of VERITAS Software Corporation. Mr. Brown had served on the VERITAS board of directors since 2003.
In July 2016, Symantec introduced a product to help carmakers protect connected vehicles against zero-day attacks. The Symantec Anomaly Detection for Automotive is an IoT product for manufacturers and uses machine learning to provide in-vehicle security analytics. Greg Clark assumed the position of CEO in August 2016.
In November 2016, Symantec announced its intent to acquire identity theft protection company LifeLock for $2.3 billion.
In August 2017, Symantec announced that it had agreed to sell its business unit that verifies the identity of websites to Thoma Bravo. With this acquisition, Thoma Bravo plans to merge the Symantec business unit with its own web certification company, DigiCert.
On January 4, 2018, Symantec and BT (formerly British Telecom) announced their partnership that provides new endpoint security protection.
In May 2018, Symantec initiated an internal audit to address concerns raised by a former employee, causing it to delay its annual earnings report.
In August 2018, Symantec announced that the hedge fund Starboard Value had put forward five nominees to stand for election to the Symantec board of directors at Symantec's 2018 Annual Meeting of Stockholders. This followed a Schedule 13D filing by Starboard showing that it had accumulated a 5.8% stake in Symantec. In September 2018, Symantec announced that three nominees of Starboard were joining the Symantec board, two with immediate effect (including Starboard Managing Member Peter Feld) and one following the 2018 Annual Meeting of Stockholders.
On May 9, 2019, Symantec announced that Clark would be stepping down and that board member Rick Hill, previously put forward by Starboard, had been appointed interim president and CEO. Vincent Pilette also joined Symantec as its new CFO.
On August 9, 2019, Broadcom announced they would be acquiring the Enterprise software division of Symantec for $10.7 billion. This is after having attempted to purchase the whole company. The Norton family of products will remain in the Symantec portfolio. The sale closed November 4, 2019, and subsequently, the company adopted the NortonLifeLock name and relocated its headquarters from Mountain View, California to LifeLock's offices in Tempe, Arizona.
In 2021 or 2022, Norton 360 began bundling a component called Norton Crypto, which drew scrutiny as a suspected crypto-miner. Norton Crypto mines only Ethereum (ETH) and does so while the computer is idle; the program creates a secure wallet on the computer.
Demerger
On October 9, 2014, Symantec declared that the company would separate into two independent publicly traded companies by the end of 2015. Symantec will continue to focus on security, while a new company will be established focusing on information management. Symantec confirmed on January 28, 2015, that the information management business would be called Veritas Technologies Corporation, marking a return of the Veritas name. In August 2015, Symantec agreed to sell Veritas to a private equity group led by The Carlyle Group for $8 billion. The sale was completed by February 2016, turning Veritas into a privately owned company.
Norton products
As of 2015, Symantec's Norton product line includes Norton Security, Norton Small Business, Norton Family, Norton Mobile Security, Norton Online Backup, Norton360, Norton Utilities and Norton Computer Tune Up.
In 2012, PCTools iAntiVirus was rebranded as a Norton product under the name iAntivirus, and released to the Mac App Store. Also in 2012, the Norton Partner Portal was relaunched to support sales to consumers throughout EMEA.
Mergers and acquisitions
ACT!
In 1993, Symantec acquired ACT! from Contact Software International. Symantec sold ACT! to SalesLogix in 1999. At the time it was the world's most popular CRM application for Windows and Macintosh.
Veritas
On December 16, 2004, Veritas Software and Symantec announced their plans for a merger. With Veritas valued at $13.5 billion, it was the largest software industry merger to date. Symantec's shareholders voted to approve the merger on June 24, 2005; the deal closed successfully on July 2, 2005. July 5, 2005, was the first day of business for the U.S. offices of the new, combined software company. As a result of this merger, Symantec includes storage- and availability-related products in its portfolio, namely Veritas File System (VxFS), Veritas Volume Manager (VxVM), Veritas Volume Replicator (VVR), Veritas Cluster Server (VCS), NetBackup (NBU), Backup Exec (BE) and Enterprise Vault (EV).
On January 29, 2016, Symantec sold Veritas Technologies to The Carlyle Group.
Sygate
On August 16, 2005, Symantec acquired Sygate, a security software firm based in Fremont, California, with about 200 staff. As of November 30, 2005, all Sygate personal firewall products were discontinued.
Altiris
On January 29, 2007, Symantec announced plans to acquire Altiris, and on April 6, 2007, the acquisition was completed. Altiris specializes in service-oriented management software that allows organizations to manage IT assets. It also provides software for web services, security and systems management products. Established in 1998, Altiris is headquartered in Lindon, Utah.
Vontu
On November 5, 2007, Symantec announced its acquisition of Vontu, a Data Loss Prevention (DLP) company, for $350 million.
Application Performance Management business
On January 17, 2008, Symantec announced that it was spinning off its Application Performance Management (APM) business and the i3 product line to Vector Capital. Precise Software Solutions took over development, product management, marketing and sales for the APM business, launching as an independent company on September 17, 2008.
PC Tools
On August 18, 2008, Symantec announced the signing of an agreement to acquire PC Tools. Under the agreement, PC Tools would maintain separate operations. The financial terms of the acquisition were not disclosed. In May 2013, Symantec announced they were discontinuing the PC Tools line of internet security software.
In December 2013, Symantec announced they were discontinuing and retiring the entire PC Tools brand and offering a non-expiring license to PC Tools Performance Toolkit, PC Tools Registry Mechanic, PC Tools File Recover and PC Tools Privacy Guardian users with an active subscription as of December 4, 2013.
AppStream
On April 18, 2008, Symantec completed the acquisition of AppStream, Inc. (“AppStream”), a nonpublic Palo Alto, California-based provider of endpoint virtualization software. AppStream was acquired to complement Symantec's endpoint management and virtualization portfolio and strategy.
MessageLabs
On October 9, 2008, Symantec announced its intent to acquire Gloucester-based MessageLabs (spun off from Star Internet in 2007) to boost its Software-as-a-Service (SaaS) business. Symantec purchased the online messaging and Web security provider for about $695 million in cash. The acquisition closed on November 17, 2008.
PGP and GuardianEdge
On April 29, 2010, Symantec announced its intent to acquire PGP Corporation and GuardianEdge. The acquisitions closed on June 4, 2010, and provided Symantec's customers with access to established encryption and key management technologies.
Verisign authentication
On May 19, 2010, Symantec signed a definitive agreement to acquire Verisign's authentication business unit, which included the Secure Sockets Layer (SSL) Certificate, Public Key Infrastructure (PKI), Verisign Trust and Verisign Identity Protection (VIP) authentication services. The acquisition closed on August 9, 2010. In August 2012, Symantec completed its rebranding of the Verisign SSL Certificate Service by renaming the Verisign Trust Seal the Norton Secured Seal. Symantec sold the SSL unit to Digicert for US$950 million in mid 2017.
Rulespace
Acquired on October 10, 2010, RuleSpace is a web categorisation product first developed in 1996. The categorisation is automated using what Symantec refers to as the Automated Categorization System (ACS). It is used as the basis for content filtering by many UK ISPs.
Clearwell Systems
On May 19, 2011, Symantec announced the acquisition of Clearwell Systems for approximately $390 million.
LiveOffice
On January 17, 2012, Symantec announced the acquisition of cloud email-archiving company LiveOffice. The acquisition price was $115 million. The previous year, Symantec had joined the cloud storage and backup sector with its Enterprise Vault.cloud and Cloud Storage for Enterprise Vault software, in addition to a cloud messaging software, Symantec Instant Messaging Security cloud (IMS.cloud). Symantec stated that the acquisition would add to its information governance products, allowing customers to store information on-premises, in Symantec's data centers, or both.
Odyssey Software
On March 2, 2012, Symantec completed the acquisition of Odyssey Software. Odyssey Software's main product was Athena, which was device management software that extended Microsoft System Center software, adding the ability to manage, support and control mobile and embedded devices, such as smartphones and ruggedized handhelds.
Nukona Inc.
Symantec completed its acquisition of Nukona, a provider of mobile application management (MAM), on April 2, 2012. The acquisition agreement between Symantec and Nukona was announced on March 20, 2012.
NitroDesk Inc.
In May 2014 Symantec acquired NitroDesk, provider of TouchDown, the market-leading third-party EAS mobile application.
Blue Coat Systems
On June 13, 2016, it was announced that Symantec had acquired Blue Coat for $4.65 billion.
LifeLock
In 2017, Symantec acquired LifeLock Inc.; this, in turn, prompted the company to rename itself to its current name.
Avira
On December 7, 2020, NortonLifeLock announced acquisition of Avira. The acquisition was closed in January 2021.
Security concerns and controversies
Restatement
On August 9, 2004, the company announced that it discovered an error in its calculation of deferred revenue, which represented an accumulated adjustment of $20 million.
Endpoint bug
The arrival of the year 2010 triggered a bug in Symantec Endpoint. Symantec reported that malware and intrusion protection updates with "a date greater than December 31, 2009, 11:59 pm [were] considered to be 'out of date.'" The company created and distributed a workaround for the issue.
Scan evasion vulnerability
In March 2010, it was reported that Symantec AntiVirus and Symantec Client Security were prone to a vulnerability that might allow an attacker to bypass on-demand virus scanning, and permit malicious files to escape detection.
Denial-of-service attack vulnerabilities
In January 2011, multiple vulnerabilities in Symantec products that could be exploited by a denial-of-service attack, and thereby compromise a system, were reported. The products involved were Symantec AntiVirus Corporate Edition Server and Symantec System Center.
The November 12, 2012 Vulnerability Bulletin of the United States Computer Emergency Readiness Team (US-CERT) reported the following vulnerability for older versions of Symantec's Antivirus system: "The decomposer engine in Symantec Endpoint Protection (SEP) 11.0, Symantec Endpoint Protection Small Business Edition 12.0, Symantec AntiVirus Corporate Edition (SAVCE) 10.x, and Symantec Scan Engine (SSE) before 5.2.8 does not properly perform bounds checks of the contents of CAB archives, which allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted file."
The problem relates to older versions of the systems and a patch is available. US-CERT rated the seriousness of this vulnerability as a 9.7 on a 10-point scale. The "decomposer engine" is a component of the scanning system that opens containers, such as compressed files, so that the scanner can evaluate the files within.
Scareware lawsuit
In January 2012, James Gross filed a lawsuit against Symantec for distributing fake scareware scanners that purportedly alerted users of issues with their computers. Gross claimed that after the scan, only some of the errors and problems were corrected, and he was prompted by the scanner to purchase a Symantec app to remove the rest. Gross claimed that he bought the app, but it did not speed up his computer or remove the detected viruses. He hired a digital forensics expert to back up this claim. Symantec denied the allegations and said that it would contest the case. Symantec settled by establishing an $11 million fund (providing up to $9 each to more than 1 million eligible customers, representing the amount overpaid for the app), and the case was dismissed in court.
Source code theft
On January 17, 2012, Symantec disclosed that its network had been hacked. A hacker known as "Yama Tough" had obtained the source code for some Symantec software by hacking an Indian government server. Yama Tough released parts of the code and threatened to release more. According to Chris Paden, a Symantec spokesman, the source code that was taken was for Enterprise products that were between five and six years old.
On September 25, 2012, an affiliate of the hacker group Anonymous published source code from Norton Utilities. Symantec confirmed that it was part of the code that had been stolen earlier, and that the leak included code for 2006 versions of Norton Utilities, pcAnywhere and Norton Antivirus.
Verisign data breach
In February 2012, it was reported that Verisign's network and data had been hacked repeatedly in 2010, but that the breaches had not been disclosed publicly until they were noted in an SEC filing in October 2011. Verisign did not provide information about whether the breach included its certificate authority business, which was acquired by Symantec in late 2010. Oliver Lavery, director of security and research for nCircle, asked rhetorically, "Can we trust any site using Verisign SSL certificates? Without more clarity, the logical answer is no."
pcAnywhere exploit
On February 17, 2012, details of an exploit of pcAnywhere were posted. The exploit would allow attackers to crash pcAnywhere on computers running Windows. Symantec released a hotfix for the issue twelve days later.
Hacking of The New York Times network
According to Mandiant, Symantec security products used by The New York Times detected only one of 45 pieces of malware that were installed by Chinese hackers on the newspaper's network during three months in late 2012. Symantec responded:
"Advanced attacks like the ones the New York Times described in the following article, <http://nyti.ms/TZtr5z>, underscore how important it is for companies, countries and consumers to make sure they are using the full capability of security solutions. The advanced capabilities in our [E]ndpoint offerings, including our unique reputation-based technology and behavior-based blocking, specifically target sophisticated attacks. Turning on only the signature-based anti-virus components of [E]ndpoint solutions alone [is] not enough in a world that is changing daily from attacks and threats. We encourage customers to be very aggressive in deploying solutions that offer a combined approach to security. Anti-virus software alone is not enough".
Intellectual Ventures suit
In February 2015, Symantec was found guilty of two counts of patent infringement in a suit by Intellectual Ventures Inc and ordered to pay $17 million in compensation and damages. In September 2016, this decision was reversed on appeal by the Federal Circuit.
Sustaining digital certificate security
On September 18, 2015, Google notified Symantec that the latter issued 23 test certificates for five organizations, including Google and Opera, without the domain owners' knowledge. Symantec performed another audit and announced that an additional 164 test certificates were mis-issued for 76 domains and 2,458 test certificates were mis-issued for domains that had never been registered. Google requested that Symantec update the public incident report with proven analysis explaining the details on each of the failures.
The company was asked to report all the certificates issued to the Certificate Transparency log henceforth. Symantec has since reported implementing Certificate Transparency for all its SSL Certificates. Above all, Google has insisted that Symantec execute a security audit by a third party and to maintain tamper-proof security audit logs.
Google and Symantec clash on website security checks
On March 24, 2017, Google stated that it had lost confidence in Symantec, after the latest incident of improper certificate issuance. Google says millions of existing Symantec certificates will become untrusted in Google Chrome over the next 12 months. According to Google, Symantec partners issued at least 30,000 certificates of questionable validity over several years, but Symantec disputes that number. Google said Symantec failed to comply with industry standards and could not provide audits showing the necessary documentation.
Google's Ryan Sleevi said that Symantec partnered with other CAs (CrossCert (Korea Electronic Certificate Authority), Certisign Certificatadora Digital, Certsuperior S. de R. L. de C.V., and Certisur S.A.) who did not follow proper verification procedures leading to the misissuance of certificates.
Following discussions in which Google had required that Symantec migrate Symantec-branded certificate issuance operations to a non-Symantec-operated “Managed Partner Infrastructure”, a deal was announced whereby DigiCert acquired Symantec's website security business. In September 2017, Google announced that starting with Chrome 66, "Chrome will remove trust in Symantec-issued certificates issued prior to June 1, 2016". Google further stated that "by December 1, 2017, Symantec will transition issuance and operation of publicly-trusted certificates to DigiCert infrastructure, and certificates issued from the old Symantec infrastructure after this date will not be trusted in Chrome." Google predicted that toward the end of October 2018, with the release of Chrome 70, the browser would omit all trust in Symantec's old infrastructure and all of the certificates it had issued, affecting most certificates chaining to Symantec roots. Mozilla Firefox planned to distrust Symantec-issued certificates in Firefox 63 (released on October 23, 2018), but delivered the change in Firefox 64 (released on December 11, 2018). Apple has also planned to distrust Symantec root certificates. Subsequently, Symantec exited the TLS/SSL segment by selling the SSL unit to Digicert for $950 million in mid 2017.
See also
Comparison of antivirus software
Comparison of computer viruses
Huawei Symantec, a joint venture between Huawei and Symantec
Web blocking in the United Kingdom - Technologies
Symantec behavior analysis technologies SONAR and AntiBot
References
External links
Computer security companies
Computer security software companies
Content-control software
Former certificate authorities
Companies based in Tempe, Arizona
Software companies established in 1982
1982 establishments in California
Software companies of the United States
Companies listed on the Nasdaq |
354643 | https://en.wikipedia.org/wiki/Plausible%20deniability | Plausible deniability | Plausible deniability is the ability of people, typically senior officials in a formal or informal chain of command, to deny knowledge of or responsibility for any damnable actions committed by members of their organizational hierarchy. They may do so because of a lack or absence of evidence that can confirm their participation, even if they were personally involved in or at least willfully ignorant of the actions. If illegal or otherwise-disreputable and unpopular activities become public, high-ranking officials may deny any awareness of such acts to insulate themselves and shift the blame onto the agents who carried out the acts, as they are confident that their doubters will be unable to prove otherwise. The lack of evidence to the contrary ostensibly makes the denial plausible (credible), but sometimes, it makes any accusations only unactionable.
The term typically implies forethought, such as intentionally setting up the conditions for the plausible avoidance of responsibility for one's future actions or knowledge. In some organizations, legal doctrines such as command responsibility exist to hold major parties responsible for the actions of subordinates who are involved in heinous acts and nullify any legal protection that their denial of involvement would carry.
In politics and espionage, deniability refers to the ability of a powerful player or intelligence agency to pass the buck and to avoid blowback by secretly arranging for an action to be taken on its behalf by a third party that is ostensibly unconnected with the major player. In political campaigns, plausible deniability enables candidates to stay clean and denounce third-party advertisements that use unethical approaches or potentially-libelous innuendo.
Although plausible deniability has existed throughout history, the term was coined by the CIA in the early 1960s to describe the withholding of information from senior officials to protect them from repercussions if illegal or unpopular activities became public knowledge.
Overview
Arguably, the key concept of plausible deniability is plausibility. It is relatively easy for a government official to issue a blanket denial of an action, and it is possible to destroy or cover up evidence after the fact, and that might be sufficient to avoid a criminal prosecution, for instance. However, the public might well disbelieve the denial, particularly if there is strong circumstantial evidence or if the action is believed to be so unlikely that the only logical explanation is that the denial is false.
The concept is even more important in espionage. Intelligence may come from many sources, including human sources. The exposure of information to which only a few people are privileged may directly implicate some of the people in the disclosure. An example is if an official is traveling secretly, and only one aide knows the specific travel plans. If that official is assassinated during his travels, and the circumstances of the assassination strongly suggest that the assassin had foreknowledge of the official's travel plans, the probable conclusion is that his aide has betrayed the official. There may be no direct evidence linking the aide to the assassin, but collaboration can be inferred from the facts alone, thus making the aide's denial implausible.
History
The term's roots go back to US President Harry Truman's National Security Council Paper 10/2 of June 18, 1948, which defined "covert operations" as "all activities (except as noted herein) which are conducted or sponsored by this Government against hostile foreign states or groups or in support of friendly foreign states or groups but which are so planned and executed that any US Government responsibility for them is not evident to unauthorized persons and that if uncovered the US Government can plausibly disclaim any responsibility for them." During the Eisenhower administration, NSC 10/2 was incorporated into the more-specific NSC 5412/2 "Covert Operations." NSC 5412 was declassified in 1977 and is located at the National Archives. The expression "plausibly deniable" was first used publicly by Central Intelligence Agency (CIA) Director Allen Dulles. The idea, on the other hand, is considerably older. For example, in the 19th century, Charles Babbage described the importance of having "a few simply honest men" on a committee who could be temporarily removed from the deliberations when "a peculiarly delicate question arises" so that one of them could "declare truly, if necessary, that he never was present at any meeting at which even a questionable course had been proposed."
Church Committee
A U.S. Senate committee, the Church Committee, in 1974–1975 conducted an investigation of the intelligence agencies. In the course of the investigation, it was revealed that the CIA, going back to the Kennedy administration, had plotted the assassination of a number of foreign leaders, including Cuba's Fidel Castro, but the president himself, who clearly supported such actions, was not to be directly involved so that he could deny knowledge of it. That was given the term "plausible denial."
Plausible denial involves the creation of power structures and chains of command loose and informal enough to be denied if necessary. The idea was that the CIA and later other bodies could be given controversial instructions by powerful figures, including the president himself, but that the existence and true source of those instructions could be denied if necessary if, for example, an operation went disastrously wrong and it was necessary for the administration to disclaim responsibility.
Later legislative barriers
The Hughes–Ryan Act of 1974 sought to put an end to plausible denial by requiring a presidential finding for each operation to be important to national security, and the Intelligence Oversight Act of 1980 required for Congress to be notified of all covert operations. Both laws, however, are full of enough vague terms and escape hatches to allow the executive branch to thwart their authors' intentions, as was shown by the Iran–Contra affair. Indeed, the members of Congress are in a dilemma since when they are informed, they are in no position to stop the action, unless they leak its existence and thereby foreclose the option of covertness.
Media reports
Iran–Contra affair
In his testimony to the congressional committee studying the Iran–Contra affair, Vice Admiral John Poindexter stated: "I made a deliberate decision not to ask the President, so that I could insulate him from the decision and provide some future deniability for the President if it ever leaked out."
Declassified government documents
A telegram from the Ambassador in Vietnam Henry Cabot Lodge Jr., to Special Assistant for National Security Affairs McGeorge Bundy on US options with respect to a possible coup, mentions plausible denial.
CIA and White House documents on covert political intervention in the 1964 Chilean election have been declassified. The CIA's Chief of Western Hemisphere Division, J.C. King, recommended that funds for the campaign "be provided in a fashion causing (Eduardo Frei Montalva president of Chile) to infer United States origin of funds and yet permitting plausible denial."
Training files of the CIA's covert "Operation PBSuccess" for the 1954 coup in Guatemala describe plausible deniability. According to the National Security Archive: "Among the documents found in the training files of Operation PBSuccess and declassified by the Agency is a CIA document titled 'A Study of Assassination.' A how-to guide book in the art of political killing, the 19-page manual offers detailed descriptions of the procedures, instruments, and implementation of assassination." The manual states that to provide plausible denial, "no assassination instructions should ever be written or recorded."
Soviet operations
In OPERATION INFEKTION (also called "OPERATION DENVER"), the Soviet KGB utilised the East German Stasi and Soviet-affiliated press to spread the idea that HIV/AIDS was an engineered bioweapon. The Stasi acquired plausible deniability on the operation by covertly supporting biologist Jakob Segal, whose stories were picked up by international press, including "numerous bourgeois newspapers" such as the Sunday Express. Publications in third-party countries were then cited as the originators of the claims. Meanwhile, Soviet intelligence obtained plausible deniability by utilising the German Stasi in the disinformation operation.
Little green men and Wagner Group
"Little green men" or troops without insignia carrying modern Russian military equipment, emerged at the start of the Russo-Ukrainian War, which The Moscow Times described as a tactic of plausible deniability.
The "Wagner Group", a Russian private military company, has been described as an attempt at plausible deniability for Kremlin-backed interventions in Ukraine, Syria, and various interventions in Africa.
Flaws
The doctrine has at least five major flaws:
It is an open door to the abuse of authority: it requires that the parties in question be able to be said to have acted independently, which, in the end, is tantamount to giving them license to act independently.
The denials are sometimes seen as plausible but sometimes seen through by both the media and the populace. One aspect of the Watergate crisis was the failure of such denials to stop the scandal from affecting President Richard Nixon and his aides.
Plausible deniability increases the risk of misunderstanding between senior officials and their employees.
If the claim fails, it seriously discredits the political figure invoking it as a defense ("it's not the crime; it's the coverup").
If it succeeds, it creates the impression that the government is not in control of the state ("asleep at the switch," also known as the deep state).
Other examples
Another example of plausible deniability is someone who actively avoids gaining certain knowledge of facts because it benefits that person not to know.
As an example, a lawyer may suspect that facts exist that would hurt his case but decide not to investigate the issue because if he has actual knowledge, the rules of ethics might require him to reveal the facts to the opposing side.
Council on Foreign Relations
Use in computer networks
In computer networks, plausible deniability often refers to a situation in which people can deny transmitting a file, even when it is proven to come from their computer.
That is sometimes done by setting the computer to relay certain types of broadcasts automatically in such a way that the original transmitter of a file is indistinguishable from those who are merely relaying it. In that way, those who first transmitted the file can claim that their computer had merely relayed it from elsewhere. This principle is used in the opentracker bittorrent implementation by including random IP addresses in peer lists.
In encrypted messaging protocols, such as Bitmessage, every user on the network keeps a copy of every message but can decrypt only their own, and finding those requires attempting to decrypt every single message. Because everyone receives everything and the outcome of each decryption attempt stays private, it is impossible to determine who sent a message to whom without being able to decrypt it.
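The following Python sketch illustrates the trial-decryption idea in a highly simplified form: a per-recipient symmetric key stands in for the asymmetric encryption that a real protocol such as Bitmessage uses, and the toy seal and try_open functions are invented for the example. Every node scans the whole broadcast pool and keeps only what verifies under its own key, so an outside observer cannot tell from the traffic which messages were meant for whom.

```python
import hashlib
import hmac
import os

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Toy 'encryption': XOR with a keyed stream plus a MAC. Not a real cipher."""
    assert len(plaintext) <= 32          # toy limit: single-block keystream
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def try_open(key: bytes, blob: bytes):
    """Return the plaintext if the MAC verifies under this key, otherwise None."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        return None                      # not addressed to us (or tampered with)
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

alice, bob = os.urandom(32), os.urandom(32)
broadcast = [seal(bob, b"for bob only"), seal(alice, b"for alice only")]

# Every node scans the whole broadcast pool; only its own messages decrypt.
print([try_open(bob, m) for m in broadcast])    # [b'for bob only', None]
```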
It can also be done by a VPN if the host is not known.
In any case, that claim cannot be disproven without a complete decrypted log of all network connections.
Freenet file sharing
The Freenet file sharing network is another application of the idea: it obfuscates data sources and flows in order to protect operators and users of the network by preventing them and, by extension, observers such as censors from knowing where data comes from and where it is stored.
Use in cryptography
In cryptography, deniable encryption may be used to describe steganographic techniques in which the very existence of an encrypted file or message is deniable in the sense that an adversary cannot prove that an encrypted message exists. In that case, the system is said to be "fully undetectable" (FUD).
Some systems take this further, such as MaruTukku, FreeOTFE and (to a much lesser extent) TrueCrypt and VeraCrypt, which nest encrypted data. The owner of the encrypted data may reveal one or more keys to decrypt certain information from it, and then deny that more keys exist, a statement which cannot be disproven without knowledge of all encryption keys involved. The existence of "hidden" data within the overtly encrypted data is then deniable in the sense that it cannot be proven to exist.
Programming
The Underhanded C Contest is an annual programming contest involving the creation of carefully crafted defects, which have to be both very hard to find and plausibly deniable as mistakes once found.
See also
References
Further reading
External links
Sections of the Church Committee about plausible denial on wikisource.org
Church Committee reports (Assassination Archives and Research Center)
Church Report: Covert Action in Chile 1963-1973 (U.S. Dept. of State)
Original 255 pages of Church Committee "Findings and Conclusions" in pdf file
Central Intelligence Agency operations
Political terminology
Military terminology
Euphemisms
Accountability |
357817 | https://en.wikipedia.org/wiki/Promiscuous%20mode | Promiscuous mode | In computer networking, promiscuous mode is a mode for a wired network interface controller (NIC) or wireless network interface controller (WNIC) that causes the controller to pass all traffic it receives to the central processing unit (CPU) rather than passing only the frames that the controller is specifically programmed to receive. This mode is normally used for packet sniffing that takes place on a router or on a computer connected to a wired network or one being part of a wireless LAN. Interfaces are placed into promiscuous mode by software bridges often used with hardware virtualization.
In IEEE 802 networks such as Ethernet or IEEE 802.11, each frame includes a destination MAC address. In non-promiscuous mode, when a NIC receives a frame, it drops it unless the frame is addressed to that NIC's MAC address or is a broadcast or multicast addressed frame. In promiscuous mode, however, the NIC allows all frames through, thus allowing the computer to read frames intended for other machines or network devices.
Many operating systems require superuser privileges to enable promiscuous mode. A non-routing node in promiscuous mode can generally only monitor traffic to and from other nodes within the same broadcast domain (for Ethernet and IEEE 802.11) or ring (for Token Ring). Computers attached to the same Ethernet hub satisfy this requirement, which is why network switches are used to combat malicious use of promiscuous mode. A router may monitor all traffic that it routes.
Promiscuous mode is often used to diagnose network connectivity issues. There are programs that make use of this feature to show the user all the data being transferred over the network. Some protocols like FTP and Telnet transfer data and passwords in clear text, without encryption, and network scanners can see this data. Therefore, computer users are encouraged to stay away from insecure protocols like telnet and use more secure ones such as SSH.
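As a rough illustration of what enabling promiscuous mode looks like at the software level, here is a Python sketch for Linux using a raw AF_PACKET socket. It assumes root privileges and an interface named eth0 (both placeholders), and the numeric constants are copied from the kernel's linux/if_packet.h header because they are not all exposed by the socket module.

```python
import socket
import struct

# Constants from <linux/if_packet.h>; not all are exported by the socket module.
ETH_P_ALL = 0x0003
SOL_PACKET = 263
PACKET_ADD_MEMBERSHIP = 1
PACKET_MR_PROMISC = 1

IFACE = "eth0"  # placeholder interface name

# Raw socket that receives every Ethernet frame seen on the interface.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind((IFACE, 0))

# Ask the kernel to put the interface into promiscuous mode for this socket.
mreq = struct.pack("IHH8s", socket.if_nametoindex(IFACE), PACKET_MR_PROMISC, 0, b"")
sock.setsockopt(SOL_PACKET, PACKET_ADD_MEMBERSHIP, mreq)

# Frames addressed to other hosts are now delivered as well.
frame = sock.recv(65535)
dst, src = frame[0:6], frame[6:12]
print("dst MAC", dst.hex(":"), "src MAC", src.hex(":"))
```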
Detection
As promiscuous mode can be used in a malicious way to capture private data in transit on a network, computer security professionals might be interested in detecting network devices that are in promiscuous mode. In promiscuous mode, some software might send responses to frames even though they were addressed to another machine. However, experienced sniffers can prevent this (e.g., using carefully designed firewall settings). An example is sending a ping (ICMP echo request) with the wrong MAC address but the right IP address. If an adapter is operating in normal mode, it will drop this frame, and the IP stack never sees or responds to it. If the adapter is in promiscuous mode, the frame will be passed on, and the IP stack on the machine (to which a MAC address has no meaning) will respond as it would to any other ping. The sniffer can prevent this by configuring a firewall to block ICMP traffic.
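The probe described above can be sketched with the third-party Scapy library (assumed to be installed and run with sufficient privileges); the target address, interface name and the deliberately wrong MAC address below are placeholders. A host whose NIC filters frames normally will never see the ICMP request, so any reply suggests the interface is accepting frames not addressed to it.

```python
from scapy.all import Ether, IP, ICMP, srp

TARGET_IP = "192.168.1.23"
BOGUS_MAC = "ff:ff:ff:ff:ff:fe"   # not broadcast, not the target's real MAC

# Build an ICMP echo request whose Ethernet destination is wrong on purpose.
probe = Ether(dst=BOGUS_MAC) / IP(dst=TARGET_IP) / ICMP()

answered, _ = srp(probe, timeout=2, iface="eth0", verbose=False)
if answered:
    print(f"{TARGET_IP} replied: its interface is likely in promiscuous mode")
else:
    print(f"{TARGET_IP} did not reply: the frame was probably filtered by the NIC")
```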
Some applications that use promiscuous mode
The following applications and application classes use promiscuous mode.
Packet Analyzer
NetScout Sniffer
Wireshark (formerly Ethereal)
tcpdump
OmniPeek
Capsa
ntop
Firesheep
Virtual machine
VMware's VMnet bridging
VirtualBox bridging mode
Containers
Docker with optional Macvlan driver on Linux
Cryptanalysis
Aircrack-ng
AirSnort
Cain and Abel
Network monitoring
KisMAC (used for WLAN)
Kismet
IPTraf
Snort
CommView
Gaming
XLink Kai
See also
Packet analyzer
MAC spoofing
Monitor mode
References
External links
SearchSecurity.com definition of promiscuous mode
Network analyzers |
357881 | https://en.wikipedia.org/wiki/Test-driven%20development | Test-driven development | Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is as opposed to software being developed first and test cases created later.
Software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
Test-driven development cycle
The following sequence is based on the book Test-Driven Development by Example:
1. Add a test
Adding a new feature begins with writing a test that passes if and only if the feature's specifications are met. The developer can discover these specifications by asking about use cases and user stories. A key benefit of test-driven development is that it makes the developer focus on requirements before writing code. This is in contrast with the usual practice, where unit tests are only written after code.
2. Run all tests. The new test should fail for expected reasons
This shows that new code is actually needed for the desired feature. It validates that the test harness is working correctly. It rules out the possibility that the new test is flawed and will always pass.
3. Write the simplest code that passes the new test
Inelegant code or hard-coding is acceptable, as long as it passes the test. The code will be honed anyway in Step 5. No code should be added beyond the tested functionality.
4. All tests should now pass
If any fail, the new code must be revised until they pass. This ensures the new code meets the test requirements and does not break existing features.
5. Refactor as needed, using tests after each refactor to ensure that functionality is preserved
Code is refactored for readability and maintainability. In particular, hard-coded test data should be removed. Running the test suite after each refactor helps ensure that no existing functionality is broken.
Examples of refactoring:
moving code to where it most logically belongs
removing duplicate code
making names self-documenting
splitting methods into smaller pieces
re-arranging inheritance hierarchies
Repeat
The cycle above is repeated for each new piece of functionality. Tests should be small and incremental, and commits made often. That way, if new code fails some tests, the programmer can simply undo or revert rather than debug excessively. When using external libraries, it is important not to write tests that are so small as to effectively test merely the library itself, unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
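A minimal, hypothetical illustration of this cycle in Python, using the standard unittest module (the leap-year feature and all names are invented for the example): the test class is written first and fails, since initially it cannot even import leap_year, and only then is the simplest passing implementation added; later cycles add new failing tests, such as the century rule, before the function is extended and refactored.

```python
import unittest

# Step 3: the simplest implementation that makes the tests below pass.
def leap_year(year: int) -> bool:
    """Return True for Gregorian leap years."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1: tests written first, directly from the feature's specification.
class TestLeapYear(unittest.TestCase):
    def test_ordinary_year_is_not_leap(self):
        self.assertFalse(leap_year(2023))

    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        # Added in a later cycle; it stayed "red" until the implementation
        # above learned the century rule.
        self.assertFalse(leap_year(1900))

    def test_every_four_hundred_years_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```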
Development style
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You aren't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can often be cleaner and clearer than is achieved by other methods. In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept such as a design pattern, tests are written that generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.
Writing the tests first: The tests should be written before the functionality that is to be tested. This has been claimed to have many benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than adding it later. It also ensures that tests for every feature get written. Additionally, writing the tests first leads to a deeper and earlier understanding of the product requirements, ensures the effectiveness of the test code, and maintains a continual focus on software quality. When writing feature-first code, there is a tendency by developers and organisations to push the developer on to the next feature, even neglecting testing entirely. The first TDD test might not even compile at first, because the classes and methods it requires may not yet exist. Nevertheless, that first test functions as the beginning of an executable specification.
Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the developer's mental model of the code, boosts confidence and increases productivity.
Keep the unit small
For TDD, a unit is most commonly defined as a class, or a group of related functions often called a module. Keeping units relatively small is claimed to provide critical benefits, including:
Reduced debugging effort – When test failures are detected, having smaller units aids in tracking down errors.
Self-documenting tests – Small test cases are easier to read and to understand.
Advanced practices of test-driven development can lead to acceptance test–driven development (ATDD) and specification by example where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process. This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target to satisfy – the acceptance tests – which keeps them continuously focused on what the customer really wants from each user story.
Best practices
Test structure
Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup; a short sketch illustrating these phases follows the list below.
Setup: Put the Unit Under Test (UUT) or the overall test system in the state needed to run the test.
Execution: Trigger/drive the UUT to perform the target behavior and capture all output, such as return values and output parameters. This step is usually very simple.
Validation: Ensure the results of the test are correct. These results may include explicit outputs captured during execution or state changes in the UUT.
Cleanup: Restore the UUT or the overall test system to the pre-test state. This restoration permits another test to execute immediately after this one. In some cases, in order to preserve the information needed for test failure analysis, cleanup is instead deferred and performed just before the next test's setup runs.
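A minimal sketch of this four-phase layout, written in Python with the standard unittest module (the file-handling scenario is invented for illustration), might look as follows, with each phase marked by a comment.

import os
import tempfile
import unittest

class FileRoundTripTest(unittest.TestCase):
    def test_round_trip(self):
        # Setup: put the unit under test in the state needed for the test.
        directory = tempfile.mkdtemp()
        path = os.path.join(directory, "data.txt")
        with open(path, "w") as f:
            f.write("hello")
        try:
            # Execution: trigger the target behaviour and capture the output.
            with open(path) as f:
                result = f.read()
            # Validation: check that the captured results are correct.
            self.assertEqual(result, "hello")
        finally:
            # Cleanup: restore the pre-test state so other tests can run.
            os.remove(path)
            os.rmdir(directory)

if __name__ == "__main__":
    unittest.main()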
Individual best practices
Some best practices that an individual could follow would be to separate common set-up and tear-down logic into test support services utilized by the appropriate test cases, to keep each test oracle focused on only the results necessary to validate its test, and to design time-related tests to allow tolerance for execution in non-real time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution. It is also suggested to treat test code with the same respect as production code. Test code must work correctly for both positive and negative cases, last a long time, and be readable and maintainable. Teams can get together and review tests and test practices to share effective techniques and catch bad habits.
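For example, common set-up and tear-down logic can be pulled into shared setUp and tearDown methods, a simple form of test support service; the following Python sketch uses an invented file-store scenario for illustration.

import os
import tempfile
import unittest

class FileStoreTests(unittest.TestCase):
    def setUp(self):
        # Shared set-up: every test starts from the same known state.
        self.directory = tempfile.mkdtemp()
        self.path = os.path.join(self.directory, "data.txt")

    def tearDown(self):
        # Shared tear-down: remove whatever the test created.
        if os.path.exists(self.path):
            os.remove(self.path)
        os.rmdir(self.directory)

    def test_write_then_read(self):
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

    def test_starts_without_the_file(self):
        self.assertFalse(os.path.exists(self.path))

if __name__ == "__main__":
    unittest.main()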
Practices to avoid, or "anti-patterns"
Having test cases depend on system state manipulated from previously executed test cases (i.e., you should always start a unit test from a known and pre-configured state).
Dependencies between test cases. A test suite where test cases are dependent upon each other is brittle and complex. Execution order should not be presumed. Basic refactoring of the initial test cases or structure of the UUT causes a spiral of increasingly pervasive impacts in associated tests.
Interdependent tests. Interdependent tests can cause cascading false negatives. A failure in an early test case breaks a later test case even if no actual fault exists in the UUT, increasing defect analysis and debug efforts.
Testing precise execution behavior timing or performance.
Building "all-knowing oracles". An oracle that inspects more than necessary is more expensive and brittle over time. This very common error is dangerous because it causes a subtle but pervasive time sink across the complex project.
Testing implementation details.
Slow running tests.
Benefits
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.
Programmers using pure TDD on new ("greenfield") projects reported they only rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.
Test-driven development offers more than simple validation of correctness; it can also drive the design of a program. By focusing on the test cases first, one must imagine how the functionality will be used by clients (in the first case, the test cases). So the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract, as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures in this way that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time could be shorter based on a model by Müller and Padberg. Large numbers of tests help to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, for a TDD developer to add an else branch to an existing if statement, the developer would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they detect any unexpected changes in the code's behaviour. This detects problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
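As an illustration of the last point, a TDD practitioner wanting to add an else branch to the hypothetical function below would first write a test that fails because the branch does not yet exist.

import unittest

def describe_temperature(celsius):
    if celsius >= 30:
        return "hot"
    else:
        # This branch exists only because the failing test below demanded it;
        # before that test, the function handled only the hot case.
        return "mild"

class DescribeTemperatureTest(unittest.TestCase):
    def test_hot(self):
        self.assertEqual(describe_temperature(35), "hot")

    def test_mild_requires_the_else_branch(self):
        # Written first; it fails until the else branch above is added.
        self.assertEqual(describe_temperature(20), "mild")

if __name__ == "__main__":
    unittest.main()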
Madeyski provided empirical evidence (via a series of laboratory experiments with over 200 developers) regarding the superiority of the TDD practice over the traditional Test-Last approach or testing for correctness approach, with respect to the lower coupling between objects (CBO). The mean effect size represents a medium (but close to large) effect on the basis of meta-analysis of the performed experiments which is a substantial finding. It suggests a better modularization (i.e., a more modular design), easier reuse and testing of the developed software products due to the TDD programming practice. Madeyski also measured the effect of the TDD practice on unit tests using branch coverage (BC) and mutation score indicator (MSI), which are indicators of the thoroughness and the fault detection effectiveness of unit tests, respectively. The effect size of TDD on branch coverage was medium in size and therefore is considered substantive effect. These findings have been subsequently confirmed by further, smaller experimental evaluations of TDD.
Limitations
Test-driven development does not perform sufficient testing in situations where full functional tests are required to determine success or failure, due to extensive use of unit tests. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.
Unit tests created in a test-driven development environment are typically created by the developer who is writing the code being tested. Therefore, the tests may share blind spots with the code: if, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify those parameters. Another example: if the developer misinterprets the requirements for the module they are developing, the code and the unit tests they write will both be wrong in the same way. Therefore, the tests will pass, giving a false sense of correctness.
A high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.
Tests become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings, are themselves prone to failure, and they are expensive to maintain. This is especially the case with fragile tests. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs, it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
Writing and maintaining an excessive number of tests costs time. Also, more-flexible modules (with limited tests) might accept new requirements without the need for changing the tests. For those reasons, testing for only extreme conditions, or a small sample of data, can be easier to adjust than a set of highly detailed tests.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore, these original, or early, tests become increasingly precious as time goes by. The tactic is to fix it early. Also, if a poor architecture, a poor design, or a poor testing strategy leads to a late change that makes dozens of existing tests fail, then it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.
Test-driven work
Test-driven development has been adopted outside of software development, in both product and service teams, as test-driven work. Similar to TDD, non-software teams develop quality control (QC) checks (usually manual tests rather than automated tests) for each aspect of the work prior to commencing. These QC checks are then used to inform the design and validate the associated outcomes. The six steps of the TDD sequence are applied with minor semantic changes:
"Add a check" replaces "Add a test"
"Run all checks" replaces "Run all tests"
"Do the work" replaces "Write some code"
"Run all checks" replaces "Run tests"
"Clean up the work" replaces "Refactor code"
"Repeat"
TDD and ATDD
Test-driven development is related to, but different from acceptance test–driven development (ATDD). TDD is primarily a developer's tool to help create well-written unit of code (function, class, or module) that correctly performs a set of operations. ATDD is a communication tool between the customer, developer, and tester to ensure that the requirements are well-defined. TDD requires test automation. ATDD does not, although automation helps with regression testing. Tests used in TDD can often be derived from ATDD tests, since the code units implement some portion of a requirement. ATDD tests should be readable by the customer. TDD tests do not need to be.
TDD and BDD
BDD (behavior-driven development) combines practices from TDD and from ATDD.
It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. Tools such as JBehave, Cucumber, MSpec and SpecFlow provide syntaxes which allow product owners, developers and test engineers to define together the behaviors which can then be translated into automated tests.
Code visibility
Test suite code clearly has to be able to access the code it is testing. On the other hand, normal design criteria such as information hiding, encapsulation and the separation of concerns should not be compromised. Therefore, unit test code for TDD is usually written within the same project or module as the code being tested.
In object oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access private fields and methods. Alternatively, an inner class can be used to hold the unit tests so they have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
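A loose Python analogue of these techniques is name mangling: there are no enforced private members, but a test can still reach a double-underscore attribute through its mangled name. The Counter class below is invented for illustration.

class Counter:
    def __init__(self):
        self.__value = 0  # name-mangled to _Counter__value

    def increment(self):
        self.__value += 1

# In a test, the mangled name reaches the "private" field, loosely analogous
# to using reflection in Java or partial classes in .NET.
c = Counter()
c.increment()
assert c._Counter__value == 1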
It is important that such testing hacks do not remain in the production code. In C and other languages, compiler directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other test-related code to prevent them being compiled into the released code. This means the released code is not exactly the same as what was unit tested. The regular running of fewer but more comprehensive, end-to-end, integration tests on the final release build can ensure (among other things) that no production code exists that subtly relies on aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface. Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers advantage of smaller and more direct unit tests.
Software for TDD
There are many testing frameworks and tools that are useful in TDD.
xUnit frameworks
Developers may use computer-assisted testing frameworks, commonly collectively named xUnit (which are derived from SUnit, created in 1998), to create and automatically run the test cases. xUnit frameworks provide assertion-style test validation capabilities and result reporting. These capabilities are critical for automation as they move the burden of execution validation from an independent post-processing activity to one that is included in the test execution. The execution framework provided by these test frameworks allows for the automatic execution of all system test cases or various subsets along with other features.
TAP results
Testing frameworks may accept unit test output in the language-agnostic Test Anything Protocol created in 1987.
Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The unit tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code. Two steps are necessary:
Whenever external access is needed in the final design, an interface should be defined that describes the access available. See the dependency inversion principle for a discussion of the benefits of doing this regardless of TDD.
The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock. Fake objects need do little more than add a message such as "Person object saved" to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example, if the person's name and other data are not as expected.
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.
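A minimal sketch of the distinction, assuming a hypothetical PersonStore collaborator: the fake merely records that a save happened for the test to inspect, while the mock carries its own assertion and fails the test if it is violated.

class FakePersonStore:
    # Fake: does almost nothing, but leaves a trace the test can inspect.
    def __init__(self):
        self.log = []

    def save(self, person):
        self.log.append("Person object saved")

class MockPersonStore:
    # Mock: carries its own assertion and fails the test if it is violated.
    def __init__(self, expected_name):
        self.expected_name = expected_name

    def save(self, person):
        assert person["name"] == self.expected_name, "unexpected person data"

def register(person, store):
    # Unit under test: delegates persistence to whichever store it is given.
    store.save(person)

fake = FakePersonStore()
register({"name": "Ada"}, fake)
assert fake.log == ["Person object saved"]

mock = MockPersonStore(expected_name="Ada")
register({"name": "Ada"}, mock)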
A test double is a test-specific capability that substitutes for a system capability, typically a class or function, that the UUT depends on. There are two times at which test doubles can be introduced into a system: link and execution. Link time substitution is when the test double is compiled into the load module, which is executed to validate testing. This approach is typically used when running in an environment other than the target environment that requires doubles for the hardware level code for compilation. The alternative to linker substitution is run-time substitution in which the real functionality is replaced during the execution of a test case. This substitution is typically done through the reassignment of known function pointers or object replacement.
Test doubles are of a number of different types and varying complexities:
Dummy – A dummy is the simplest form of a test double. It facilitates linker time substitution by providing a default return value where required.
Stub – A stub adds simplistic logic to a dummy, providing different outputs.
Spy – A spy captures and makes available parameter and state information, publishing accessors to test code for private information allowing for more advanced state validation.
Mock – A mock is specified by an individual test case to validate test-specific behavior, checking parameter values and call sequencing.
Simulator – A simulator is a comprehensive component providing a higher-fidelity approximation of the target capability (the thing being doubled). A simulator typically requires significant additional development effort.
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques:
The TearDown method, which is integral to many test frameworks.
try...catch...finally exception handling structures where available.
Database transactions, where a transaction atomically includes perhaps a write, a read and a matching delete operation (a sketch of this approach appears after this list).
Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt or a continuous integration system such as CruiseControl.
Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before detailed diagnosis can be performed.
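A sketch of the transaction-based approach using Python's built-in sqlite3 module (the person table is invented for illustration): every change a test makes is rolled back afterwards, so the database returns to its initial state whether or not the test passes.

import sqlite3
import unittest

class PersonTableIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the example self-contained; a real
        # integration test would connect to the actual database instead.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE person (name TEXT)")
        self.conn.commit()

    def tearDown(self):
        # Rolling back discards everything the test wrote.
        self.conn.rollback()
        self.conn.close()

    def test_insert_is_visible_inside_the_transaction(self):
        self.conn.execute("INSERT INTO person VALUES (?)", ("Ada",))
        rows = self.conn.execute("SELECT name FROM person").fetchall()
        self.assertEqual(rows, [("Ada",)])

if __name__ == "__main__":
    unittest.main()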
TDD for complex systems
Exercising TDD on large, challenging systems requires a modular architecture, well-defined components with published interfaces, and disciplined system layering with maximization of platform independence. These proven practices yield increased testability and facilitate the application of build and test automation.
Designing for testability
Complex systems require an architecture that meets a range of requirements. A key subset of these requirements includes support for the complete and effective testing of the system. Effective modular design yields components that share traits essential for effective TDD.
High Cohesion ensures each unit provides a set of related capabilities and makes the tests of those capabilities easier to maintain.
Low Coupling allows each unit to be effectively tested in isolation.
Published Interfaces restrict Component access and serve as contact points for tests, facilitating test creation and ensuring the highest fidelity between test and production unit configuration.
A key technique for building effective modular architecture is Scenario Modeling where a set of sequence charts is constructed, each one focusing on a single system-level execution scenario. The Scenario Model provides an excellent vehicle for creating the strategy of interactions between components in response to a specific stimulus. Each of these Scenario Models serves as a rich set of requirements for the services or functions that a component must provide, and it also dictates the order in which these components and services interact together. Scenario modeling can greatly facilitate the construction of TDD tests for a complex system.
Managing tests for large teams
In a larger system, the impact of poor component quality is magnified by the complexity of interactions. This magnification makes the benefits of TDD accrue even faster in the context of larger projects. However, the complexity of the total population of tests can become a problem in itself, eroding potential gains. It sounds simple, but a key initial step is to recognize that test code is also important software and should be produced and maintained with the same rigor as the production code.
Creating and managing the architecture of test software within a complex system is just as important as the core product architecture. Test drivers interact with the UUT, test doubles and the unit test framework.
Conference
The first TDD Conference was held in July 2021. The conference talks were recorded on YouTube.
See also
References
External links
TestDrivenDevelopment on WikiWikiWeb
Microsoft Visual Studio Team Test from a TDD approach
Write Maintainable Unit Tests That Will Save You Time And Tears
Improving Application Quality Using Test-Driven Development (TDD)
Test Driven Development Conference
Extreme programming
Software development philosophies
Software development process
Software testing |
358277 | https://en.wikipedia.org/wiki/Cayley%20graph | Cayley graph | In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley), and uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs make them particularly good candidates for constructing families of expander graphs.
Definition
Let G be a group and S be a generating set of G. The Cayley graph Γ = Γ(G, S) is an edge-colored directed graph constructed as follows:
Each element g of G is assigned a vertex: the vertex set of Γ is identified with G.
Each element s of S is assigned a color c_s.
For every g in G and s in S, there is a directed edge of color c_s from the vertex corresponding to g to the one corresponding to gs.
Not every source requires that S generate the group. If S is not a generating set for G, then Γ is disconnected and each connected component represents a coset of the subgroup generated by S.
If an element s of S is its own inverse, then it is typically represented by an undirected edge.
The set S is sometimes assumed to be symmetric (i.e. S = S⁻¹) and not to contain the identity element of the group. In this case, the uncolored Cayley graph can be represented as a simple undirected graph.
In geometric group theory, the set S is often assumed to be finite, which corresponds to Γ being locally finite.
Examples
Suppose that G = Z is the infinite cyclic group and the set S consists of the standard generator 1 and its inverse (−1 in the additive notation); then the Cayley graph is an infinite path.
Similarly, if G = Z_n is the finite cyclic group of order n and the set S consists of two elements, the standard generator of G and its inverse, then the Cayley graph is the cycle C_n. More generally, the Cayley graphs of finite cyclic groups are exactly the circulant graphs.
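The cyclic-group example can be checked with a short computation; the following plain-Python sketch (the helper name cayley_graph_cyclic is invented for illustration) builds the Cayley graph of the integers modulo n with generating set {1, n − 1} and confirms that every vertex has exactly two distinct neighbours, as expected for the cycle C_n.

def cayley_graph_cyclic(n, generators):
    # Cayley graph of the integers modulo n as an adjacency dictionary:
    # vertex -> set of out-neighbours reached by adding a generator.
    return {g: {(g + s) % n for s in generators} for g in range(n)}

n = 6
graph = cayley_graph_cyclic(n, generators=[1, n - 1])  # 1 and its inverse

# Every vertex of the cycle has exactly two distinct neighbours.
assert all(len(neighbours) == 2 for neighbours in graph.values())
print(graph)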
The Cayley graph of the direct product of groups (with the cartesian product of generating sets as a generating set) is the cartesian product of the corresponding Cayley graphs. Thus the Cayley graph of the abelian group with the set of generators consisting of four elements is the infinite grid on the plane , while for the direct product with similar generators the Cayley graph is the finite grid on a torus.
A Cayley graph of the dihedral group on two generators and is depicted to the left. Red arrows represent composition with . Since is self-inverse, the blue lines, which represent composition with , are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group can be derived from the group presentation
A different Cayley graph of is shown on the right. is still the horizontal reflection and is represented by blue lines, and is a diagonal reflection and is represented by pink lines. As both reflections are self-inverse the Cayley graph on the right is completely undirected. This graph corresponds to the presentation
The Cayley graph of the free group on two generators and corresponding to the set is depicted at the top of the article, and represents the identity element. Travelling along an edge to the right represents right multiplication by while travelling along an edge upward corresponds to the multiplication by Since the free group has no relations, the Cayley graph has no cycles. This Cayley graph is a 4-regular infinite tree and is a key ingredient in the proof of the Banach–Tarski paradox.
A Cayley graph of the discrete Heisenberg group
is depicted to the right. The generators used in the picture are the three matrices given by the three permutations of 1, 0, 0 for the entries . They satisfy the relations , which can also be understood from the picture. This is a non-commutative infinite group, and despite being a three-dimensional space, the Cayley graph has four-dimensional volume growth.
Characterization
The group acts on itself by left multiplication (see Cayley's theorem). This may be viewed as the action of on its Cayley graph. Explicitly, an element maps a vertex to the vertex The set of edges of the Cayley graph and their color is preserved by this action: the edge is mapped to the edge , both having color . The left multiplication action of a group on itself is simply transitive, in particular, Cayley graphs are vertex-transitive. The following is a kind of converse to this:
Sabidussi's Theorem. An (unlabeled and uncolored) directed graph is a Cayley graph of a group if and only if it admits a simply transitive action of by graph automorphisms (i.e. preserving the set of directed edges).
To recover the group and the generating set from the unlabeled directed graph select a vertex and label it by the identity element of the group. Then label each vertex of by the unique element of that maps to The set of generators of that yields as the Cayley graph is the set of labels of out-neighbors of .
Elementary properties
The Cayley graph depends in an essential way on the choice of the set of generators. For example, if the generating set S has k elements, then each vertex of the Cayley graph has k incoming and k outgoing directed edges. In the case of a symmetric generating set S with r elements, the Cayley graph is a regular directed graph of degree r.
Cycles (or closed walks) in the Cayley graph indicate relations between the elements of In the more elaborate construction of the Cayley complex of a group, closed paths corresponding to relations are "filled in" by polygons. This means that the problem of constructing the Cayley graph of a given presentation is equivalent to solving the Word Problem for .
If is a surjective group homomorphism and the images of the elements of the generating set for are distinct, then it induces a covering of graphs
where In particular, if a group has generators, all of order different from 2, and the set consists of these generators together with their inverses, then the Cayley graph is covered by the infinite regular tree of degree corresponding to the free group on the same set of generators.
For any finite Cayley graph, considered as undirected, the vertex connectivity is at least equal to 2/3 of the degree of the graph. If the generating set is minimal (removal of any element and, if present, its inverse from the generating set leaves a set which is not generating), the vertex connectivity is equal to the degree. The edge connectivity is in all cases equal to the degree.
If is the left-regular representation with matrix form denoted , the adjacency matrix of is .
Every group character χ of the group G induces an eigenvector of the adjacency matrix of Γ(G, S). When G is abelian, the associated eigenvalue is
λ_χ = Σ_{s ∈ S} χ(s),
which, for the cyclic group Z/nZ, takes the form Σ_{s ∈ S} e^{2πi js/n} for integers j = 0, 1, ..., n − 1.
In particular, the associated eigenvalue of the trivial character (the one sending every element to 1) is the degree of Γ(G, S), that is, the order of S. If G is an abelian group, there are exactly |G| characters, determining all eigenvalues. The corresponding orthonormal basis of eigenvectors is given by the normalized characters v_χ(g) = |G|^{-1/2} χ(g). It is interesting to note that this eigenbasis is independent of the generating set S.
More generally for symmetric generating sets, take a complete set of irreducible representations of and let with eigenvalue set . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of
Schreier coset graph
If one, instead, takes the vertices to be right cosets of a fixed subgroup one obtains a related construction, the Schreier coset graph, which is at the basis of coset enumeration or the Todd–Coxeter process.
Connection to group theory
Knowledge about the structure of the group can be obtained by studying the adjacency matrix of the graph and in particular applying the theorems of spectral graph theory. Conversely, for symmetric generating sets, the spectral and representation theory of are directly tied together: take a complete set of irreducible representations of and let with eigenvalues . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of
The genus of a group is the minimum genus for any Cayley graph of that group.
Geometric group theory
For infinite groups, the coarse geometry of the Cayley graph is fundamental to geometric group theory. For a finitely generated group, this is independent of choice of finite set of generators, hence an intrinsic property of the group. This is only interesting for infinite groups: every finite group is coarsely equivalent to a point (or the trivial group), since one can choose as finite set of generators the entire group.
Formally, for a given choice of generators, one has the word metric (the natural distance on the Cayley graph), which determines a metric space. The coarse equivalence class of this space is an invariant of the group.
Expansion properties
When S = S⁻¹, the Cayley graph Γ(G, S) is |S|-regular, so spectral techniques may be used to analyze the expansion properties of the graph. In particular for abelian groups, the eigenvalues of the Cayley graph are more easily computable, being given by the character sums λ_χ = Σ_{s ∈ S} χ(s) with top eigenvalue equal to |S|, so we may use Cheeger's inequality to bound the edge expansion ratio using the spectral gap.
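In one standard formulation, for a d-regular graph with second-largest adjacency eigenvalue \lambda_2 (the exact constants vary between sources), the discrete Cheeger inequality reads

\frac{d - \lambda_2}{2} \le h(\Gamma) \le \sqrt{2d(d - \lambda_2)},

where h(\Gamma) denotes the edge expansion (Cheeger constant) of the graph.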
Representation theory can be used to construct such expanding Cayley graphs, in the form of Kazhdan property (T). The following statement holds:
If a discrete group has Kazhdan's property (T), and is a finite, symmetric generating set of , then there exists a constant depending only on such that for any finite quotient of the Cayley graph of with respect to the image of is a -expander.
For example the group has property (T) and is generated by elementary matrices and this gives relatively explicit examples of expander graphs.
Integral classification
An integral graph is one whose eigenvalues are all integers. While the complete classification of integral graphs remains an open problem, the Cayley graphs of certain groups are always integral.
Using previous characterizations of the spectrum of Cayley graphs, note that is integral iff the eigenvalues of are integral for every representation of .
Cayley integral simple group
A group is Cayley integral simple (CIS) if the connected Cayley graph is integral exactly when the symmetric generating set is the complement of a subgroup of . A result of Ahmady, Bell, and Mohar shows that all CIS groups are isomorphic to , or for primes . It is important that actually generates the entire group in order for the Cayley graph to be connected. (If does not generate , the Cayley graph may still be integral, but the complement of is not necessarily a subgroup.)
In the example of , the symmetric generating sets (up to graph isomorphism) are
: is a -cycle with eigenvalues
: is with eigenvalues
The only subgroups of are the whole group and the trivial group, and the only symmetric generating set that produces an integral graph is the complement of the trivial group. Therefore must be a CIS group.
The proof of the complete CIS classification uses the fact that every subgroup and homomorphic image of a CIS group is also a CIS group.
Cayley integral group
A slightly different notion is that of a Cayley integral group , in which every symmetric subset produces an integral graph . Note that no longer has to generate the entire group.
The complete list of Cayley integral groups is given by , and the dicyclic group of order , where and is the quaternion group. The proof relies on two important properties of Cayley integral groups:
Subgroups and homomorphic images of Cayley integral groups are also Cayley integral groups.
A group is Cayley integral iff every connected Cayley graph of the group is also integral.
Normal and Eulerian generating sets
Given a general group , a subset is normal if is closed under conjugation by elements of (generalizing the notion of a normal subgroup), and is Eulerian if for every , the set of elements generating the cyclic group is also contained in .
A 2019 result by Guo, Lytkina, Mazurov, and Revin proves that the Cayley graph is integral for any Eulerian normal subset , using purely representation theoretic techniques.
The proof of this result is relatively short: given an Eulerian normal subset, select pairwise nonconjugate so that is the union of the conjugacy classes . Then using the characterization of the spectrum of a Cayley graph, one can show the eigenvalues of are given by taken over irreducible characters of . Each eigenvalue in this set must be an element of for a primitive root of unity (where must be divisible by the orders of each ). Because the eigenvalues are algebraic integers, to show they are integral it suffices to show that they are rational, and it suffices to show is fixed under any automorphism of . There must be some relatively prime to such that for all , and because is both Eulerian and normal, for some . Sending bijects conjugacy classes, so and have the same size and merely permutes terms in the sum for . Therefore is fixed for all automorphisms of , so is rational and thus integral.
Consequently, if is the alternating group and is a set of permutations given by , then the Cayley graph is integral. (This solved a previously open problem from the Kourovka Notebook.) In addition when is the symmetric group and is either the set of all transpositions or the set of transpositions involving a particular element, the Cayley graph is also integral.
History
Cayley graphs were first considered for finite groups by Arthur Cayley in 1878. Max Dehn in his unpublished lectures on group theory from 1909–10 reintroduced Cayley graphs under the name Gruppenbild (group diagram), which led to the geometric group theory of today. His most important application was the solution of the word problem for the fundamental group of surfaces with genus ≥ 2, which is equivalent to the topological problem of deciding which closed curves on the surface contract to a point.
Bethe lattice
The Bethe lattice or infinite Cayley tree is the Cayley graph of the free group on generators. A presentation of a group by generators corresponds to a surjective map from the free group on generators to the group and at the level of Cayley graphs to a map from the infinite Cayley tree to the Cayley graph. This can also be interpreted (in algebraic topology) as the universal cover of the Cayley graph, which is not in general simply connected.
See also
Vertex-transitive graph
Generating set of a group
Lovász conjecture
Cube-connected cycles
Algebraic graph theory
Cycle graph (algebra)
Notes
External links
Cayley diagrams
Group theory
Permutation groups
Graph families
Application-specific graphs
Geometric group theory |
358913 | https://en.wikipedia.org/wiki/Broadcast%20flag | Broadcast flag | A broadcast flag is a bit field sent in the data stream of a digital television program that indicates whether or not the data stream can be recorded, or if there are any restrictions on recorded content. Possible restrictions include the inability to save an unencrypted digital program to a hard disk or other non-volatile storage, inability to make secondary copies of recorded content (in order to share or archive), forceful reduction of quality when recording (such as reducing high-definition video to the resolution of standard TVs), and inability to skip over commercials.
In the United States, new television receivers using the ATSC standard were supposed to incorporate this functionality by July 1, 2005. The requirement was successfully contested in 2005 and rescinded in 2011.
FCC ruling
Officially called "Digital Broadcast Television Redistribution Control," the FCC's rule is in 47 CFR 73.9002(b) and the following sections, stating in part: "No party shall sell or distribute in interstate commerce a Covered Demodulator Product that does not comply with the Demodulator Compliance Requirements and Demodulator Robustness Requirements." According to the rule, hardware must "actively thwart" piracy.
The rule's Demodulator Compliance Requirements insist that all HDTV demodulators must "listen" for the flag (or assume it to be present in all signals). Flagged content must be output only to "protected outputs" (such as DVI and HDMI ports with HDCP encryption), or in degraded form through analog outputs or digital outputs with visual resolution of 720x480 pixels (EDTV) or less. Flagged content may be recorded only by "authorized" methods, which may include tethering of recordings to a single device.
Since broadcast flags could be activated at any time, a viewer who often records a program might suddenly find that it is no longer possible to save their favorite show. This and other concerns led many to see the flags as a direct affront to consumer rights.
The Demodulator Robustness Requirements are difficult to implement in open-source systems. Devices must be "robust" against user access or modification, so that someone could not easily alter them to ignore the broadcast flag and gain access to the full digital stream. Since open-source device drivers are by design user-modifiable, a PC TV tuner card with open-source drivers would not be "robust".
The GNU Radio project already successfully demonstrated that purely software-based demodulators can exist and the hardware rule is not fully enforceable.
Current status
In American Library Association v. FCC, 406 F.3d 689 (D.C. Cir. 2005), the United States Court of Appeals for the D.C. Circuit ruled that the FCC had exceeded its authority in creating this rule. The court stated that the Commission could not prohibit the manufacture of computer or video hardware without copy-protection technology because the FCC only has authority to regulate transmissions, not devices that receive communications. While it is always possible that the Supreme Court could overturn this ruling, the more likely reemergence of the broadcast flag is in legislation of the United States Congress granting such authority to the FCC.
On May 1, 2006, Sen. Ted Stevens inserted a version of the Broadcast Flag into the Communications, Consumer's Choice, and Broadband Deployment Act of 2006. On June 22, 2006 Sen. John E. Sununu offered an amendment to strike the broadcast and radio flag, but this failed and the broadcast-flag amendment was approved by the Commerce committee. Nonetheless, the overall bill was never passed, and thus died upon adjournment of the 109th Congress in December 2006.
On May 18, 2008, News.com reported that Microsoft had confirmed that current versions of Windows Media Center shipping with the Windows family of operating systems adhered to the use of the broadcast flag, following reports of users being blocked from taping specific airings of NBC programs, mainly American Gladiators and Medium. A Microsoft spokesperson said that Windows Media Center adheres to the "rules set forth by the FCC."
On August 22, 2011, the FCC officially eliminated the broadcast flag regulations.
Related technologies
Radio broadcast flag and RIAA
With the coming of digital radio, the recording industry is attempting to change the ground rules for copyright of songs played on radio. Currently, over the air (i.e. broadcast but not Internet) radio stations may play songs freely but RIAA wants Congress to insert a radio broadcast flag. On April 26, 2006, Congress held a hearing over the radio broadcast flag. Among the witnesses were musicians Anita Baker and Todd Rundgren.
European Broadcast Flag
At present no equivalent signal is typically used in European DVB transmissions, although DVB-CPCM would provide such a set of signals, as defined by DVB-SI, usable on clear-to-air television broadcasts. How adherence to such a system would be enforced in a receiver is not yet clear.
In the UK, the BBC introduced content protection restrictions in 2010 on Free to Air content by licensing data necessary to receive the service information for the Freeview HD broadcasts. However the BBC have stated the highest protection applied will be to allow only one copy to be made.
ISDB
ISDB broadcasts are protected as to allow the broadcast to be digitally recorded once, but to not allow digital copies of the recording to be made. Analog recordings can be copied freely. It is possible to disallow the use of analog outputs, although this has yet to be implemented. The protection can be circumvented with the correct hardware and software.
DVB-CPCM
The Digital Video Broadcasting organization is developing DVB-CPCM which allows broadcasters (especially PayTV broadcaster) far more control over the use of content on (and beyond) home networks. The DVB standards are commonly used in Europe and around the world (for satellite, terrestrial, and cable distribution), but are also employed in the United States by Dish Network. In Europe, some entertainment companies were lobbying to legally mandate the use of DVB-CPCM. Opponents fear that mandating DVB-CPCM will kill independent receiver manufacturers that use open source operating systems (e.g., Linux-based set-top boxes.)
Pay-per-view use of broadcast flag
In the US, since April 15, 2008, pay-per-view movies on cable and satellite television have been flagged to prevent a recording made from a pay-per-view channel on a digital video recorder or other related device from being retained more than 24 hours after the ordered start time of the film. This is standard film industry practice, including for digital rentals from the iTunes Store and Google Play. Movies recorded before that point were still available without flagging and could be copied freely, though as of 2015 those pre-2008 DVR units are well out of date or probably non-functional. The pay-per-view concern is also largely moot for all but special events, as nearly all satellite and cable providers have moved to more easily restricted video-on-demand platforms, and pay-per-view films have been drawn down to non-notable content.
See also
Copy Control Information
Digital Millennium Copyright Act
Digital Rights Management
Digital Transition Content Security Act
Family Entertainment and Copyright Act
References
, October 1, 2005.
External links
Copyright Protection of Digital Television: The “Broadcast Flag”
Electronic Frontier Foundation's Broadcast Flag page
The Broadcast Flag and "Plug & Play": The FCC's Lockdown of Digital Television
U.S. District Court shoots down broadcast flag (CNET)
Broadcast Flag: Media Industry May Try to Steal the Law - June 2005 MP3 Newswire article
Circuit Court ruling striking down (PDF format)
ATSC
Digital television
High-definition television
Digital rights management standards
Federal Communications Commission
Television terminology
History of television |
359396 | https://en.wikipedia.org/wiki/7z | 7z | 7z is a compressed archive file format that supports several different data compression, encryption and pre-processing algorithms. The 7z format initially appeared as implemented by the 7-Zip archiver. The 7-Zip program is publicly available under the terms of the GNU Lesser General Public License. The LZMA SDK 4.62 was placed in the public domain in December 2008. The latest stable version of 7-Zip and LZMA SDK is version 21.06.
The official, informal 7z file format specification is distributed with 7-Zip's source code since 2015. The specification can be found in plain text format in the 'doc' sub-directory of the source code distribution. There have been additional third-party attempts at writing more concrete documentation based on the released code.
Features and enhancements
The 7z format provides the following main features:
Open, modular architecture that allows any compression, conversion, or encryption method to be stacked.
High compression ratios (depending on the compression method used).
AES-256 encryption.
Large file support (up to approximately 16 exbibytes, or 2^64 bytes).
Unicode file names.
Support for solid compression, where multiple files of like type are compressed within a single stream, in order to exploit the combined redundancy inherent in similar files.
Compression and encryption of archive headers.
Support for multi-part archives: e.g. xxx.7z.001, xxx.7z.002, ... (see the context menu items Split File... to create them and Combine Files... to re-assemble an archive from a set of multi-part component files).
Support for custom codec plugin DLLs.
The format's open architecture allows additional future compression methods to be added to the standard.
Compression methods
The following compression methods are currently defined:
LZMA – A variation of the LZ77 algorithm, using a sliding dictionary up to 4 GB in length for duplicate string elimination. The LZ stage is followed by entropy coding using a Markov chain-based range coder and binary trees.
LZMA2 – modified version of LZMA providing better multithreading support and less expansion of incompressible data.
Bzip2 – The standard Burrows–Wheeler transform algorithm. Bzip2 uses two reversible transformations; BWT, then Move to front with Huffman coding for symbol reduction (the actual compression element).
PPMd – Dmitry Shkarin's 2002 PPMdH (PPMII/cPPMII) with small changes: PPMII is an improved version of the 1984 PPM compression algorithm (prediction by partial matching).
DEFLATE – Standard algorithm based on 32 kB LZ77 and Huffman coding. Deflate is found in several file formats including ZIP, gzip, PNG and PDF. 7-Zip contains a from-scratch DEFLATE encoder that frequently beats the de facto standard zlib version in compression size, but at the expense of CPU usage.
A suite of recompression tools called AdvanceCOMP contains a copy of the DEFLATE encoder from the 7-Zip implementation; these utilities can often be used to further compress the size of existing gzip, ZIP, PNG, or MNG files.
Pre-processing filters
The LZMA SDK comes with the BCJ and BCJ2 preprocessors included, so that later stages are able to achieve greater compression: For x86, ARM, PowerPC (PPC), IA-64 Itanium, and ARM Thumb processors, jump targets are 'normalized' before compression by changing relative position into absolute values. For x86, this means that near jumps, calls and conditional jumps (but not short jumps and short conditional jumps) are converted from the machine language "jump 1655 bytes backwards" style notation to normalized "jump to address 5554" style notation; all jumps to 5554, perhaps a common subroutine, are thus encoded identically, making them more compressible. A simplified sketch of this idea appears after the list below.
BCJ – Converter for 32-bit x86 executables. Normalise target addresses of near jumps and calls from relative distances to absolute destinations.
BCJ2 – Pre-processor for 32-bit x86 executables. BCJ2 is an improvement on BCJ, adding additional x86 jump/call instruction processing. Near jump, near call, conditional near jump targets are split out and compressed separately in another stream.
Delta encoding – delta filter, basic preprocessor for multimedia data.
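To make the idea concrete, the following simplified Python sketch (invented for illustration; it is not the actual 7-Zip filter) rewrites the relative 32-bit operands of x86 near-call instructions (opcode 0xE8) as absolute target addresses, which is the essence of what a BCJ-style filter does before the data reaches the compressor.

import struct

def bcj_like_filter(code, base_address=0):
    # Rewrite relative near-call targets (opcode 0xE8) as absolute addresses.
    # A simplified illustration of the BCJ idea, not the actual 7-Zip filter.
    data = bytearray(code)
    i = 0
    while i + 5 <= len(data):
        if data[i] == 0xE8:  # CALL rel32
            rel = struct.unpack_from("<i", data, i + 1)[0]
            absolute = (base_address + i + 5 + rel) & 0xFFFFFFFF
            struct.pack_into("<I", data, i + 1, absolute)
            i += 5
        else:
            i += 1
    return bytes(data)

# Two calls to the same routine get identical operands after filtering,
# which makes the byte stream more compressible.
code = bytes([0xE8, 0x10, 0x00, 0x00, 0x00, 0x90,
              0xE8, 0x0A, 0x00, 0x00, 0x00, 0x90])
print(bcj_like_filter(code).hex())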
Similar executable pre-processing technology is included in other software; the RAR compressor features displacement compression for 32-bit x86 executables and IA-64 executables, and the UPX runtime executable file compressor includes support for working with 16-bit values within DOS binary files.
Encryption
The 7z format supports encryption with the AES algorithm with a 256-bit key. The key is generated from a user-supplied passphrase using an algorithm based on the SHA-256 hash function. SHA-256 is executed 2^18 (262,144) times, which causes a significant delay on slow PCs before compression or extraction starts. This technique is called key stretching and is used to make a brute-force search for the passphrase more difficult. Current GPU-based and custom hardware attacks limit the effectiveness of this particular method of key stretching, so it is still important to choose a strong password.
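The following Python sketch illustrates the key-stretching idea described above, feeding the passphrase and a counter into SHA-256 for 2^18 rounds; it is a generic illustration of the technique, not the exact key-derivation routine used by 7-Zip.

import hashlib

def stretched_key(passphrase, salt=b"", cycles_power=18):
    # Feed the passphrase and a counter into SHA-256 for 2**cycles_power
    # rounds to derive a 256-bit key (illustrative only).
    digest = hashlib.sha256()
    data = salt + passphrase.encode("utf-16-le")
    for counter in range(1 << cycles_power):
        digest.update(data + counter.to_bytes(8, "little"))
    return digest.digest()

key = stretched_key("correct horse battery staple")
print(key.hex())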
The 7z format provides the option to encrypt the filenames of a 7z archive.
Limitations
The 7z format does not store filesystem permissions (such as UNIX owner/group permissions or NTFS ACLs), and hence can be inappropriate for backup or archival purposes. A workaround on UNIX-like systems is to convert data to a tar bitstream before compressing with 7z. But it is worth noting that GNU tar (common in many UNIX environments) can also compress with the LZMA2 algorithm ("xz") natively, without the use of 7z, using the "-J" switch. The resulting file extension is ".tar.xz" or ".txz", not ".tar.7z". This method of compression has been adopted by many distributions for packaging, such as Arch, Debian (deb), Fedora (rpm) and Slackware. (The older "lzma" format is less efficient.) On the other hand, it is important to note that tar does not preserve the filesystem encoding, which means that tar-compressed filenames can become unreadable if decompressed on a different computer.
The 7z format does not allow extraction of some "broken files"—that is (for example) if one has the first segment of a series of 7z files, 7z cannot give the start of the files within the archive—it must wait until all segments are downloaded. The 7z format also lacks recovery records, making it vulnerable to data degradation unless used in conjunction with external solutions, like parchives, or within filesystems with robust error-correction. By way of comparison, zip files also lack a recovery feature while rar has one.
See also
Comparison of archive formats
List of archive formats
Open format
References
Further reading
External links
Computer-related introductions in 1999
Archive formats
Russian inventions |
360788 | https://en.wikipedia.org/wiki/Backdoor%20%28computing%29 | Backdoor (computing) | A backdoor is a typically covert method of bypassing normal authentication or encryption in a computer, product, embedded device (e.g. a home router), or its embodiment (e.g. part of a cryptosystem, algorithm, chipset, or even a "homunculus computer" —a tiny computer-within-a-computer such as that found in Intel's AMT technology). Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptographic systems. From there it may be used to gain access to privileged information like passwords, corrupt or delete data on hard drives, or transfer information within autoschediastic networks.
A backdoor may take the form of a hidden part of a program, a separate program (e.g. Back Orifice may subvert the system through a rootkit), code in the firmware of the hardware, or parts of an operating system such as Windows. Trojan horses can be used to create vulnerabilities in a device. A Trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor. Although some are secretly installed, other backdoors are deliberate and widely known. These kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords.
Many systems that store information within the cloud fail to create accurate security measures. If many systems are connected within the cloud, hackers can gain access to all other platforms through the most vulnerable system.
Default passwords (or other default credentials) can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version.
In 1993, the United States government attempted to deploy an encryption system, the Clipper chip, with an explicit backdoor for law enforcement and national security access. The chip was unsuccessful.
Overview
The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference. They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and thus the term "backdoor" is now preferred; the term trapdoor in this sense has fallen out of use. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.
A backdoor in a login system might take the form of a hard coded user and password combination which gives access to the system. An example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
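For illustration, a hard-coded credential backdoor of this kind might look like the following sketch (hypothetical code; the account name and maintenance password are invented for this example and do not correspond to any real system):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of a hard-coded backdoor in a login check.
 * The account name and maintenance password are invented for illustration. */
static int check_password(const char *user, const char *pass)
{
    /* Normal path: compare against the stored credential database
     * (reduced here to a single account for brevity). */
    if (strcmp(user, "alice") == 0 && strcmp(pass, "correct-horse") == 0)
        return 1;

    /* Backdoor: a developer-known combination that always succeeds,
     * regardless of what is in the credential database. */
    if (strcmp(user, "maint") == 0 && strcmp(pass, "factory-default-1983") == 0)
        return 1;

    return 0;
}

int main(void)
{
    /* The hidden combination grants access even though it appears in no
     * user database or documentation. */
    printf("%d\n", check_password("maint", "factory-default-1983")); /* prints 1 */
    return 0;
}
```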
Although the number of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely credited, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
Politics and attribution
There are a number of cloak and dagger considerations that come into play when apportioning responsibility.
Covert backdoors sometimes masquerade as inadvertent defects (bugs) for reasons of plausible deniability. In some cases, these might begin life as an actual bug (inadvertent error), which, once discovered are then deliberately left unfixed and undisclosed, whether by a rogue employee for personal advantage, or with C-level executive awareness and oversight.
It is also possible for an entirely above-board corporation's technology base to be covertly and untraceably tainted by external agents (hackers), though this level of sophistication is thought to exist mainly at the level of nation state actors. For example, if a photomask obtained from a photomask supplier differs in a few gates from its photomask specification, a chip manufacturer would be hard-pressed to detect this if the change is otherwise functionally silent; a covert rootkit running in the photomask etching equipment could introduce this discrepancy without the photomask manufacturer's knowledge either, and by such means, one backdoor potentially leads to another. (This hypothetical scenario is essentially a silicon version of the undetectable compiler backdoor, discussed below.)
In general terms, the long dependency-chains in the modern, highly specialized technological economy and innumerable human-elements process control-points make it difficult to conclusively pinpoint responsibility at such time as a covert backdoor becomes unveiled.
Even direct admissions of responsibility must be scrutinized carefully if the confessing party is beholden to other powerful interests.
Examples
Worms
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit, placed secretly on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system. In this case, a two-line change appeared to check root access permissions of a caller to the sys_wait4 function, but because it used assignment = instead of equality checking ==, it actually granted root permissions to the caller. This difference is easily overlooked, and could even be interpreted as an accidental typographical error rather than an intentional attack.
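A simplified, self-contained reconstruction of that style of change is shown below (this is not the actual code from the incident; the struct and the magic options value are stand-ins introduced for illustration):

```c
#include <stdio.h>

/* Simplified reconstruction of an assignment-versus-comparison backdoor --
 * not the actual 2003 kernel code.  A small struct stands in for the
 * kernel's per-process credentials, and 0x81 is an arbitrary stand-in for
 * the attacker-chosen flag combination. */
struct task { int uid; };

static int backdoored_check(struct task *current, int options)
{
    /* Reads like "return an error if this flag combination is used by root",
     * but the single '=' assigns uid 0 (root) to the caller instead of
     * comparing against it; the condition then evaluates as false. */
    if ((options == 0x81) && (current->uid = 0))
        return -1;  /* apparent error path, never actually taken */
    return 0;
}

int main(void)
{
    struct task t = { .uid = 1000 };        /* unprivileged caller */
    backdoored_check(&t, 0x81);             /* attacker supplies the magic flags */
    printf("uid after call: %d\n", t.uid);  /* prints 0 -- the caller is now root */
    return 0;
}
```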
In January 2014, a backdoor was discovered in certain Samsung Android products, like the Galaxy devices. The Samsung proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that is in charge of handling the communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, that allows the backdoor operator to perform via modem remote I/O operations on the device hard disk or other storage. As the modem is running Samsung proprietary Android software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device.
Object code backdoors
Harder to detect backdoors involve modifying object code, rather than source code – object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or inserted at some point during compilation, assembly linking, or loading – in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source on a trusted system.
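As a minimal sketch of the length-and-checksum comparison mentioned above (a hypothetical example; a real deployment would use a cryptographic hash and keep the reference values on read-only or offline media):

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of integrity checking by length and checksum.  The
 * rotate-and-xor checksum is for illustration only; a real system would use
 * a cryptographic hash such as SHA-256. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 2;
    }

    uint32_t sum = 0;
    long length = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        sum = ((sum << 1) | (sum >> 31)) ^ (uint32_t)c;  /* rotate left, mix in byte */
        length++;
    }
    fclose(f);

    /* Compare the output against the length and checksum recorded when the
     * known-good binary was installed. */
    printf("length=%ld checksum=%08x\n", length, (unsigned)sum);
    return 0;
}
```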
Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves – for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change.
Because object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself – so that when it detects that it is compiling the program under attack it inserts the backdoor – or alternatively the assembler, linker, or loader. As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to a United States Air Force security analysis of Multics, where it was first described, and was popularized in Thompson's 1984 article, entitled "Reflections on Trusting Trust"; it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system, such as the operating system, and can be inserted during the system booting process; these are also mentioned in that analysis, and now exist in the form of boot sector viruses.
Asymmetric backdoors
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, NSA inserted a kleptographic backdoor into the Dual EC DRBG standard.
There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.
Compiler backdoors
A sophisticated form of black box backdoor is a compiler backdoor, where not only is a compiler subverted (to insert a backdoor in some other program, such as a login program), but it is further modified to detect when it is compiling itself and then inserts both the backdoor insertion code (targeting the other program) and the code-modifying self-compilation, like the mechanism through which retroviruses infect their host. This can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped.
This attack was originally presented in a United States Air Force security analysis of Multics, which described such an attack on a PL/I compiler and called it a "compiler trap door". The analysis also mentioned a variant in which the system initialization code is modified to insert a backdoor during booting; because booting is complex and poorly understood, the report called this an "initialization trapdoor"; it is now known as a boot sector virus.
This attack was then actually implemented by Ken Thompson, and popularized in his Turing Award acceptance speech in 1983 (published 1984), "Reflections on Trusting Trust", which points out that trust is relative, and the only software one can truly trust is code where every step of the bootstrapping has been inspected. This backdoor mechanism is based on the fact that people only review source (human-written) code, and not compiled machine code (object code). A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.
Thompson's paper describes a modified version of the Unix C compiler that would put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and would also add this feature undetectably to future compiler versions upon their compilation as well.
Because the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof of concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead.
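In outline, the mechanism can be sketched as follows (an illustrative reconstruction, not Thompson's actual code; strings stand in for source and object code, and the two pattern tests stand in for recognising the program being compiled):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of the "Trusting Trust" mechanism -- not Thompson's
 * actual code.  emit() stands in for writing object code to the output
 * binary, and the strstr() tests stand in for pattern-matching on the
 * source being compiled. */

static void emit(const char *object_code) { puts(object_code); }

static int compiling_login(const char *src)    { return strstr(src, "login")    != NULL; }
static int compiling_compiler(const char *src) { return strstr(src, "compiler") != NULL; }

static void compile(const char *src)
{
    emit("[object code for the visible source]");

    if (compiling_login(src)) {
        /* Trigger 1: add a hidden check that accepts a master password. */
        emit("[hidden: accept master password]");
    }

    if (compiling_compiler(src)) {
        /* Trigger 2: re-insert this whole detection/insertion logic, so the
         * backdoor survives recompilation even from clean compiler source. */
        emit("[hidden: copy of triggers 1 and 2]");
    }
}

int main(void)
{
    compile("source of the login program");
    compile("source of the compiler itself (no backdoor visible in it)");
    return 0;
}
```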
An updated analysis of the original exploit, as well as a historical overview and survey of the literature, have since been published.
Occurrences
Thompson's version was, officially, never released into the wild. It is believed, however, that a version was distributed to BBN and at least one use of the backdoor was recorded. There are scattered anecdotal reports of such backdoors in subsequent years.
In August 2009, an attack of this kind was discovered by Sophos labs. The W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code to the compilation of new Delphi programs, allowing it to infect and propagate to many systems, without the knowledge of the software programmer. An attack that propagates by building its own Trojan horse can be especially hard to discover. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.
Countermeasures
Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system – typically one should rebuild a clean system and transfer data (but not executables) over. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing a disassembler from scratch.
A generic method to counter trusting trust attacks is called Diverse Double-Compiling (DDC). The method requires a different compiler and the source code of the compiler-under-test. That source, compiled with both compilers, results in two different stage-1 compilers, which however should have the same behavior. Thus the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler-under-test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no trojan, using icc (v. 11.0) as the different compiler.
In practice such verifications are not done by end users, except in extreme circumstances of intrusion detection and analysis, due to the rarity of such sophisticated attacks, and because programs are typically distributed in binary form. Removing backdoors (including compiler backdoors) is typically done by simply rebuilding a clean system. However, the sophisticated verifications are of interest to operating system vendors, to ensure that they are not distributing a compromised system, and in high-security settings, where such attacks are a realistic concern.
List of known backdoors
Back Orifice was created in 1998 by hackers from Cult of the Dead Cow group as a remote administration tool. It allowed Windows computers to be remotely controlled over a network and parodied the name of Microsoft's BackOffice.
The Dual EC DRBG cryptographically secure pseudorandom number generator was revealed in 2013 to possibly have a kleptographic backdoor deliberately inserted by NSA, who also had the private key to the backdoor.
Several backdoors in the unlicensed copies of WordPress plug-ins were discovered in March 2014. They were inserted as obfuscated JavaScript code and silently created, for example, an admin account in the website database. A similar scheme was later exposed in the Joomla plugin.
Borland Interbase versions 4.0 through 6.0 had a hard-coded backdoor, put there by the developers. The server code contains a compiled-in backdoor account (username: politically, password: correct), which could be accessed over a network connection; a user logging in with this backdoor account could take full control over all Interbase databases. The backdoor was detected in 2001 and a patch was released.
A Juniper Networks backdoor inserted in 2008 into versions of the ScreenOS firmware from 6.2.0r15 to 6.2.0r18 and from 6.3.0r12 to 6.3.0r20 gives any user administrative access when a special master password is used.
Several backdoors were discovered in C-DATA Optical Line Termination (OLT) devices. Researchers released the findings without notifying C-DATA because they believe the backdoors were intentionally placed by the vendor.
See also
Backdoor:Win32.Hupigon
Backdoor.Win32.Seed
Hardware backdoor
Titanium (malware)
References
Further reading
External links
Finding and Removing Backdoors
Three Archaic Backdoor Trojan Programs That Still Serve Great Pranks
Backdoors removal — List of backdoors and their removal instructions.
FAQ Farm's Backdoors FAQ: wiki question and answer forum
List of backdoors and Removal —
Types of malware
Spyware
Espionage techniques
Rootkits
Cryptography |
360867 | https://en.wikipedia.org/wiki/Sky%20Cinema | Sky Cinema | Sky Cinema is a British subscription film service owned by Sky Group (a division of Comcast). In the United Kingdom, Sky Cinema channels currently broadcast on the Sky satellite and Virgin Media cable platforms, and in addition Sky Cinema on demand content are available through these as well as via Now TV, BT TV and TalkTalk TV.
In 2016, Sky rebranded its television film channel operations under a single brand: on 8 July the channels in the United Kingdom and Ireland were rebranded from Sky Movies to Sky Cinema; on 22 September the Sky Cinema brand (originally used for the flagship network) was extended to the German and Austrian channels formerly known as Sky Film; and on 5 November the Italian Sky Cinema channels followed suit, adopting the channel packages introduced in the United Kingdom and Ireland earlier that year.
History
1989–1998: Early years
Launched on 5 February 1989, Sky Movies was originally a single channel, part of Sky's original four-channel package – alongside Sky News, Eurosport and Sky Channel (which later became Sky One) – on the Astra 1A satellite system; the first film shown on the channel, at 6.00pm, was 1987's Project X, starring Matthew Broderick. Before launch, Sky signed first-run deals with 20th Century Fox, Paramount, Warner Bros. Entertainment, Columbia, Orion and Buena Vista Distribution Company (which included Touchstone and Disney). One year after it began broadcasts, it became the first Sky channel to scramble its signal, using an encryption system called VideoCrypt. Anyone attempting to view it without a decoder and smart card could only see a scrambled picture.
On 2 November 1990, Sky Television merged with rival British Satellite Broadcasting, acquiring The Movie Channel. With the launch of the second Astra satellite (1B), The Movie Channel was added to the Sky package on 15 April 1991; the first film shown was 1989's Indiana Jones and the Last Crusade, starring Harrison Ford and Sean Connery. From the relaunch of the channel under BSkyB, its ident was made by Pacific Data Images and was heavily based on NBC's movie opening used from 1987 to 1993. Similarly, Sky Movies had been made available to viewers on BSB's old satellite on 8 April earlier that year, replacing its music channel, The Power Station. In the same year, on 6 May, Sky Movies and The Movie Channel started broadcasting 24 hours per day – previously they had been on air from early afternoon until the early hours of the next morning. In addition to the 6.00pm, 8.00pm and 10.00pm slots, Sky Movies dedicated each evening to a different film genre, such as:
Monday Night Comedy
Tuesday Night Action
Thursday Night Horror
After Dark – used for Saturday and Wednesday nights at approximately 11.30pm, which included erotic films with a BBFC 18 certificate that often showed sexually explicit material and various adult-oriented content
At the same time, The Movie Channel started to begin its evening films at the later slots of 6.15pm, 8.15pm and 10.15pm, and also showed classic and children's films (at around 4.00pm) during the daytime hours between early morning and late afternoon.
For three consecutive years in the early 1990s, Sky Movies carried non-film premium content known as "special events", including World Wrestling Federation's annual matches, various music concerts and live boxing competitions, the first such event being Mike Tyson vs. Buster Douglas on 11 February 1990. This was because at this time all of Sky's other channels, including Sky Sports, were shown free-to-air; during this period, the service was often referred to as Sky Movies Plus (up until 31 August 1993, shortly before the launch of the new Multichannels package). When Sky Sports became a pay channel on 1 September 1992, Sky Movies stopped showing non-movie related programming.
On 1 October 1992, The Comedy Channel was replaced by Sky Movies Gold, a service dedicated to "classic movies" from 4.00pm to midnight every day, creating a three-channel package; the first film shown on the new network, at 6.00pm, was 1976's Rocky, starring Sylvester Stallone.
From 1 February 1993, British Sky Broadcasting introduced a new system of ratings used at various times on Sky Movies, The Movie Channel and Sky Movies Gold, replacing the British Board of Film Classification certificates; it lasted over four years and remained on air until 31 October 1997.
On 1 October 1995, Sky Movies Gold started sharing its transponder space with The Disney Channel (which had been delayed for over six years), resulting in the service's broadcasting hours changing to 10.00pm to 6.00am.
The two main channels were rebranded under a common brand on 1 November 1997: Sky Movies became Sky Movies Screen 1 and The Movie Channel became Sky Movies Screen 2. Sky Box Office, a four-channel near video-on-demand service, launched on 1 December of that year; Cable & Wireless chose not to carry the service, instead opting for Front Row. Following another major rebrand on 10 September 1998, Sky Movies Screen 1 became Sky MovieMax, Sky Movies Screen 2 became Sky Premier and Sky Movies Gold was renamed Sky Cinema.
1998–2007: Digital era
The launch of Sky Digital from the new Astra 28.2°E satellite position on 1 October 1998 was accompanied by a major expansion of channels. Sky Premier and Sky MovieMax each added three multiplex channels (Sky Premier 2 to 4 and Sky MovieMax 2 to 4), Sky Cinema launched Sky Cinema 2, and Sky Premier Widescreen – at the time the only channel devoted to showing widescreen films – was also launched; all were exclusive to digital satellite. In the same year, on 15 November, Sky MovieMax and Sky Premier launched on the ONdigital terrestrial platform. On 1 October 1999, Sky MovieMax 5 was launched.
From 1 July 2002, the Sky Movies channels saw yet another rebranding exercise: the Sky Premier channels were renamed Sky Movies Premier, the Sky MovieMax channels became Sky Movies Max and the Sky Cinema channels became Sky Movies Cinema. Eventually Sky listened to demands for more widescreen films, and in June 2003 the dedicated widescreen service was closed, with the majority of films on the remaining channels shown in widescreen. On 1 November 2003, the Sky Movies Premier and Sky Movies Max channels were all brought under one banner as simply Sky Movies 1 to 9. At the same time, Sky Movies Cinema 1 and 2 became Sky Cinema 1 and 2.
Sky Movies, along with numerous other channels, became available to watch via Sky Mobile TV in 2005, in partnership with Vodafone. From 30 January 2006, Sky Movies 9 and the new Sky Movies 10 started broadcasting from 5.00pm to 3.00am. They were PIN-protected, meaning that for the first time films with a BBFC 15 certificate were able to be shown as early as 5.00pm. With the launch of Sky HD, the two channels were also made available in a high-definition format.
2007–2016: Sky Movies goes categorised
From 4 April 2007, the Sky Movies channels were revamped with each service covering a different genre, although three of the HD channels had already launched before the others:
Premiere (includes +1)
Comedy
Action & Thriller
Family
Drama
Classics
Sci-Fi & Horror
Modern Greats
Indie
HD1 and HD2
Sky later made Sky Movies HD1 and HD2 available to subscribers without HDTV equipment through two channels simulcasting the same content in SDTV format; these channels were known as Sky Movies SD1 and SD2. They were renamed Sky Movies Screen 1 and Screen 2 in February 2008, and the HDTV channels were renamed Sky Movies Screen 1 HD and Screen 2 HD accordingly. On 20 March 2008, an additional high-definition film channel called Sky Movies Premiere HD, a simulcast of the existing Sky Movies Premiere channel, was added after many requests from Sky HD subscribers.
Sky also announced that in October 2008 it would launch six new high-definition simulcast channels: Sky Movies Action/Thriller HD, Sky Movies Sci-Fi/Horror HD, Sky Movies Drama HD, Sky Movies Modern Greats HD, Sky Movies Family HD and Sky Movies Comedy HD. This meant that almost all Sky Movies channels were broadcast in both standard and high definition, except for Sky Movies Premiere +1, Sky Movies Classics and Sky Movies Indie, which remained standard-definition only until Sky Movies Indie HD launched on 26 October 2009. Sky Movies was rebranded along with the various other Sky channels on 1 January 2010.
On 26 March 2010, some Sky Movies channels were renamed; the new Sky Movies Showcase, which replaced Sky Movies Screen 1, was devoted to box sets, collections and seasons. Sky Movies also reshuffled its bouquet of ten channels to achieve greater "clarity" for subscribers. The changes included Sky Movies Action & Thriller becoming Sky Movies Action & Adventure, Sky Movies Drama becoming Sky Movies Drama & Romance and Sky Movies Screen 2 becoming Sky Movies Crime & Thriller. The Sky Movies HD channels launched on the Virgin Media platform on 2 August 2010.
Sky Movies Classics HD launched on 9 August 2010, initially exclusively on Sky; the channel was added to Virgin Media on 4 October 2011. Smallworld Cable added the Sky Movies HD channels to their line-up in the first quarter of 2012, followed by UPC Ireland on 16 August 2012.
On 28 March 2013, Sky Movies Disney was launched, effectively replacing Disney Cinemagic, as part of a multi-year film output deal between Sky and The Walt Disney Company. This marked the first time that Disney had been involved in a co-branded linear film channel anywhere in the world; new Disney films became available on Sky Movies Disney around six months after they ended their cinema run. To make room for the channel, Sky Movies Classics ceased broadcasting, Sky Movies Modern Greats was rebranded as Sky Movies Greats and Sky Movies Indie became Sky Movies Select, with the content of the three former brands merged into Select and Greats.
2016–present: Rebrand and 4K UHD
On 15 June 2016, Sky announced that Sky Movies would rebrand as Sky Cinema on 8 July; this change aligned the channel's naming with those of Sky's film services in other European countries, in concert with Sky plc's takeover of Sky Deutschland and Sky Italia. To compete with subscription video-on-demand services, Sky announced that the rebranded network would premiere "a new film each day", and that it would expand the service's on-demand library. Sky also announced plans to launch a 4K ultra-high-definition feed later in the year. 4K films became available on 13 August 2016 for Sky Q customers with the 2TB box, Sky Cinema and multi-screen packs; around 70 films were available by the end of 2016.
On 22 June 2020, Sky added a content warning to several older films stating that they "have outdated attitudes, language, and cultural depictions which may cause offence today".
On 23 July 2020, Sky Cinema launched a twelfth channel, Sky Cinema Animation, replacing Sky Cinema Premiere +1 on Sky and Virgin Media UK. Sky Cinema Premiere +1 continued to air on Virgin Media Ireland until its removal on 13 August 2020. The timeshift resumed broadcasting on 6 January 2021, replacing Sky Cinema Disney, which had been shut down on 30 December 2020 (with its content moving to Disney+) and had been temporarily replaced by Sky Cinema Five Star Movies from the next day (31 December) until Premiere +1's return on 6 January 2021.
On 5 August 2021, Sky agreed a deal with ViacomCBS to launch Paramount+ in the United Kingdom, Ireland, Italy, Germany, Switzerland and Austria by 2022. The app will be available on Sky Q, and Sky Cinema subscribers will have access to Paramount+ at no charge.
Channels
Sky regularly gives one, two or three of its Sky Cinema channels a temporary rebrand to air different kinds of seasonal or promotional programming; some examples of these temporary channel rebrandings are listed below.
Current
Former
Original productions
Sky Cinema has a dedicated production team that produces over 100 hours of original film-related programming each year – including Sky Cinema News and The Top Ten Show. In addition, Sky's close relationships with the film studios mean it regularly gets exclusive on-set access and talent for one-off 'making-of' specials and various talent-based programming.
In 1998, Elisabeth Murdoch (who was BSkyB's director of channels and services at the time) advocated Sky setting up a film funding and production unit (similar to BBC Films and Film4 Productions). The result was Sky Pictures, which existed to invest in both low-budget and mainstream British films. However, following a lack of success and her decision to leave Sky and set up her own production company, Shine, the unit was scaled back and closed in 2001.
In January 2018, Sky announced a partnership with film distributor Altitude Film Distribution to launch Sky Cinema Original Films, a new brand that would distribute films for Sky Cinema's on-demand service as well as release them into cinemas. The first film under the new banner was the United Kingdom release of the 2017 animated film Monster Family. Other films such as The Hurricane Heist, Anon, Final Score and Extremely Wicked, Shockingly Evil and Vile have also been released.
See also
Premiere
Home Video Channel
Turner Classic Movies
Film4
Carlton Cinema
The Studio
Movies 24
Movies4Men
Movies4Men 2
Great! Movies
AMC
References
External links
1989 establishments in the United Kingdom
English-language television stations in the United Kingdom
Movie channels in the United Kingdom
Sky television channels
Television channels and stations established in 1989
Television channels in the United Kingdom |
363188 | https://en.wikipedia.org/wiki/NetWare | NetWare | NetWare is a discontinued computer network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, using the IPX network protocol.
The original NetWare product in 1983 supported clients running both CP/M and MS-DOS, ran over a proprietary star network topology and was based on a Novell-built file server using the Motorola 68000 processor. The company soon moved away from building its own hardware, and NetWare became hardware-independent, running on any suitable Intel-based IBM PC compatible system, and able to utilize a wide range of network cards. From the beginning NetWare implemented a number of features inspired by mainframe and minicomputer systems that were not available in its competitors' products.
In 1991, Novell introduced cheaper peer-to-peer networking products for DOS and Windows, unrelated to their server-centric NetWare. These are NetWare Lite 1.0 (NWL), and later Personal NetWare 1.0 (PNW) in 1993.
In 1993, the main NetWare product line took a dramatic turn when version 4 introduced NetWare Directory Services (NDS, later renamed eDirectory), a global directory service based on ISO X.500 concepts (seven years later, Microsoft released Active Directory, which lacked the tree structure and time synchronization of NDS). The directory service, along with a new e-mail system (GroupWise), application configuration suite (ZENworks), and security product (BorderManager) were all targeted at the needs of large enterprises.
By 2000, however, Microsoft was taking more of Novell's customer base and Novell increasingly looked to a future based on a Linux kernel. The successor to NetWare, Open Enterprise Server (OES), released in March 2005, offers all the services previously hosted by NetWare 6.5, but on a SUSE Linux Enterprise Server; the NetWare kernel remained an option until OES 11 in late 2011.
The final update release was version 6.5SP8 of May 2009; NetWare is no longer on Novell's product list. NetWare 6.5SP8 General Support ended in 2010; Extended Support was available until the end of 2015, and Self Support until the end of 2017. The replacement is Open Enterprise Server.
History
NetWare evolved from a very simple concept: file sharing instead of disk sharing. By controlling access at the level of individual files, instead of entire disks, files could be locked and better access control implemented. In 1983 when the first versions of NetWare originated, all other competing products were based on the concept of providing shared direct disk access. Novell's alternative approach was validated by IBM in 1984, which helped promote the NetWare product.
Novell NetWare shares disk space in the form of NetWare volumes, comparable to logical volumes. Client workstations running DOS run a special terminate and stay resident (TSR) program that allows them to map a local drive letter to a NetWare volume. Clients log into a server in order to be allowed to map volumes, and access can be restricted according to the login name. Similarly, they can connect to shared printers on the dedicated server, and print as if the printer is connected locally.
At the end of the 1990s, with Internet connectivity booming, the Internet's TCP/IP protocol became dominant on LANs. Novell had introduced limited TCP/IP support in NetWare 3.x (circa 1992) and 4.x (circa 1995), consisting mainly of FTP services and UNIX-style LPR/LPD printing (available in NetWare 3.x), and a Novell-developed webserver (in NetWare 4.x). Native TCP/IP support for the client file and print services normally associated with NetWare was introduced in NetWare 5.0 (released in 1998). There was also a short-lived product, NWIP, that encapsulated IPX in TCP/IP, intended to ease transition of an existing NetWare environment from IPX to IP.
During the early to mid-1980s Microsoft introduced their own LAN system in LAN Manager, based on the competing NBF protocol. Early attempts to compete with NetWare failed, but this changed with the inclusion of improved networking support in Windows for Workgroups, and then the successful Windows NT and Windows 95. NT, in particular, offered a sub-set of NetWare's services, but on a system that could also be used on a desktop, and due to the vertical integration there was no need for a third-party client.
Early years
NetWare originated from consulting work by SuperSet Software, a group founded by the friends Drew Major, Dale Neibaur, Kyle Powell and later Mark Hurst. This work stemmed from their classwork at Brigham Young University in Provo, Utah, starting in October 1981.
In 1981, Raymond Noorda engaged the SuperSet team. The team was originally assigned to create a CP/M disk sharing system to help network the CP/M Motorola 68000 hardware that Novell sold at the time; the first S-Net was CP/M-68K-based and shared a hard disk. In 1983, the team was privately convinced that CP/M was a doomed platform and instead came up with a successful file-sharing system for the newly introduced IBM-compatible PC. They also wrote an application called Snipes – a text-mode game – and used it to test the new network and demonstrate its capabilities. Snipes (also known as 'NSnipes', for 'Network Snipes') was the first network application ever written for a commercial personal computer, and it is recognized as one of the precursors of many popular multiplayer games such as Doom and Quake.
First called ShareNet or S-Net, this network operating system (NOS) was later called Novell NetWare. NetWare is based on the NetWare Core Protocol (NCP), which is a packet-based protocol that enables a client to send requests to and receive replies from a NetWare server. Initially, NCP was directly tied to the IPX/SPX protocol, and NetWare communicated natively using only IPX/SPX.
The first product to bear the NetWare name was released in 1983. There were two distinct versions of NetWare at that time. One version was designed to run on the Intel 8086 processor and another, called NetWare 68 (aka S-Net), on the Motorola 68000 processor; the latter ran on a proprietary Novell-built file server (Novell could not write an original network operating system from scratch, so they licensed a Unix kernel and based NetWare on that) and used a star network topology. This was soon joined by NetWare 86 4.x, which was written for the Intel 8086. This was replaced in 1985 with Advanced NetWare 86 version 1.0a, which allowed more than one server on the same network. In 1986, after the Intel 80286 processor became available, Novell released Advanced NetWare 286 1.0a. Two versions were offered for sale; the basic version was sold as ELS I and the more enhanced version was sold as ELS II. The acronym ELS was used to identify this new product line as NetWare's Entry Level System.
NetWare 286 2.x
Advanced NetWare version 2.x, launched in 1986, was written for the then-new 80286 CPU. The 80286 CPU features a new 16-bit protected mode that provides access to up to 16 MiB RAM as well as new mechanisms to aid multi-tasking. (Prior to the 80286, PC CPU servers used the Intel 8088/8086 8-/16-bit processors, which are limited to an address space of 1 MiB with not more than 640 KiB of directly addressable RAM.) The combination of a higher 16 MiB RAM limit, 80286 processor feature utilization, and 256 MB NetWare volume size limit (compared to the 32 MB that DOS allowed at that time) allowed the building of reliable, cost-effective server-based local area networks for the first time. The 16 MiB RAM limit was especially important, since it makes enough RAM available for disk caching to significantly improve performance. This became the key to Novell's performance while also allowing larger networks to be built.
In a significant innovation, NetWare 286 is also hardware-independent, unlike competing network server systems. Novell servers can be assembled using any brand system with an Intel 80286 CPU, any MFM, RLL, ESDI, or SCSI hard drive and any 8- or 16-bit network adapter for which NetWare drivers are available – and 18 different manufacturer's network cards were supported at launch.
The server could support up to four network cards, and these could be a mixture of technologies such as ARCNET, Token Ring and Ethernet. The operating system was provided as a set of compiled object modules that required configuration and linking; any change to the operating system required a re-linking of the kernel. Installation also required the use of a proprietary low-level format program for MFM hard drives called COMPSURF.
The file system used by NetWare 2.x is NetWare File System 286, or NWFS 286, supporting volumes of up to 256 MB. NetWare 286 recognizes 80286 protected mode, extending NetWare's support of RAM from 1 MiB to the full 16 MiB addressable by the 80286. A minimum of 2 MiB is required to start up the operating system; any additional RAM is used for FAT, DET and file caching. Since 16-bit protected mode is implemented in the 80286 and every subsequent Intel x86 processor, NetWare 286 version 2.x will run on any 80286 or later compatible processor.
NetWare 2.x implements a number of features inspired by mainframe and minicomputer systems that were not available in other operating systems of the day. The System Fault Tolerance (SFT) features includes standard read-after-write verification (SFT-I) with on-the-fly bad block re-mapping (at the time, disks did not have that feature built in) and software RAID1 (disk mirroring, SFT-II). The Transaction Tracking System (TTS) optionally protects files against incomplete updates. For single files, this requires only a file attribute to be set. Transactions over multiple files and controlled roll-backs are possible by programming to the TTS API.
NetWare 286 2.x normally requires a dedicated PC to act as the server, where the server uses DOS only as a boot loader to execute the operating system file . All memory is allocated to NetWare; no DOS ran on the server. However, a "non-dedicated" version was also available for price-conscious customers. In this, DOS 3.3 or higher remains in memory, and the processor time-slices between the DOS and NetWare programs, allowing the server computer to be used simultaneously as a network file server and as a user workstation. Because all extended memory (RAM above 1 MiB) is allocated to NetWare, DOS is limited to only 640 KiB; expanded memory managers that used the MMU of 80386 and higher processors, such as EMM386, do not work; 8086-style expanded memory on dedicated plug-in cards is possible however. Time slicing is accomplished using the keyboard interrupt, which requires strict compliance with the IBM PC design model, otherwise performance is affected.
Server licensing on early versions of NetWare 286 is accomplished by using a key card. The key card was designed for an 8-bit ISA bus, and has a serial number encoded on a ROM chip. The serial number has to match the serial number of the NetWare software running on the server. To broaden the hardware base, particularly to machines using the IBM MCA bus, later versions of NetWare 2.x do not require the key card; serialised license floppy disks are used in place of the key cards.
Licensing is normally for 100 users, but two ELS versions were also available. First a 5-user ELS in 1987, and followed by the 8-user ELS 2.12 II in 1988.
NetWare 3.x
NetWare's 3.x range was a major step forward. It began with version 3.0 in 1990, followed quickly by version 3.10 and 3.11 in 1991.
A key feature was support for 32-bit protected mode, eliminating the 16 MiB memory limit of NetWare 286 and therefore allowing larger hard drives to be supported (since NetWare 3.x cached the entire file allocation table and directory entry table into memory for improved performance).
NetWare version 3.x was also much simpler to install, with disk and network support provided by software modules called NetWare Loadable Modules (NLMs), loaded either at start-up or when needed. NLMs could also add functionality such as anti-virus software, backup software, database and web servers. Support for long filenames was also provided by an NLM.
A new file system was introduced by NetWare 3.x – "NetWare File System 386", or NWFS 386, which significantly extended volume capacity (1 TB, 4 GB files), and could handle up to 16 volume segments spanning multiple physical disk drives. Volume segments could be added while the server was in use and the volume was mounted, allowing a server to be expanded without interruption.
In NetWare 386 3.x all NLMs ran on the server at the same level of processor memory protection, known as "ring 0". While this provided the best possible performance, it sacrificed reliability because there was no memory protection, and furthermore NetWare 3.x used a co-operative multitasking model, meaning that an NLM was required to yield to the kernel regularly. For either of these reasons a badly behaved NLM could result in a fatal (ABEND) error.
NetWare continued to be administered using console-based utilities.
For a while, Novell also marketed an OEM version of NetWare 3, called Portable NetWare, together with OEMs such as Hewlett-Packard, DEC and Data General, who ported Novell source code to run on top of their Unix operating systems. Portable NetWare did not sell well.
While NetWare 3.x was current, Novell introduced its first high-availability clustering system, named NetWare SFT-III, which allowed a logical server to be completely mirrored to a separate physical machine. Implemented as a shared-nothing cluster, under SFT-III the OS was logically split into an interrupt-driven I/O engine and the event-driven OS core. The I/O engines serialized their interrupts (disk, network etc.) into a combined event stream that was fed to two identical copies of the system engine through a fast (typically 100 Mbit/s) inter-server link. Because of its non-preemptive nature, the OS core, stripped of non-deterministic I/O, behaves deterministically, like a large finite state machine. The outputs of the two system engines were compared to ensure proper operation, and two copies fed back to the I/O engines. Using the existing SFT-II software RAID functionality present in the core, disks could be mirrored between the two machines without special hardware. The two machines could be separated as far as the server-to-server link would permit. In case of a server or disk failure, the surviving server could take over client sessions transparently after a short pause since it had full state information. SFT-III was the first NetWare version able to make use of SMP hardware – the I/O engine could optionally be run on its own CPU. NetWare SFT-III, ahead of its time in several ways, was a mixed success.
With NetWare 3, an improved routing protocol, NetWare Link Services Protocol, was introduced, which scales better than the Routing Information Protocol and allows large networks to be built.
NetWare 4.x
Version 4 in 1993 introduced NetWare Directory Services, later re-branded as Novell Directory Services (NDS), based on X.500, which replaced the Bindery with a global directory service, in which the infrastructure was described and managed in a single place. Additionally, NDS provided an extensible schema, allowing the introduction of new object types. This allowed a single user authentication to NDS to govern access to any server in the directory tree structure. Users could therefore access network resources no matter on which server they resided, although user license counts were still tied to individual servers. (Large enterprises could opt for a license model giving them essentially unlimited per-server users if they let Novell audit their total user count.)
Version 4 also introduced a number of useful tools and features, such as transparent compression at file system level and RSA public/private encryption.
Another new feature was the NetWare Asynchronous Services Interface (NASI). It allowed network sharing of multiple serial devices, such as modems. Client port redirection occurred via a DOS or Windows driver allowing companies to consolidate modems and analog phone lines.
NetWare for OS/2
Promised as early as 1988, when the Microsoft-IBM collaboration was still ongoing and OS/2 1.x was still a 16-bit product, the product didn't become commercially available until after IBM and Microsoft had parted ways and OS/2 2.0 had become a 32-bit, pre-emptive multitasking and multithreading OS.
By August 1993, Novell released its first version of "NetWare for OS/2". This first release supported OS/2 2.1 (1993) as the base OS, and required that users first buy and install IBM OS/2, then purchase NetWare 4.01, and then install the NetWare for OS/2 product. It retailed for $200.
By around 1995, coinciding with IBM's renewed marketing push for its 32-bit OS/2 Warp OS, both as a desktop client and as a LAN server (OS/2 Warp Server), NetWare for OS/2 began receiving some good press coverage. "NetWare 4.1 for OS/2" allowed Novell's network stack and server modules to run on top of IBM's 32-bit kernel and network stack. It was basically NetWare 4.x running as a service on top of OS/2. It was compatible with third party client and server utilities and NetWare Loadable Modules.
Since IBM's 32-bit OS/2 included NetBIOS, IPX/SPX and TCP/IP support, this meant that sysadmins could run all three of the most popular network stacks on a single box, and use the OS/2 box as a workstation too. NetWare for OS/2 shared memory on the system with OS/2 seamlessly. The book "Client Server Survival Guide with OS/2" described it as "glue code that lets the unmodified NetWare 4.x server program think it owns all resources on a OS/2 system". It also claimed that a NetWare server running on top of OS/2 only suffered a 5% to 10% overhead compared with NetWare running on the bare metal hardware, while gaining OS/2's pre-emptive multitasking and object-oriented GUI.
Novell continued releasing bugfixes and updates to NetWare for OS/2 up to 1998.
Strategic mistakes
Novell's strategy with NetWare 286 2.x and 3.x proved very successful; before the arrival of Windows NT Server, Novell claimed 90% of the market for PC based servers.
The design of NetWare 3.x and later involved a DOS partition to load the NetWare server files; while of little technical import (DOS merely loaded NetWare into memory and turned execution over to it, and in later versions DOS could be unloaded from RAM), this feature became a marketing liability. Additionally, the NetWare console remained text-based, which was also a marketing, rather than technical, issue when the Windows graphical interface gained widespread acceptance. Novell could have eliminated this liability by retaining the design of NetWare 286, which installed the server file into a Novell partition and allowed the server to boot from the Novell partition without creating a bootable DOS partition. Novell finally added support for this in a Support Pack for NetWare 6.5.
As Novell initially used IPX/SPX instead of TCP/IP, they were poorly positioned to take advantage of the Internet in 1995. This resulted in Novell servers being bypassed for routing and Internet access in favor of hardware routers, Unix-based operating systems such as FreeBSD, and SOCKS and HTTP Proxy Servers on Windows and other operating systems.
A decision by the management of Novell also took away the ability of independent resellers and engineers to recommend and sell the product. The reduction of their effective sales force created this downward spiral in sales.
NetWare 4.1x and NetWare for Small Business
Novell priced NetWare 4.10 similarly to NetWare 3.12, allowing customers who resisted NDS (typically small businesses) to try it at no cost.
Later Novell released NetWare version 4.11 in 1996 which included many enhancements that made the operating system easier to install, easier to operate, faster, and more stable. It also included the first full 32-bit client for Microsoft Windows-based workstations, SMP support and the NetWare Administrator (NWADMIN or NWADMN32), a GUI-based administration tool for NetWare. Previous administration tools used the Cworthy interface, the character-based GUI tools such as SYSCON and PCONSOLE with blue text-based background. Some of these tools survive to this day, for instance MONITOR.NLM.
Novell packaged NetWare 4.11 with its Web server, TCP/IP support and the Netscape browser into a bundle dubbed IntranetWare (also written as intraNetWare). A version designed for networks of 25 or fewer users was named IntranetWare for Small Business and contained a limited version of NDS and tried to simplify NDS administration. The intranetWare name was dropped in NetWare 5.
During this time Novell also began to leverage its directory service, NDS, by tying their other products into the directory. Their e-mail system, GroupWise, was integrated with NDS, and Novell released many other directory-enabled products such as ZENworks and BorderManager.
NetWare still required IPX/SPX as NCP used it, but Novell started to acknowledge the demand for TCP/IP with NetWare 4.11 by including tools and utilities that made it easier to create intranets and link networks to the Internet. Novell bundled tools, such as the IPX/IP gateway, to ease the connection between IPX workstations and IP networks. It also began integrating Internet technologies and support through features such as a natively hosted web server.
NetWare 5.x
With the release of NetWare 5 in October 1998 Novell switched its primary NCP interface from the IPX/SPX network protocol to TCP/IP to meet market demand. Products continued to support IPX/SPX, but the emphasis shifted to TCP/IP. New features included:
a GUI for NetWare
Novell Storage Services (NSS), a file system to replace the traditional NetWare File System (which Novell continued to support)
Java virtual machine for NetWare
Novell Distributed Print Services (NDPS), an infrastructure for printing over networks
ConsoleOne, a Java-based GUI administration console
directory-enabled Public key infrastructure services (PKIS)
directory-enabled DNS and DHCP servers
support for Storage Area Networks (SANs)
Novell Cluster Services (NCS), a replacement for SFT-III
Oracle 8i with a 5-user license
The Cluster Services improved on SFT-III, as NCS did not require specialized hardware or identical server configurations.
Novell released NetWare 5 during a time when NetWare's market share had started dropping precipitously; many companies and organizations replaced their NetWare servers with servers running Microsoft's Windows NT operating system.
Around this time Novell also released their last upgrade to the NetWare 4 operating system, NetWare 4.2.
NetWare 5 and above supported Novell NetStorage for Internet-based access to files stored within NetWare.
Novell released NetWare 5.1 in January 2000. It introduced a number of tools, such as:
IBM WebSphere Application Server
NetWare Management Portal (later called Novell Remote Manager), web-based management of the operating system
FTP, NNTP and streaming-media servers
NetWare Web Search Server
WebDAV support
NetWare 6.0
NetWare 6 was released in October 2001, shortly after its predecessor. This version has a simplified licensing scheme based on users, not server connections. This allows unlimited connections per user to any number of NetWare servers in the network. Novell Cluster Services was also improved to support 32-node clusters; the base NetWare 6.0 product included a two-node clustering license.
NetWare 6.5
NetWare 6.5 was released in August 2003. Some of the new features in this version included:
more open-source products such as PHP, MySQL and OpenSSH
a port of the Bash shell and a lot of traditional Unix utilities such as wget, grep, awk and sed to provide additional capabilities for scripting
iSCSI support (both target and initiator)
Virtual Office – an "out of the box" web portal for end users providing access to e-mail, personal file storage, company address book, etc.
Domain controller functionality
Universal password
DirXML Starter Pack – synchronization of user accounts with another eDirectory tree, a Windows NT domain or Active Directory.
exteNd Application Server – a Java EE 1.3-compatible application server
support for customized printer driver profiles and printer usage auditing
NX bit support
support for USB storage devices
support for encrypted volumes
The latest – and apparently last – Service Pack for NetWare 6.5 is SP8, released May 2009.
Open Enterprise Server
1.0
In 2003, Novell announced the successor product to NetWare: Open Enterprise Server (OES). First released in March 2005, OES completes the separation of the services traditionally associated with NetWare (such as Directory Services, and file-and-print) from the platform underlying the delivery of those services. OES is essentially a set of applications (eDirectory, NetWare Core Protocol services, iPrint, etc.) that can run atop either a Linux or a NetWare kernel platform. Clustered OES implementations can even migrate services from Linux to NetWare and back again, making Novell one of the very few vendors to offer a multi-platform clustering solution.
Consequent to Novell's acquisitions of Ximian and the German Linux distributor SuSE, Novell moved away from NetWare and shifted its focus towards Linux. Marketing was focused on getting faithful NetWare users to move to the Linux platform for future releases. The clearest indication of this direction was Novell's controversial decision to release Open Enterprise Server on Linux only, not NetWare. Novell later watered down this decision and stated that NetWare's 90 million users would be supported until at least 2015. Meanwhile, many former NetWare customers rejected the confusing mix of licensed software running on an open-source Linux operating system in favor of moving to complete Open Source solutions such as those offered by Red Hat.
2.0
OES 2 was released on 8 October 2007. It includes NetWare 6.5 SP7, which supports running as a paravirtualized guest inside the Xen hypervisor, and a new Linux-based version using SLES 10.
New features include
64-bit support
Virtualization
Dynamic Storage Technology, which provide Shadow Volumes
Domain services for Windows (provided in OES 2 service pack 1)
From the 1990s
Some organizations still used Novell NetWare, but it had started to lose popularity from the mid-1990s, when NetWare was the de facto standard for file- and printer-sharing software for the Intel x86 server platform.
Microsoft successfully took market share from NetWare products from the late-1990s. Microsoft's more aggressive marketing was aimed directly at non-technical management through major magazines, while Novell NetWare's was through more technical magazines read by IT personnel.
Novell did not adapt their pricing structure to current market conditions, and NetWare sales suffered.
NetWare Lite / Personal NetWare
NetWare Lite and Personal NetWare were a series of peer-to-peer networks developed by Novell for DOS- and Windows-based computers aimed at personal users and small businesses between 1991 and 1995.
Performance
NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid- to late-1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and an SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware.
The reasons for NetWare's performance advantage are given below.
File service instead of disk service
When first developed, nearly all LAN storage was based on the disk server model. This meant that if a client computer wanted to read a particular block from a particular file it would have to issue the following requests across the relatively slow LAN:
Read first block of directory
Continue reading subsequent directory blocks until the directory block containing the information on the desired file was found; this could take many directory blocks
Read through multiple file entry blocks until the block containing the location of the desired data block was found; this, too, could span many blocks
Read the desired data block
NetWare, since it was based on a file service model, interacted with the client at the file API level:
Send file open request (if this hadn't already been done)
Send a request for the desired data from the file
All of the work of searching the directory to figure out where the desired data was physically located on the disk was performed at high speed locally on the server.
By the mid-1980s, most NOS products had shifted from the disk service to the file service model. Today, the disk service model is making a comeback; see storage area network (SAN).
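To make the contrast concrete, here is a minimal Python sketch that counts the LAN round trips each model needs to read one file; the two-block directory layout, the file names and the block numbers are purely hypothetical:

# Hypothetical layout: two directory blocks and a few data blocks, all in memory.
directory_blocks = [
    {"README.TXT": 3, "AUTOEXEC.BAT": 4},     # entries for other files
    {"REPORT.DOC": 7},                        # the entry we are looking for
]
data_blocks = {3: b"readme text", 4: b"@echo off", 7: b"quarterly figures"}

def disk_service_read(filename):
    # Disk-server model: the client fetches raw blocks itself, so every directory
    # block it scans and the final data block each cost one LAN round trip.
    round_trips = 0
    for block in directory_blocks:
        round_trips += 1                      # fetch this directory block over the LAN
        if filename in block:
            round_trips += 1                  # fetch the data block over the LAN
            return data_blocks[block[filename]], round_trips
    raise FileNotFoundError(filename)

def file_service_read(filename):
    # File-server model: one file-level request; the server walks its directory
    # structures locally at memory speed and returns only the requested data.
    for block in directory_blocks:
        if filename in block:
            return data_blocks[block[filename]], 1
    raise FileNotFoundError(filename)

print(disk_service_read("REPORT.DOC"))        # (b'quarterly figures', 3)
print(file_service_read("REPORT.DOC"))        # (b'quarterly figures', 1)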
Aggressive caching
From the start, the NetWare design focused on servers with copious amounts of RAM. The entire file allocation table (FAT) was read into RAM when a volume was mounted, thereby requiring a minimum amount of RAM proportional to online disk space; adding a disk to a server would often require a RAM upgrade as well. Unlike most competing network operating systems prior to Windows NT, NetWare automatically used all otherwise unused RAM for caching active files, employing delayed write-backs to facilitate re-ordering of disk requests (elevator seeks). An unexpected shutdown could therefore corrupt data, making an uninterruptible power supply practically a mandatory part of a server installation.
The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled the amount of time the server would cache changed ("dirty") data before saving (flushing) the data to a hard drive. The default setting of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds. The option to increase the cache delay to 10 seconds provided a significant performance boost. Windows 2000 Server and Windows Server 2003 do not allow adjustment of the cache delay time; instead, they use an algorithm that adjusts the cache delay.
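The behaviour described above can be sketched as a toy write-back cache in Python; the class name, API and delay values are illustrative only and do not correspond to NetWare's actual interfaces:

import time

class WriteBackCache:
    """Toy model of a delayed write-back ("dirty") cache; not NetWare's real API."""

    def __init__(self, dirty_delay=3.3):
        self.dirty_delay = dirty_delay        # seconds to hold dirty blocks in RAM
        self.dirty = {}                       # block number -> [data, time first dirtied]

    def write(self, block_no, data):
        # Writes land in RAM immediately; the caller never waits for the disk.
        entry = self.dirty.setdefault(block_no, [data, time.monotonic()])
        entry[0] = data                       # later writes just overwrite the cached copy

    def flush_due(self, write_to_disk):
        # Called periodically: flush blocks whose delay has expired, in ascending
        # block order, which lets the disk service them with elevator-style seeks.
        now = time.monotonic()
        due = sorted(b for b, (_, t) in self.dirty.items() if now - t >= self.dirty_delay)
        for block_no in due:
            data, _ = self.dirty.pop(block_no)
            write_to_disk(block_no, data)

cache = WriteBackCache(dirty_delay=0.0)       # zero delay only so the demo flushes at once
cache.write(12, b"updated FAT entry")
cache.flush_due(lambda blk, data: print("flushing block", blk, data))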
Efficiency of NetWare Core Protocol (NCP)
Most network protocols in use at the time NetWare was developed didn't trust the network to deliver messages. A typical client file read would work something like this:
Client sends read request to server
Server acknowledges request
Client acknowledges acknowledgement
Server sends requested data to client
Client acknowledges data
Server acknowledges acknowledgement
In contrast, NCP was based on the idea that networks worked perfectly most of the time, so the reply to a request served as the acknowledgement. Here is an example of a client read request using this model:
Client sends read request to server
Server sends requested data to client
All requests contained a sequence number, so if the client didn't receive a response within an appropriate amount of time it would re-send the request with the same sequence number. If the server had already processed the request, it would resend the cached response; if it had not yet had time to process the request, it would only send a "positive acknowledgement". The bottom line of this "trust the network" approach was a two-thirds reduction in network transactions and the associated latency.
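A minimal Python sketch of the request/cached-reply idea, using an invented class name and a trivial in-memory "file system"; it illustrates the pattern described above, not the real NCP wire protocol:

class NcpStyleServer:
    # One reply per request: the reply itself acts as the acknowledgement. A small
    # per-client cache of the last reply, keyed by sequence number, handles resends.
    def __init__(self, files):
        self.files = files
        self.last_reply = {}                  # client id -> (sequence number, reply)

    def handle(self, client, seq, filename):
        prev = self.last_reply.get(client)
        if prev is not None and prev[0] == seq:
            return prev[1]                    # duplicate request: resend the cached reply
        reply = self.files.get(filename, b"") # process the request exactly once
        self.last_reply[client] = (seq, reply)
        return reply

server = NcpStyleServer({"LOGIN.EXE": b"MZ..."})
print(server.handle("ws1", 1, "LOGIN.EXE"))   # normal case: request, then data
print(server.handle("ws1", 1, "LOGIN.EXE"))   # lost reply: same request, cached data again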
Non-preemptive OS designed for network services
One of the raging debates of the 1990s was whether it was more appropriate for network file service to be performed by a software layer running on top of a general-purpose operating system, or by a special-purpose operating system. NetWare was a special-purpose operating system, not a timesharing OS. It was written from the ground up as a platform for client-server processing services. Initially it focused on file and print services, but later demonstrated its flexibility by running database, email, web and other services as well. It also performed efficiently as a router, supporting IPX, TCP/IP, and AppleTalk, though it never offered the flexibility of a "hardware" router.
In 4.x and earlier versions, NetWare did not support preemption, virtual memory, graphical user interfaces, etc. Processes and services running under the NetWare OS were expected to be cooperative, that is to process a request and return control to the OS in a timely fashion. On the down side, this trust of application processes to manage themselves could lead to a misbehaving application bringing down the server.
See also
Novell NetWare Access Server (NAS)
Comparison of operating systems
Btrieve
NCOPY
References
Further reading
External links
NetWare Cool Solutions – Tips & tricks, guides, tools and other resources submitted by the NetWare community
Another brief history of NetWare
Epic uptime of NetWare 3 server, arstechnica.com
1983 software
Network operating systems
NetWare
Proprietary software
X86 operating systems
PowerPC operating systems
MIPS operating systems
Discontinued operating systems |
363628 | https://en.wikipedia.org/wiki/Tensor%20algebra | Tensor algebra | In mathematics, the tensor algebra of a vector space V, denoted T(V) or T•(V), is the algebra of tensors on V (of any rank) with multiplication being the tensor product. It is the free algebra on V, in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V, in the sense of the corresponding universal property (see below).
The tensor algebra is important because many other algebras arise as quotient algebras of T(V). These include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
The tensor algebra also has two coalgebra structures; one simple one, which does not make it a bialgebra, but does lead to the concept of a cofree coalgebra, and a more complicated one, which yields a bialgebra, and can be extended by giving an antipode to create a Hopf algebra structure.
Note: In this article, all algebras are assumed to be unital and associative. The unit is explicitly required to define the coproduct.
Construction
Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times:
T^kV = V ⊗ V ⊗ ⋯ ⊗ V  (k factors).
That is, TkV consists of all tensors on V of order k. By convention T0V is the ground field K (as a one-dimensional vector space over itself).
We then construct T(V) as the direct sum of T^kV for k = 0, 1, 2, …:
T(V) = ⊕_{k≥0} T^kV = K ⊕ V ⊕ (V ⊗ V) ⊕ (V ⊗ V ⊗ V) ⊕ ⋯
The multiplication in T(V) is determined by the canonical isomorphism
T^kV ⊗ T^ℓV → T^(k+ℓ)V
given by the tensor product, which is then extended by linearity to all of T(V). This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with T^kV serving as the grade-k subspace. This grading can be extended to a Z grading by appending zero subspaces for negative integers k.
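As a concrete illustration, using only the definitions above, the product of two simple tensors is just concatenation of their factors, which makes the grading evident:
(v_1 ⊗ ⋯ ⊗ v_k) · (w_1 ⊗ ⋯ ⊗ w_ℓ) = v_1 ⊗ ⋯ ⊗ v_k ⊗ w_1 ⊗ ⋯ ⊗ w_ℓ ∈ T^(k+ℓ)V.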
The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. (It does not work for ordinary R-modules because the iterated tensor products cannot be formed.)
Adjunction and universal property
The tensor algebra is also called the free algebra on the vector space V, and is functorial; this means that the construction V ↦ T(V) extends to linear maps V → W, forming a functor from the category of K-vector spaces to the category of associative K-algebras. Similarly to other free constructions, the functor T is left adjoint to the forgetful functor that sends each associative K-algebra to its underlying vector space.
Explicitly, the tensor algebra satisfies the following universal property, which formally expresses the statement that it is the most general algebra containing V:
Any linear map f : V → A from V to an associative algebra A over K can be uniquely extended to an algebra homomorphism F : T(V) → A with f = F ∘ i.
Here i is the canonical inclusion of V into T(V). As for other universal properties, the tensor algebra can be defined as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but this definition requires proving that an object satisfying this property exists.
The above universal property implies that T is a functor from the category of vector spaces over K to the category of K-algebras. This means that any linear map between K-vector spaces V and W extends uniquely to a K-algebra homomorphism from T(V) to T(W).
Non-commutative polynomials
If V has finite dimension n, another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V, those become non-commuting variables (or indeterminates) in T(V), subject to no constraints beyond associativity, the distributive law and K-linearity.
Note that the algebra of polynomials on V is not T(V), but rather T(V∗): a (homogeneous) linear function on V is an element of the dual space V∗; for example, coordinates on a vector space are covectors, as they take in a vector and give out a scalar (the given coordinate of the vector).
Quotients
Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T(V). Examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
Coalgebra
The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can be further extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down.
The development provided below can be equally well applied to the exterior algebra, using the wedge symbol ∧ in place of the tensor symbol ⊗; a sign must also be kept track of when permuting elements of the exterior algebra. This correspondence also carries through to the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure.
Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product ⊗ by the symmetrized tensor product ⊗_Sym, i.e. that product where
x ⊗_Sym y = (1/2)(x ⊗ y + y ⊗ x).
In each case, this is possible because the alternating product and the symmetric product obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gives rise to a quotient space, the quotient space inherits the Hopf algebra structure.
In the language of category theory, one says that there is a functor T from the category of K-vector spaces to the category of associative K-algebras. But there is also a functor taking vector spaces to the category of exterior algebras, and a functor taking vector spaces to symmetric algebras. There is a natural map from T(V) to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural.
Coproduct
The coalgebra is obtained by defining a coproduct or diagonal operator
Δ : TV → TV ⊠ TV.
Here, TV is used as a short-hand for T(V) to avoid an explosion of parentheses. The ⊠ symbol is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product ⊗, which is already being used to denote multiplication in the tensor algebra (see the section Multiplication, below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace ⊗ by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the ⊗ symbol to be used in place of the ⊠ symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend.
The definition of the operator Δ is most easily built up in stages, first by defining it for elements of V and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then
Δ(v) = v ⊠ 1 + 1 ⊠ v for v ∈ V,
and
Δ(1) = 1 ⊠ 1,
where 1 is the unit of the field K. By linearity, one obviously has
Δ(k) = k(1 ⊠ 1) = k ⊠ 1 = 1 ⊠ k
for all k ∈ K. It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that
(id_TV ⊠ Δ) ∘ Δ = (Δ ⊠ id_TV) ∘ Δ,
where id_TV is the identity map on TV. Indeed, one gets
((id_TV ⊠ Δ) ∘ Δ)(v) = (id_TV ⊠ Δ)(v ⊠ 1 + 1 ⊠ v) = v ⊠ 1 ⊠ 1 + 1 ⊠ v ⊠ 1 + 1 ⊠ 1 ⊠ v,
and likewise for the other side. At this point, one could invoke a lemma, and say that Δ extends trivially, by linearity, to all of TV, because TV is a free object, V is a generator of the free algebra, and Δ is a homomorphism. However, it is insightful to provide explicit expressions. So, for v ⊗ w ∈ T^2V, one has (by definition) the homomorphism
Δ(v ⊗ w) = Δ(v) Δ(w).
Expanding, one has
Δ(v ⊗ w) = (v ⊠ 1 + 1 ⊠ v)(w ⊠ 1 + 1 ⊠ w) = (v ⊗ w) ⊠ (1 ⊗ 1) + (v ⊗ 1) ⊠ (1 ⊗ w) + (1 ⊗ w) ⊠ (v ⊗ 1) + (1 ⊗ 1) ⊠ (v ⊗ w).
In the above expansion, there is no need to ever write 1 ⊗ v, as this is just plain-old scalar multiplication in the algebra; that is, one trivially has that 1 ⊗ v = 1 · v = v.
The extension above preserves the algebra grading. That is, Δ maps T^2V into (T^0V ⊠ T^2V) ⊕ (T^1V ⊠ T^1V) ⊕ (T^2V ⊠ T^0V).
Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m:
where the symbol, which should appear as ш, the sha, denotes the shuffle product. This is expressed in the second summation, which is taken over all (p + 1, m − p)-shuffles. The above is written with a notational trick, to keep track of the field element 1: the trick is to write , and this is shuffled into various locations during the expansion of the sum over shuffles. The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right. Any one given shuffle obeys
As before, the algebra grading is preserved: the coproduct maps T^mV into the direct sum of the spaces T^kV ⊠ T^(m−k)V for k = 0, …, m.
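As a worked illustration of the shuffle structure (a standard computation, using only the conventions already introduced), the coproduct of a degree-three element expands into all eight order-preserving splits of its factors:
Δ(u ⊗ v ⊗ w) = Δ(u) Δ(v) Δ(w)
= (u ⊗ v ⊗ w) ⊠ 1 + (u ⊗ v) ⊠ w + (u ⊗ w) ⊠ v + (v ⊗ w) ⊠ u + u ⊠ (v ⊗ w) + v ⊠ (u ⊗ w) + w ⊠ (u ⊗ v) + 1 ⊠ (u ⊗ v ⊗ w),
and both tensor factors of every term preserve the relative order of u, v, w, exactly as the riffle-shuffle description requires.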
Counit
The counit is given by the projection of the field component out from the algebra. This can be written as ε(v) = 0 for v ∈ V and ε(k) = k for k ∈ K. By homomorphism under the tensor product ⊗, this extends to
ε(x) = 0
for all x ∈ T^kV with k ≥ 1.
It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra:
(id_TV ⊠ ε) ∘ Δ = id_TV = (ε ⊠ id_TV) ∘ Δ.
Working this explicitly, one has
((id_TV ⊠ ε) ∘ Δ)(v) = (id_TV ⊠ ε)(v ⊠ 1 + 1 ⊠ v) = v ⊠ ε(1) + 1 ⊠ ε(v) = v ⊠ 1 + 1 ⊠ 0 = v,
where, for the last step, one has made use of the isomorphism TV ⊠ K ≅ TV, as is appropriate for the defining axiom of the counit.
Bialgebra
A bialgebra defines both multiplication, and comultiplication, and requires them to be compatible.
Multiplication
Multiplication is given by an operator
m : TV ⊠ TV → TV,
which, in this case, was already given as the "internal" tensor product. That is,
m(x ⊠ y) = x ⊗ y.
The above should make it clear why the ⊠ symbol needs to be used: the internal ⊗ was actually one and the same thing as the multiplication m; and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product ⊗ of the tensor algebra corresponds to the multiplication used in the definition of an algebra, whereas the tensor product ⊠ is the one required in the definition of comultiplication in a coalgebra. These two tensor products are not the same thing!
Unit
The unit for the algebra
is just the embedding, so that
That the unit is compatible with the tensor product ⊗ is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, k ⊗ x = kx for a field element k and any x ∈ TV. More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams):
on , and that symmetrically, on , that
where the right-hand side of these equations should be understood as the scalar product.
Compatibility
The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that
Similarly, the unit is compatible with comultiplication:
The above requires the use of the isomorphism in order to work; without this, one loses linearity. Component-wise,
with the right-hand side making use of the isomorphism.
Multiplication and the counit are compatible:
whenever x or y are not elements of K, and otherwise one has scalar multiplication on the field. The most difficult compatibility to verify is that of multiplication and comultiplication:
where the twist map in the middle exchanges the two inner tensor factors. The compatibility condition only needs to be verified on V; the full compatibility follows as a homomorphic extension to all of TV. The verification is verbose but straightforward; it is not given here, except for the final result:
An explicit expression for this was given in the coalgebra section, above.
Hopf algebra
The Hopf algebra adds an antipode to the bialgebra axioms. The antipode on k ∈ K is given by
S(k) = k.
This is sometimes called the "anti-identity". The antipode on v ∈ V is given by
S(v) = −v
and on T^2V by
S(v ⊗ w) = S(w) ⊗ S(v) = w ⊗ v.
This extends homomorphically to
S(v_1 ⊗ ⋯ ⊗ v_m) = (−1)^m v_m ⊗ ⋯ ⊗ v_1.
Compatibility
Compatibility of the antipode with multiplication and comultiplication requires that
This is straightforward to verify componentwise on :
Similarly, on :
Recall that
and that
for any that is not in
One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on and proceeding by induction.
Cofree cocomplete coalgebra
One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by
Here, as before, one uses the notational trick (recalling that trivially).
This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on T(V∗), where V∗ denotes the dual vector space of linear maps V → F. In the same way that the tensor algebra is a free algebra, the corresponding coalgebra is termed cocomplete co-free. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product where (i,j) denotes the binomial coefficient for . This bialgebra is known as the divided power Hopf algebra.
The difference between this and the other coalgebra is most easily seen in the T^2V term. Here, one has that
Δ(v ⊗ w) = 1 ⊠ (v ⊗ w) + v ⊠ w + (v ⊗ w) ⊠ 1
for v, w ∈ V, which is clearly missing a shuffled term, as compared to before.
See also
Braided vector space
Braided Hopf algebra
Monoidal category
Multilinear algebra
Stanisław Lem's Love and Tensor Algebra
Fock space
References
(See Chapter 3 §5)
Algebras
Multilinear algebra
Tensors
Hopf algebras |
363890 | https://en.wikipedia.org/wiki/One-way%20function | One-way function | In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way (see Theoretical definition, below).
The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science. The converse is not known to be true, i.e. the existence of a proof that P≠NP would not directly imply the existence of one-way functions.
In applied contexts, the terms "easy" and "hard" are usually interpreted relative to some specific computing entity; typically "cheap enough for the legitimate users" and "prohibitively expensive for any malicious agents". One-way functions, in this sense, are fundamental tools for cryptography, personal identification, authentication, and other data security applications. While the existence of one-way functions in this sense is also an open conjecture, there are several candidates that have withstood decades of intense scrutiny. Some of them are essential ingredients of most telecommunications, e-commerce, and e-banking systems around the world.
Theoretical definition
A function f : {0,1}* → {0,1}* is one-way if f can be computed by a polynomial time algorithm, but any polynomial time randomized algorithm F that attempts to compute a pseudo-inverse for f succeeds with negligible probability. (The * superscript means any number of repetitions, see Kleene star.) That is, for all randomized algorithms F, all positive integers c and all sufficiently large n = length(x),
Pr[f(F(f(x))) = f(x)] < n^(−c),
where the probability is over the choice of x from the discrete uniform distribution on {0,1}^n, and the randomness of F.
Note that, by this definition, the function must be "hard to invert" in the average-case, rather than worst-case sense. This is different from much of complexity theory (e.g., NP-hardness), where the term "hard" is meant in the worst-case. That is why even if some candidates for one-way functions (described below) are known to be NP-complete, it does not imply their one-wayness. The latter property is only based on the lack of known algorithms to solve the problem.
It is not sufficient to make a function "lossy" (not one-to-one) to have a one-way function. In particular, the function that outputs the string of n zeros on any input of length n is not a one-way function because it is easy to come up with an input that will result in the same output. More precisely: For such a function that simply outputs a string of zeroes, an algorithm F that just outputs any string of length n on input f(x) will "find" a proper preimage of the output, even if it is not the input which was originally used to find the output string.
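As a toy illustration of this point, the following Python sketch (with hypothetical function names) shows that the all-zeros function is trivially "invertible" in the pseudo-inverse sense:

def f(x: bytes) -> bytes:
    # The "lossy" example from above: every input of length n maps to n zero bytes.
    return b"\x00" * len(x)

def find_preimage(y: bytes) -> bytes:
    # Hypothetical inverter: any string of the right length is a valid preimage.
    return b"\x00" * len(y)

x = b"secret"
assert f(find_preimage(f(x))) == f(x)   # a proper preimage, though not the original x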
Related concepts
A one-way permutation is a one-way function that is also a permutation—that is, a one-way function that is bijective. One-way permutations are an important cryptographic primitive, and it is not known if their existence is implied by the existence of one-way functions.
A trapdoor one-way function or trapdoor permutation is a special kind of one-way function. Such a function is hard to invert unless some secret information, called the trapdoor, is known.
A collision-free hash function f is a one-way function that is also collision-resistant; that is, no randomized polynomial time algorithm can find a collision—distinct values x, y such that f(x) = f(y)—with non-negligible probability.
Theoretical implications of one-way functions
If f is a one-way function, then the inversion of f would be a problem whose output is hard to compute (by definition) but easy to check (just by computing f on it). Thus, the existence of a one-way function implies that FP≠FNP, which in turn implies that P≠NP. However, P≠NP does not imply the existence of one-way functions.
The existence of a one-way function implies the existence of many other useful concepts, including:
Pseudorandom generators
Pseudorandom function families
Bit commitment schemes
Private-key encryption schemes secure against adaptive chosen-ciphertext attack
Message authentication codes
Digital signature schemes (secure against adaptive chosen-message attack)
The existence of one-way functions also implies that there is no natural proof for P≠NP.
Candidates for one-way functions
The following are several candidates for one-way functions (as of April 2009). Clearly, it is not known whether
these functions are indeed one-way; but extensive research has so far failed to produce an efficient inverting algorithm for any of them.
Multiplication and factoring
The function f takes as inputs two prime numbers p and q in binary notation and returns their product. This function can be "easily" computed in O(b²) time, where b is the total number of bits of the inputs. Inverting this function requires finding the factors of a given integer N. The best factoring algorithms known run in time roughly exp(O(b^(1/3) (log b)^(2/3))), where b is the number of bits needed to represent N.
This function can be generalized by allowing p and q to range over a suitable set of semiprimes. Note that f is not one-way for randomly selected integers p and q, since the product will have 2 as a factor with probability 3/4 (because the probability that an arbitrary p is odd is 1/2, and likewise for q, so if they're chosen independently, the probability that both are odd is therefore 1/4; hence the probability that p or q is even is 1 − 1/4 = 3/4).
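A rough Python sketch of the asymmetry, with toy-sized primes and naive trial division standing in for a real factoring algorithm:

import math

def multiply(p, q):
    # The easy direction: multiplication is polynomial in the bit length of the inputs.
    return p * q

def trial_division(n):
    # A naive inverter, exponential in the bit length of n; real algorithms such as
    # the general number field sieve are faster but still super-polynomial.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1

p, q = 999983, 1000003        # toy primes; real candidates use primes of 1024+ bits
n = multiply(p, q)            # instant
print(n)
print(trial_division(n))      # already takes visibly longer, and is hopeless at real sizes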
The Rabin function (modular squaring)
The Rabin function, or squaring modulo N = p·q, where p and q are primes, is believed to be a collection of one-way functions. We write
Rab_N(x) = x² mod N
to denote squaring modulo N: a specific member of the Rabin collection. It can be shown that extracting square roots, i.e. inverting the Rabin function, is computationally equivalent to factoring (in the sense of polynomial-time reduction). Hence it can be proven that the Rabin collection is one-way if and only if factoring is hard. This also holds for the special case in which p and q are of the same bit length. The Rabin cryptosystem is based on the assumption that this Rabin function is one-way.
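A minimal Python sketch of the easy direction; the parameters are toy values far too small for any real security, and nothing here reflects an actual Rabin cryptosystem implementation:

p, q = 999983, 1000003        # toy primes; a real Rabin modulus is vastly larger
N = p * q

def rabin(x):
    # The easy direction: squaring modulo N.
    return x * x % N

print(rabin(123456789))
# Recovering a square root of the output without knowing p and q is believed to be
# as hard as factoring N; knowing p and q, roots can be found modulo each prime and
# combined with the Chinese remainder theorem.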
Discrete exponential and logarithm
Modular exponentiation can be done in polynomial time. Inverting this function requires computing the discrete logarithm. Currently there are several popular groups for which no algorithm to calculate the underlying discrete logarithm in polynomial time is known. These groups are all finite abelian groups and the general discrete logarithm problem can be described thus.
Let G be a finite abelian group of cardinality n. Denote its group operation by multiplication. Consider a primitive element α ∈ G and another element β ∈ G. The discrete logarithm problem is to find the positive integer k, with 1 ≤ k ≤ n, such that
α^k = β.
The integer k that solves the equation is termed the discrete logarithm of β to the base α. One writes k = log_α β.
Popular choices for the group G in discrete logarithm cryptography are the cyclic groups (Zp)× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see elliptic curve cryptography).
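A small Python sketch of the asymmetry for the multiplicative group modulo a prime; the prime, base and exponent below are toy values chosen only for illustration:

p = 1000003                   # a small prime, far too small for real use
g = 5                         # base; chosen arbitrarily for the demo
k = 123456                    # the secret exponent
beta = pow(g, k, p)           # easy: square-and-multiply modular exponentiation

def brute_force_dlog(g, beta, p):
    # Naive inversion: try exponents one by one. The work grows with the group
    # order, i.e. exponentially in the bit length of p.
    x, value = 0, 1
    while value != beta:
        value = value * g % p
        x += 1
    return x

print(beta)
print(brute_force_dlog(g, beta, p))   # recovers 123456, but only because p is tiny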
An elliptic curve is a set of pairs of elements of a field satisfying an equation of the form y² = x³ + ax + b. The elements of the curve form a group under an operation called "point addition" (which is not the same as the addition operation of the field). Multiplication kP of a point P by an integer k (i.e., a group action of the additive group of the integers) is defined as repeated addition of the point to itself. If k and P are known, it is easy to compute R = kP, but if only R and P are known, it is assumed to be hard to compute k.
Cryptographically secure hash functions
There are a number of cryptographic hash functions that are fast to compute, such as SHA-256. Some of the simpler versions have fallen to sophisticated analysis, but the strongest versions continue to offer fast, practical solutions for one-way computation. Most of the theoretical support for these functions consists of techniques for thwarting previously successful attacks.
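As a brief illustration using Python's standard hashlib module (the sample message and character set are arbitrary):

import hashlib, itertools, string

secret = b"ab3z"
digest = hashlib.sha256(secret).hexdigest()   # the easy direction: hashing is fast

def brute_force_preimage(target_hex, length=4):
    # Naive inversion: enumerate every lowercase-alphanumeric string of this length.
    # The search space grows exponentially with the length, which is the whole point.
    for chars in itertools.product(string.ascii_lowercase + string.digits, repeat=length):
        candidate = "".join(chars).encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None

print(digest)
print(brute_force_preimage(digest))           # feasible only because the space is tiny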
Other candidates
Other candidates for one-way functions have been based on the hardness of the decoding of random linear codes, the subset sum problem (Naccache–Stern knapsack cryptosystem), or other problems.
Universal one-way function
There is an explicit function f that has been proved to be one-way, if and only if one-way functions exist. In other words, if any function is one-way, then so is f. Since this function was the first combinatorial complete one-way function to be demonstrated, it is known as the "universal one-way function". The problem of finding a one way function is thus reduced to proving that one such function exists.
See also
One-way compression function
Cryptographic hash function
Geometric cryptography
Trapdoor function
References
Further reading
Jonathan Katz and Yehuda Lindell (2007). Introduction to Modern Cryptography. CRC Press. .
Section 10.6.3: One-way functions, pp. 374–376.
Section 12.1: One-way functions, pp. 279–298.
Cryptography
Cryptographic primitives
Unsolved problems in computer science |
364578 | https://en.wikipedia.org/wiki/Identity%20document | Identity document | An identity document (also called ID or colloquially as papers) is any document that may be used to prove a person's identity. If issued in a small, standard credit card size form, it is usually called an identity card (IC, ID card, citizen card), or passport card. Some countries issue formal identity documents, as national identification cards which may be compulsory or non-compulsory, while others may require identity verification using regional identification or informal documents. When the identity document incorporates a person's photograph, it may be called photo ID.
In the absence of a formal identity document, a driver's license may be accepted in many countries for identity verification. Some countries do not accept driver's licenses for identification, often because in those countries they do not expire as documents and can be old or easily forged. Most countries accept passports as a form of identification.
Some countries require all people to have an identity document available at any time. Many countries require all foreigners to have a passport or occasionally a national identity card from their home country available at any time if they do not have a residence permit in the country.
The identity document is used to connect a person to information about the person, often in a database. The photo and the possession of the document are used to connect the person with it. The connection between the identity document and information database is based on personal information present on the document, such as the bearer's full name, age, birth date, address, an identification number, card number, gender, citizenship and more. A unique national identification number is the most secure way, but some countries lack such numbers or don't mention them on identity documents.
History
A version of the passport considered to be the earliest identity document inscribed into law was introduced by King Henry V of England with the Safe Conducts Act 1414.
For the next 500 years up to the onset of the First World War, most people did not have or need an identity document.
Photographic identification appeared in 1876 but it did not become widely used until the early 20th century when photographs became part of passports and other ID documents such as driver's licenses, all of which came to be referred to as "photo IDs". Both Australia and Great Britain, for example, introduced the requirement for a photographic passport in 1915 after the so-called Lody spy scandal.
The shape and size of identity cards were standardized in 1985 by ISO/IEC 7810. Some modern identity documents are smart cards including a difficult-to-forge embedded integrated circuit that were standardized in 1988 by ISO/IEC 7816. New technologies allow identity cards to contain biometric information, such as a photograph; face, hand, or iris measurements; or fingerprints. Many countries now issue electronic identity cards.
Adoption
Law enforcement officials claim that identity cards make surveillance and the search for criminals easier and therefore support the universal adoption of identity cards. In countries that don't have a national identity card, there is, however, concern about the projected large costs and potential abuse of high-tech smartcards.
In many countries – especially English-speaking countries such as Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States – there are no government-issued compulsory identity cards for all citizens. Ireland's Public Services Card is not considered a national identity card by the Department of Employment Affairs and Social Protection (DEASP), but many say it is in fact becoming that, and without public debate or even a legislative foundation.
There is debate in these countries about whether such cards and their centralised database constitute an infringement of privacy and civil liberties. Most criticism is directed towards the enhanced possibilities of extensive abuse of centralised and comprehensive databases storing sensitive data. A 2006 survey of UK Open University students concluded that the planned compulsory identity card under the Identity Cards Act 2006 coupled with a central government database generated the most negative response among several alternative configurations. None of the countries listed above mandate possession of identity documents, but they have de facto equivalents since these countries still require proof of identity in many situations. For example, all vehicle drivers must have a driving licence, and young people may need to use specially issued "proof of age cards" when purchasing alcohol. In addition, and uniquely among native English-speaking countries without ID cards, the United States requires all its male residents between the ages of 18 and 25, including foreigners, to register for military conscription.
Arguments for
Arguments for identity documents as such:
In order to avoid mismatching people, and to fight fraud, there should be a way, as securely as possible, to prove a person's identity.
Every human being already carries their own personal identification in the form of DNA, which is extremely hard to falsify or to discard (in terms of modification). Even for non-state commercial and private interactions, this may shortly become the preferred identifier, rendering a state-issued identity card a lesser evil than the potentially extensive privacy risks associated with everyday use of a person's genetic profile for identification purposes.
Arguments for national identity documents:
If using only private alternatives, such as ID cards issued by banks, the inherent lack of consistency regarding issuance policies can lead to downstream problems. For example, in Sweden private companies such as banks (citing security reasons) refused to issue ID cards to individuals without a Swedish card or Swedish passport. This forced the government to start issuing national cards. It is also harder to control information usage by private companies, such as when credit card issuers or social media companies map purchase behaviour in order to assist ad targeting.
Arguments against
Arguments against identity documents as such:
The development and administration costs of an identity card system can be very high. Figures from £30 ($45 in the United States) to £90 or even higher were suggested for the abandoned UK ID card. In countries like Chile the identity card is personally paid for by each person up to £6; in other countries, such as France or Venezuela, the ID card is free. This, however, does not disclose the true cost of issuing ID cards as some additional portion may be borne by taxpayers in general.
Arguments against national identity documents:
Rather than relying on government-issued ID cards, US federal policy has the alternative to encourage the variety of identification systems that already exist, such as driver's or firearms licences or private cards.
Arguments against overuse or abuse of identity documents:
Cards reliant on a centralized database can be used to track anyone's physical movements and private life, thus infringing on personal freedom and privacy. The proposed British ID card (see next section) proposes a series of linked databases managed by private sector firms. The management of disparate linked systems across a range of institutions and any number of personnel is alleged to be a security disaster in the making.
If race is displayed on mandatory ID documents, this information can lead to racial profiling.
National policies
According to Privacy International, possession of identity cards was compulsory in about 100 countries, though what constitutes "compulsory" varies. In some countries (see below), it is compulsory to have an identity card when a person reaches a prescribed age. The penalty for non-possession is usually a fine, but in some cases it may result in detention until identity is established. For people suspected of offences such as shoplifting or travelling without a ticket, non-possession might result in such detention, even in countries that do not formally require identity cards. In practice, random checks are rare, except at certain times.
A number of countries do not have national identity cards. These include Andorra, Australia, the Bahamas, Canada, Denmark, India (see below), Japan (see below), Kiribati, the Marshall Islands, Nauru, New Zealand, Palau, Samoa, Turkmenistan, Tuvalu, and the United Kingdom. Other identity documents such as passports or driver's licenses are then used as identity documents when needed. However, governments of Kiribati, Samoa and Uzbekistan (as of 2021) are planning to introduce new national identity cards in the near future. Some of these countries, e.g. Denmark, have simpler official identity cards, used by people without driver's licenses, which do not match the security and level of acceptance of a national identity card.
A number of countries have voluntary identity card schemes. These include Austria, Belize, Finland, France (see France section), Hungary (however, all citizens of Hungary must have at least one of: valid passport, photo-based driving licence, or the National ID card), Iceland, Ireland, Norway, Saint Lucia, Sweden, Switzerland and the United States. The United Kingdom's scheme was scrapped in January 2011 and the database was destroyed.
In the United States, the Federal government issues optional identity cards known as "Passport Cards" (which include important information such as the nationality). On the other hand, states issue optional identity cards for people who do not hold a driver's license as an alternate means of identification. These cards are issued by the same organisation responsible for driver's licenses, usually called the Department of Motor Vehicles. Passport Cards hold limited travel status or provision, usually for domestic travel requirements. Note, this is not an obligatory identification card for citizens.
For the Sahrawi people of Western Sahara, pre-1975 Spanish identity cards are the main proof that they were Saharawi citizens as opposed to recent Moroccan colonists. They would thus be allowed to vote in an eventual self-determination referendum.
Companies and government departments may issue ID cards for security purposes, proof of identity, or proof of a qualification. For example, all taxicab drivers in the UK carry ID cards. Managers, supervisors, and operatives in construction in the UK have a photographic ID card, the CSCS (Construction Skills Certification Scheme) card, indicating training and skills including safety training. Those working on UK railway lands near working lines must carry a photographic ID card to indicate training in track safety (PTS and other cards) possession of which is dependent on periodic and random alcohol and drug screening. In Queensland and Western Australia, anyone working with children has to take a background check and get issued a Blue Card or Working with Children Card, respectively.
Africa
Liberia
Liberia has begun the issuance process of its national biometric identification card, which citizens and foreign residents will use to open bank accounts and participate in other government services on a daily basis.
More than 4.5 million people are expected to register and obtain ID cards of citizenship or residence in Liberia. The project has already started where NIR (National Identification Registry) is issuing Citizen National ID Cards. The centralized National Biometric Identification System (NBIS) will be integrated with other government ministries. Resident ID Cards and ECOWAS ID Cards will also be issued.
Cape Verde
Cartão Nacional de Identificação (CNI) is the national identity card of Cape Verde.
Egypt
It is compulsory for all Egyptian citizens age 16 or older to possess an ID card ( Biṭāqat taḥqīq shakhṣiyya, literally, "Personal Verification Card"). In daily colloquial speech, it is generally simply called "el-biṭāqa" ("the card"). It is used for:
Opening or closing a bank account
Registering at a school or university
Registering the number of a mobile or landline telephone
Interacting with most government agencies, including:
Applying for or renewing a driver's license
Applying for a passport
Applying for any social services or grants
Registering to vote, and voting in elections
Registering as a taxpayer
Egyptian ID cards consist of 14 digits, the national identity number, and expire after 7 years from the date of issue. Some feel that Egyptian ID cards are problematic, due to the general poor quality of card holders' photographs and the compulsory requirements for ID card holders to identify their religion and for married women to include their husband's name on their cards.
Tunisia
Every citizen of Tunisia is expected to apply for an ID card by the age of 18; however, with the approval of a parent(s), a Tunisian citizen may apply for, and receive, an ID card prior to their eighteenth birthday upon parental request.
In 2016, the government introduced a new bill in parliament to issue new biometric ID documents. The bill created controversy among civil society organizations.
The Gambia
All Gambian citizens over 18 years of age are required to hold a Gambian National Identity Card. In July 2009, a new biometric identity card was introduced. The biometric card is one of the acceptable documents required to apply for a Gambian Driving Licence.
Ghana
Ghana began issuing a national identity card for Ghanaian citizens in 1973.
However, the project was discontinued three years later due to logistical problems and a lack of financial support. This was the first time the idea of a national identification system in the form of the Ghana Card arose in the country. Full implementation of the Ghana Card began in 2006.
According to the National Identification Authority, over 15 million Ghanaians had been registered for the Ghana Card by September 2020.
Mauritius
Mauritius requires all citizens who have reached the age of 18 to apply for a National Identity Card. The National Identity Card is one of the few accepted forms of identification, along with passports. A National Identity Card is needed to apply for a passport for all adults, and all minors must take with them the National Identity Card of a parent(s) when applying for a passport.
Mozambique
Bilhete de identidade (BI) is the national ID card of Mozambique.
Nigeria
Nigeria first introduced a national identity card in 2005, but its adoption back then was limited and not widespread.
The country is now in the process of introducing a new biometric ID card complete with a SmartCard and other security features. The National Identity Management Commission (NIMC) is the federal government agency responsible for the issuance of these new cards, as well as the management of the new National Identity Database.
The Federal Government of Nigeria announced in April 2013 that after the next general election in 2015, all subsequent elections will require that voters will only be eligible to stand for office or vote provided the citizen possesses a NIMC-issued identity card.
The Central Bank of Nigeria is also looking into instructing banks to request for a National Identity Number (NIN) for any citizen maintaining an account with any of the banks operating in Nigeria. The proposed kick off date is yet to be determined.
South Africa
South African citizens aged 15 years and 6 months or older are eligible for an ID card. The South African identity document is not valid as a travel document or valid for use outside South Africa. Although carrying the document is not required in daily life, it is necessary to show the document or a certified copy as proof of identity when:
Signing a contract, including
Opening or closing a bank account
Registering at a school or university
Buying a mobile phone and registering the number
Interacting with most government agencies, including
Applying for or renewing a driving licence or firearm licence
Applying for a passport
Applying for any social services or grants
Registering to vote, and voting in elections
Registering as a taxpayer or for unemployment insurance
The South African identity document used to also contain driving and firearms licences; however, these documents are now issued separately in card format.
In mid 2013 a smart card ID was launched to replace the ID book. The cards were launched on July 18, 2013, when a number of dignitaries received the first cards at a ceremony in Pretoria. The government plans to have the ID books phased out over a six to eight-year period. The South African government is looking into possibly using this smart card not just as an identification card but also for licences, National Health Insurance, and social grants.
Zimbabwe
Zimbabweans are required to apply for National Registration at the age of 16. Zimbabwean citizens are issued with a plastic card which contains a photograph and their particulars. Before the introduction of the plastic card, the Zimbabwean ID card used to be printed on anodised aluminium. Along with driving licences, the National Registration Card (including the old metal type) is universally accepted as proof of identity in Zimbabwe. Zimbabweans are required by law to carry identification on them at all times, and visitors to Zimbabwe are expected to carry their passport with them at all times.
Asia
Afghanistan
Afghan citizens over the age of 18 are required to carry a national ID document called Tazkira.
Bahrain
Bahraini citizens must have both an ID card, called a "smart card", which is recognized as an official document and can be used within the Gulf Cooperation Council, and a passport, which is recognized worldwide.
Bangladesh
Biometric identification has existed in Bangladesh since 2008. All Bangladeshis who are 18 years of age and older are included in a central Biometric Database, which is used by the Bangladesh Election Commission to oversee the electoral procedure in Bangladesh. All Bangladeshis are issued with an NID Card which can be used to obtain a passport, Driving Licence, credit card, and to register land ownership.
Bhutan
The Bhutanese national identity card (called the Bhutanese Citizenship Card) is an electronic ID card, compulsory for all Bhutanese nationals, and costs 100 Bhutanese ngultrum.
China
The People's Republic of China requires each of its citizens aged 16 and over to carry an identity card. The card is the only acceptable legal document to obtain employment, a residence permit, driving licence or passport, and to open bank accounts or apply for entry to tertiary education and technical colleges.
Hong Kong
The Hong Kong Identity Card (or HKID) is an official identity document issued by the Immigration Department of Hong Kong to all people who hold the right of abode, right to land or other forms of limited stay longer than 180 days in Hong Kong. According to Basic Law of Hong Kong, all permanent residents are eligible to obtain the Hong Kong Permanent Identity Card which states that the holder has the right of abode in Hong Kong. All persons aged 16 and above must carry a valid legal government identification document in public. All persons aged 16 and above must be able to produce valid legal government identification documents when requested by legal authorities; otherwise, they may be held in detention to investigate his or her identity and legal right to be in Hong Kong.
India
While there is no mandatory identity card in India, the Aadhaar card, a multi-purpose national identity card, carrying 16 personal details and a unique identification number, has been available to all citizens since 2007. The card contains a photograph, full name, date of birth, and a unique, randomly generated 12-digit National Identification Number. However, the card itself is rarely required as proof, the number or a copy of the card being sufficient. The card has a SCOSTA QR code embedded on the card, through which all the details on the card are accessible. In addition to Aadhaar, PAN cards, ration cards, voter cards and driving licences are also used. These may be issued by either the government of India or the government of any state, and are valid throughout the nation. The Indian passport may also be used.
Indonesia
Residents over 17 are required to hold a KTP (Kartu Tanda Penduduk) identity card. The card will identify whether the holder is an Indonesian citizen or foreign national. In 2011, the Indonesian government started a two-year ID issuance campaign that utilizes smartcard technology and biometric duplication of fingerprint and iris recognition. This card, called the Electronic KTP (e-KTP), will replace the conventional ID card beginning in 2013. By 2013, it is estimated that approximately 172 million Indonesian nationals will have an e-KTP issued to them.
Iran
Every citizen of Iran has an identification document called Shenasnameh (Iranian identity booklet) in Persian (شناسنامه). This is a booklet based on the citizen's birth certificate which features their Shenasnameh National ID number, their birth date, their birthplace, and the names, birth dates and National ID numbers of their legal ascendants. In other pages of the Shenasnameh, their marriage status, names of spouse(s), names of children, date of every vote cast and eventually their death would be recorded.
Every Iranian permanent resident above the age of 15 must hold a valid National Identity Card (Persian:کارت ملی) or at least obtain their unique National Number from any of the local Vital Records branches of the Iranian Ministry of Interior.
In order to apply for an NID card, the applicant must be at least 15 years old and have a photograph attached to their Birth Certificate, which is undertaken by the Vital Records branch.
Since June 21, 2008, NID cards have been compulsory for many things in Iran and Iranian missions abroad (e.g., obtaining a passport, driver's license, any banking procedure, etc.).
Iraq
Every Iraqi citizen must have a National Card (البطاقة الوطنية).
Israel
Israeli law requires every permanent resident above the age of 16, whether a citizen or not, to carry an identification card called te'udat zehut () in Hebrew or biţāqat huwīya () in Arabic.
The card is designed in a bilingual form, printed in Hebrew and Arabic; however, the personal data is presented in Hebrew by default and may be presented in Arabic as well if the owner decides so. The card must be presented to an official on duty (e.g., a policeman) upon request, but if the resident is unable to do this, one may contact the relevant authority within five days to avoid a penalty.
Until the mid-1990s, the identification card was considered the only legally reliable document for many actions such as voting or opening a bank account. Since then, the new Israeli driver's licenses which include photos and extra personal information are now considered equally reliable for most of these transactions. In other situations any government-issued photo ID, such as a passport or a military ID, may suffice.
Japan
Japanese citizens are not required to have identification documents with them within the territory of Japan. When necessary, official documents, such as one's Japanese driver's license, basic resident registration card, radio operator license, social insurance card, health insurance card or passport are generally used and accepted. On the other hand, mid- to long-term foreign residents are required to carry their Zairyū cards, while short-term visitors and tourists (those with a Temporary Visitor status sticker in their passport) are required to carry their passports.
Kuwait
The Kuwaiti identity card is issued to Kuwaiti citizens. It can be used as a travel document when visiting countries in the Gulf Cooperation Council.
Macau
The Macau Resident Identity Card is an official identity document issued by the Identification Department to permanent residents and non-permanent residents.
Malaysia
In Malaysia, the MyKad is the compulsory identity document for Malaysian citizens aged 12 and above. Introduced by the National Registration Department of Malaysia on September 5, 2001, as one of four MSC Malaysia flagship applications and a replacement for the High Quality Identity Card (Kad Pengenalan Bermutu Tinggi), Malaysia became the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on an in-built computer chip embedded in a piece of plastic.
Myanmar
Myanmar citizens are required to obtain a National Registration Card (NRC), while non-citizens are given a Foreign Registration Card (FRC).
Nepal
New biometric cards rolled out in 2018. Information displayed in both English and Nepali.
Pakistan
In Pakistan, all adult citizens must register for the Computerized National Identity Card (CNIC), with a unique number, at age 18. CNIC serves as an identification document to authenticate an individual's identity as a citizen of Pakistan.
Earlier on, National Identity Cards (NICs) were issued to citizens of Pakistan. Now, the government has shifted all its existing records of National Identity Cards (NIC) to the central computerized database managed by NADRA.
New CNIC's are machine readable and have security features such as facial and finger print information. At the end of 2013, smart national identity cards, SNICs, were also made available.
Palestine
The Palestinian Authority issues identification cards following agreements with Israel. Since 1995, in accordance with the Oslo Accords, the data is forwarded to Israeli databases and verified. In February 2014, a presidential decision issued by Palestinian president Mahmoud Abbas to abolish the religion field was announced. Israel has objected to abolishing religion on Palestinian IDs because it controls their official records, IDs and passports, and the PA does not have the right to make amendments to this effect without the prior approval of Israel. The Palestinian Authority in Ramallah said that abolishing religion on the ID has been at the center of negotiations with Israel since 1995. The decision was criticized by Hamas officials in the Gaza Strip, saying it is unconstitutional and will not be implemented in Gaza because it undermines the Palestinian cause.
Papua New Guinea
E-National ID cards were rolled out in 2015.
Philippines
A new Philippines identity card known as the Philippine Identification System (PhilSys) ID card began to be issued in August 2018 to Filipino citizens and foreign residents age 18 and above. This national ID card is non-compulsory but should harmonize existing government-initiated identification cards that have been issued – including the Unified Multi-Purpose ID issued to members of the Social Security System, Government Service Insurance System, Philippine Health Insurance Corporation and the Home Development Mutual Fund (Pag-IBIG Fund).
Singapore
In Singapore, every citizen, and permanent resident (PR) must register at the age of 15 for an Identity Card (IC). The card is necessary not only for procedures of state but also in the day-to-day transactions such as registering for a mobile phone line, obtaining certain discounts at stores, and logging on to certain websites on the internet. Schools frequently use it to identify students, both online and in exams.
South Korea
Every citizen of South Korea over the age of 17 is issued an ID card called Jumindeungrokjeung (주민등록증). It has had several changes in its history, the most recent form being a plastic card meeting the ISO 7810 standard. The card has the holder's photo and a 15 digit ID number calculated from the holder's birthday and birthplace. A hologram is applied for the purpose of hampering forgery. This card has no additional features used to identify the holder, save the photo. Other than this card, the South Korean government accepts a Korean driver's license card, an Alien Registration Card, a passport and a public officer ID card as an official ID card.
Sri Lanka
The E-National Identity Card (abbreviated E-NIC) is the identity document in use in Sri Lanka. It is compulsory for all Sri Lankan citizens who are sixteen years of age and older to have a NIC. NICs are issued by the Department for Registration of Persons. The Registration of Persons Act No. 32 of 1968, as amended by Act Nos. 28 and 37 of 1971 and Act No. 11 of 1981, legislates the issuance and usage of NICs.
Sri Lanka is in the process of developing a smart-card-based RFID NIC that will replace the obsolete 'laminated type' cards by storing the holder's information on a chip that can be read by banks, offices, etc., with the data also stored in the cloud, reducing the need to keep physical documentation of these data.
The NIC number is used for unique personal identification, similar to the social security number in the US.
In Sri Lanka, all citizens over the age of 16 need to apply for a National Identity Card (NIC). Each NIC has a unique 10 digit number, in the format 000000000A (where 0 is a digit and A is a letter). The first two digits of the number indicate the holder's year of birth (e.g. 93xxxxxxxx for someone born in 1993). The final letter is generally a 'V' or 'X'. An NIC number is required to apply for a passport (over 16), driving license (over 18) and to vote (over 18). In addition, all citizens are required to carry their NIC on them at all times as proof of identity, given the security situation in the country. NICs are not issued to non-citizens, who are still required to carry a form of photo identification (such as a photocopy of their passport or foreign driving license) at all times. At times the Postal ID card may also be used.
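As an illustration of the 10-character format described above, the following minimal Python sketch splits a NIC into the fields mentioned here. The function name is illustrative, and the assumption that the two-digit year refers to the 1900s (as in the 93xxxxxxxx example) is not stated in the text.

```python
def parse_old_nic(nic: str) -> dict:
    """Minimal sketch: split a Sri Lankan NIC in the old 000000000A format.

    Uses only the facts stated above: 10 characters, the first two digits
    give the year of birth, and the final letter is generally 'V' or 'X'.
    """
    if len(nic) != 10 or not nic[:9].isdigit() or nic[9] not in ("V", "X"):
        raise ValueError("not a well-formed old-format NIC")
    # Assumption: the two-digit year refers to the 1900s, as in the
    # 93xxxxxxxx example for someone born in 1993.
    return {"birth_year": 1900 + int(nic[:2]), "suffix": nic[9]}

print(parse_old_nic("931234567V"))  # {'birth_year': 1993, 'suffix': 'V'}
```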
Taiwan
The "National Identification Card" () is issued to all nationals of the Republic of China (Official name of Taiwan) aged 14 and older who have household registration in the Taiwan area. The Identification Card is used for virtually all activities that require identity verification within Taiwan such as opening bank accounts, renting apartments, employment applications and voting.
The Identification Card contains the holder's photo, ID number, Chinese name, and (Minguo calendar) date of birth. The back of the card also contains the person's registered address where official correspondence is sent, place of birth, and the name of legal ascendants and spouse (if any).
If residents move, they must re-register at a municipal office ().
ROC nationals with household registration in Taiwan are known as "registered nationals". ROC nationals who do not have household registration in Taiwan (known as "unregistered nationals") do not qualify for the Identification Card and its associated privileges (e.g., the right to vote and the right of abode in Taiwan), but qualify for the Republic of China passport, which unlike the Identification Card, is not indicative of residency rights in Taiwan. If such "unregistered nationals" are residents of Taiwan, they will hold a Taiwan Area Resident Certificate as an identity document, which is nearly identical to the Alien Resident Certificate issued to foreign nationals/citizens residing in Taiwan.
Thailand
In Thailand, the Thai National ID Card (Thai: บัตรประจำตัวประชาชน; RTGS: bat pracham tua pracha chon) is an official identity document issued only to Thai Nationals. The card proves the holder's identity for receiving government services and other entitlements.
United Arab Emirates (UAE)
The Federal Authority For Identity and Citizenship is the government agency responsible for issuing National Identity Cards to citizens (UAE nationals), GCC (Gulf Cooperation Council) nationals and residents in the country. All individuals are mandated to apply for the ID card at all ages. For individuals aged 15 and above, fingerprint biometrics (10 fingerprints, palm, and writer) are captured in the registration process. Each person has a unique 15-digit identification number (IDN) that is held throughout his/her life.
The Identity Card is a smart card that uses state-of-the-art technology in the smart card field, with very high security features that make it difficult to duplicate. It is a 144KB Combi Smart Card, whose electronic chip includes personal information, 2 fingerprints, a 4-digit PIN code, a digital signature, and certificates (digital and encryption). The personal photo, IDN, name, date of birth, signature, nationality, and the ID card expiry date are fields visible on the physical card.
In the UAE the card is used as an official identification document for all individuals to access services from government, some non-government, and private entities. This supports the UAE's vision of smart government, as the ID card is used to securely access e-services in the country. The ID card can also be used by citizens as an official travel document between GCC countries instead of a passport. The implementation of the national ID program in the UAE has enhanced the security of individuals by protecting their identities and preventing identity theft.
Vietnam
In Vietnam, all citizens above 14 years old must possess a citizen identification card provided by the local authority; the card must be reissued when the holder reaches the ages of 25, 40 and 60. Formerly, a people's ID document was used.
Europe
European Economic Area
National identity cards issued to citizens of the EEA (European Union, Iceland, Liechtenstein, Norway) and Switzerland, which state the holder's EEA or Swiss citizenship, can be used not only as an identity document within the home country, but also as a travel document to exercise the right of free movement in the EEA and Switzerland.
During the UK Presidency of the EU in 2005 a decision was made to: "Agree common standards for security features and secure issuing procedures for ID cards (December 2005), with detailed standards agreed as soon as possible thereafter. In this respect, the UK Presidency put forward a proposal for the EU-wide use of biometrics in national identity cards".
From August 2, 2021, the European identity card is intended to replace and standardize the various identity card styles currently in use.
Austria
The Austrian identity card is issued to Austrian citizens. It can be used as a travel document when visiting countries in the EEA (EU plus EFTA), Europe's microstates, the British Crown Possessions, Albania, Bosnia and Herzegovina, Georgia, Kosovo, Moldova, Montenegro, the Republic of Macedonia, North Cyprus, Serbia, Montserrat and the French overseas territories, and on organized tours to Jordan (through Aqaba airport) and Tunisia. Only around 10% of the citizens of Austria had this card in 2012,[2] as they can use Austrian driver's licenses or other identity cards domestically and the more widely accepted Austrian passport abroad.
Belgium
In Belgium, everyone above the age of 12 is issued an identity card ( in French, in Dutch and in German), and from the age of 15 carrying this card at all times is mandatory. For foreigners residing in Belgium, similar cards (foreigner's cards, in Dutch, in French) are issued, although they may also carry a passport, a work permit, or a (temporary) residence permit.
Since 2000, all newly issued Belgian identity cards have a chip (eID card), and roll-out of these cards is expected to be complete in the course of 2009. Since 2008, the aforementioned foreigner's card has also been replaced by an eID card, containing a similar chip. The eID cards can be used both in the public and private sector for identification and for the creation of legally binding electronic signatures.
Until the end of 2010, Belgian consulates issued old-style ID cards (105 x 75 mm) to Belgian citizens who were permanently residing in their jurisdiction and who chose to be registered at the consulate (which is strongly advised).
Since 2011, Belgian consulates have issued electronic ID cards, although the electronic chip on them is not activated.
Bulgaria
In Bulgaria, it is obligatory to possess an identity card (Bulgarian – лична карта, lichna karta) at the age of 14 and above. Any person above 14 being checked by the police without carrying at least some form of identification is liable to a fine of 50 Bulgarian levs (about €25).
Croatia
All Croatian citizens may request an Identity Card, called Osobna iskaznica (literally Personal card). All persons over the age of 18 must have an Identity Card and carry it at all times. Refusal to carry or produce an Identity Card to a police officer can lead to a fine of 100 kuna or more and detention until the individual's identity can be verified by fingerprints.
The Croatian ID card is valid in the entire European Union, and can also be used to travel throughout the non-EU countries of the Balkans.
The 2013 design of the Croatian ID card is prepared for future installation of an electronic identity card chip, which is set for implementation in 2014.
Cyprus
The acquisition and possession of a Civil Identity Card is compulsory for any eligible person who has reached twelve years of age. On January 29, 2015, it was announced that all future IDs to be issued will be biometric. They can be applied for at Citizen Service Centres (KEP) or at consulates with biometric data capturing facilities.
An ID card costs €30 for adults and €20 for children with 10/5 years validity respectively. It is a valid travel document for the entire European Union.
Czech Republic
In the Czech Republic, the ID card is called Občanský průkaz; an identity card with a photo is issued to all citizens of the Czech Republic at the age of 15. It is officially recognised by all member states of the European Union for intra-EU travel. Travelling outside the EU mostly requires the Czech passport.
Denmark
Denmark is one of few EU countries that currently do not issue EU standard national identity cards (not counting driving licences and passports issued for other purposes).
Danish citizens are not required by law to carry an identity card. A traditional identity document (without photo), the personal identification number certificate (Danish:Personnummerbevis) is of little use in Danish society, as it has been largely replaced by the much more versatile National Health Insurance Card (Danish:Sundhedskortet) which contains the same information and more. The National Health Insurance Card is issued to all citizens age 12 and above. It is commonly referred to as an identity card despite the fact it has no photo of the holder. Both certificates retrieve their information from the Civil Registration System. However, the personnummerbevis is still issued today and has been since September 1968.
Danish driver's licenses and passports are the only identity cards issued by the government containing both the personal identification number and a photo. A foreign citizen living in Denmark who does not drive cannot obtain either of these documents. Foreign driving licenses and passports are accepted with limitations. A foreigner living in Denmark will have a residence permit with their personal identification number and a photo.
Until 2004, the national debit card Dankort contained a photo of the holder and was widely accepted as an identity card. The Danish banks lobbied successfully to have pictures removed from the national debit cards and so since 2004 the Dankort has no longer contained a photo. Hence it is rarely accepted for identification. Between 2004 and 2016, counties issued a "photo identity card" (), which can be used as age verification, but it is limited for identification purposes because of limited security for issuing, and it is not valid for EU travel.
Since 2017, municipalities have issued identity cards (), which are more secure and also valid for identification purposes, but still not for EU travel. Since early 2018, it has been possible to add nationality to the card in order to allow passage through Swedish border control, something Sweden allowed until 2019.
The cards still do not fully conform to international travel document requirements and are not approved for EU travel, since they lack the gender and birth date (although the Danish identity number on the card encodes this information) as well as a machine readable zone and chip, and they are not registered in the EU travel document database PRADO.
Estonia
The Estonian identity card () is a chipped picture ID in the Republic of Estonia. An Estonian identity card is officially recognised by all member states of the European Union for intra EU travel. For travelling outside the EU, Estonian citizens may also require a passport.
The card's chip stores a key pair, allowing users to cryptographically sign digital documents based on principles of public key cryptography using DigiDoc. Under Estonian law, since December 15, 2000 the cryptographic signature is legally equivalent to a manual signature.
The Estonian identity card is also used for authentication in Estonia's ambitious Internet-based voting programme. In February 2007, Estonia was the first country in the world to institute electronic voting for parliamentary elections. Over 30,000 voters participated in the country's first e-election. By the 2014 European Parliament elections, the number of e-voters had increased to more than 100,000, comprising 31% of the total votes cast.
Finland
In Finland, any citizen can get an identification card (henkilökortti/identitetskort). This, along with the passport, is one of two official identity documents. It is available as an electronic ID card (sähköinen henkilökortti/elektroniskt identitetskort), which enables logging into certain government services on the Internet.
Driving licenses and KELA (social security) cards with a photo are also widely used for general identification purposes even though they are not officially recognized as such. However, KELA has ended the practice of issuing social security cards with the photograph of the bearer, while it has become possible to embed the social security information onto the national ID card. For most purposes where identification is required, the only valid documents are an ID card, a passport or a driving license. However, a citizen is not required to carry any of these.
France
France has had a national ID card for all citizens since the beginning of World War II in 1940. Compulsory identity documents were created before, for workers from 1803 to 1890, nomads (gens du voyage) in 1912, and foreigners in 1917 during World War I.
National identity cards were first issued as the carte d'identité française under the law of October 27, 1940, and were compulsory for everyone over the age of 16. Identity cards were valid for 10 years, had to be updated within a year in case of change of residence, and their renewal required paying a fee. Under the Vichy regime, in addition to the face photograph, the family name, first names, date and place of birth, the card included the national identity number managed by the national statistics INSEE, which is also used as the national service registration number, as the Social Security account number for health and retirement benefits, for access to court files and for tax purposes.
Under the decree 55-1397 of October 22, 1955 a revised non-compulsory card, the carte nationale d'identité (CNI) was introduced.
The law (Art. 78–1 to 78–6 of the French Code of criminal procedure, Code de procédure pénale) mentions only that during an ID check performed by a police, gendarmerie or customs officer, one can prove one's identity "by any means", the validity of which is left to the judgment of the law enforcement official. Though not stated explicitly in the law, an ID card, a driving licence, a passport, a visa, a Carte de Séjour or a voting card is sufficient according to jurisprudence. The decision to accept other documents, with or without the bearer's photograph, such as a Social Security card, a travel card or a bank card, is left to the discretion of the law enforcement officer.
According to Art. 78-2 of the French Penal Procedure Code, ID checks are only possible:
alineas 1 & 2: if the person is the object of inquiries or investigations, has committed, prepared or attempted to commit an offence, or is able to give information about it (contrôle judiciaire);
alinea 4: within 20 km of French borders and in ports, airports and railway stations open to international traffic (contrôle aux frontières);
alinea 3: whatever the person's behaviour, to prevent a breach of public order and in particular an offence against the safety of persons or property (contrôle administratif).
The last case allows police checks of passers-by's IDs, especially in neighborhoods with a higher crime rate, which are often the poorest, on the condition, according to the Cour de cassation, that the police officer does not rely only on "general and abstract conditions" but on "particular circumstances able to characterise a risk of breach of public order and in particular an offence against the safety of persons or property" (Cass. crim. December 5, 1999, n°99-81153, Bull., n°95).
If it is necessary to establish a person's identity and the person is unable to prove it "by any means" (for example, the legality of a road traffic procès-verbal may depend on it), this may lead to a temporary arrest (vérification d'identité) of up to 4 hours, for the time strictly required for ascertaining the identity, according to Art. 78-3 of the French Code of criminal procedure (Code de procédure pénale).
For financial transactions, ID cards and passports are almost always accepted as proof of identity. Due to possible forgery, driver's licenses are sometimes refused. For transactions by cheque involving a larger sum, two different ID documents are frequently requested by merchants.
The current identification cards are now issued free of charge and optional, and are valid for ten years for minors, and fifteen for adults. The current government has proposed a compulsory biometric card system, which has been opposed by human rights groups and by the national authority and regulator on computing systems and databases, the Commission nationale de l'informatique et des libertés, CNIL. Another non-compulsory project is being discussed.
Germany
It is compulsory for all German citizens aged 16 or older to possess either a Personalausweis (identity card) or a passport, but not to carry one. Police officers and other officials have a right to demand to see one of those documents (obligation of identification); however, the law does not state that one is obliged to submit the document at that very moment. As driver's licences, although sometimes accepted, are not legally recognised forms of identification in Germany, people usually choose to carry their Personalausweis with them.
Since November 2010, German ID cards have been issued in the ID-1 format and can also contain an integrated digital signature, if so desired. Until October 2010, German ID cards were issued in ISO/IEC 7810 ID-2 format. The cards have a photograph and a chip with biometric data, including, optionally, fingerprints.
Greece
A compulsory, universal ID system based on personal ID cards has been in place in Greece since World War II. ID cards are issued by the police on behalf of the Headquarters of the Police (previously issued by the Ministry of Public Order, now incorporated in the Ministry of Internal Affairs) and display the holder's signature, standardized face photograph, name and surname, legal ascendants' names and surnames, date and place of birth, height, municipality, and the issuing police precinct. There are also two optional fields designed to facilitate emergency medical care: ABO and Rhesus factor blood typing.
Fields included in previous ID card formats, such as vocation or profession, religious denomination, domiciliary address, name and surname of spouse, fingerprint, eye and hair color, citizenship and ethnicity were removed permanently as being intrusive of personal data and/or superfluous for the sole purpose of personal identification.
Since 2000, name fields have been filled in both Greek and Latin characters. According to the Signpost Service of the European Commission [reply to Enquiry 36581], old type Greek ID cards "are as valid as the new type according to Greek law and thus they constitute valid travel documents that all other EU Member States are obliged to accept". In addition to being equivalent to passports within the European Economic Area, Greek ID cards are the principal means of identification of voters during elections.
Since 2005, the procedure to issue an ID card has been automated and now all citizens over 12 years of age must have an ID card, which is issued within one working day. Prior to that date, the age of compulsory issue was 14 and the whole procedure could take several months.
In Greece, an ID card is a citizen's most important state document. For instance, it is required to perform banking transactions if the teller is unfamiliar with the apparent account holder, to interact with the Citizen Service Bureaus (KEP), to receive parcels or registered mail, etc. Citizens are also required to produce their ID card at the request of law enforcement personnel.
All the above functions can be fulfilled also with a valid Greek passport (e.g., for people who have lost their ID card and have not yet applied for a new one, people who happen to carry their passport instead of their ID card or Greeks who reside abroad and do not have an identity card, which can be issued only in Greece in contrast to passports also issued by consular authorities abroad).
Hungary
Currently, there are three types of valid ID documents (Személyazonosító igazolvány, formerly Személyi igazolvány, abbr. Sz.ig.) in Hungary. The oldest valid ones are hard-covered, multi-page booklets issued before 1989 by the People's Republic of Hungary; the second type is a soft-cover, multi-page booklet issued after the change of regime. These two have one original photo of the owner embedded, with the original signatures of the owner and the local police's representative. The third type is a plastic card with the photo and the signature of the holder digitally reproduced. These are generally called Personal Identity Cards.
The plastic card shows the owner's full name, maiden name if applicable, birth date and place, mother's maiden name, the cardholder's gender, the ID's validity period and the local state authority which issued the card. The card has a 6-digit number + 2-letter unique ID and a separate machine readable zone on the back for identity document scanning devices. It does not contain any information about the owner's residential address or their personal identity number – this sensitive information is held on a separate card, called a Residency Card (Lakcímkártya). Personal identity numbers have been issued since 1975; they have the following format in numbers: gender (1 number) – birth date (6 numbers) – unique ID (4 numbers). They are no longer used as a personal identification number, but as a statistical signature.
Other valid documents are the passport (blue colored or red colored with RFID chip) and the driver's license; an individual is required to have at least one of them on hand all the time. The Personal Identity Card is mandatory to vote in state elections or open a bank account in the country.
ID cards are issued to permanent residents of Hungary; the card has a different color for foreign citizens.
Iceland
The Icelandic state-issued identity cards are called "Nafnskírteini" ("name card"). They do not state citizenship and therefore are not usable in most cases as travel documentation outside of the Nordic countries. Identity documents are not mandatory to carry by law (unless driving a car), but can be needed for bank services, age verification and other situations. Most people use driver's licences.
Ireland
Ireland does not issue mandatory national identity cards as such; except for a brief period during the Second World War, when the Irish Department of External Affairs issued identity cards to those wishing to travel to the United Kingdom, it never has.
Identity documentation is optional for Irish and British citizens. Nevertheless, identification is mandatory to obtain certain services such as air travel, banking, interactions regarding welfare and public services, age verification, and additional situations.
"Non-nationals" (no connection to a European Economic Area (EEA) country or Switzerland) aged 16 years and over must produce identification on demand to any immigration officer or a member of the Garda Síochána (police).
Passport booklets, passport cards, driving licences, GNIB Registration Certificates and other forms of identity cards can be used for identification. Ireland has issued optional passport cards since October 2015. The cards are the size of a credit card, contain all the information from the biographical page of an Irish passport booklet and can be used for travel within the European Economic Area.
Ireland issues a "Public Services Card" which is useful when identification is needed for contacts regarding welfare and public services. They have photographs but not birth dates and are therefore not accepted by banks. The card is also not considered as being an identity card by the Department of Employment Affairs and Social Protection (DEASP). In an Oireachtas (parliament) committee hearing held on February 22, 2018, Tim Duggan of that department stated "A national ID card is an entirely different idea. People are generally compelled to carry (such a card)."
Italy
Anyone who is legally resident in Italy, whether a citizen or not, is entitled to request an identity card at the local municipality. Also, any Italian citizen residing abroad in any of the European countries where there is the right of free movement, is entitled to request it at the local Italian embassy/consulate.
An identity card issued to an Italian citizen is accepted in lieu of a passport in all Europe (except in Belarus, Russia and Ukraine) and to travel to Turkey, Georgia, Egypt and Tunisia.
For an Italian citizen it is not compulsory to carry the card itself, as the authorities only have the right to ask for the identity of a person, not for a specific document. However, if public-security officers are not convinced of the claimed identity, such as may be the case for a verbally provided identity claim, they may keep the claimant in custody until his/her identity is ascertained; such an arrest is limited to the time necessary for identification and has no legal consequence.
Instead, all foreigners in Italy are required by law to have an ID with them at all times. Citizens of EU member countries must always be ready to display an identity document that is legally government-issued in their country. Non-EU residents must have their passport with a customs entrance stamp or a residence permit issued by the Italian authorities; while all resident/immigrant aliens must have a residence permit (they are otherwise illegal and face deportation), foreigners from certain non-EU countries staying in Italy for a limited amount of time (typically for tourism) may only be required to have their passport with a proper customs stamp.
The current Italian identity document is a contactless electronic card made of polycarbonate in the ID-1 format with many security features and containing the following items printed by laser engraving:
on the front: photo, card number, municipality, name, surname, place and date of birth, sex, height, nationality, date of issue, date of expiry, signature, Card Access Number, (optional) the sentence "non valida per l'espatrio" only if the document is not valid abroad
on the back: surname and name of parents or legal guardian (if the applicant is not an adult yet), Italian fiscal code, Italian birth code, residence address, (optional) additional information if the owner is residing abroad, Italian fiscal code in form of barcode, Machine Readable Zone
Moreover, the embedded electronic microprocessor chip stores the holder's picture, name, surname, place and date of birth, residency and (only if aged 12 and more) two fingerprints.
The card is issued by the Ministry of the Interior in collaboration with the IPZS in Rome and sent to the applicant within 6 business days.
The validity is 10 years for adults, 5 years for minors aged 3–18, and 3 years for children aged 0–3; the period is extended or shortened so that the card always expires on the holder's birthday.
However, the old classic Italian ID card is still valid and has been in the process of being replaced with the new eID card since July 4, 2016. The lack of a Machine Readable Zone, the odd size, and the fact that it is made of paper and therefore easy to forge often cause delays at border controls; furthermore, foreign countries outside the EU sometimes refuse to accept it as a valid document. These common criticisms were considered in the development of the new Italian electronic identity card, which uses the more common credit-card format and includes many of the latest security features.
Latvia
The Latvian "Personal certificate" is issued to Latvian citizens and is valid for travel within Europe (except Belarus, Russia, and Ukraine), as well as to Georgia, the French Overseas territories and Montserrat (max. 14 days).
Liechtenstein
The Principality of Liechtenstein has a voluntary ID card system for citizens, the Identitätskarte. Liechtenstein citizens are entitled to use a valid national identity card to exercise their right of free movement in EFTA and the European Economic Area.
Lithuania
The Lithuanian Personal Identity Card can be used, like a passport, as primary evidence of Lithuanian citizenship and as valid proof of citizenship and identity both inside and outside Lithuania. It is valid for travel within most European nations.
Luxembourg
The Luxembourg identity card is issued to Luxembourg citizens. It serves as proof of identity and nationality and can also be used for travel within the European Union and a number of other European countries.
Malta
Maltese identity cards are issued to Maltese citizens and other lawful residents of Malta. They can be used as a travel document when visiting countries in the European Union and the European Economic Area.
Netherlands
Dutch citizens from the age of 14 are required to be able to show a valid identity document upon request by a police officer or similar official. Furthermore, identity documents are required when opening bank accounts and upon start of work for a new employer. Official identity documents for residents in the Netherlands are:
Dutch passport
Dutch identity card
Alien's Residence permit
Geprivilegieerdenkaart (amongst others for the corps diplomatique and their family members)
Passports/national ID cards of members of other E.E.A. countries
For the purpose of identification in public (but not for other purposes), a Dutch driving license may also often serve as an identity document.
In the Caribbean Netherlands, Dutch and other EEA identity cards are not valid; and the Identity card BES is an obligatory document for all residents.
Norway
In Norway there is no law penalising non-possession of an identity document. But there are rules requiring it for services like banking, air travel and voting (where personal recognition or other identification methods have not been possible).
The following documents are generally considered valid (varying a little, since no law lists them): Nordic driving licence, passport (often only from EU and EFTA), national ID card from EU, Norwegian ID card from banks and some more. Bank ID cards are printed on the reverse of Norwegian debit cards. To get a bank ID card either a Nordic passport or another passport together with Norwegian residence and work permit is needed.
The Norwegian identity card was introduced on November 30, 2020. Two versions of the card exist, one of which states Norwegian citizenship and is usable for exercising freedom of movement within EFTA and the EEA, and for general identification. The plan started in 2007 and was delayed several times. Banks are campaigning to be freed from the task of issuing ID cards, stating that it should be the responsibility of state authorities. Some banks have already ceased issuing ID cards, so people need to bring their passport for such things as credit card purchases or buying prescribed medication if not in possession of a driving licence.
Poland
Every Polish citizen 18 years of age or older residing permanently in Poland must have an Identity Card (Dowód osobisty) issued by the local Office of Civic Affairs.
Polish citizens living permanently abroad are entitled, but not required, to have one.
Portugal
All Portuguese citizens are required by law to obtain an Identity Card when they turn 6 years of age. They are not required to carry it at all times but are obliged to present it to the lawful authorities if requested.
The old format of the cards (yellow laminated paper document) featured a portrait of the bearer, their fingerprint, and the names of parent(s), among other information.
They are currently being replaced by grey plastic cards with a chip, called Cartão de Cidadão (Citizen's Card), which now incorporate NIF (Tax Number), Cartão de Utente (Health Card) and Social Security, all of which are protected by a PIN obtained when the card is issued.
The new Citizen's Card is technologically more advanced than the former Identity Card and has the following characteristics:
From the physical point of view the Citizen's Card will have a 'smart card' format and will replace the existing Identity Card, taxpayer card, Social Security card, voter's card and National Health Service user's card.
From the visual point of view the front of the card will display the holder's photograph and basic personal details. The back will list the numbers under which the holder is registered with the different bodies whose cards the Citizen's Card combines and replaces. The back will also contain an optical reader and the chip.
From the electronic point of view the card will have a contact chip, with digital certificates (for electronic authentication and signature purposes). The chip may also hold the same information as the physical card itself, together with other data such as the holder's address.
Romania
Every citizen of Romania must register for an ID card (Carte de identitate, abbreviated CI) at the age of 14. The CI offers proof of the identity, address, sex and other data of the possessor. It has to be renewed every 10 years. It can be used instead of a passport for travel inside the European Union and several other countries outside the EU.
Another ID card is the Provisional ID Card (Cartea de Identitate Provizorie) issued temporarily when an individual cannot get a normal ID card. Its validity extends for up to 1 year. It cannot be used in order to travel within the EU, unlike the normal ID card.
Other forms of officially accepted identification include the driver's license and the birth certificate. However, these are accepted only in limited circumstances and cannot take the place of the ID card in most cases. The ID card is mandatory for dealing with government institutions, banks or currency exchange shops. A valid passport may also be accepted, but usually only for foreigners.
In addition, citizens can be expected to provide the personal identification number (CNP) in many circumstances; purposes range from simple unique identification and internal book-keeping (for example when drawing up the papers for the warranty of purchased goods) to being asked for identification by the police. The CNP is 13 characters long, with the format S-YY-MM-DD-RR-XXX-Y. Where S is the sex, YY is year of birth, MM is month of birth, DD is day of birth, RR is a regional id, XXX is a unique random number and Y is a control digit.
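As an illustration of the 13-character layout described above, the following minimal Python sketch splits a CNP string into its fields. The function name is illustrative, and the control digit is returned but not verified, since its algorithm is not described here.

```python
def split_cnp(cnp: str) -> dict:
    """Minimal sketch: split a Romanian CNP into the S-YY-MM-DD-RR-XXX-Y
    fields described above. The control digit (Y) is returned but not
    verified, since its algorithm is not given in the text."""
    if len(cnp) != 13 or not cnp.isdigit():
        raise ValueError("a CNP consists of 13 digits")
    return {
        "sex": cnp[0],        # S
        "year": cnp[1:3],     # YY
        "month": cnp[3:5],    # MM
        "day": cnp[5:7],      # DD
        "region": cnp[7:9],   # RR
        "serial": cnp[9:12],  # XXX
        "control": cnp[12],   # Y
    }
```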
Presenting the ID card is preferred but not mandatory when asked by police officers; however, in such cases people are expected to provide a CNP or alternate means of identification which can be checked on the spot (via radio if needed).
The information on the ID card is required to be kept updated by the owner; current address of domicile in particular. Doing otherwise can expose the citizen to certain fines or be denied service by those institutions that require a valid, up to date card. In spite of this, it is common for people to let the information lapse or go around with expired ID cards.
Slovakia
The Slovak ID card (Slovak: Občiansky preukaz) is a picture ID in Slovakia. It is issued to citizens of the Slovak Republic who are 15 or older. A Slovak ID card is officially recognised by all member states of the European Economic Area and Switzerland for travel. For travel outside the EU, Slovak citizens may also require the Slovak passport, which is a legally accepted form of picture ID as well. Police officers and some other officials have a right to demand to see one of those documents, and the law states that one is obliged to submit such a document at that very moment. If one fails to comply, law enforcement officers are allowed to insist on personal identification at the police station.
Slovenia
Every Slovenian citizen, regardless of age, has the right to acquire an Identity Card (), and every citizen of the Republic of Slovenia aged 18 or older is obliged by law to acquire one and carry it at all times (or any other identity document with a picture, i.e. a Slovene passport). The card is a valid identity document within all member states of the European Union for travel within the EU, with the exception of the Faroe Islands and Greenland, though it may also be used to travel outside the EU to Norway, Liechtenstein, BiH, Macedonia, Montenegro, Serbia and Switzerland. The front side displays the name and surname, sex, nationality, date of birth and expiration date of the card, as well as the number of the ID card, a black and white photograph and a signature. On the back are the permanent address, administrative unit, date of issue, EMŠO, and a code with key information in a machine-readable zone.
Depending on the holder's age (and sometimes also other factors), the card has a validity of 5 years or 10 years, and 1 year for foreigners living in Slovenia.
In Slovenia the ID card's importance is equaled only by the Slovenian passport, but due to its size it is a lot more practical.
Spain
In Spain, citizens, resident foreigners, and companies have similar but distinct identity numbers, some with prefix letters, all with a check-code.
NIF: Both natural and legal persons have a tax code or Número de Identificación Fiscal (NIF), which is the same number as on their identity document. For companies this was formerly known as the Código de Identificación Fiscal (CIF).
DNI: Spanish citizens have a Documento Nacional de Identidad (DNI), which bears this number without any letter prefix. This is sometimes known by obsolete names such as Cédula de Ciudadanía (CC), Carné de Identidad (CI) or Cédula de Identidad (CI).
Spanish citizens under 14 may, but over 14 must, acquire a National Identity Card (DNI). It is issued by the National Police, formerly as an ID-1 (bank-card) format paper document encapsulated in plastic. Since 2006 a new version of the DNI has been introduced. The new 'Electronic DNI' is a smart card that allows for digital signing of documents. The chip contains most of the personal information printed on the card and a digitized version of the bearer's face, signature and fingerprints.
On the front there is a photograph, the name and surname (see Spanish naming customs), the bearer's signature, an ID number, the issue date and the expiry date. On the reverse appear the date and place of birth, the gender, the legal ascendants' names and the current address. At the bottom there is key information in a machine-readable zone. Depending on the holder's age, the card has a validity of 5 years, 10 years or indefinite (for the elderly).
CIF: The Código de Identificación Fiscal has been retained only for associations and foundations, which have a CIF starting with the letter G.
NIE: Foreigners (extranjeros in Spanish) are issued a Número de Identificación de Extranjero, which starts with the letter X or Y. NIE cards for EU citizens have been abolished and replaced by a printed A4 page, which does not need to be carried, while cards are still issued to non-EU citizens, now following the standard European format.
Despite the NIF/CIF/NIE distinctions, the identity number is unique and always has eight digits (the NIE has 7 digits) followed by a letter calculated from a modulo-23 arithmetic check used to verify the correctness of the number. The letters I, Ñ, O and U are not used in the check-letter sequence.
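A minimal Python sketch of such a modulo-23 check is shown below. The lookup string is the commonly published check-letter sequence (which skips I, Ñ, O and U); it is an assumption rather than part of the text above, and the function name and example number are illustrative.

```python
# Commonly published check-letter sequence for the modulo-23 check
# (assumption: the sequence itself is not given in the text above).
CHECK_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE"

def check_letter(number: int) -> str:
    """Return the check letter for the numeric part of a DNI or NIE."""
    return CHECK_LETTERS[number % 23]

# Example with an arbitrary 8-digit number; for an NIE, the leading X or Y
# is conventionally mapped to a digit first (also an assumption).
print(check_letter(12345678))  # -> 'Z' under the assumed sequence
```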
This number is the same for tax, social security and all legal purposes. Without this number (or a foreign equivalent such as a passport number), a contract may not be enforceable.
In Spain the formal identity number on an ID card is the most important piece of identification. It is used in all public and private transactions. It is required to open a bank account, to sign a contract, to have state insurance and to register at a university and should be shown when being fined by a police officer. It is one of the official documents required to vote at any election, although any other form of official ID such as a driving licence or passport may be used. The card also constitutes a valid travel document within the European Union.
Non-resident citizens of countries such as the United Kingdom, where passport numbers are not fixed for the holder's life but change with renewal, may experience difficulty with legal transactions after the document is renewed since the old number is no longer verifiable on a valid (foreign) passport. However a NIE is issued for life and does not change and can be used for the same purposes.
Sweden
Sweden does not have a legal statute making identity documents compulsory. However, ID cards are regularly used to ascertain a person's identity when completing certain transactions, including but not limited to banking and age verification. Interactions with public authorities also often require one, even though no law explicitly demands it, because other laws require authorities to somehow verify people's identity. Without Swedish identity documents, difficulties can occur in accessing health care services, receiving prescription medications and getting salaries or grants. From 2008, EU passports have been accepted for these services due to EU legislation (with exceptions including banking), but non-EU passports are not accepted.
Identity cards have therefore become an important part of everyday life.
There are currently three public authorities that issue ID-cards: the Swedish Tax Agency, the Swedish Police Authority, and the Swedish Transport Agency.
The Tax Agency cards can only be used within Sweden to validate a person's identity, but they can be obtained both by Swedish citizens and by those who currently reside in Sweden. A Swedish personal identity number is required. It is possible to get one without holding any Swedish ID card; in this case, a person who already holds such a card must guarantee the applicant's identity, and that guarantor must be a verifiable relative, the manager at the company where the applicant works, or one of a few other verifiable people.
The Police can only issue identity documents to Swedish citizens. They issue an internationally recognised ID card according to the EU standard, usable for intra-European travel, and Swedish passports, which are acceptable as identity documents worldwide.
The Transport Agency issues driving licences, which are valid as identity documents in Sweden. To obtain one, a person must be approved as a driver and must already hold another Swedish identity document as proof of identity.
In the past there have been certain groups that have experienced problems obtaining valid identification documents. This was due to the initial process required to validate one's identity and the unregulated security requirements of the commercial companies which issued them. Since July 2009, the Tax Agency has begun to issue ID cards, which has simplified the identity validation process for foreign passport holders. There are still requirements for identity validation that can cause trouble, especially for foreign citizens, but the list of people who can validate one's identity has been extended.
Switzerland
Swiss citizens have no general obligation of identification in Switzerland and thus are not required by law to be able to show a valid identity document upon request by a police officer or similar official. However, identity documents are required when opening a bank account or when dealing with the public administration.
Relevant in the daily life of Swiss citizens are the Swiss ID card and the Swiss driver's license; the latter must be presented upon request by a police officer when driving a motor vehicle such as a car, motorcycle, bus or truck. Swiss citizens are entitled to use a valid national identity card to exercise their right of free movement in EFTA and the EU.
A Swiss passport is needed only for travel abroad to countries that do not accept the Swiss ID card as a travel document.
Other European countries
Albania
Since January 12, 2009, the Government of Albania has been issuing a compulsory electronic and biometric ID card (Letërnjoftim) to its citizens.
Every citizen must apply for a biometric ID card at age 16.
Azerbaijan
Azerbaijan issues a compulsory ID card (Şəxsiyyət vəsiqəsi) to its citizens.
Every citizen must apply for an ID card at age 16.
Belarus
Belarus has combined the international passport and the internal passport into one document which is compulsory from age 14. It follows the international passport convention but has extra pages for domestic use.
Bosnia and Herzegovina
Bosnia and Herzegovina allows every person over the age of 15 to apply for an ID card, and all citizens over the age of 18 must have the national ID card with them at all times. A penalty is issued if the citizen does not have the acquired ID card on them or if the citizen refuses to show proof of identification.
Kosovo
The Kosovo Identity Card is an ID card issued to the citizens of Kosovo for the purpose of establishing their identity, as well as serving as proof of residency, right to work and right to public benefits. It can be used instead of a passport for travel to some neighboring countries.
Moldova
In Moldova, identity cards () have been issued since 1996. The first person to receive an identity card was the former president of Moldova, Mircea Snegur. Since then, all Moldovan citizens have been required to have and use it inside the country. It cannot be used to travel outside the country; however, it is possible to cross the so-called Transnistrian border with it.
The Moldovan identity card may be obtained by a child from his/her date of birth. State company "Registru" is responsible for issuing identity cards and for storing data of all Moldovan citizens.
Monaco
Monégasque identity cards are issued to Monégasque citizens and can be used for travel within the Schengen Area.
Montenegro
In Montenegro every resident citizen over the age of 14 can have their Lična karta issued, and all persons over the age of 18 must have ID cards and carry them at all times when they are in public places. It can be used for international travel to Bosnia and Herzegovina, Serbia, North Macedonia, Kosovo and Albania instead of the passport.
North Macedonia
The North Macedonian identity card () is a compulsory identity document issued in the Republic of North Macedonia. The document is issued by the police on behalf of the Ministry of Interior. Every citizen over 18 must be issued this identity card.
Russia
The role of identity documentation is primarily played by the so-called Russian internal passport, a passport-size booklet which contains a person's photograph, birth information and other data such as registration at the place of residence (informally known as propiska), marital data, information about military service and underage children. Internal passports are issued by the Main Directorate for Migration Affairs to all citizens who reach their 14th birthday and do not reside outside Russia. They are re-issued at the ages of 20 and 45.
The internal passport is commonly considered the only acceptable ID document in governmental offices and banks, while traveling by train or plane, when getting a subscription service, etc. If the person does not have an internal passport (e.g., foreign nationals or Russian citizens who live abroad), an international passport can theoretically be accepted instead in all such cases. Another exception is army conscripts, who produce the Identity Card of the Russian Armed Forces.
Internal passports can also be used to travel to Belarus, Kazakhstan, Tajikistan, Kyrgyzstan, Abkhazia and South Ossetia.
Other documents, such as driver's licenses or student cards, can sometimes be accepted as ID, subject to regulations.
San Marino
The national identity card is compulsory for all Sanmarinese citizens. It has been biometric and valid for international travel since 2016.
Serbia
In Serbia every resident citizen over the age of 10 can have their Lična karta issued, and all persons over the age of 16 must have ID cards and carry them at all times when they are in public places. It can be used for international travel to Bosnia and Herzegovina, Montenegro and North Macedonia instead of the passport. A contact microchip on the ID is optional.
Kosovo issues its own identity cards. These documents are accepted by Serbia when used as identification while crossing the Serbia-Kosovo border. They can also be used for international travel to Montenegro and Albania.
Turkey
The Turkish national ID card () is compulsory for all Turkish citizens from birth. Cards for males and females have different colors. The front shows the first and last name of the holder, the first names of the legal ascendants, the birth date and place, and an 11-digit ID number. The back shows marital status, religious affiliation, the region of the county of origin, and the date of issue of the card. On February 2, 2010, the European Court of Human Rights ruled in a 6-to-1 vote that the religious affiliation section of the Turkish identity card violated articles 6, 9, and 12 of the European Convention on Human Rights, to which Turkey is a signatory. The ruling is expected to compel the Turkish government to omit religious affiliation entirely from future identity cards. The Turkish police are allowed to ask any person to show ID, and refusing to comply may lead to arrest. The card can be used for international travel to Northern Cyprus, Georgia and Ukraine instead of a passport.
The Ministry of Interior of Turkey released EU-style identity cards for all Turkish citizens in 2017. The new identity cards are fully biometric and can be used as a bank card or bus ticket, or for international travel.
Ukraine
The Ukrainian identity card or Passport of the Citizen of Ukraine (also known as the Internal passport or Passport Card) is an identity document issued to citizens of Ukraine. Every Ukrainian citizen aged 14 or above and permanently residing in Ukraine must possess an identity card issued by local authorities of the State Migration Service of Ukraine.
Ukrainian identity cards are valid for 10 years (or 4 years, if issued for citizens aged 14 but less than 18) and afterwards must be exchanged for a new document.
United Kingdom
As of July 2021 the UK has no national identity card and has no general obligation of identification, although drivers may be required to produce their licence and insurance documents to a Police station within 7 days of a traffic stop if they are not able to provide them at the time.
The UK had an identity card during World War II as part of a package of emergency powers; this was abolished in 1952 by repealing the National Registration Act 1939. Identity cards were first proposed in the mid-1980s for people attending football matches, following a series of high-profile hooliganism incidents involving English football fans. However, this proposed identity card scheme never went ahead as Lord Taylor of Gosforth ruled it out as "unworkable" in the Taylor Report of 1990.
The Identity Cards Act 2006 implemented a national ID scheme backed by a National Identity Register - an ambitious database linking a variety of data including Police, Health, Immigration, Electoral Rolls and other records. Several groups such as No2ID formed to campaign against ID cards in Britain and more importantly the NIR database, which was seen as a "panopticon" and a significant threat to civil liberties. The scheme saw setbacks after the Loss of United Kingdom child benefit data (2007) and other high-profile data losses turned public opinion against the government storing large, linked personal datasets.
Various partial-rollouts were attempted such as compulsory identity cards for non-EU residents in Britain (starting late 2008), with voluntary registration for British nationals introduced in 2009 and mandatory registration proposed for certain high-security professions such as airport workers. However, the mandatory registrations met with resistance from unions such as the British Airline Pilots' Association.
After the 2010 general election a new coalition government was formed. Both parties had pledged to scrap ID cards in their election manifestos. The 2006 act was repealed by the Identity Documents Act 2010 which also required that the nascent NIR database be destroyed. The Home Office announced that the national identity register had been destroyed on February 10, 2011. Prior to the 2006 Act, work had started to update British passports with RFID chips to support the use of ePassport gates. This continued, with traditional passports being replaced with RFID versions on renewal.
Driving licences, particularly the photocard driving licence introduced in 1998, and passports are now the most widely used ID documents in the United Kingdom, but the former cannot be used as travel documents, except within the Common Travel Area. However, driving licences from the UK and other EU countries are usually accepted within other EEA countries for identity verification. Most people do not carry their passports in public without knowing in advance that they are going to need them as they do not fit in a typical wallet and are relatively expensive to replace. Consequently, driving licences are the most common and convenient form of ID in use, along with PASS-accredited cards, used mainly for proof-of-age purposes. Unlike a travel document, they do not show the holder's nationality or immigration status. Colloquially, in day-to-day life, most authorities do not ask for identification from individuals in a sudden, spot check type manner, such as by police or security guards, although this may become a concern in instances of stop and search.
Gibraltar
Gibraltar has operated an identity card system since 1943.
The cards issued were originally folded cardboard, similar to the wartime UK Identity cards abolished in 1950. There were different colours for British and non-British residents. Gibraltar requires all residents to hold identity cards, which are issued free.
In 1993 the cardboard ID card was replaced with a laminated version. However, although valid as a travel document to the UK, they were not accepted by Spain.
A new version in an EU-compliant format was issued and is valid for use around the EU, although, as very few are seen, there are sometimes problems in its use, even in the UK. ID cards are needed for some financial transactions, but apart from that and crossing the frontier with Spain, they are not in common use.
North America
Belize
Called the "Identification Card R.R.", it is optional, although compulsory for voting and other government transactions. It is also available to any Commonwealth country citizen who has lived in Belize for a year without leaving and for at least 2 months in the area where the person is registered.
Canada
In Canada, different forms of identification documentation are used, but there is no de jure national identity card. The Canadian passport is issued by the federal (national) government, and the provinces and territories issue various documents which can be used for identification purposes. The most commonly used forms of identification within Canada are the health card and driver's licence issued by provincial and territorial governments. The widespread usage of these two documents for identification purposes has made them de facto identity cards.
In Canada, a driver's license usually lists the name, home address, height and date of birth of the bearer. A photograph of the bearer is usually present, as well as additional information, such as restrictions to the bearer's driving licence. The bearer is required by law to keep the address up to date.
A few provinces, such as Québec and Ontario, issue provincial health care cards which contain identification information, such as a photo of the bearer, their home address, and their date of birth. British Columbia, Saskatchewan and Ontario are among the provinces that produce photo identification cards for individuals who do not possess a driving licence, with the cards containing the bearer's photo, home address, and date of birth.
For travel abroad, a passport is almost always required. There are a few minor exceptions: documentation requirements for travel among North American countries are governed by the Western Hemisphere Travel Initiative, which recognises alternatives such as the NEXUS programme and the Enhanced Driver's Licence programme implemented by a few provincial governments as a pilot project. These programmes have not yet gained widespread acceptance, and the Canadian passport remains the most useful and widely accepted international travel document.
Costa Rica
Every Costa Rican citizen must carry an identity card immediately after turning 18. The card is named Cédula de Identidad and is issued by the registrar's office (Registro Civil), an office belonging to the elections authority (Tribunal Supremo de Elecciones), which in Costa Rica has the same rank as the Supreme Court. Each card has a unique number composed of nine digits, the first indicating the province where the citizen was born (the first digit carries other meanings in special cases, such as citizenship granted to foreigners, adopted persons or, rarely, elderly people for whom no birth certificate was processed at birth). This digit is followed by two blocks of four digits; the combination forms the citizen's unique identifier.
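As a rough illustration of that structure, the sketch below (written for this article, not taken from any official source) splits a nine-digit cédula number into the province digit and the two four-digit blocks; the province names in the lookup table are our own assumption for demonstration.

```python
# Illustrative sketch only: split a Costa Rican cédula number into the
# parts described above (province digit + two four-digit blocks).
# The province mapping is an assumption added for demonstration.
PROVINCES = {
    "1": "San José", "2": "Alajuela", "3": "Cartago", "4": "Heredia",
    "5": "Guanacaste", "6": "Puntarenas", "7": "Limón",
}

def parse_cedula(raw: str) -> dict:
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("expected nine digits, e.g. '1-2345-6789'")
    province, block1, block2 = digits[0], digits[1:5], digits[5:9]
    return {
        "province_digit": province,
        "province_name": PROVINCES.get(province, "special case (naturalised, adopted, etc.)"),
        "serial": f"{block1}-{block2}",
    }

print(parse_cedula("1-2345-6789"))  # hypothetical number, not a real citizen's cédula
```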
The card is widely requested for legal and financial purposes: it is often asked for when paying with a credit or debit card as an identity guarantee, when buying alcoholic beverages or cigarettes, and upon entry to adults-only venues such as bars.
The card must be renewed every ten years and is reissued free of charge if lost. The front carries two identification pictures and the digitized signature of the holder, the identification number (known colloquially simply as the cédula), the first name, the first and second surnames, and an optional "known as" field. The back repeats the identification number and shows the date of birth, the place where the citizen votes in national elections or referendums, the birthplace, gender, the renewal date, and a matrix code that encodes all of this information together with digitized fingerprints of the thumb and index finger.
The matrix code is not currently used or inspected by any kind of scanner.
Besides this identification card, every vehicle driver must carry a driving licence, an additional card that uses the same identification number as the ID card (Cédula de Identidad) for the driving license number. A passport is also issued with the same identification number used in the ID card. The same situation occurs with the Social Security number; it is the same number used for the ID card.
All non-Costa Rican citizens with resident status must carry an ID card (Cédula de Residencia) or, failing that, a passport and a valid visa. Each resident's ID card has a unique number composed of 12 digits; the first three indicate the holder's nationality and the rest form a sequence used by the immigration authority (the Dirección General de Migración y Extranjería). As with Costa Rican citizens, their Social Security number and their driver's license (if they have one) use the same number as their resident's ID card.
Dominican Republic
A "Cédula de Identidad y Electoral" (Identity and Voting Document) is a National ID that is also used for voting in both Presidential and Congressional ballots. Each "Cédula de Identidad y Electoral" has its unique serial number composed by the serial of the municipality of current residence, a sequential number plus a verification digit. This National ID card is issued to all legal residents of adult age. It is usually required to validate job applications, legally binding contracts, official documents, buying/selling real estate, opening a personal bank account, obtaining a Driver's License and the like. It is issued free of charge by the "Junta Central Electoral" (Central Voting Committee) to all Dominicans not living abroad at the time of reaching adulthood (16 years of age) or younger is they are legally emancipated. Foreigners who have taken permanent residence and have not yet applied for Dominican naturalization (i.e., have not opted for Dominican citizenship but have taken permanent residence) are required to pay an issuing tariff and must bring along their non-expired Country of Origin passport and deposit photocopies of their Residential Card and Dominican Red Cross Blood Type card. Foreigners residing on a permanent basis must renew their "Foreign ID" on a 2-, 4-, or 10-year renewal basis (about US$63–US$240, depending on desired renewal period).
El Salvador
In El Salvador, the ID card is called the Documento Único de Identidad (DUI) (Unique Identity Document). Every citizen above 18 years of age must carry this ID for identification purposes at any time. It is not based on a smartcard but on a standard plastic card carrying two-dimensional bar-coded information together with a picture and a signature.
Guatemala
In January 2009, the National Registry of Persons (RENAP) in Guatemala began offering a new identity document, in place of the Cédula de Vecindad (neighborhood identity document), to all Guatemalan citizens and foreigners. The new document is called the "Documento Personal de Identificación" (DPI) (Personal Identity Document). It is based on a smartcard with a chip and includes an electronic signature and several measures against fraud.
Mexico
Not mandatory, but needed for almost all official paperwork, the CURP is the standardized identity code. It can take the form of a printed green wallet-sized card (without a photo) or simply an 18-character identification key printed on a birth or death certificate.
While Mexico has a national identity card (cédula de identidad personal), it is only issued to children aged 4–17.
Unlike most other countries, Mexico has assigned a CURP to nearly all minors, since both the government and most private schools ask parents to supply their children's CURP in order to keep a database of all the children. Minors must also produce their CURP when applying for a passport or when being registered for public health services by their parents.
Most adults need the CURP code too, since it is required for almost all governmental paperwork like tax filings and passport applications. Most companies ask for a prospective employee's CURP, voting card, or passport rather than birth certificates.
To have a CURP issued for a person, a birth certificate or similar proof must be presented to the issuing authorities to prove that the information supplied on the application is true. Foreigners applying for a CURP must produce a certificate of legal residence in Mexico. Foreign-born naturalized Mexican citizens must present their naturalization certificate.
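A light structural check of the 18-character key can make the format above concrete. The sketch below encodes the commonly described CURP layout as a regular expression; treat the details (and the sample string) as illustrative assumptions rather than the official rules.

```python
import re

# Sketch of a purely structural check for the 18-character CURP layout:
# 4 letters, a YYMMDD birth date, sex (H/M), a 2-letter state code,
# 3 internal consonants, a differentiator and a final digit.
# This is an approximation for illustration, not the official validator.
CURP_RE = re.compile(
    r"^[A-Z]{4}"                # initials derived from the person's names
    r"\d{6}"                    # date of birth, YYMMDD
    r"[HM]"                     # sex
    r"[A-Z]{2}"                 # state-of-birth code
    r"[B-DF-HJ-NP-TV-Z]{3}"     # internal consonants of the names
    r"[0-9A-Z]"                 # homonymy differentiator
    r"\d$"                      # check digit
)

def looks_like_curp(value: str) -> bool:
    return bool(CURP_RE.fullmatch(value.upper()))

print(looks_like_curp("GOMC900514HDFNRL09"))  # hypothetical example string -> True
```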
On August 21, 2008, the Mexican cabinet passed the National Security Act, which compels all Mexican citizens to have a biometric identity card, the Citizen Identity Card (Cédula de Identidad Ciudadana), before 2011.
On February 13, 2009, the Mexican government designated the state of Tamaulipas to start procedures for issuing a pilot program of the national Mexican ID card.
Although the CURP is the de jure official identification document in Mexico, the Instituto Nacional Electoral's voting card is the de facto official identification and proof of legal age for citizens of ages 18 and older.
On July 28, 2009, Mexican President Felipe Calderón, addressing the Mexican House of Representatives, announced the launch of the Mexican national identity card project, under which the first cards were to be issued before the end of 2009.
Panama
The cédula de identidad personal is required at age 12 (cédula juvenil) and again at age 18. Panamanian citizens must carry their cédula at all times. New biometric national identity cards were rolled out in 2019. The card must be renewed every 10 years (every 5 years for those under 18), and it can only be replaced 3 times (with each replacement costing more than the previous one) without a background check, which confirms that the card holder is not selling his or her identity to third parties for human trafficking or other criminal activities. All cards have QR, PDF417, and Code 128 barcodes. The QR code holds all of the text information printed on the front of the card, while the PDF417 barcode holds, as a Base64-encoded JPEG, an image of the fingerprint of the card holder's left index finger. Panamanian biometric/electronic/machine-readable ID cards are similar to biometric passports and current European/Czech national ID cards: they carry only a small PDF417 barcode, together with a machine-readable area, a contactless smart card RFID chip, and golden contact pads similar to those found in smart card credit cards and SIM cards. The machine-readable code contains all of the printed text information about the card holder (it replaces the QR code), while both chips (the smart card chip is hidden under the golden contact pads) contain all personal information about the card holder along with a JPEG photo of the card holder, a JPEG image of the card holder's signature, and a JPEG image of all ten fingerprints of both hands. Earlier cards used Code 16K and Code 49 barcodes with magnetic stripes.
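Given the description of the PDF417 payload above, a small sketch can show what handling that payload might look like once a barcode scanner (not shown) has extracted the text. The function name and dummy data below are assumptions for illustration, not Panama's actual data format.

```python
import base64

# Sketch only: decode a Base64 payload (as read from the card's PDF417
# barcode by an external scanner library, not shown) and confirm that it
# looks like a JPEG image, as described above.
def extract_fingerprint_jpeg(pdf417_payload: str) -> bytes:
    image_bytes = base64.b64decode(pdf417_payload, validate=True)
    if not image_bytes.startswith(b"\xff\xd8\xff"):   # JPEG SOI marker
        raise ValueError("decoded data does not start with a JPEG header")
    return image_bytes

# Dummy payload for demonstration (not a real fingerprint image):
dummy = base64.b64encode(b"\xff\xd8\xff\xe0" + b"\x00" * 16).decode()
print(len(extract_fingerprint_jpeg(dummy)), "bytes")
```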
United States
There is no compulsory federal-level ID card that is issued to all US citizens. US citizens and nationals may obtain passports or US passport cards if they choose to, but this is ultimately optional and the alternatives described below are more popular.
For most people, driver's licenses issued by the respective state and territorial governments have become the de facto identity cards, and are used for many identification purposes, such as when purchasing alcohol and tobacco, opening bank accounts, and boarding planes, along with confirming a voter's identity in states with voter photo identification initiatives. Individuals who do not drive are able to obtain an identification card with the same functions from the same state agency that issues driver's licenses. In addition, many schools issue student and teacher ID cards.
The United States passed a bill entitled the REAL ID Act on May 11, 2005. The bill compels states to begin redesigning their driver's licenses to comply with federal security standards by December 2009. Federal agencies would reject licenses or identity cards that do not comply, which would force Americans accessing everything from airplanes to national parks and courthouses to have the federally mandated cards. At airports, those not having compliant licenses or cards would simply be redirected to a secondary screening location. The REAL ID Act is highly controversial, and 25 states have approved either resolutions or binding legislation not to participate in the program. With President Obama's selection of Janet Napolitano (a prominent critic of the program) to head the Department of Homeland Security, the future of the law remains uncertain, and bills have been introduced into Congress to amend or repeal it. The most recent of these, dubbed PASS ID, would eliminate many of the more burdensome technological requirements but still require states to meet federal standards in order to have their ID cards accepted by federal agencies.
The bill comes as governments grow more interested in embedding technology in ID cards to make them smarter and more secure. In 2006, the U.S. State Department studied issuing passports with radio-frequency identification (RFID) chips embedded in them. Virginia may become the first state to glue RFID tags into all its driver's licenses. Seventeen states, however, have passed statutes opposing or refusing to implement the REAL ID Act.
The United States passport verifies both personal identity and citizenship, but is not mandatory for citizens to possess within the country and is issued by the US State Department on a discretionary basis.
Since February 1, 2008, U.S. citizens may apply for passport cards, in addition to the usual passport books. Although their main purpose is for land and sea travel within North America, the passport card may also be accepted by federal authorities (such as for domestic air travel or entering federal buildings), which may make it an attractive option for people residing where state driver's licenses and ID cards are not REAL ID-compliant, should those requirements go into effect. TSA regulations list the passport card as an acceptable identity document at airport security checkpoints.
U.S. Citizenship and Immigration Services has indicated that the U.S. passport card may be used in the Employment Eligibility Verification Form I-9 (form) process. The passport card is considered a "List A" document that may be presented by newly hired employees during the employment eligibility verification process to show work authorized status. "List A" documents are those used by employees to prove both identity and work authorization when completing the Form I-9.
The basic document needed to establish a person's identity and citizenship in order to obtain a passport is a birth certificate. These are issued by either the US state of birth or by the US Department of State for overseas births to US citizens. A child born in the US is in nearly all cases (except for children of foreign diplomats) automatically a US citizen. The parents of a child born overseas to US citizens should report the birth to the corresponding US embassy/consulate to obtain a Consular Report of Birth Abroad, or they will need to apply for recognition of their citizenship at a later date.
Social Security numbers and cards are issued by the US Social Security Administration for tracking of Social Security taxes and benefits. They have become the de facto national identification number for federal and state taxation, private financial services, and identification with various companies. SSNs do not establish citizenship because they are issued to permanent residents as well as citizens. They typically can only be part of the establishment of a person's identity; a photo ID that verifies date of birth is also usually requested.
A mix of various documents can be presented to, for instance, verify one's legal eligibility to take a job within the US. Identity and citizenship are established by presenting a passport alone, but this must be accompanied by a Social Security card for taxation ID purposes. A driver's license or state ID establishes identity alone, but does not establish citizenship, as these can be issued to non-citizens as well. In this case, an applicant without a passport may sign an affidavit of citizenship or be required to present a birth certificate. They must still also submit their Social Security number.
"Residency" within a certain US jurisdiction, such as a voting precinct, can be proven if the driver's license or state ID has the home address printed on it corresponding to that jurisdiction. Utility bills or other pieces of official printed mail can also suffice for this purpose. In the case of voting, citizenship must also be proven with a passport, birth certificate, or signed citizenship affidavit. Receiving in-state tuition at a state's public college or university also requires proof of residency using one of the above methods. Ownership of property, proved by a deed, also immediately confers residency in most cases.
A Social Security number does not prove any form of residency, and neither does a passport, as neither of these documents is tied to a specific jurisdiction apart from the US as a whole, and a person can be issued either of these without living in the US (such as being born abroad to parent(s) who are US citizens). Thus, "residency in the US" is not clearly defined, and determining this often depends on the particular administrative process at hand.
The Selective Service System has in the past, in times of a military draft, issued something close to a national ID card, but only for men who were eligible for the draft.
Oceania
Australia
Australia does not have a national identity card. Instead, various identity documents are used or required to prove a person's identity, whether for government or commercial purposes.
Currently, driver licences and photo cards, both issued by the states and territories, are the most widely used personal identification documents in Australia. Additionally, the Australia Post Keypass identity card, issued by Australia Post, can be used by people who do not have an Australian drivers licence or an Australian state and territory issued identity photo card.
Photo cards are also called "Proof of Age Cards" or similar and can be issued to people as another type of identity. Identification indicating age is commonly required to purchase alcohol and tobacco and to enter nightclubs and gambling venues.
Other important identity documents include a passport, an official birth certificate, an official marriage certificate, cards issued by government agencies (typically social security cards), some cards issued by commercial organisations (e.g., a debit or credit card), and utility accounts. Often, some combination of identity documents is required, such as an identity document linking a name, photograph and signature (typically photo-ID in the form of a driver licence or passport), evidence of operating in the community, and evidence of a current residential address.
New alcohol laws in the state of Queensland require some Brisbane-based pubs and bars to scan ID documents against a database of people who should be denied alcohol, for which foreign passports and driver's licences are not valid.
Micronesia
National Identity cards, called "FSM Voters National Identity card", are issued on an optional basis, free of charge. The Identity Cards were introduced in 2005.
New Zealand
New Zealand does not have an official ID card. The most commonly carried form of identification is a driver licence issued by the Transport Agency.
Other forms of special purpose identification documents are issued by different government departments, for example a Firearms Licence issued to gun owners by the Police and the SuperGold card issued to elderly people by the Ministry of Social Development.
For purchasing alcohol or tobacco, the only legal forms of identification are a New Zealand or foreign passport, a New Zealand driver licence, and a Kiwi Access Card (formerly known as the 18+ card) from the Hospitality Association of New Zealand. Overseas driver licences are not legal for this purpose.
For opening a bank account, each bank has its own list of documents that it will accept. Generally speaking, banks accept a foreign or New Zealand passport, a New Zealand Firearms Licence, or a foreign ID card by itself. If the customer does not have any of these documents, they will need to produce two different documents from the approved list (for example, a driver licence and a marriage certificate).
Solomon Islands
"National Voter's Identity card" are optional upon request.
Tonga
Tonga's national ID card was first issued in 2010 and is optional, alongside driver's licenses and passports; one of these documents is, however, required in order to vote. Applicants must be 14 years of age or older to apply for a national ID card.
Vanuatu
National identity cards have been issued since October 2017. Biometric cards were planned for roll-out in late 2018.
South America
Argentina
Documento Nacional de Identidad or DNI (National Identity Document) is the main identity document for Argentine citizens. It is first issued at birth and must be updated at 8 and 14 years of age, and thereafter every 15 years, in a single format: a card (DNI tarjeta). It is valid wherever identification is required and is required for voting. The cards are produced at a special plant in Buenos Aires by the Argentine national registry of people (ReNaPer).
Brazil
In Brazil, at the age of 18, all Brazilian citizens are supposed to be issued a cédula de identidade (ID card), usually known by its number, the Registro Geral (RG), Portuguese for "General Registry". The cards are needed to obtain a job, to vote, and to use credit cards. Foreigners living in Brazil have a different kind of ID card. Since the RG is not unique, being issued on a state basis, in many places the CPF (the Brazilian revenue agency's identification number) is used as a replacement. The current Brazilian driver's license contains both the RG and the CPF, and as such can be used as an identification card as well.
There are plans in course to replace the current RG system with a new Documento Nacional de Identificação (National Identification Document), which will be electronic (accessible by a mobile application) and national in scope, and to change the current ID card to a new smartcard.
Colombia
Every resident of Colombia over the age of 14 is issued an identity card (Tarjeta de Identidad). Upon turning 18 every resident must obtain a Cédula de Ciudadanía, which is the only document that proves the identity of a person for legal purposes. ID cards must be carried at all times and must be presented to the police upon request. If an individual fails to present the ID card when requested by the police or the military, he or she is likely to be detained at a police station, even if not suspected of any wrongdoing. ID cards are needed to obtain employment, open bank accounts, obtain a passport, driver's license or military card, to enroll in educational institutions, to vote, and to enter public buildings including airports and courthouses. Failure to produce ID is a misdemeanor punishable with a fine.
The cost of a duplicate ID must be borne by the citizen.
Chile
Every resident of Chile over the age of 18 must have and carry at all times their ID Card called Cédula de Identidad issued by the Civil Registry and Identification Service of Chile. It contains the full name, gender, nationality, date of birth, photograph of the data subject, right thumb print, ID number, and personal signature.
This is the only official form of identification for residents in Chile and is widely used and accepted as such. It is necessary for every contract, most bank transactions, voting, driving (along with the driver's licence) and other public and private situations.
Biometrics collection is mandatory.
Peru
In Peru, it is mandatory for all citizens over the age of 18, whether born inside or outside the territory of the Republic, to obtain a National Identity Document (Documento Nacional de Identidad).
The DNI is a public, personal and untransferable document.
The DNI is the only means of identification permitted for participating in any civil, legal, commercial, administrative, and judicial acts. It is also required for voting and must be presented to authorities upon request. The DNI can be used as a passport to travel to all South American countries that are members of UNASUR.
The DNI is issued by the National Registry of Identification and Civil Status (RENIEC). For Peruvians abroad, service is provided through the Consulates of Peru, in accordance with Articles 26, 31 and 8 of Law No. 26,497.
The document is card-sized as defined by ISO format ID-1 (prior to 2005 the DNI was ISO ID-2 size; renewal of the card due to the size change was not mandatory, nor did previously issued cards lose validity). The front of the card presents photographs of the holder's face, their name, date and place of birth (the latter in coded form), gender and marital status; the bottom quarter consists of machine-readable text. Three dates are listed as well: the date the citizen was first registered at RENIEC, the date the document was issued, and the expiration date of the document. The back of the DNI features the holder's address (including district, department and/or province) and voting group. Eight voting record blocks are successively covered with metallic labels when the citizen presents themselves at their voting group on voting days. The back also denotes whether the holder is an organ donor, and presents the holder's right index fingerprint, a PDF417 bar code, and a 1D bar code.
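The machine-readable text mentioned above follows the general conventions of machine-readable zones. As a hedged illustration (assuming the DNI's zone uses the standard ICAO 9303 rules, which is not confirmed here), the sketch below computes the ICAO-style check digit used to protect fields in such zones.

```python
# Worked example of the ICAO 9303 check-digit rule commonly used in
# machine-readable zones: characters map to values (0-9, A=10..Z=35,
# '<' = 0) and are weighted 7, 3, 1 repeating; the check digit is the
# weighted sum modulo 10. Whether the DNI's zone uses exactly this rule
# is an assumption made for illustration.
def mrz_check_digit(field: str) -> int:
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10

    weights = (7, 3, 1)
    total = sum(value(ch) * weights[i % 3] for i, ch in enumerate(field))
    return total % 10

print(mrz_check_digit("L898902C3"))  # ICAO's own sample document number -> 6
```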
Uruguay
In Uruguay, the identity card (cédula de identidad) is issued by the Ministry of Interior and the National Civil Identification Bureau (Dirección Nacional de Identificación Civil | DNIC).
It is mandatory and essential for several activities at either governmental or private levels. The document is mandatory for all inhabitants of the Oriental Republic of Uruguay, whether they are native citizens, legal citizens, or resident aliens in the country, even for children as young as 45 days old.
It is a laminated card, dominated by the color blue, showing the flag in the background, with the photo of the holder, the number assigned by the DNIC (including a verification or check digit), the full name, and the corresponding signature, along with biometrics. The card is bilingual in Spanish and Portuguese.
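To make the check digit mentioned above concrete, here is a small weighted-sum sketch. The specific weights used below are an assumption quoted for illustration only; the DNIC's actual algorithm may differ.

```python
# Illustrative weighted check-digit computation for a seven-digit number.
# The weights (2, 9, 8, 7, 6, 3, 4) are an assumption for demonstration,
# not an official DNIC specification.
def check_digit(first_seven: str) -> int:
    weights = (2, 9, 8, 7, 6, 3, 4)
    total = sum(int(d) * w for d, w in zip(first_seven, weights))
    return (10 - total % 10) % 10

print(check_digit("1234567"))  # hypothetical number -> prints 2
```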
Identity cards are required for most formal transactions, from credit card purchases to any identity validation, proof of age, and so on. The identity card is not to be confused with the civic badge, which is used exclusively for voting in national elections and plebiscites.
Venezuela
Identity cards in Venezuela consist of a plastic-laminated paper which contains the national ID number (Cédula de Identidad) as well as a color photo and the last names, given names, date of birth, right thumb print, signature, and marital status (single, married, divorced, widowed) of the bearer. It also contains the document's issue and expiration dates. Two different prefixes can be found before the ID number: "V" for Venezuelans and "E" for foreigners (extranjeros in Spanish). This distinction is also shown at the very bottom of the document by a bold all-caps typeface displaying either the word VENEZOLANO or EXTRANJERO, respectively.
Despite Venezuela being the second country in the Americas (after the United States) to adopt a biometric passport, the current Venezuelan ID document is remarkably low-security, even by regional standards; it can hardly be called a card. The paper inside the laminated cover carries only two security measures: it is a special type of government-issued paper, and it contains microfilaments that glow under UV light. The laminated cover itself is very simplistic and quite large for the paper it covers, and the photo, although standard-sized (3 × 3.5 cm), is rather blurred. Government officials in charge of issuing the document openly recommend that individuals cut off the excess plastic and re-laminate the document in order to protect it from bending. The requirements for getting a Venezuelan identity document are quite relaxed, and Venezuela lacks high security in its birth certificates and the other documents that give claim to citizenship.
A Venezuelan passport can be obtained, and a voter registered, solely by virtue of possessing a Venezuelan identity card, and the Venezuelan government has been accused by the media and the opposition of naturalizing substantial numbers of foreigners for electoral purposes. Many Venezuelans therefore saw the absence of any plan to strengthen the security of the cédula de identidad, and of other vital documents such as birth certificates, as part of a strategy by the Chávez government to continue naturalizing foreigners for electoral purposes. The government announced that a new cédula de identidad would be available to all citizens around the first quarter of 2011. The proposed ID is a polycarbonate, bankcard-sized document with biometric and RFID technology, resembling the card that has been included in Venezuelan biometric passports since 2007. However, the release of this new card to the public has been delayed on several occasions, and as of October 2018 there is no news as to when it will be available.
See also
Access badge
Anthropometry
National biometric id card
Homeland card
Home Return Permit
ID card printer
Identity Cards Act 2006
List of identity card policies by country
Location-based authentication
Magnetic stripe card
NO2ID
Pass laws
Physical security
Police certificate
Proximity card
Self-sovereign identity
Warrant card
Notes
References
Further reading
Kruger, Stephen. "Documentary Identification in the Nascent American Police State" (2012). .
Kruger, Stephen. "Police Demands for Hong Kong Identity Cards" (2012). .
External links
PRADO – Public Register of European Travel and ID Documents Online
Telegraph story: the case for and against identity cards
Scotsman story: ID Cards will lead to "massive fraud"
ID Card – Is Big Brother Stalking You?
PRADO Glossary – EU site detailing document security technologies (security features)
Walkie-talkie
A walkie-talkie, more formally known as a handheld transceiver (HT), is a hand-held, portable, two-way radio transceiver. Its development during the Second World War has been variously credited to Donald Hings, radio engineer Alfred J. Gross, Henryk Magnuski and engineering teams at Motorola. First used for infantry, similar designs were created for field artillery and tank units, and after the war, walkie-talkies spread to public safety and eventually commercial and jobsite work.
Typical walkie-talkies resemble a telephone handset, with a speaker built into one end and a microphone in the other (in some devices the speaker also is used as the microphone) and an antenna mounted on the top of the unit. They are held up to the face to talk. A walkie-talkie is a half-duplex communication device. Multiple walkie-talkies use a single radio channel, and only one radio on the channel can transmit at a time, although any number can listen. The transceiver is normally in receive mode; when the user wants to talk they must press a "push-to-talk" (PTT) button that turns off the receiver and turns on the transmitter. Smaller versions of this device are also very popular among young children.
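A toy model can make the half-duplex behaviour explicit: many radios share one channel, at most one transmits, and pressing push-to-talk switches a unit from receiving to transmitting. The classes and method names below are our own, purely for illustration.

```python
# Minimal sketch of a shared half-duplex channel with push-to-talk.
class Channel:
    def __init__(self):
        self.transmitting = None            # at most one active transmitter

class Radio:
    def __init__(self, name, channel):
        self.name, self.channel, self.ptt = name, channel, False

    def press_ptt(self):
        # Claim the channel only if it is free; this simple model ignores the
        # press otherwise (on real radios two simultaneous transmissions
        # interfere, known as "doubling").
        if self.channel.transmitting is None:
            self.channel.transmitting = self
            self.ptt = True                 # receiver off, transmitter on

    def release_ptt(self):
        if self.channel.transmitting is self:
            self.channel.transmitting = None
        self.ptt = False

    def can_hear(self):
        tx = self.channel.transmitting
        return tx is not None and tx is not self and not self.ptt

ch = Channel()
a, b, c = Radio("A", ch), Radio("B", ch), Radio("C", ch)
a.press_ptt()
print(b.can_hear(), c.can_hear())           # True True: all other radios listen
b.press_ptt()                               # channel is busy, so B cannot transmit
print(ch.transmitting.name)                 # still "A"
```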
Some units have additional features such as sending calls, call reception with vibration alarm, keypad locking, and a stopwatch.
History
Handheld two-way radios were developed by the military from backpack radios carried by a soldier in an infantry squad to keep the squad in contact with their commanders. Probably the first patent owner (patent filed on 20 May 1935 and granted on 19 March 1936) was the Polish engineer Henryk Magnuski, who from 1939 worked on Motorola's first walkie-talkies, including the hand-held SCR-536 transceiver. Canadian inventor Donald Hings was the first to create a portable radio signaling system for his employer CM&S in 1937. He called the system a "packset", although it later became known as a "walkie-talkie". In 2001, Hings was formally decorated for the device's significance to the war effort. Hings' model C-58 "Handy-Talkie" was in military service by 1942, the result of a secret R&D effort that began in 1940.
Alfred J. Gross, a radio engineer and one of the developers of the Joan-Eleanor system, also worked on the early technology behind the walkie-talkie between 1938 and 1941, and is sometimes credited with inventing it.
The first device to be widely nicknamed a "walkie-talkie" was developed by the US military during World War II, the backpacked Motorola SCR-300. It was created by an engineering team in 1940 at the Galvin Manufacturing Company (forerunner of Motorola). The team consisted of Dan Noble, who conceived of the design using frequency modulation; Henryk Magnuski, who was the principal RF engineer; Marion Bond; Lloyd Morris; and Bill Vogel.
The first handheld walkie-talkie was the AM SCR-536 transceiver from 1941, also made by Motorola, named the Handie-Talkie (HT). The terms are often confused today, but the original walkie-talkie referred to the back mounted model, while the handie-talkie was the device which could be held entirely in the hand. Both devices used vacuum tubes and were powered by high voltage dry cell batteries.
Following World War II, Raytheon developed the SCR-536's military replacement, the AN/PRC-6. The AN/PRC-6 circuit used 13 vacuum tubes (receiver and transmitter); a second set of thirteen tubes was supplied with the unit as running spares. The unit was factory set with one crystal which could be changed to a different frequency in the field by replacing the crystal and re-tuning the unit. It used a 24-inch whip antenna. There was an optional handset that could be connected to the AN/PRC-6 by a 5-foot cable. An adjustable strap was provided for carrying and support while operating.
In the mid-1970s, the United States Marine Corps initiated an effort to develop a squad radio to replace the unsatisfactory helmet-mounted AN/PRR-9 receiver and receiver/transmitter handheld AN/PRT-4 (both developed by the US Army). The AN/PRC-68, first produced in 1976 by Magnavox, was issued to the Marines in the 1980s, and was adopted by the US Army as well.
The abbreviation HT, derived from Motorola's "Handie-Talkie" trademark, is commonly used to refer to portable handheld ham radios, with "walkie-talkie" often used as a layman's term or specifically to refer to a toy. Public safety and commercial users generally refer to their handhelds simply as "radios". Surplus Motorola Handie-Talkies found their way into the hands of ham radio operators immediately following World War II. Motorola's public safety radios of the 1950s and 1960s were loaned or donated to ham groups as part of the Civil Defense program. To avoid trademark infringement, other manufacturers use designations such as "Handheld Transceiver" or "Handie Transceiver" for their products.
Developments
Some cellular telephone networks offer a push-to-talk handset that allows walkie-talkie-like operation over the cellular network, without dialing a call each time. However, the cellular provider's network must be within reach.
Walkie-talkies for public safety, commercial and industrial uses may be part of trunked radio systems, which dynamically allocate radio channels for more efficient use of limited radio spectrum. Such systems always work with a base station that acts as a repeater and controller, although individual handsets and mobiles may have a mode that bypasses the base station.
Contemporary use
Walkie-talkies are widely used in any setting where portable radio communications are necessary, including business, public safety, military, outdoor recreation, and the like, and devices are available at numerous price points from inexpensive analog units sold as toys up to ruggedized (i.e. waterproof or intrinsically safe) analog and digital units for use on boats or in heavy industry. Most countries allow the sale of walkie-talkies for, at least, business, marine communications, and some limited personal uses such as CB radio, as well as for amateur radio designs. Walkie-talkies, thanks to increasing use of miniaturized electronics, can be made very small, with some personal two-way UHF radio models being smaller than a deck of cards (though VHF and HF units can be substantially larger due to the need for larger antennas and battery packs). In addition, as costs come down, it is possible to add advanced squelch capabilities such as CTCSS (analog squelch) and DCS (digital squelch) (often marketed as "privacy codes") to inexpensive radios, as well as voice scrambling and trunking capabilities. Some units (especially amateur HTs) also include DTMF keypads for remote operation of various devices such as repeaters. Some models include VOX capability for hands-free operation, as well as the ability to attach external microphones and speakers.
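The CTCSS "privacy codes" mentioned above work by adding a low-level, sub-audible tone beneath the voice audio; a receiving radio only opens its squelch when it detects the expected tone. The sketch below (with arbitrary sample values) mixes one of the standard tones under a voice signal purely to illustrate the idea.

```python
import math

SAMPLE_RATE = 8000      # samples per second (arbitrary for this sketch)
TONE_HZ = 123.0         # one of the standard CTCSS tone frequencies

def add_ctcss(voice_samples, tone_level=0.15):
    """Mix a low-level sub-audible tone underneath the voice samples."""
    out = []
    for n, v in enumerate(voice_samples):
        tone = tone_level * math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
        out.append(v + tone)
    return out

# One second of silent "voice" with the tone added underneath:
mixed = add_ctcss([0.0] * SAMPLE_RATE)
print(round(max(mixed), 2))   # peak is just the tone level, about 0.15
```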
Consumer and commercial equipment differ in a number of ways; commercial gear is generally ruggedized, with metal cases, and often has only a few specific frequencies programmed into it (often, though not always, with a computer or other outside programming device; older units can simply swap crystals), since a given business or public safety agent must often abide by a specific frequency allocation. Consumer gear, on the other hand, is generally made to be small, lightweight, and capable of accessing any channel within the specified band, not just a subset of assigned channels.
Military
Military organizations use handheld radios for a variety of purposes. Modern units such as the AN/PRC-148 Multiband Inter/Intra Team Radio (MBITR) can communicate on a variety of bands and modulation schemes and include encryption capabilities.
Amateur radio
Walkie-talkies (also known as HTs or "handheld transceivers") are widely used among amateur radio operators. While converted commercial gear by companies such as Motorola are not uncommon, many companies such as Yaesu, Icom, and Kenwood design models specifically for amateur use. While superficially similar to commercial and personal units (including such things as CTCSS and DCS squelch functions, used primarily to activate amateur radio repeaters), amateur gear usually has a number of features that are not common to other gear, including:
Wide-band receivers, often including radio scanner functionality, for listening to non-amateur radio bands.
Multiple bands; while some operate only on specific bands such as 2 meters or 70 cm, others support several UHF and VHF amateur allocations available to the user.
Since amateur allocations usually are not channelized, the user can dial in any frequency desired in the authorized band (whereas commercial HTs usually only allow the user to tune the radio into a number of already programmed channels). This is known as VFO mode.
Multiple modulation schemes: a few amateur HTs may allow modulation modes other than FM, including AM, SSB, and CW, and digital modes such as radioteletype or PSK31. Some may have TNCs built in to support packet radio data transmission without additional hardware.
Digital voice modes are available on some amateur HTs. For example, a newer addition to the Amateur Radio service is Digital Smart Technology for Amateur Radio or D-STAR. Handheld radios with this technology have several advanced features, including narrower bandwidth, simultaneous voice and messaging, GPS position reporting, and callsign routed radio calls over a wide-ranging international network.
As mentioned, commercial walkie-talkies can sometimes be reprogrammed to operate on amateur frequencies. Amateur radio operators may do this for cost reasons or due to a perception that commercial gear is more solidly constructed or better designed than purpose-built amateur gear.
Personal use
The personal walkie-talkie has also become popular because of licence-free services in many countries, such as the U.S. FRS, Europe's PMR446 and Australia's UHF CB. While FRS walkie-talkies are also sometimes used as toys because mass production makes them low cost, they have proper superheterodyne receivers and are a useful communication tool for both business and personal use. The boom in licence-free transceivers has, however, been a source of frustration to users of licensed services, which are sometimes interfered with. For example, FRS and GMRS overlap in the United States, resulting in substantial pirate use of the GMRS frequencies. Use of the GMRS frequencies in the United States requires a license, but most users either disregard this requirement or are unaware of it. Canada reallocated frequencies for licence-free use due to heavy interference from US GMRS users. The European PMR446 channels fall in the middle of a United States UHF amateur allocation, and the US FRS channels interfere with public safety communications in the United Kingdom. Designs for personal walkie-talkies are in any case tightly regulated, generally requiring non-removable antennas (with a few exceptions such as CB radio and the United States MURS allocation) and forbidding modified radios.
Most personal walkie-talkies sold are designed to operate in UHF allocations, and are designed to be very compact, with buttons for changing channels and other settings on the face of the radio and a short, fixed antenna. Most such units are made of heavy, often brightly colored plastic, though some more expensive units have ruggedized metal or plastic cases. Commercial-grade radios are often designed to be used on allocations such as GMRS or MURS (the latter of which has had very little readily available purpose-built equipment). In addition, CB walkie-talkies are available, but less popular due to the propagation characteristics of the 27 MHz band and the general bulkiness of the gear involved.
Personal walkie-talkies are generally designed to give easy access to all available channels (and, if supplied, squelch codes) within the device's specified allocation.
Personal two-way radios are also sometimes combined with other electronic devices; Garmin's Rino series combines a GPS receiver in the same package as an FRS/GMRS walkie-talkie, allowing Rino users to transmit digital location data to each other. Some personal radios also include receivers for AM and FM broadcast radio and, where applicable, NOAA Weather Radio and similar systems broadcasting on the same frequencies. Some designs also allow the sending of text messages and pictures between similarly equipped units.
While jobsite and government radios are often rated in power output, consumer radios are frequently and controversially rated by range in miles or kilometers. Because of the line-of-sight propagation of UHF signals, experienced users consider such ratings to be wildly exaggerated, and some manufacturers have begun printing range ratings on the package based on terrain rather than simple power output.
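A rough radio-horizon estimate shows why those range claims are treated with skepticism. A commonly used approximation (assuming standard 4/3-earth refraction and no obstructions) is range_km ≈ 4.12 × (√h1 + √h2) with antenna heights in metres; the sketch below applies it to two handheld users.

```python
import math

# Rule-of-thumb radio horizon between two antennas, heights in metres.
# This is an approximation only; terrain and obstructions reduce it further.
def radio_horizon_km(h1_m: float, h2_m: float) -> float:
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

# Two handheld users holding radios at roughly 1.5 m:
print(round(radio_horizon_km(1.5, 1.5), 1), "km")   # about 10 km under ideal conditions
```

Even this idealized figure is well below many advertised ranges.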
While the bulk of personal walkie-talkie traffic is in the 27 MHz and 400-500 MHz area of the UHF spectrum, there are some units that use the "Part 15" 49 MHz band (shared with cordless phones, baby monitors, and similar devices) as well as the "Part 15" 900 MHz band; in the US at least, units in these bands do not require licenses as long as they adhere to FCC Part 15 power output rules. A company called TriSquare is, as of July 2007, marketing a series of walkie-talkies in the United States, based on frequency-hopping spread spectrum technology operating in this frequency range under the name eXRS (eXtreme Radio Service—despite the name, a proprietary design, not an official allocation of the US FCC). The spread-spectrum scheme used in eXRS radios allows up to 10 billion virtual "channels" and ensures private communications between two or more units.
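The general idea behind such frequency-hopping schemes is that two radios sharing the same "channel number" derive the same pseudo-random sequence of frequencies and hop together. The sketch below illustrates that idea generically; it is not TriSquare's actual eXRS algorithm, and the band edges and step size are assumptions for demonstration.

```python
import random

BAND_START_MHZ = 902.0   # US 900 MHz ISM band edges, used here for illustration
BAND_STOP_MHZ = 928.0
STEP_MHZ = 0.05          # assumed channel step for this sketch

def hop_sequence(shared_channel_number: int, hops: int):
    """Derive a reproducible pseudo-random hop list from a shared seed."""
    rng = random.Random(shared_channel_number)       # same seed -> same sequence
    n_slots = int((BAND_STOP_MHZ - BAND_START_MHZ) / STEP_MHZ)
    return [BAND_START_MHZ + rng.randrange(n_slots) * STEP_MHZ for _ in range(hops)]

print(hop_sequence(1234567890, 5))
print(hop_sequence(1234567890, 5) == hop_sequence(1234567890, 5))  # True: both ends agree
```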
Recreation
Low-power versions, exempt from licence requirements, are also popular children's toys such as the Fisher Price Walkie-Talkie for children illustrated in the top image on the right. Prior to the change of CB radio from licensed to "permitted by part" (FCC rules Part 95) status, the typical toy walkie-talkie available in North America was limited to 100 milliwatts of power on transmit and using one or two crystal-controlled channels in the 27 MHz citizens' band using amplitude modulation (AM) only. Later toy walkie-talkies operated in the 49 MHz band, some with frequency modulation (FM), shared with cordless phones and baby monitors. The lowest cost devices are very simple electronically (single-frequency, crystal-controlled, generally based on a simple discrete transistor circuit where "grown-up" walkie-talkies use chips), may employ superregenerative receivers, and may lack even a volume control, but they may nevertheless be elaborately decorated, often superficially resembling more "grown-up" radios such as FRS or public safety gear. Unlike more costly units, low-cost toy walkie-talkies may not have separate microphones and speakers; the receiver's speaker sometimes doubles as a microphone while in transmit mode.
An unusual feature, common on children's walkie-talkies but seldom available otherwise even on amateur models, is a "code key", that is, a button allowing the operator to transmit Morse code or similar tones to another walkie-talkie operating on the same frequency. Generally the operator depresses the PTT button and taps out a message using a Morse Code crib sheet attached as a sticker to the radio. However, as Morse Code has fallen out of wide use outside amateur radio circles, some such units either have a grossly simplified code label or no longer provide a sticker at all.
In addition, Family Radio Service UHF radios will sometimes be bought and used as toys, though they are not generally explicitly marketed as such (but see Hasbro's ChatNow line, which transmits both voice and digital data on the FRS band).
Smartphone apps & connected devices
A variety of mobile apps exist that mimic walkie-talkie/push-to-talk interaction. They are marketed as low-latency, asynchronous communication tools. The advantages touted over two-way voice calls include the asynchronous nature, which does not require the full attention of both users at once (much like SMS), and the use of voice over IP (VoIP), which means the service does not use minutes on a cellular plan.
Applications on the market that offer this walkie-talkie style interaction for audio include Voxer, Zello, Orion Labs, Motorola Wave, and HeyTell, among others.
Other smartphone-based walkie-talkie products are made by companies like goTenna, Fantom Dynamics and BearTooth, and offer a radio interface. Unlike mobile-data-dependent applications, these products work by pairing with an app on the user's smartphone and communicating over their own radio link.
Specialized uses
In addition to land mobile use, waterproof walkie-talkie designs are also used for marine VHF and aviation communications, especially on smaller boats and ultralight aircraft where mounting a fixed radio might be impractical or expensive. Often such units have switches providing quick access to emergency and information channels. They are also used in recreational UTVs to coordinate logistics and keep riders out of the dust, and are usually connected to an intercom and headsets.
Intrinsically safe walkie-talkies are often required in heavy industrial settings where the radio may be used around flammable vapors. This designation means that the knobs and switches in the radio are engineered to avoid producing sparks as they are operated.
Accessories
There are various accessories available for walkie-talkies, such as rechargeable batteries, drop-in chargers, multi-unit chargers for charging as many as six units at a time, and an audio accessory jack that can be used for headsets or speaker microphones. Newer models allow connection to wireless headsets via Bluetooth.
ITU classification
In line with the ITU Radio Regulations (article 1.73), a walkie-talkie is classified as a radio station / land mobile station.
See also
Mobile radio telephone
AN/PRC-6
MOTO Talk
Push to talk
Serval project
Signal Corps Radio
Survival radio
Vehicular communication systems
References
Footnotes
Notations
Further reading
Dunlap, Orrin E., Jr. Marconi: The man and his wireless. (Arno Press., New York: 1971)
Harlow, Alvin F., Old Waves and New Wires: The History of the Telegraph, Telephone, and Wireless. (Appleton-Century Co., New York: 1936)
Herrick, Clyde N., Radidselopments in Telecommunications 2nd Ed., (Prentice Hall Inc., New Jersey: 1977)
Martin, James. The Wired Society. (Prentice Hall Inc., New Jersey: 1978)
Silver, H. Ward. Two-Way Radios and Scanners for Dummies. (Wiley Publishing, Hoboken, NJ, 2005)
External links
SCR-300-A Technical Manual
U.S. Army Signal Corp Museum - exhibits and collections
Canadian inventions
Mobile telecommunications user equipment
Radio communications
Law enforcement equipment
20th-century inventions
Nero Multimedia Suite
Nero Multimedia Suite is a software suite for Microsoft Windows that is developed and marketed by Nero AG. Version 2017 of this product was released in October 2016.
Version differences
Since version 10, Nero has provided two variants of the suite – Classic and Platinum. The Platinum version includes the following additional functions:
Music recorder - to record MP3s from radio stations worldwide (since Nero 2016)
Video editing in Ultra HD (since version 2014) and in HEVC (since Nero 2017; only available in Nero Video, not in Nero Recode)
Additional video, transition and picture-in-picture effects
SecurDisc 4.0 technology with 256-bit encryption (since Nero 2017)
Additional film templates and disc menu templates
Blu-ray disc ripping and conversion
Blu-ray playback (not possible with Nero 2016, only with previous versions)
Included products
The following applications are included in Nero 2016:
Disc authoring and copying
Nero Burning ROM: an optical disc authoring program for CDs, DVDs, and Blu-ray discs by Nero AG and one of its first products. The product is also available separately. It includes several functions for audio format conversion, the creation of audio CDs, and more. As of version 2015, the Nero AirBurn app and Nero Burning ROM enable users to burn media from their mobile devices.
Nero Express: a simplified version of Nero Burning ROM targeted at novices. The Nero AirBurn app does not work with Nero Express.
Video editing and creating of video discs
Nero Video: a software tool for creating and editing videos. It combines editing tools (including the addition of effects, music and themes) and video export as well as DVD and Blu-ray authoring. The product provides simple editing functions for novices (in Express mode) as well as advanced video editing (in Advanced mode). It is available separately as a download.
Data conversion
Nero Recode: converts video and audio files as well as non-copyright-protected video DVDs and Blu-Rays into multiple video formats.
Nero Disc to Device is an easy-to-use application for converting video discs and audio CDs for playback on mobile devices or in the cloud.
Nero MediaHome enables users to manage and play their images, videos and music files. Alongside ripping and the creation of playlists and slideshows, MediaHome includes features that let users sort their media, including tagging, face recognition in photos, geolocation support and manual geotagging for photos and videos. Streaming to TVs and home media players is also included. Starting with version 2015, users can stream media directly to their mobile devices (iOS, Android, Amazon) with the Nero MediaHome Receiver app. Nero MediaHome is also available with limited functionality as a free download from nero.com.
Nero Media Browser: simple tools for retrieving media content. It makes the Nero MediaHome media library also available to non-Nero programs using simple drag and drop.
Nero Blu-ray Player (not included in Nero 2016 Platinum, only in the previous Platinum version): a media player for Blu-ray discs
Data rescue
Nero RescueAgent: helps users to recover files from damaged or partially unreadable media (discs, hard disks, USB thumb drives and flash drives) and to restore files that were accidentally deleted.
Nero BackItUp: a system backup utility. It was an integral part of the Nero 6, 7, 8, 10 and 11 suites. With the introduction of Nero 9, Nero BackItUp 4 became a standalone backup product, while its successor, Nero BackItUp 5, was the main application of Nero BackItUp & Burn. Later, the product was re-integrated into Nero Multimedia Suite 10 and is now included in Nero 11. The product can also be purchased separately.
Free Tools
The following components were part of the suite until Version 11. They are now available for download separately and are free of charge.
Nero CoverDesigner: enables users to design and print disc covers and labels
Nero BurnRights: enables administrators to provide other users with access to drives
Nero DiscSpeed: a disc speed measurement and performance diagnostics tool, including benchmarking and surface error scanning.
Nero WaveEditor: an audio editing tool capable of recording, editing, filtering, and exporting music files.
Nero SoundTrax: a tool for recording, mixing and digitizing music tracks.
Nero InCD Reader 5: this tool enables users to read CDs created with InCD
Nero SecurDisc Viewer: this tool enables users to read discs created with SecurDisc.
Nero InfoTool: Detailed system information browser
Notes and references
External links
Official Website
Optical disc authoring software
Shareware
Windows CD/DVD writing software
Linux CD/DVD writing software
Electronic voting
Electronic voting (also known as e-voting) is voting that uses electronic means to either aid or take care of casting and counting votes.
Depending on the particular implementation, e-voting may use standalone electronic voting machines (also called EVM) or computers connected to the Internet. It may encompass a range of Internet services, from basic transmission of tabulated results to full-function online voting through common connectable household devices. The degree of automation may be limited to marking a paper ballot, or may be a comprehensive system of vote input, vote recording, data encryption and transmission to servers, and consolidation and tabulation of election results.
A worthy e-voting system must perform most of these tasks while complying with a set of standards established by regulatory bodies, and must also be capable of dealing successfully with strong requirements associated with security, accuracy, integrity, swiftness, privacy, auditability, accessibility, cost-effectiveness, scalability and ecological sustainability.
Electronic voting technology can include punched cards, optical scan voting systems and specialized voting kiosks (including self-contained direct-recording electronic voting systems, or DRE). It can also involve transmission of ballots and votes via telephones, private computer networks, or the Internet.
In general, two main types of e-voting can be identified:
e-voting which is physically supervised by representatives of governmental or independent electoral authorities (e.g. electronic voting machines located at polling stations);
remote e-voting via the Internet (also called i-voting) where the voter submits his or her vote electronically to the election authorities, from any location.
Benefits
Electronic voting technology is intended to speed the counting of ballots, reduce the cost of paying staff to count votes manually, and provide improved accessibility for disabled voters. In the long term, expenses are also expected to decrease.
Results can be reported and published faster.
Voters save time and cost by being able to vote independently from their location. This may increase overall voter turnout. The citizen groups benefiting most from electronic elections are the ones living abroad, citizens living in rural areas far away from polling stations and the disabled with mobility impairments.
Concerns
It has been demonstrated that as voting systems become more complex and include software, different methods of election fraud become possible. Others also challenge the use of electronic voting from a theoretical point of view, arguing that humans are not equipped for verifying operations occurring within an electronic machine and that because people cannot verify these operations, the operations cannot be trusted. Furthermore, some computing experts have argued for the broader notion that people cannot trust any programming they did not author.
The use of electronic voting in elections remains a contentious issue. Some countries, such as the Netherlands and Germany, have stopped using it after it was shown to be unreliable, while the Indian Election Commission recommends it. The involvement of numerous stakeholders, including companies that manufacture these machines as well as political parties that stand to gain from rigging, complicates this further.
Critics of electronic voting, including security analyst Bruce Schneier, note that "computer security experts are unanimous on what to do (some voting experts disagree, but it is the computer security experts who need to be listened to; the problems here are with the computer, not with the fact that the computer is being used in a voting application)... DRE machines must have a voter-verifiable paper audit trails... Software used on DRE machines must be open to public scrutiny" to ensure the accuracy of the voting system. Verifiable ballots are necessary because computers can and do malfunction, and because voting machines can be compromised.
Many insecurities have been found in commercial voting machines, such as the use of a default administration password. Cases have also been reported of machines making unpredictable, inconsistent errors. Key issues with electronic voting are therefore the openness of a system to public examination by outside experts, the creation of an authenticatable paper record of votes cast, and a chain of custody for records. There is also a risk that the results produced by commercial voting machines could be changed by the company providing the machines, and there is no guarantee that results are collected and reported accurately.
There has been contention, especially in the United States, that electronic voting, especially DRE voting, could facilitate electoral fraud and may not be fully auditable. In addition, electronic voting has been criticised as unnecessary and expensive to introduce. While countries like India continue to use electronic voting, several countries have cancelled e-voting systems or decided against a large-scale rollout, notably the Netherlands, Ireland, Germany and the United Kingdom due to issues in reliability of EVMs.
Moreover, people without internet access and/or the skills to use it are excluded from the service. The so-called digital divide describes the gap between those who have access to the internet and those who do not. The gap differs depending on the country, or even between regions within a country. This concern is expected to become less important in the future as the number of internet users continues to increase.
The main psychological issue is trust. Voters fear that their vote could be changed by a virus on their PC or during transmission to governmental servers.
The expense of installing an electronic voting system is high, and for some governments may be high enough to deter investment. This concern is amplified when it is uncertain whether electronic voting is a long-term solution.
Types of system
Electronic voting systems for electorates have been in use since the 1960s, when punched card systems debuted. Their first widespread use was in the USA, where 7 counties switched to this method for the 1964 presidential election. The newer optical scan voting systems allow a computer to count a voter's mark on a ballot. DRE voting machines, which collect and tabulate votes in a single machine, are used by all voters in all elections in Brazil and India, and also on a large scale in Venezuela and the United States. They were used on a large scale in the Netherlands but were decommissioned after public concerns. In Brazil, the use of DRE voting machines has been associated with a decrease in error-ridden and uncounted votes, promoting a larger enfranchisement of mainly less educated people in the electoral process, and with a shift in government spending toward public healthcare, which is particularly beneficial to the poor.
Internet voting systems have gained popularity and have been used for government elections and referendums in Estonia and Switzerland, as well as municipal elections in Canada and party primary elections in the United States and France. Internet voting has also been widely used in sub-national participatory budgeting processes, including in Brazil, France, the United States, Portugal and Spain.
There are also hybrid systems that include an electronic ballot marking device (usually a touch screen system similar to a DRE) or other assistive technology to print a voter verified paper audit trail, then use a separate machine for electronic tabulation.
Paper-based electronic voting system
Paper-based voting systems originated as a system where votes are cast and counted by hand, using paper ballots. With the advent of electronic tabulation came systems where paper cards or sheets could be marked by hand, but counted electronically. These systems included punched card voting, marksense and later digital pen voting systems.
These systems can include a ballot marking device or electronic ballot marker that allows voters to make their selections using an electronic input device, usually a touch screen system similar to a DRE. Systems including a ballot marking device can incorporate different forms of assistive technology. In 2004, Open Voting Consortium demonstrated the 'Dechert Design', a General Public License open source paper ballot printing system with open source bar codes on each ballot.
Direct-recording electronic (DRE) voting system
A direct-recording electronic (DRE) voting machine records votes by means of a ballot display provided with mechanical or electro-optical components that can be activated by the voter (typically buttons or a touchscreen); that processes data with computer software; and that records voting data and ballot images in memory components. After the election it produces a tabulation of the voting data stored in a removable memory component and as a printed copy. The system may also provide a means for transmitting individual ballots or vote totals to a central location for consolidating and reporting results from precincts at the central location. These systems use a precinct count method that tabulates ballots at the polling place. They typically tabulate ballots as they are cast and print the results after the close of polling.
In 2002, in the United States, the Help America Vote Act mandated that one handicapped accessible voting system be provided per polling place, which most jurisdictions have chosen to satisfy with the use of DRE voting machines, some switching entirely over to DRE. In 2004, 28.9% of the registered voters in the United States used some type of direct recording electronic voting system, up from 7.7% in 1996.
In 2004, India adopted electronic voting machines (EVMs) for its parliamentary elections, with 380 million voters casting their ballots using more than one million voting machines. The Indian EVMs are designed and developed by two government-owned defence equipment manufacturing units, Bharat Electronics Limited (BEL) and Electronics Corporation of India Limited (ECIL). Both systems are identical, and are developed to the specifications of the Election Commission of India. The system is a set of two devices running on 7.5 volt batteries. One device, the voting unit, is used by the voter, and the other device, called the control unit, is operated by the electoral officer. The two units are connected by a five-metre cable. The voting unit has a blue button for each candidate. The unit can hold 16 candidates, but up to four units can be chained to accommodate 64 candidates. The control unit has three buttons on the surface: one button to release a single vote, one button to see the total number of votes cast so far, and one button to close the election process. The result button is hidden and sealed; it cannot be pressed unless the close button has already been pressed. A controversy arose when a voting machine was shown malfunctioning in the Delhi assembly. On 9 April 2019, the Supreme Court ordered the ECI to increase the voter-verified paper audit trail (VVPAT) slip count to five randomly selected EVMs per assembly constituency, which means the ECI has to count the VVPAT slips of 20,625 EVMs before it certifies the final election results.
Public network DRE voting system
A public network DRE voting system is an election system that uses electronic ballots and transmits vote data from the polling place to another location over a public network. Vote data may be transmitted as individual ballots as they are cast, periodically as batches of ballots throughout the election day, or as one batch at the close of voting. This includes Internet voting as well as telephone voting.
Public network DRE voting system can utilize either precinct count or central count method. The central count method tabulates ballots from multiple precincts at a central location.
Internet voting can use remote locations (voting from any Internet capable computer) or can use traditional polling locations with voting booths consisting of Internet connected voting systems.
Corporations and organizations routinely use Internet voting to elect officers and board members and for other proxy elections. Internet voting systems have been used privately in many modern nations and publicly in the United States, the UK, Switzerland and Estonia. In Switzerland, where it is already an established part of local referendums, voters get their passwords to access the ballot through the postal service. Most voters in Estonia can cast their vote in local and parliamentary elections, if they want to, via the Internet, as most of those on the electoral roll have access to an e-voting system, the largest run by any European Union country. It has been made possible because most Estonians carry a national identity card equipped with a computer-readable microchip and it is these cards which they use to get access to the online ballot. All a voter needs is a computer, an electronic card reader, their ID card and its PIN, and they can vote from anywhere in the world. Estonian e-votes can only be cast during the days of advance voting. On election day itself people have to go to polling stations and fill in a paper ballot.
Online voting
Security experts have found security problems in every attempt at online voting, including systems in Australia, Estonia, Switzerland, Russia, and the United States.
It has been argued that political parties whose support is concentrated among less affluent voters, who are less likely to be familiar with the Internet, may suffer in elections due to e-voting, which tends to increase voting among the upper and middle classes. It is unclear whether narrowing the digital divide would promote equal voting opportunities for people across various social, economic, and ethnic backgrounds. In the long run, this is contingent not only on internet accessibility but also on people's level of familiarity with the Internet.
The effects of internet voting on overall voter turnout are unclear. A 2017 study of online voting in two Swiss cantons found that it had no effect on turnout, and a 2009 study of Estonia's national election found similar results. To the contrary, however, the introduction of online voting in municipal elections in the Canadian province of Ontario resulted in an average increase in turnout of around 3.5 percentage points. Similarly, a further study of the Swiss case found that while online voting did not increase overall turnout, it did induce some occasional voters to participate who would have abstained were online voting not an option.
A paper on "remote electronic voting and turnout in the Estonian 2007 parliamentary elections" showed that rather than eliminating inequalities, e-voting might have enhanced the digital divide between higher and lower socioeconomic classes. People who lived at greater distances from polling areas voted at higher levels with this service available. The 2007 Estonian elections yielded a higher voter turnout from those who lived in higher-income regions and who had received formal education. The Estonian Internet voting system nevertheless proved to be more cost-efficient than the other voting channels offered in the 2017 local elections.
Electronic voting is often perceived as being favored mainly by a younger demographic, such as Generation X and Y voters. However, in recent elections about a quarter of e-votes were cast by older voters, such as individuals over the age of 55, and about 20% of e-votes came from voters between the ages of 45 and 54. This suggests that e-voting is not supported exclusively by the younger generations but is also finding some popularity among Generation X and Baby Boomer voters.
Online voting is widely used privately for shareholder votes. The election management companies do not promise accuracy or privacy; in fact, one company uses an individual's past votes for research and to target advertisements.
Analysis
Electronic voting systems may offer advantages compared to other voting techniques. An electronic voting system can be involved in any one of a number of steps in the setup, distributing, voting, collecting, and counting of ballots, and thus may or may not introduce advantages into any of these steps. Potential disadvantages exist as well including the potential for flaws or weakness in any electronic component.
Charles Stewart of the Massachusetts Institute of Technology estimates that 1 million more ballots were counted in the 2004 USA presidential election than in 2000 because electronic voting machines detected votes that paper-based machines would have missed.
In May 2004 the U.S. Government Accountability Office released a report titled "Electronic Voting Offers Opportunities and Presents Challenges", analyzing both the benefits and concerns created by electronic voting. A second report was released in September 2005 detailing some of the concerns with electronic voting, and ongoing improvements, titled "Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed".
Electronic ballots
Electronic voting systems may use electronic ballots to store votes in computer memory. Systems which use them exclusively are called DRE voting systems. When electronic ballots are used there is no risk of exhausting the supply of ballots. Additionally, these electronic ballots remove the need for printing of paper ballots, a significant cost. When administering elections in which ballots are offered in multiple languages (as is required for some areas of the United States by the National Voting Rights Act of 1965), electronic ballots can be programmed to provide ballots in multiple languages for a single machine. The advantage with respect to ballots in different languages appears to be unique to electronic voting. For example, King County, Washington's demographics require it under U.S. federal election law to provide ballot access in Chinese. With any type of paper ballot, the county has to decide how many Chinese-language ballots to print, how many to make available at each polling place, and so on. Any strategy that can assure that Chinese-language ballots will be available at all polling places is certain, at the very least, to result in a significant number of wasted ballots. (The situation with lever machines would be even worse than with paper: the only apparent way to reliably meet the need would be to set up a Chinese-language lever machine at each polling place, few of which would be used at all.)
Critics argue the need for extra ballots in any language can be mitigated by providing a process to print ballots at voting locations. They argue further, the cost of software validation, compiler trust validation, installation validation, delivery validation and validation of other steps related to electronic voting is complex and expensive, thus electronic ballots are not guaranteed to be less costly than printed ballots.
Accessibility
Electronic voting machines can be made fully accessible for persons with disabilities. Punched card and optical scan machines are not fully accessible for the blind or visually impaired, and lever machines can be difficult for voters with limited mobility and strength. Electronic machines can use headphones, sip and puff, foot pedals, joy sticks and other adaptive technology to provide the necessary accessibility.
Organizations such as the Verified Voting Foundation have criticized the accessibility of electronic voting machines and advocate alternatives. Some disabled voters (including the visually impaired) could use a tactile ballot, a ballot system using physical markers to indicate where a mark should be made, to vote a secret paper ballot. These ballots can be designed identically to those used by other voters. However, other disabled voters (including voters with dexterity disabilities) could be unable to use these ballots.
Cryptographic verification
The concept of election verifiability through cryptographic solutions has emerged in the academic literature to introduce transparency and trust in electronic voting systems. It allows voters and election observers to verify that votes have been recorded, tallied and declared correctly, in a manner independent from the hardware and software running the election. Three aspects of verifiability are considered: individual, universal, and eligibility. Individual verifiability allows a voter to check that her own vote is included in the election outcome, universal verifiability allows voters or election observers to check that the election outcome corresponds to the votes cast, and eligibility verifiability allows voters and observers to check that each vote in the election outcome was cast by a uniquely registered voter.
Voter intent
Electronic voting machines are able to provide immediate feedback to the voter detecting such possible problems as undervoting and overvoting which may result in a spoiled ballot. This immediate feedback can be helpful in successfully determining voter intent.
Transparency
It has been alleged by groups such as the UK-based Open Rights Group that a lack of testing, inadequate audit procedures, and insufficient attention given to system or process design with electronic voting leaves "elections open to error and fraud".
In 2009, the Federal Constitutional Court of Germany found that when using voting machines the "verification of the result must be possible by the citizen reliably and without any specialist knowledge of the subject." The Nedap DRE computers used until then did not fulfill that requirement. The decision did not ban electronic voting as such, but requires all essential steps in elections to be subject to public examinability.
In 2013, the California Association of Voting Officials was formed to maintain efforts toward publicly owned, General Public License open source voting systems.
Coercion evidence
In 2013, researchers from Europe proposed that electronic voting systems should be coercion-evident: there should be public evidence of the amount of coercion that took place in a particular election. An internet voting system called "Caveat Coercitor" shows how coercion evidence in voting systems can be achieved.
Audit trails
A fundamental challenge with any voting machine is to produce evidence that the votes were recorded as cast and tabulated as recorded. Election results produced by voting systems that rely on voter-marked paper ballots can be verified with manual hand counts (either valid sampling or full recounts). Paperless ballot voting systems must support auditability in different ways. An independently auditable system, sometimes called an Independent Verification, can be used in recounts or audits. These systems can include the ability for voters to verify how their votes were cast or enable officials to verify that votes were tabulated correctly.
A discussion draft argued by researchers at the National Institute of Standards and Technology (NIST) states, "Simply put, the DRE architecture's inability to provide for independent audits of its electronic records makes it a poor choice for an environment in which detecting errors and fraud is important." The report does not represent the official position of NIST, and misinterpretations of the report have led NIST to explain that "Some statements in the report have been misinterpreted. The draft report includes statements from election officials, voting system vendors, computer scientists and other experts in the field about what is potentially possible in terms of attacks on DREs. However, these statements are not report conclusions."
Various technologies can be used to assure DRE voters that their votes were cast correctly, and allow officials to detect possible fraud or malfunction, and to provide a means to audit the tabulated results. Some systems include technologies such as cryptography (visual or mathematical), paper (kept by the voter or verified and left with election officials), audio verification, and dual recording or witness systems (other than with paper).
Dr. Rebecca Mercuri, the creator of the Voter Verified Paper Audit Trail (VVPAT) concept (as described in her Ph.D. dissertation in October 2000 on the basic voter verifiable ballot system), proposes to answer the auditability question by having the voting machine print a paper ballot or other paper facsimile that can be visually verified by the voter before being entered into a secure location. Subsequently, this is sometimes referred to as the "Mercuri method." To be truly voter-verified, the record itself must be verified by the voter and able to be done without assistance, such as visually or audibly. If the voter must use a bar-code scanner or other electronic device to verify, then the record is not truly voter-verifiable, since it is actually the electronic device that is verifying the record for the voter. VVPAT is the form of Independent Verification most commonly found in elections in the United States and other countries such as Venezuela.
End-to-end auditable voting systems can provide the voter with a receipt that can be taken home. This receipt does not allow voters to prove to others how they voted, but it does allow them to verify that the system detected their vote correctly. End-to-end (E2E) systems include Punchscan, ThreeBallot and Prêt à Voter. Scantegrity is an add-on that extends current optical scan voting systems with an E2E layer. The city of Takoma Park, Maryland used Scantegrity II for its November, 2009 election.
Systems that allow the voter to prove how they voted are never used in U.S. public elections, and are outlawed by most state constitutions. The primary concerns with this solution are voter intimidation and vote selling.
An audit system can be used in measured random recounts to detect possible malfunction or fraud. With the VVPAT method, the paper ballot is often treated as the official ballot of record. In this scenario, the ballot is primary and the electronic records are used only for an initial count. In any subsequent recounts or challenges, the paper, not the electronic ballot, would be used for tabulation. Whenever a paper record serves as the legal ballot, that system will be subject to the same benefits and concerns as any paper ballot system.
To successfully audit any voting machine, a strict chain of custody is required.
The solution was first demonstrated (New York City, March 2001) and used (Sacramento, California, 2002) by AVANTE International Technology, Inc. In 2004, Nevada was the first state to successfully implement a DRE voting system that printed a paper record. The $9.3 million voting system provided by Sequoia Voting Systems included more than 2,600 AVC EDGE touchscreen DREs equipped with the VeriVote VVPAT component.
The new systems, implemented under the direction of then Secretary of State Dean Heller, replaced what were largely punched card voting systems and were chosen after feedback was solicited from the community through town hall meetings and input from the Nevada Gaming Control Board.
Hardware
Inadequately secured hardware can be subject to physical tampering. Some critics, such as the group "Wij vertrouwen stemcomputers niet" ("We do not trust voting machines"), charge that, for instance, foreign hardware could be inserted into the machine, or between the user and the central mechanism of the machine itself, using a man in the middle attack technique, and thus even sealing DRE machines may not be sufficient protection. This claim is countered by the position that review and testing procedures can detect fraudulent code or hardware, if such things are present, and that a thorough, verifiable chain of custody would prevent the insertion of such hardware or software. Security seals are commonly employed in an attempt to detect tampering, but testing by Argonne National Laboratory and others demonstrates that existing seals can usually be quickly defeated by a trained person using low-tech methods.
Software
Security experts, such as Bruce Schneier, have demanded that voting machine source code should be publicly available for inspection. Others have also suggested publishing voting machine software under a free software license as is done in Australia.
Testing and certification
One method to detect errors with voting machines is parallel testing, which is conducted on election day with randomly selected machines. The ACM published a study showing that, to change the outcome of the 2000 U.S. presidential election, only 2 votes in each precinct would have needed to be changed.
Cost
The cost of having electronic machines record the voter's choices, print a ballot and scan the ballots to tally results is higher than the cost of printing blank ballots, having voters mark them directly (with machine marking only when voters want it) and scanning the ballots to tally results, according to studies in Georgia, New York and Pennsylvania.
Popular culture
In the 2006 film Man of the Year, starring Robin Williams, the character played by Williams, the comedic host of a political talk show, wins the election for President of the United States when a software error in the electronic voting machines produced by the fictional manufacturer Delacroy causes votes to be tallied inaccurately.
In Runoff, a 2007 novel by Mark Coggins, a surprising showing by the Green Party candidate in a San Francisco Mayoral election forces a runoff between him and the highly favored establishment candidate—a plot line that closely parallels the actual results of the 2003 election. When the private-eye protagonist of the book investigates at the behest of a powerful Chinatown businesswoman, he determines that the outcome was rigged by someone who defeated the security on the city's newly installed e-voting system.
"Hacking Democracy" is a 2006 documentary film shown on HBO. Filmed over three years, it documents American citizens investigating anomalies and irregularities with electronic voting systems that occurred during America's 2000 and 2004 elections, especially in Volusia County, Florida. The film investigates the flawed integrity of electronic voting machines, particularly those made by Diebold Election Systems and culminates in the hacking of a Diebold election system in Leon County, Florida.
The central conflict in the MMO video game Infantry resulted from the global institution of direct democracy through the use of personal voting devices sometime in the 22nd century AD. The practice gave rise to a 'voting class' of citizens composed mostly of homemakers and retirees who tended to be at home all day. Because they had the most free time to participate in voting, their opinions ultimately came to dominate politics.
Electronic voting manufacturers
AccuPoll
Bharat Electronics Limited (India)
Dominion Voting Systems (Canada)
Electronics Corporation of India Ltd
ES&S (United States)
Hart InterCivic (United States)
Nedap (Netherlands)
Premier Election Solutions (formerly Diebold Election Systems) (United States)
Safevote
Sequoia Voting Systems (United States)
Scytl (Spain)
Smartmatic
Academic efforts
Bingo Voting
DRE-i and DRE-ip
Prêt à Voter
Punchscan
See also
Certification of voting machines
E-democracy
Electoral fraud
Soft error
Vote counting system
Voting machine
References
External links
Electronic Vote around the World – Smartmatic
Election Assistance Commission
Vote.NIST.gov the National Institute of Standards and Technology Help America Vote Act page
An Electronic Voting Case Study in KCA University, Kenya
The Election Technology Library research list, a comprehensive list of research relating to technology use in elections
E-Voting information from ACE Project
How do we vote in India with Electronic Voting machine
NPR summary of current technology status in the states of the U.S., as of May 2008
Internet Voting in Estonia
Progetto Salento eVoting, a project for an e-voting test in Melpignano and Martignano (Lecce – Italy) designed by Prof. Marco Mancarella, University of Salento
a review of existing electronic voting systems and its verification systems in supervised environments
Open Counting
Systems Behind E-Voting
VoteBox(tm) UK Online Voting
Elections
Electronic voting
Voting |
372242 | https://en.wikipedia.org/wiki/Radical%20of%20a%20ring | Radical of a ring | In ring theory, a branch of mathematics, a radical of a ring is an ideal of "not-good" elements of the ring.
The first example of a radical was the nilradical introduced by Köthe (1930), based on a suggestion of Wedderburn (1908). In the next few years several other radicals were discovered, of which the most important example is the Jacobson radical. The general theory of radicals was defined independently by Amitsur and Kurosh.
Definitions
In the theory of radicals, rings are usually assumed to be associative, but need not be commutative and need not have a multiplicative identity. In particular, every ideal in a ring is also a ring.
A radical class (also called radical property or just radical) is a class σ of rings possibly without identities, such that:
the homomorphic image of a ring in σ is also in σ
every ring R contains an ideal S(R) in σ that contains every other ideal of R that is in σ
S(R/S(R)) = 0. The ideal S(R) is called the radical, or σ-radical, of R.
The study of such radicals is called torsion theory.
For any class δ of rings, there is a smallest radical class Lδ containing it, called the lower radical of δ. The operator L is called the lower radical operator.
A class of rings is called regular if every non-zero ideal of a ring in the class has a non-zero image in the class. For every regular class δ of rings, there is a largest radical class Uδ, called the upper radical of δ, having zero intersection with δ. The operator U is called the upper radical operator.
A class of rings is called hereditary if every ideal of a ring in the class also belongs to the class.
Examples
The Jacobson radical
Let R be any ring, not necessarily commutative. The Jacobson radical of R is the intersection of the annihilators of all simple right R-modules.
There are several equivalent characterizations of the Jacobson radical, such as:
J(R) is the intersection of the regular maximal right (or left) ideals of R.
J(R) is the intersection of all the right (or left) primitive ideals of R.
J(R) is the maximal right (or left) quasi-regular right (resp. left) ideal of R.
As with the nilradical, we can extend this definition to arbitrary two-sided ideals I by defining J(I) to be the preimage of J(R/I) under the projection map R → R/I.
If R is commutative, the Jacobson radical always contains the nilradical. If the ring R is a finitely generated Z-algebra, then the nilradical is equal to the Jacobson radical, and more generally: the radical of any ideal I will always be equal to the intersection of all the maximal ideals of R that contain I. This says that R is a Jacobson ring.
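As a brief worked illustration (a standard example, not taken from this article's sources): in the finite commutative ring $\mathbb{Z}/12\mathbb{Z}$, a finitely generated $\mathbb{Z}$-algebra, the maximal ideals are the images of $2\mathbb{Z}$ and $3\mathbb{Z}$, so

\[
J(\mathbb{Z}/12\mathbb{Z}) = 2(\mathbb{Z}/12\mathbb{Z}) \cap 3(\mathbb{Z}/12\mathbb{Z}) = 6(\mathbb{Z}/12\mathbb{Z}),
\]

which coincides with the nilradical, since $6^2 = 36 \equiv 0 \pmod{12}$, in agreement with the statement above.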
The Baer radical
The Baer radical of a ring is the intersection of the prime ideals of the ring R. Equivalently it is the smallest semiprime ideal in R. The Baer radical is the lower radical of the class of nilpotent rings. Also called the "lower nilradical" (and denoted Nil∗R), the "prime radical", and the "Baer-McCoy radical". Every element of the Baer radical is nilpotent, so it is a nil ideal.
For commutative rings, this is just the nilradical and closely follows the definition of the radical of an ideal.
The upper nil radical or Köthe radical
The sum of the nil ideals of a ring R is the upper nilradical Nil*R or Köthe radical and is the unique largest nil ideal of R. Köthe's conjecture asks whether any left nil ideal is in the nilradical.
Singular radical
An element r of a (possibly non-commutative) ring is called left singular if it annihilates an essential left ideal; that is, r is left singular if Ir = 0 for some essential left ideal I. The set of left singular elements of a ring R is a two-sided ideal, called the left singular ideal, and is denoted Z(R). The ideal N of R such that N/Z(R) is the left singular ideal of R/Z(R) is denoted Z2(R) and is called the singular radical or the Goldie torsion of R. The singular radical contains the prime radical (the nilradical in the case of commutative rings) but may properly contain it, even in the commutative case. However, the singular radical of a Noetherian ring is always nilpotent.
The Levitzki radical
The Levitzki radical is defined as the largest locally nilpotent ideal, analogous to the Hirsch–Plotkin radical in the theory of groups. If the ring is Noetherian, then the Levitzki radical is itself a nilpotent ideal, and so is the unique largest left, right, or two-sided nilpotent ideal.
The Brown–McCoy radical
The Brown–McCoy radical (called the strong radical in the theory of Banach algebras) can be defined in any of the following ways:
the intersection of the maximal two-sided ideals
the intersection of all maximal modular ideals
the upper radical of the class of all simple rings with identity
The Brown–McCoy radical is studied in much greater generality than associative rings with 1.
The von Neumann regular radical
A von Neumann regular ring is a ring A (possibly non-commutative without identity) such that for every a there is some b with a = aba. The von Neumann regular rings form a radical class. It contains every matrix ring over a division algebra, but contains no nil rings.
The Artinian radical
The Artinian radical is usually defined for two-sided Noetherian rings as the sum of all right ideals that are Artinian modules. The definition is left-right symmetric, and indeed produces a two-sided ideal of the ring. This radical is important in the study of Noetherian rings.
See also
Related uses of radical that are not radicals of rings:
Radical of a module
Kaplansky radical
Radical of a bilinear form
References
Ideals (ring theory)
Ring theory |
372656 | https://en.wikipedia.org/wiki/Eavesdropping | Eavesdropping | Eavesdropping is the act of secretly or stealthily listening to the private conversation or communications of others without their consent in order to gather information.
Etymology
The verb eavesdrop is a back-formation from the noun eavesdropper ("a person who eavesdrops"), which was formed from the related noun eavesdrop ("the dripping of water from the eaves of a house; the ground on which such water falls").
An eavesdropper was someone who would hang from the eave of a building so as to hear what is said within. The PBS documentaries Inside the Court of Henry VIII (April 8, 2015) and Secrets of Henry VIII’s Palace (June 30, 2013) include segments that display and discuss "eavedrops", carved wooden figures Henry VIII had built into the eaves (overhanging edges of the beams in the ceiling) of Hampton Court to discourage unwanted gossip or dissension from the King's wishes and rule, to foment paranoia and fear, and demonstrate that everything said there was being overheard; literally, that the walls had ears.
Techniques
Eavesdropping vectors include telephone lines, cellular networks, email, and other methods of private instant messaging. VoIP communications software is also vulnerable to electronic eavesdropping via infections such as trojans.
Network attacks
Network eavesdropping is a network layer attack that focuses on capturing small packets from the network transmitted by other computers and reading the data content in search of any type of information. This type of network attack is generally one of the most effective when no encryption services are used. It is also linked to the collection of metadata.
See also
Cellphone surveillance
Computer surveillance
ECHELON
Espionage
Fiber tapping
Katz v. United States (1967)
Global surveillance disclosures (2013–present)
Keystroke logging
Listening station
Magic (cryptography)
Man-in-the-middle attack
Mass surveillance
NSA warrantless surveillance controversy (December 2005 – 2006)
Opportunistic encryption
Party line
People watching
Privacy
Secure communication
Speke Hall, containing a physical eavesdrop for listening to people waiting at the door
Surveillance
Telephone tapping
Ultra
Covert listening device
References
External links
Espionage techniques
Fiction
Plot (narrative) |
374636 | https://en.wikipedia.org/wiki/Adobe%20ColdFusion | Adobe ColdFusion | Adobe ColdFusion is a commercial rapid web-application development computing platform created by J. J. Allaire in 1995. (The programming language used with that platform is also commonly called ColdFusion, though it is more accurately known as CFML.) ColdFusion was originally designed to make it easier to connect simple HTML pages to a database. By version 2 (1996), it became a full platform that included an IDE in addition to a full scripting language.
Overview
One of the distinguishing features of ColdFusion is its associated scripting language, ColdFusion Markup Language (CFML). CFML compares to the scripting components of ASP, JSP, and PHP in purpose and features, but its tag syntax more closely resembles HTML, while its script syntax resembles JavaScript. ColdFusion is often used synonymously with CFML, but there are additional CFML application servers besides ColdFusion, and ColdFusion supports programming languages other than CFML, such as server-side Actionscript and embedded scripts that can be written in a JavaScript-like language known as CFScript.
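For illustration, the two minimal fragments below (not taken from the product documentation) produce the same output, the first using the HTML-like tag syntax and the second using the JavaScript-like CFScript syntax:

<cfset greeting = "Hello, world">
<cfoutput>#greeting#</cfoutput>

<cfscript>
    greeting = "Hello, world";
    writeOutput(greeting);
</cfscript>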
Originally a product of Allaire and released on July 2, 1995, ColdFusion was developed by brothers Joseph J. Allaire and Jeremy Allaire. In 2001 Allaire was acquired by Macromedia, which in turn was acquired by Adobe Systems Inc in 2005.
ColdFusion is most often used for data-driven websites or intranets, but can also be used to generate remote services such as REST services, WebSockets, SOAP web services or Flash remoting. It is especially well suited as the server-side technology for client-side Ajax.
ColdFusion can also handle asynchronous events such as SMS and instant messaging via its gateway interface, available in ColdFusion MX 7 Enterprise Edition.
Main features
ColdFusion provides a number of additional features out of the box. Main features include:
Simplified database access
Client and server cache management
Client-side code generation, especially for form widgets and validation
Conversion from HTML to PDF
Data retrieval from common enterprise systems such as Active Directory, LDAP, SMTP, POP, HTTP, FTP, Microsoft Exchange Server and common data formats such as RSS and Atom
File indexing and searching service based on Apache Solr
GUI administration
Server, application, client, session, and request scopes
XML parsing, querying (XPath), validation and transformation (XSLT)
Server clustering
Task scheduling
Graphing and reporting
Simplified file manipulation including raster graphics (and CAPTCHA) and zip archives (introduction of video manipulation is planned in a future release)
Simplified web service implementation (with automated WSDL generation / transparent SOAP handling for both creating and consuming services - as an example, ASP.NET has no native equivalent for <CFINVOKE WEBSERVICE="http://host/tempconf.cfc?wsdl" METHOD="Celsius2Fahrenheit" TEMP="#tempc#" RETURNVARIABLE="tempf">)
Other implementations of CFML offer similar or enhanced functionality, such as running in a .NET environment or image manipulation.
The engine was written in C and featured, among other things, a built-in scripting language (CFScript), plugin modules written in Java, and a syntax very similar to HTML. The equivalent of an HTML element, a ColdFusion tag begins with the letters "cf" followed by a name indicating the tag's function; for example, <cfoutput> begins the output of variables or other content.
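As a hedged sketch of this HTML-like, data-driven style (the datasource "mydsn" and the table "users" are hypothetical, not taken from the article), a template might query a database and render the results as an HTML list:

<cfquery name="people" datasource="mydsn">
    SELECT name FROM users
</cfquery>
<ul>
<cfoutput query="people">
    <li>#name#</li>
</cfoutput>
</ul>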
In addition to CFScript and plugins (as described), CFStudio provided a design platform with a WYSIWYG display. In addition to ColdFusion, CFStudio also supported syntax in other languages popular for backend programming, such as Perl. In addition to making backend functionality easily available to the non-programmer, ColdFusion (version 4.0 and forward in particular) integrated easily with the Apache Web Server and with Internet Information Services.
Other features
All versions of ColdFusion prior to 6.0 were written using Microsoft Visual C++. This meant that ColdFusion was largely limited to running on Microsoft Windows, although Allaire did successfully port ColdFusion to Sun Solaris starting with version 3.1.
The Allaire company was sold to Macromedia, then Macromedia was sold to Adobe. Earlier versions were not as robust as the versions available from version 4.0 forward.
With the release of ColdFusion MX 6.0, the engine had been re-written in Java and supported its own runtime environment, which was easily replaced through its configuration options with the runtime environment from Sun. Version 6.1 included the ability to code and debug Macromedia Flash.
Versions
Cold Fusion 3
Version 3, released in June 1997, brought custom tags, cfsearch/cfindex/cfcollection based on the Verity search engine, the server scope, and template encoding (then called "encryption"). Version 3.1, released in January 1998, added RDS support as well as a port to the Sun Solaris operating system, while Cold Fusion Studio gained a live page preview and an HTML syntax checker.
ColdFusion 4
Released in November 1998, version 4 is when the name was changed from "Cold Fusion" to "ColdFusion", possibly to distinguish it from cold fusion theory. The release also added the initial implementation of cfscript, support for locking (cflock), transactions (cftransaction), hierarchical exception handling (cftry/cfcatch), sandbox security, as well as many new tags and functions, including cfstoredproc, cfcache, cfswitch, and more.
ColdFusion 4.5
Version 4.5, released in November 1999, expanded the ability to access external system resources, including COM and CORBA, and added initial support for Java integration (including EJBs, POJOs, servlets, and Java CFX tags). It also added the getmetricdata function (to access performance information), additional performance information in page debugging output, enhanced string conversion functions, and optional whitespace removal.
ColdFusion 5
Version 5 was released in June 2001, adding enhanced query support, new reporting and charting features, user-defined functions, and improved admin tools. It was the last to be legacy coded for a specific platform, and the first release from Macromedia after their acquisition of Allaire Corporation, which had been announced January 16, 2001.
ColdFusion MX 6
Prior to 2000, Edwin Smith, an Allaire architect on JRun and later the Flash Player, initiated a project codenamed "Neo". This project was later revealed as a ColdFusion Server re-written completely using Java. This made portability easier and provided a layer of security on the server, because it ran inside a Java Runtime Environment.
In June 2002 Macromedia released the version 6.0 product under a slightly different name, ColdFusion MX, allowing the product to be associated with both the Macromedia brand and its original branding. ColdFusion MX was completely rebuilt from the ground up and was based on the Java EE platform. ColdFusion MX was also designed to integrate well with Macromedia Flash using Flash Remoting.
With the release of ColdFusion MX, the CFML language API was released with an OOP interface.
ColdFusion MX 7
With the release of ColdFusion 7.0 on February 7, 2005, the naming convention was amended, rendering the product name "Macromedia ColdFusion MX 7" (the codename for CFMX7 was "Blackstone"). CFMX 7 added Flash-based and XForms-based web forms, and a report builder that output in Adobe PDF as well as FlashPaper, RTF and Excel. The Adobe PDF output is also available as a wrapper to any HTML page, converting that page to a quality printable document. The enterprise edition also added Gateways. These provide interaction with non-HTTP request services such as IM Services, SMS, Directory Watchers, and an asynchronous execution. XML support was boosted in this version to include native schema checking.
ColdFusion MX 7.0.1 (codenamed "Merrimack") added support for Mac OS X, improvements to Flash forms, RTF support for CFReport, the new CFCProxy feature for Java/CFC integration, and more. ColdFusion MX 7.0.2 (codenamed "Mystic") included advanced features for working with Adobe Flex 2 as well as more improvements for the CF Report Builder.
Adobe ColdFusion 8
On July 30, 2007, Adobe Systems released ColdFusion 8, dropping "MX" from its name. During beta testing the codename used was "Scorpio" (the eighth sign of the zodiac and the eighth iteration of ColdFusion as a commercial product). More than 14,000 developers worldwide were active in the beta process - many more testers than the 5,000 Adobe Systems originally expected. The ColdFusion development team consisted of developers based in Newton/Boston, Massachusetts and offshore in Bangalore, India.
Some of the new features are the CFPDFFORM tag, which enables integration with Adobe Acrobat forms, some image manipulation functions, Microsoft .NET integration, and the CFPRESENTATION tag, which allows the creation of dynamic presentations using Adobe Acrobat Connect, the Web-based collaboration solution formerly known as Macromedia Breeze. In addition, the ColdFusion Administrator for the Enterprise version ships with built-in server monitoring. ColdFusion 8 is available on several operating systems including Linux, Mac OS X and Windows Server 2003.
Other additions to ColdFusion 8 are built-in Ajax widgets, file archive manipulation (CFZIP), Microsoft Exchange server integration (CFEXCHANGE), image manipulation including automatic CAPTCHA generation (CFIMAGE), multi-threading, per-application settings, Atom and RSS feeds, reporting enhancements, stronger encryption libraries, array and structure improvements, improved database interaction, extensive performance improvements, PDF manipulation and merging capabilities (CFPDF), interactive debugging, embedded database support with Apache Derby, and a more ECMAScript compliant CFSCRIPT.
For development of ColdFusion applications, several tools are available: primarily Adobe Dreamweaver CS4, Macromedia HomeSite 5.x, CFEclipse, Eclipse and others. "Tag updaters" are available for these applications to update their support for the new ColdFusion 8 features.
Adobe ColdFusion 9
ColdFusion 9 (Codenamed: Centaur) was released on October 5, 2009. New features for CF9 include:
Ability to code ColdFusion Components (CFCs) entirely in CFScript (illustrated in the sketch after this list).
An explicit "local" scope that does not require local variables to be declared at the top of the function.
Implicit getters/setters for CFC.
Implicit constructors via a method called "init" or a method with the same name as the CFC.
New CFFinally tag for Exception handling syntax and CFContinue tag for Control flow.
Object-relational mapping (ORM) Database integration through Hibernate (Java).
Server.cfc file with onServerStart and onServerEnd methods.
Tighter integration with Adobe Flex and Adobe AIR.
Integration with key Microsoft products including Word, Excel, SharePoint, Exchange, and PowerPoint.
In-memory management, or Virtual File System: the ability to treat content in memory as opposed to using the HDD.
Exposed as Services: the ability to access functions of the server externally in a secure manner.
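As an illustrative sketch of the first items in the list above (the component and property names are hypothetical), a CFC written entirely in CFScript with implicit getters/setters and an init constructor might look like:

// Person.cfc (CFScript syntax, ColdFusion 9 and later)
component accessors="true" {
    property name="name" type="string";

    function init(required string name) {
        variables.name = arguments.name;   // stored in the component's private scope
        return this;
    }
}

A calling page could then write new Person("Ada").getName(), relying on the generated getter.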
Adobe ColdFusion 10
ColdFusion 10 (Codenamed: Zeus) was released on May 15, 2012. New or improved features available in all editions (Standard, Enterprise, and Developer) include (but are not limited to):
Security enhancements
Hotfix installer and notification
Improved scheduler (based on a version of quartz)
Improved web services support (WSDL 2.0, SOAP 1.2)
Support for HTML5 web sockets
Tomcat integration
Support for RESTful web services
Language enhancements (closures, and more)
Search integration with Apache Solr
HTML5 video player and Adobe Flash Player
Flex and Adobe AIR lazy loading
XPath integration
HTML5 enhancements
Additional new or improved features in ColdFusion Enterprise or Developer editions include (but are not limited to):
Dynamic and interactive HTML5 charting
Improved and revamped scheduler (additional features over what is added in CF10 Standard)
Object relational mapping enhancements
The lists above were obtained from the Adobe web site pages describing "new features", as listed first in the links in the following list.
CF10 was originally referred to by the codename Zeus, after first being confirmed as coming by Adobe at Adobe MAX 2010, and during much of its prerelease period. It was also commonly referred to as "ColdFusion next" and "ColdFusion X" in blogs, on Twitter, etc., before Adobe finally confirmed it would be "ColdFusion 10". For much of 2010, ColdFusion Product Manager Adam Lehman toured the US setting up countless meetings with customers, developers, and user groups to formulate a master blueprint for the next feature set. In September 2010, he presented the plans to Adobe where they were given full support and approval by upper management.
The first public beta of ColdFusion 10 was released via Adobe Labs on 17 February 2012.
Adobe ColdFusion 11
ColdFusion 11 (Codenamed: Splendor) was released on April 29, 2014.
New or improved features available in all editions (Standard, Enterprise, and Developer) include:
End-to-end mobile development
A new lightweight edition (ColdFusion Express)
Language enhancements
WebSocket enhancements
PDF generation enhancements
Security enhancements
Social enhancements
REST enhancements
Charting enhancements
Compression enhancements
ColdFusion 11 also removed many features previously identified simply as "deprecated" or no longer supported in earlier releases. For example, the CFLOG tag long offered date and time attributes which were deprecated (and redundant, as the date and time are always logged). As of CF11, their use would cause the CFLOG tag to fail.
Adobe ColdFusion (2016 release)
Adobe ColdFusion (2016 release), Codenamed: Raijin (and also known generically as ColdFusion 2016) was released on February 16, 2016.
New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Language enhancements
Command Line Interface (CLI)
PDF generation enhancements
Security enhancements
External session storage (Redis)
Swagger document generation
NTLM support
API Manager
Adobe ColdFusion (2018 Release)
Adobe ColdFusion (2018 release), known generically as ColdFusion 2018, was released on July 12, 2018. ColdFusion 2018 was codenamed Aether during prerelease.
As of July 2020, Adobe had released 10 updates for ColdFusion 2018.
New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Language enhancements (including NULL, abstract classes and methods, covariants and finals, closures in tags, and more)
Asynchronous programming, using Futures
Command line REPL
Auto lockdown capability
Distributed cache support (Redis, memcached, JCS)
REST playground capability
Modernized Admin UI
Performance Monitoring Toolset
Adobe ColdFusion (2021 Release)
Adobe ColdFusion (2021 Release) was released on November 11, 2020. ColdFusion 2021 was codenamed Project Stratus during pre-release.
New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Lightweight installer
ColdFusion Package Manager
Cloud storage services
Messaging services
No-SQL database
Single sign-on
Core language changes
Performance Monitoring Tool set
Development roadmap
In September 2017, Adobe announced a roadmap anticipating releases in 2018 and 2020. Among the key features anticipated for the 2018 release were a new performance monitor, enhancements to asynchronous programming, revamped REST support, and enhancements to the API Manager, as well as support for CF2016 projected into 2024. As for the 2020 release, the features anticipated at that time (in 2017) were configurability (modularity) of CF application services, revamped scripting and object-oriented support, and further enhancements to the API Manager.
Features
PDF generation
ColdFusion can generate PDF documents using standard HTML (i.e. no additional coding is needed to generate documents for print). CFML authors place HTML and CSS within a pair of cfdocument tags (or, new in ColdFusion 11, cfhtmltopdf tags). The generated document can then either be saved to disk or sent to the client's browser. ColdFusion 8 also introduced the cfpdf tag to allow for control over PDF documents, including PDF forms and merging of PDFs. These tags, however, do not use Adobe's PDF engine: cfdocument uses a combination of the commercial JPedal Java PDF library and the free and open source Java library iText, while cfhtmltopdf uses an embedded WebKit implementation.
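A minimal sketch of cfdocument usage (the file name and content are hypothetical; if the filename attribute is omitted, the PDF is streamed to the browser instead):

<cfdocument format="pdf" filename="report.pdf" overwrite="true">
    <cfoutput>
        <h1>Monthly report</h1>
        <p>Generated on #dateFormat(now(), "yyyy-mm-dd")#.</p>
    </cfoutput>
</cfdocument>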
ColdFusion Components (Objects)
ColdFusion was originally not an object-oriented programming language, like PHP versions 3 and below. ColdFusion falls into the category of OO languages that do not support multiple inheritance (along with Java, Smalltalk, etc.). With the MX release (6+), ColdFusion introduced basic OO functionality with the component language construct, which resembles classes in OO languages. Each component may contain any number of properties and methods. One component may also extend another (inheritance); components support only single inheritance. The object-handling feature set and performance have been enhanced with subsequent releases. With the release of ColdFusion 8, Java-style interfaces are supported. ColdFusion components use the file extension .cfc to differentiate them from ColdFusion templates (.cfm).
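A hedged sketch of a simple component (the file name Person.cfc and its members are hypothetical) and of a page that instantiates it:

<!--- Person.cfc --->
<cfcomponent output="false">
    <cffunction name="init" access="public" returntype="any" output="false">
        <cfargument name="name" type="string" required="true">
        <cfset variables.name = arguments.name>
        <cfreturn this>
    </cffunction>
    <cffunction name="getName" access="public" returntype="string" output="false">
        <cfreturn variables.name>
    </cffunction>
</cfcomponent>

<!--- calling page --->
<cfset person = createObject("component", "Person").init("Ada")>
<cfoutput>#person.getName()#</cfoutput>

A second component could reuse this one through single inheritance by declaring <cfcomponent extends="Person">.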
Remoting
Component methods may be made available as web services with no additional coding and configuration. All that is required is for a method's access to be declared 'remote'. ColdFusion automatically generates a WSDL at the URL for the component in this manner: http://path/to/components/Component.cfc?wsdl. Aside from SOAP, the services are offered in Flash Remoting binary format.
Methods which are declared remote may also be invoked via an HTTP GET or POST request. Consider the GET request as shown.
http://path/to/components/Component.cfc?method=search&query=your+query&mode=strict
This will invoke the component's search function, passing "your query" and "strict" as arguments.
This type of invocation is well-suited for Ajax-enabled applications. ColdFusion 8 introduced the ability to serialize ColdFusion data structures to JSON for consumption on the client.
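As a sketch of such a remote method (the component SearchService.cfc and its fields are hypothetical), declaring access="remote" together with returnformat="json" is enough for the result to be serialized as JSON for an Ajax caller that invokes it in the manner shown above:

<!--- SearchService.cfc --->
<cfcomponent output="false">
    <cffunction name="search" access="remote" returntype="struct" returnformat="json" output="false">
        <cfargument name="query" type="string" required="true">
        <cfargument name="mode" type="string" required="false" default="strict">
        <cfset var result = structNew()>
        <cfset result.query = arguments.query>
        <cfset result.mode = arguments.mode>
        <cfset result.matches = arrayNew(1)>
        <cfreturn result>
    </cffunction>
</cfcomponent>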
The ColdFusion server will automatically generate documentation for a component if you navigate to its URL and insert the appropriate code within the component's declarations. This is an application of component introspection, available to developers of ColdFusion components. Access to a component's documentation requires a password. A developer can view the documentation for all components known to the ColdFusion server by navigating to the ColdFusion URL. This interface resembles the Javadoc HTML documentation for Java classes.
Custom Tags
ColdFusion provides several ways to implement custom markup language tags, i.e. those not included in the core ColdFusion language. These are especially useful for providing a familiar interface for web designers and content authors familiar with HTML but not imperative programming.
The traditional and most common way is using CFML. A standard CFML page can be interpreted as a tag, with the tag name corresponding to the file name prefixed with "cf_". For example, the file IMAP.cfm can be used as the tag "cf_imap". Attributes used within the tag are available in the ATTRIBUTES scope of the tag implementation page. CFML pages are accessible in the same directory as the calling page, via a special directory in the ColdFusion web application, or via a CFIMPORT tag in the calling page. The latter method does not necessarily require the "cf_" prefix for the tag name.
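A minimal sketch (the tag name and attribute are hypothetical): a file named greeting.cfm placed alongside the calling page can be invoked as <cf_greeting>, and reads its attributes from the ATTRIBUTES scope:

<!--- greeting.cfm: implementation of <cf_greeting> --->
<cfparam name="attributes.name" default="World">
<cfoutput><p>Hello, #attributes.name#!</p></cfoutput>

<!--- calling page --->
<cf_greeting name="ColdFusion">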
A second way is the development of CFX tags using Java or C++. CFX tags are prefixed with "cfx_", for example "cfx_imap". Tags are added to the ColdFusion runtime environment using the ColdFusion administrator, where JAR or DLL files are registered as custom tags.
Finally, ColdFusion supports JSP tag libraries from the JSP 2.0 language specification. JSP tags are included in CFML pages using the CFIMPORT tag.
Interactions with other programming languages
ColdFusion and Java
The standard ColdFusion installation allows the deployment of ColdFusion as a WAR file or EAR file for deployment to standalone application servers, such as Macromedia JRun, and IBM WebSphere. ColdFusion can also be deployed to servlet containers such as Apache Tomcat and Mortbay Jetty, but because these platforms do not officially support ColdFusion, they leave many of its features inaccessible. As of ColdFusion 10 Macromedia JRun was replaced by Apache Tomcat.
Because ColdFusion is a Java EE application, ColdFusion code can be mixed with Java classes to create a variety of applications and use existing Java libraries. ColdFusion has access to all underlying Java classes, supports JSP custom tag libraries, and can access JSP functions after retrieving the JSP page context (GetPageContext()).
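For instance, a CFML page can instantiate a standard Java class directly (a hedged sketch using java.lang.StringBuilder; any class on the server's classpath could be loaded the same way):

<cfset buffer = createObject("java", "java.lang.StringBuilder").init("Cold")>
<cfset buffer.append("Fusion")>
<cfoutput>#buffer.toString()#</cfoutput>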
Prior to ColdFusion 7.0.1, ColdFusion components could only be used by Java or .NET by declaring them as web services. However, beginning in ColdFusion MX 7.0.1, ColdFusion components can be used directly within Java classes using the CFCProxy class.
Recently, there has been much interest in Java development using alternative languages such as Jython, Groovy and JRuby. ColdFusion was one of the first scripting platforms to allow this style of Java development.
ColdFusion and .NET
ColdFusion 8 natively supports .NET within the CFML syntax. ColdFusion developers can simply call any .NET assembly without needing to recompile or alter the assemblies in any way. Data types are automatically translated between ColdFusion and .NET (example: .NET DataTable → ColdFusion Query).
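As a sketch of the pattern (the assembly path, class, and method here are assumptions standing in for whatever .NET library is actually used), a CFML page could instantiate a class from a local assembly with createObject:
<!--- hypothetical example: call a method on a class from a local .NET assembly --->
<cfset calc = createObject(".NET", "MathLib.Calculator", "C:\dotnet\MathLib.dll")>
<cfset sum = calc.Add(2, 3)>
<cfoutput>Sum from .NET: #sum#</cfoutput>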
A unique feature for a Java EE vendor, ColdFusion 8 offers the ability to access .NET assemblies remotely through a proxy (without the use of .NET Remoting). This allows ColdFusion users to leverage .NET without ColdFusion itself having to be installed on a Windows operating system.
Acronyms
The acronym for the ColdFusion Markup Language is CFML. When ColdFusion templates are saved to disk, they are traditionally given the extension .cfm or .cfml. The .cfc extension is used for ColdFusion Components. The original extension was DBM or DBML, which stood for Database Markup Language. When talking about ColdFusion, most users use the acronym CF and this is used for numerous ColdFusion resources such as user groups (CFUGs) and sites.
CFMX is the common abbreviation for ColdFusion versions 6 and 7 (a.k.a. ColdFusion MX).
Alternative server environments
ColdFusion originated as proprietary technology based on Web technology industry standards. However, it is becoming a less closed technology through the availability of competing products. Such alternative products include (in alphabetical order):
BlueDragon - Proprietary .NET-based CFML engine and free open source Java-based CFML engine (Open BlueDragon).
Coral Web Builder
IgniteFusion
Lucee - Free, open source CFML engine forked from Railo. Lucee aims to provide the functionality of CFML using fewer resources and better performance, and to move CFML beyond its roots into a modern and dynamic web programming platform. Lucee is backed by community supporters and members of the Lucee Association.
OpenBD - The open source version of BlueDragon was released as Open BlueDragon (OpenBD) in December 2008.
Railo - Free, open source CFML engine. It comes in three main product editions, as well as other versions.
SmithProject
The argument can be made that ColdFusion is even less platform-bound than raw Java EE or .NET, simply because ColdFusion will run on top of a .NET app server (New Atlanta), or on top of any servlet container or Java EE application server (JRun, WebSphere, JBoss, Geronimo, Tomcat, Resin Server, Jetty, etc.). In theory, a ColdFusion application could be moved unchanged from a Java EE application server to a .NET application server.
Vulnerabilities
In March 2013, a known issue affecting ColdFusion 8, 9 and 10 left the National Vulnerability Database open to attack. The vulnerability had been identified and a patch released by Adobe for CF9 and CF10 in January.
In April 2013, a ColdFusion vulnerability was blamed by Linode for an intrusion into the Linode Manager control panel website. A security bulletin and hotfix for this had been issued by Adobe a week earlier.
In May 2013, Adobe identified another critical vulnerability, reportedly already being exploited in the wild, which targets all recent versions of ColdFusion on any servers where the web-based administrator and API have not been locked down. The vulnerability allows unauthorized users to upload malicious scripts and potentially gain full control over the server. A security bulletin and hotfix for this was issued by Adobe 6 days later.
In April 2015, Adobe fixed a cross-site scripting (XSS) vulnerability in Adobe ColdFusion 10 before Update 16, and in ColdFusion 11 before Update 5, that allowed remote attackers to inject arbitrary web script or HTML; however, it was exploitable only by users who had authenticated through the administration panel.
In September 2019, Adobe fixed two command injection vulnerabilities (CVE-2019-8073) that enabled arbitrary code execution and a path traversal vulnerability (CVE-2019-8074).
See also
Adobe ColdFusion Builder - Builder Software
Comparison of programming languages
4GL
References
External links
Adobe software
Scripting languages
Macromedia software
Web development software
CFML compilers
CFML programming language
JVM programming languages |
382082 | https://en.wikipedia.org/wiki/Medical%20privacy | Medical privacy | Medical privacy or health privacy is the practice of maintaining the security and confidentiality of patient records. It involves both the conversational discretion of health care providers and the security of medical records. The terms can also refer to the physical privacy of patients from other patients and providers while in a medical facility, and to modesty in medical settings. Modern concerns include the degree of disclosure to insurance companies, employers, and other third parties. The advent of electronic medical records (EMR) and patient care management systems (PCMS) have raised new concerns about privacy, balanced with efforts to reduce duplication of services and medical errors.
Many countries - including Australia, Canada, Turkey, the United Kingdom, the United States, New Zealand, and the Netherlands - have enacted laws that try to protect people's privacy. However, many of these laws have proven to be less effective in practice than in theory. The United States passed the Health Insurance Portability and Accountability Act (HIPAA) in 1996 in an attempt to increase privacy precautions within medical institutions.
History of medical privacy
Prior to the technological boom, medical institutions relied on paper records to file individuals' medical data. Nowadays, more and more information is stored within electronic databases. Research shows that it is safer to have information stored within a paper medium as it is harder to physically steal data, whilst digital records are vulnerable to access by hackers.
In order to address healthcare privacy issues in the early 1990s, researchers looked into the use of credit cards and smart cards to allow access to medical information without fear of stolen information. The "smart" card allowed information to be stored and processed in a single microchip, yet people were fearful of having so much information stored in a single spot that could easily be accessed. This "smart" card included an individual's social security number as an important piece of identification that can lead to identity theft if databases are breached. Additionally, there was the fear that people would target these medical cards because they hold information that can be of value to many different third parties including employers, pharmaceutical companies, drug marketers, and insurance reviewers.
In response to the lack of medical privacy, there was a movement to create better medical privacy protection, but nothing has been officially passed. The Medical Information Bureau was thus created to prevent insurance fraud, yet it has since become a significant source of medical information for over 750 life insurance companies; this makes it dangerous as a target for privacy breaches. Although the electronic filing of medical information has increased efficiency and reduced administration costs, there are negative aspects to consider. The electronic filing system makes individuals' information more susceptible to outsiders, even though their information is stored on a single card. Therefore, the medical card serves as a false sense of security as it does not protect their information completely.
Patient care management systems (PCMS)
With the technological boom, there has been an expansion of the record filing system and many hospitals have therefore adopted new PCMS. PCMS are large medical records that hold many individuals' personal data. These have become critical to the efficiency of storing medical information because of high volumes of paperwork, the ability to quickly share information between medical institutions, and the increased mandatory reporting to the government. PCMS have ultimately increased the productivity of data record utilization and have created a large dependence on technology within the medical field.
It has also led to social and ethical issues because of the basic human rights that can become a casualty of this expansion of knowledge. Hospitals and health information services are now more likely to share information with third party companies. Thus, there needs to be a reform to specify which hospital personnel have access to medical records. This has led to the discussion of privacy rights and created safeguards that will help data keepers understand situations where it is ethical to share an individual's medical information, provide ways for individuals to gain access to their own records, and determine who has ownership of those records. Additionally, it is used to ensure that a person's identity is kept confidential for research or statistical purposes and to establish the process for making individuals aware that their health information is being used. Thus, a balance between privacy and confidentiality must occur in order to limit the amount of information disclosed and keep the conduct of physicians in check by constricting the flow of information.
Electronic Medical Records (EMR)
Electronic medical records (EMR) are a more efficient way of storing medical information, yet there are many negative aspects of this type of filing system as well. Hospitals are willing to adopt this type of filing system, yet only if they are able to ensure the protection of patient information.
Researchers found that state legislation and regulation of medical privacy laws reduce the number of hospitals that adopt EMR by more than 24%. This is ultimately due to the decreasing positive network externalities that are created by additional state protections. With increases in restrictions against the diffusion of medical information, hospitals have neglected to adopt new EMRs because privacy laws restrict health information exchanges. With decreasing numbers of medical institutions adopting the EMR filing system, the federal government's plan for a national health network will be stalled. The national network would ultimately cost the US $156 billion in investments, yet in order for this to happen, the government needs to place higher emphasis on protecting individual medical privacy. Many politicians and business leaders find that EMRs allow for more efficiency in both time and money, yet they neglect to address the decreasing privacy protections, demonstrating the significant trade-off between EMRs and individual privacy.
Privacy and Electronic Health Records (EHRs)
The three goals of information security, including electronic information security, are confidentiality, integrity, and availability. Organizations are attempting to meet these goals, referred to as the C.I.A. Triad, which is the "practice of defending information from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction."
In a 2004 editorial in the Washington Post, U.S. Senators Bill Frist and Hillary Clinton supported this observation, stating "[patients] need...information, including access to their own health records... At the same time, we must ensure the privacy of the systems, or they will undermine the trust they are designed to create". A 2005 report by the California Health Care Foundation found that "67 percent of national respondents felt 'somewhat' or 'very concerned' about the privacy of their personal medical records".
The importance of privacy in electronic health records became prominent with the passage of the American Recovery and Reinvestment Act (ARRA) in 2009. One of the provisions (known as the Health Information Technology for Economic and Clinical Health [HITECH] Act) of the ARRA mandated incentives to clinicians for the implementation of electronic health records by 2015. Privacy advocates in the United States have raised concerns about unauthorized access to personal data as more medical practices switch from paper to electronic medical records. The Office of the National Coordinator for Health Information Technology (ONC) explained that some of the safety measures that EHR systems can utilize are passwords and pin numbers that control access to such systems, encryption of information, and an audit trail to keep track of the changes made to records.
Providing patient access to EHRs is strictly mandated by HIPAA's Privacy Rule. One study found that each year there are an estimated 25 million compelled authorizations for the release of personal health records. Researchers, however, have found new security threats open up as a result. Some of these security and privacy threats include hackers, viruses, and worms. These privacy threats are made more prominent by the emergence of "cloud computing", which is the use of shared computer processing power. Health care organizations are increasingly using cloud computing as a way to handle large amounts of data. This type of data storage, however, is susceptible to natural disasters, cybercrime and technological terrorism, and hardware failure. Health information breaches accounted for 39 percent of all breaches in 2015. IT security costs and implementations are needed to protect health institutions against security and data breaches.
Health screening cases
Although privacy issues with health screening are a great concern among individuals and organizations, there has been little focus on the amount of work being done within the law to maintain the privacy expectation that people desire. Many of these issues lie within the abstractness of the term “privacy” as there are many different interpretations of the term, especially in the context of the law. Prior to 1994, there had been no cases regarding screening practices and the implications for an individual's medical privacy, unless it was regarding HIV and drug testing. In Glover v Eastern Nebraska Community Office of Retardation, an employee sued her employer for violating her Fourth Amendment rights through unnecessary HIV testing. The court ruled in favor of the employee, holding that the mandatory testing constituted an unreasonable search. However, this is only one of the few precedents that people have to use. With more precedents, the relationships between employees and employers will be better defined. Yet with more requirements, testing among patients will lead to additional standards for meeting health care standards. Screening has become a large indicator for diagnostic tools, yet there are concerns with the information that can be gained and subsequently shared with people other than the patient and healthcare provider.
Third party issues
One of the main dangers to an individual's privacy is private corporations because of the profits they can receive from privacy-violating information. Privacy merchants are made up of two groups - one that tries to collect people's personal information while the other focuses on using clients' information to market company products. Subsequently, privacy merchants purchase information from other companies, such as health insurance companies, if there is not sufficient information from their own research. Privacy merchants target health insurance companies because, nowadays, they collect huge amounts of personal information and keep them in large databases. They often require patients to provide more information than is needed for purposes other than those of doctors and other medical workers.
Additionally, people's information can be linked to other information outside of the medical field. For example, many employers use insurance information and medical records as an indicator of work ability and ethic. The selling of private information can also be very profitable for employers; however, this often happens without people's consent or knowledge.
Within the United States, in order to define clear privacy laws regarding medical privacy, Title 17 thoroughly explains the ownership of one's data and adjusted the law so that people have more control over their own property. The Privacy Act of 1974 offers more restrictions regarding what corporations can access outside of an individual's consent.
States have created additional supplements to medical privacy laws. With HIPAA, many individuals were pleased to see the federal government take action in protecting the medical information of individuals. Yet when people looked into it, there was proof that the government was still protecting the rights of corporations. Many rules were seen as more like suggestions, and the punishment for compromising the privacy of patients was minimal. Even if release of medical information requires consent, blank authorizations can be allowed and individuals will not be asked for additional consent later on.
Although there is a large group of people who oppose the selling of individuals' medical information, there are groups such as the Health Benefits Coalition, the Healthcare Leadership Council, and the Health Insurance Association of America that are against the new reforms for data protection as they can ruin their work and profits. Previous controversies, such as Google's "Project Nightingale" in 2019, have demonstrated potential holes in regulations of patient data and medical information. Project Nightingale, a joint effort between Google and the healthcare network Ascension, involved the sharing of millions of patients' identifiable medical information without their consent. Though Google claimed that its process for obtaining the information was legal, researchers raised concerns about this claim.
Efforts to protect health information
The lack of help from the Department of Health and Human Services has made a conflict of interest clear. Some wish to place individual betterment as more important, while others focus more on external benefits from outside sources. The issues that occur when there are problems between the two groups are also not adequately solved, which leads to controversial laws and effects. Individual interests take precedence over the benefits of society as a whole and are often viewed as selfish and for the gain of capital value. If the government does not make any further changes to the current legislation, countless organizations and people will have access to individual medical information.
In 1999, the Gramm-Leach-Bliley Act (GLBA) addressed the insurance privacy debate regarding medical privacy. Yet, there were many issues with the implementation. One issue was that there were inconsistent regulation requirements within the different states due to preexisting laws. Secondly, it was difficult to combine the pre-existing laws with the new framework. And thirdly, in order for the federal government to implement these new rules, they needed state legislatures to pass them.
GLBA aimed to regulate financial institutions so that corporations could not affect people's insurance. Because of the difficulty of implementing the GLBA, state legislatures are able to interpret the laws themselves and create initiatives to protect medical privacy. When states create their own independent legislation, they create standards that understand the impact of the legislation. If they stray from the standard laws, they must be valid and fair. The new legislation must protect the rights of businesses and allow them to continue to function despite federally regulated competition. Patients gain benefits from these new services and standards through a flow of information that is consistent with medical privacy expectations.
These regulations should focus more on the consumer than on benefits and political exploitation. Many times, regulations are for the personal gain of the corporation; therefore, state legislatures must be wary of this and try to prevent it to the best of their abilities. Medical privacy is not a new issue within the insurance industry, yet the problems regarding exploitation continue to recur; there is more focus on taking advantage of the business environment for personal gain.
In 2001, the administration of President George W. Bush issued additional regulations under HIPAA in order to better protect the privacy of individual medical information. These new regulations were supposed to safeguard health information privacy by creating extensive solutions for the privacy of patients. The new regulations included the right to be notified when an individual's information is inspected, to amend medical records, and to request communication opportunities to discuss information disclosure.
However, there are exceptions under which PHI can be disclosed without authorization. These include specific conditions involving law enforcement, judicial and administrative proceedings, parents, significant others, public health, health research, and commercial marketing. These exceptions have created an alarming number of gaps within privacy measures.
Ultimately, there is still an issue of how to ensure privacy protections; in response, the government has created new regulations that make trade-offs between an individual's privacy and public benefit. These new regulations, however, still cover individually identifiable health information - any data that contains information unique to an individual. Non-identifiable data is not covered, as the government claims it will cause minimal damage to a person's privacy. The regulations also cover all health care organizations as well as businesses.
Additionally, under the new HIPAA additions, state legislation is more protective than national laws because it created more obligations for organizations to follow. Ultimately, the new rules called for expansive requirements that created better safety measures for individuals. Yet, there are still ways in which businesses and healthcare organizations can be exempt from disclosure rules for all individuals. Thus, the HHS needs to find more ways to balance personal and public trade-offs within medical laws. This creates a need for extra government intervention to enforce legislation and new standards to decrease the number of threats against an individual's privacy of health data.
Effects of changing medical privacy laws
Physician-patient relationships
Patients want to be able to share medical information with their physicians, yet they worry about potential privacy breaches that can occur when they release financial and confidential medical information. In order to ensure better protection, the government has created frameworks for keeping information confidential - this includes being transparent about procedures, disclosure and protection of information, and monitoring of these new rules to ensure that people's information is protected.
Effects of Technological Advances
Recently, physicians and patients have started to use email as an additional communication tool for treatment and medical interactions. This way of communication is not “new”, but its effects on doctor-patient relationships have created new questions regarding legal, moral, and financial problems.
The American Medical Informatics Association has characterized medical emails as a way to communicate “medical advice, treatment, and information exchanged professionally”; yet, the “spontaneity, permanence, and information power characterizing” role is significant because of its unknown effects. However, the use of email allows for increased access, immediate aid, and increased interactions between patients and doctors. There are many benefits and negative aspects of using email; doctors feel a new sense of obligation to respond to emails outside of the office, but also find benefits in facilitating rapid responses to patients' questions.
Additionally, the use of email between physicians and their patients will continue to grow because of the increasing use of the Internet. With the Internet, patients are able to ask for medical advice and treatment, yet issues regarding confidentiality and legal liability come up. Ultimately, emails between a physician and patient are supposed to be used as a supplement to face-to-face interactions, not for casual messages. If used properly, physicians could use email as a way to supplement interactions and provide more medical aid to those who need it immediately.
Traditional beliefs on doctor-patient relationship
Although many people believe that technological changes are the reason for the fear of sharing private medical information, there is a theory that states that institutional ideals between doctors and their patients have created this fear. Although levels of confidentiality are changing, individuals often feel the need to share more information with their doctors in order to get diagnosed correctly. Because of this, people are concerned with how much information their physicians have. This information could be transferred to other third party companies. However, there is a call for a smaller emphasis on sharing and confidentiality in order to rid patients of their fears of information breaches. There is a common belief that the confidentiality of one's information only protects the doctors and not the patients, and therefore there is a negative stigma towards revealing too much information. Thus it causes patients to not share vital information relevant to their illnesses.
Medical privacy standards and laws by country
Australia – eHealth
On July 1, 2012, the Australian Government launched the Personally Controlled Electronic Health Record (PCEHR) (eHealth) system. The full implementation incorporates an electronic summary prepared by nominated healthcare providers along with consumer-provided notes. Further, the summary will include information on the individual's allergies, adverse reactions, medications, immunizations, diagnoses, and treatments. The consumer notes will operate as a personal medical diary that only the individual can view and edit. The opt-in system gives people the option to choose whether to register for the eHealth record or not.
As of January 2016, the Commonwealth Department of Health changed the name PCEHR to My Health Record.
Privacy – Governance
The Personally Controlled Electronic Health Records Act 2012 and Privacy Act 1988 governs how eHealth record information is managed and protected. The PCEHR System Operator abides by the Information Privacy Principles in the Privacy Act 1988 (Commonwealth) as well as any applicable State or Territory privacy laws. A Privacy Statement sets out the application of the collection of personal information by the System Operator. The statement includes an explanation of the types of personal information collected, what the information is used for, and how the information is stored. The statement covers measures in place to protect personal information from misuse, loss, unauthorized access, modification, and disclosure.
Privacy – Security measures
Security measures include audit trails so that patients can see who has accessed their medical records along with the time the records were accessed. Other measures include the use of encryption as well as secure logins and passwords. Patient records are identified using an Individual Health Identifier (IHI), assigned by Medicare, the IHI service provider.
Privacy – Issues
A 2012 nationwide survey in Australia assessed privacy concerns on patients' health care decisions, which could impact patient care. Results listed that 49.1% of Australian patients stated they have withheld or would withhold information from their health care provider based on privacy concerns.
How does consent impact privacy?
One concern is that personal control of the eHealth record via consent does not guarantee the protection of privacy. It is argued that a narrow definition, 'permission' or 'agreement', does not provide protection for privacy and is not well represented in Australian legislation. The PCEHR allows clinicians to assume consent by consumer participation in the system; however, the needs of the consumer may not be met. Critics argue that the broader definition of 'informed consent' is required, as it encompasses the provision of relevant information by the healthcare practitioner, and understanding of that information by the patient.
Is it legitimate to use personal information for public purposes?
Data from the PCEHR is to be predominantly used in patient healthcare, but other uses are possible, for policy, research, audit and public health purposes. The concern is that in the case of research, what is allowed goes beyond existing privacy legislation.
What are ‘illegitimate’ uses of health information?
The involvement of pharmaceutical companies is viewed as potentially problematic. If they are perceived by the public to be more concerned with profit than public health, public acceptance of their use of PCEHRs could be challenged. Also perceived as problematic, is the potential for parties other than health care practitioners, such as insurance companies, employers, police or the government, to use information in a way which could result in discrimination or disadvantage.
What are the potential implications of unwanted disclosure of patient information?
Information 'leakage' is seen as having the potential to discourage both patient and clinician from participating in the system. Critics argue the PCEHR initiative can only work, if a safe, effective continuum of care within a trusting patient/clinician relationship is established. If patients lose trust in the confidentiality of their eHealth information, they may withhold sensitive information from their health care providers. Clinicians may be reluctant to participate in a system where they are uncertain about the completeness of the information.
Are there sufficient safeguards for the protection of patient information?
Security experts have questioned the registration process, where those registering only have to provide a Medicare card number, and names and birth dates of family members, to verify their identity. Concerns have also been raised by some stakeholders about the inherent complexities of the limited access features. They warn that access to PCEHR record content may involve transfer of information to a local system, where PCEHR access controls would no longer apply.
Canada
The privacy of patient information is protected at both the federal level and provincial level in Canada. The health information legislation established the rules that must be followed for the collection, use, disclosure and protection of health information by healthcare workers known as "custodians". These custodians have been defined to include almost all healthcare professionals (including all physicians, nurses, chiropractors, operators of ambulances and operators of nursing homes). In addition to the regulatory bodies of specific healthcare workers, the provincial privacy commissions are central to the protection of patient information.
Turkey
The privacy of patient information is guaranteed by articles 78 and 100 of legal code 5510.
On the other hand, the Social Security Institution (SGK), which regulates and administers state-sponsored social security / insurance benefits, sells patient information after allegedly anonymizing the data, as confirmed on October 25, 2014.
United Kingdom
The National Health Service is increasingly using electronic health records, but until recently, the records held by individual NHS organisations, such as General Practitioners, NHS Trusts, dentists and pharmacies, were not linked. Each organization was responsible for the protection of patient data it collected. The care.data programme, which proposed to extract anonymised data from GP surgeries into a central database, aroused considerable opposition.
In 2003, the NHS made moves to create a centralized electronic registry of medical records. The system is protected by the UK's Government Gateway, which was built by Microsoft. This program is known as the Electronic Records Development and the Implementation Programme (ERDIP). The NHS National Program for IT was criticized for its lack of security and lack of patient privacy. It was one of the projects that caused the Information Commissioner to warn about the danger of the country "sleepwalking" into a surveillance society. Pressure groups opposed to ID cards also campaigned against the centralized registry.
Newspapers feature stories about lost computers and memory sticks, but a more common and longstanding problem is staff accessing records that they have no right to see. It has always been possible for staff to look at paper records, and in most cases there is no record of who has done so. By contrast, electronic records make it possible to keep track of who has accessed which records. NHS Wales has created the National Intelligent Integrated Audit System which provides "a range of automatically generated reports, designed to meet the needs of our local health boards and trusts, instantly identifying any potential issues when access has not been legitimate". Maxwell Stanley Consulting will use a system called Patient Data Protect (powered by VigilancePro) which can spot patterns – such as whether someone is accessing data about their relatives or colleagues.
United States
Since 1974, numerous federal laws have been passed in the United States to specify the privacy rights and protections of patients, physicians, and other covered entities to medical data. Many states have passed their own laws to try to better protect the medical privacy of their citizens.
An important national law regarding medical privacy is the Health Insurance Portability and Accountability Act of 1996 (HIPAA), yet there are many controversies regarding the protection rights of the law.
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
The most comprehensive law passed is the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which was later revised after the Final Omnibus Rule in 2013. HIPAA provides a federal minimum standard for medical privacy, sets standards for uses and disclosures of protected health information (PHI), and provides civil and criminal penalties for violations.
Prior to HIPAA, only certain groups of people were protected under medical laws such as individuals with HIV or those who received Medicare aid. HIPAA provides protection of health information and supplements additional state and federal laws; yet it should be understood that the law's goal is to balance public health benefits, safety, and research while protecting the medical information of individuals. Yet many times, privacy is compromised for the benefits of the research and public health.
According to HIPAA, the covered entities that must follow the law's set mandates are health plans, health care clearinghouses, and health care providers that electronically transmit PHI. Business associates of these covered entities are also subject to HIPAA's rules and regulations.
In 2008, Congress passed the Genetic Information Nondiscrimination Act of 2008 (GINA), which aimed to prohibit genetic discrimination for individuals seeking health insurance and employment. The law also included a provision which mandated that genetic information held by employers be maintained in a separate file and prohibited disclosure of genetic information except in limited circumstances.
In 2013, after GINA was passed, the HIPAA Omnibus Rule amended HIPAA regulations to include genetic information in the definition of Protected Health Information (PHI). This rule also expanded HIPAA by broadening the definition of business associates to include any entity that sends or accesses PHI such as health IT vendors.
Controversies
The Health Insurance Portability and Accountability Act (HIPAA) is critiqued for not providing strong medical privacy protections, as it only regulates the disclosure of certain information.
The government authorizes access to an individual's health information for “treatment, payment, and health care operations” without patient consent. HIPAA rules are also very broad and do not protect an individual from unknown privacy threats. Additionally, a patient would not be able to identify the reason for a breach due to inconsistent requirements. Because of limited confidentiality, HIPAA facilitates the sharing of medical information as there is little limitation from different organizations. Information can easily be exchanged between medical institutions and non-medical institutions because of the light regulation under HIPAA - some effects include job loss due to credit score sharing or loss of insurance.
Additionally, doctors are not required to keep patients' information confidential because in many cases patient consent is now optional. Patients are often unaware of the lack of privacy they have, as medical processes and forms do not explicitly state the extent to which they are protected. Physicians believe that, overall, HIPAA will produce unethical and non-professional mandates that can affect a person's privacy, and therefore they in response have to provide warnings about privacy concerns. Because physicians cannot ensure a person's privacy, there is a higher chance that patients will be less likely to get treatment and share their medical concerns. Individuals have asked for better consent requirements, asking whether physicians can warn them prior to the sharing of any personal information. Patients want to be able to share medical information with their physicians, yet they worry about potential breaches that can release financial and other confidential information, and with that fear, they are wary of who may have access.
In order to ensure better protection, the government has created frameworks for keeping information confidential - some of which include being transparent about procedures, disclosure and protection of information, and monitoring of these new rules to ensure that people's information is not affected by breaches. Although there are many frameworks to ensure the protection of basic medical data, many organizations do not have these provisions in place. HIPAA gives a false hope to patients and physicians, as they are unable to protect their own information. Patients have few rights regarding their medical privacy, and physicians cannot guarantee those rights.
Hurricane Katrina
HIPAA does not protect the information of individuals, as the government is able to publish certain information when it finds it necessary. The government is exempted from privacy rules regarding national security. HIPAA additionally allows the disclosure of protected health information (PHI) in order to aid in threats to public health and safety as long as it follows the good faith requirement - the idea that disclosing information is necessary to the benefit of the public. The Model State Emergency Health Powers Act (MSEHPA) gives the government the power to “suspend regulations, seize property, quarantine individuals and enforce vaccinations” and requires that healthcare providers give information regarding potential health emergencies.
With regard to Hurricane Katrina, many people in Louisiana relied on Medicaid and their PHI was subsequently affected. People's medical privacy rights were soon waived in order for patients to get the treatment they needed. Yet, many patients were unaware that their rights had been waived. In order to prevent the sharing of personal information in future natural disasters, a website was created to protect people's medical data. Ultimately, Katrina showed that the government was unprepared to face a national health scare.
Medical data outside of HIPAA
Many patients mistakenly believe that HIPAA protects all health information. HIPAA does not usually cover fitness trackers, social media sites and other health data created by the patient. Health information can be disclosed by patients in emails, blogs, chat groups, or social media sites including those dedicated to specific illnesses, "liking" web pages about diseases, completing online health and symptom checkers, and donating to health causes. In addition, credit card payments for physician visit co-pays, purchase of over the counter (OTC) medications, home testing products, tobacco products, and visits to alternative practitioners are also not covered by HIPAA.
A 2015 study reported over 165,000 health apps available to consumers. Disease treatment and management account for nearly a quarter of consumer apps. Two-thirds of the apps target fitness and wellness, and ten percent of these apps can collect data from a device or sensor. Since the Food and Drug Administration (FDA) only regulates medical devices and most of these applications are not medical devices, they do not require FDA approval. The data from most apps are outside HIPAA regulations because they do not share data with healthcare providers. Patients may mistakenly assume that mobile apps are under the scope of HIPAA since the same data, such as heart rate, may be collected by an application that is accessible to their physician and covered by HIPAA, or on a mobile app that is not accessible to the physician and not covered by HIPAA.
Changes
In 2000, there was a new surge to add new regulations to HIPAA. It included the following goals: to protect individual medical information by providing secure access and control of their own information, to improve healthcare quality by creating more trust between consumers, their healthcare providers, and third party organizations, and to improve the efficiency of the medical system through new rules and regulations put forth by local governments, individuals, and organizations.
The implementation of these new goals was complicated by the change in administrations (from Clinton to Bush), making it difficult for the changes to be carried out successfully. HIPAA, in theory, should apply to all insurance companies, services, and organizations, yet there are exceptions to who actually qualifies under these categories.
Yet, within each category, there are specific restrictions that differ from category to category. There are no universal laws that can be easily applied and that organizations can easily follow. Thus, many states have neglected to implement these new policies. Additionally, there are new patient rights that call for better protection and disclosure of health information. However, like the new rules regarding insurance companies, the enforcement of the legislation is limited and not effective, as the rules are too broad and complex. Therefore, it is difficult for many organizations to ensure the privacy of these people. Enforcing these new requirements also causes companies to spend resources that they are not willing to use and enforce, which ultimately leads to further problems regarding the invasion of an individual's medical privacy.
Oregon-specific laws
The Oregon Genetic Privacy Act (GPA) states that “an individual’s genetic information is the property of the individual”. The idea of treating an individual's DNA as property arose when research caused individuals' privacy to be threatened. Many individuals believed that their genetic information was “more sensitive, personal, and potentially damaging than other types of medical information.” Thus, people started calling for more protections. People started to question how their DNA would be able to stay anonymous within research studies and argued that the identity of an individual could be exposed if the research was later shared. As a result, there was a call for individuals to treat their DNA as property and protect it through property rights. Therefore, individuals can control the disclosure of their information without extra questioning and research. Many people believed that comparing one's DNA to property was inappropriate, yet individuals argued that property and privacy are interconnected because they both seek to protect the right to control one's body.
Many research and pharmaceutical companies showed opposition because they were worried about conflicts that might arise regarding privacy issues within their work. Individuals, on the other hand, continued to support the act because they wanted protection over their own DNA. As a result, lawmakers created a compromise that included a property clause, which would give individuals protection rights, but also included provisions that would allow research to be done without much consent, limiting the benefits of the provisions. Afterwards, a committee was created to study the effects of the act and how it affected the way genetic information was analyzed and stored. The committee found that the act benefited many individuals who did not want their genetic information shared with others, and the law was therefore officially implemented in 2001.
Connecticut-specific laws
In order to solve HIPAA issues within Connecticut, the state legislature tried to create better provisions to protect the people living within the state. One of the issues that Connecticut tried to solve involved consent. Within the consent clause, health plans and health care clearinghouses do not need to receive consent from individuals because of a general provider consent form which gives healthcare providers permission to disclose all medical information. The patient thus does not receive notification when their information is shared afterwards.
Connecticut, like many other states, tried to protect individuals' information from disclosure through additional clauses that would protect them from business initiatives. In order to do so, the Connecticut legislature passed the Connecticut Insurance Information and Privacy Protection Act, which provides additional protections of individual medical information. If third parties neglect to follow this law, they will be fined, may face jail time, and may have their licenses suspended. Yet, even with these additional provisions, there were many holes within this legislation that allowed for business agreements to be denied and, subsequently, for information to be compromised. Connecticut is still working to shift its divergent purposes towards creating more stringent requirements that create better protections through clear provisions of certain policies.
California-specific laws
In California, the Confidentiality of Medical Information Act (CMIA), provides more stringent protections than the federal statutes. HIPAA expressly provides that more stringent state laws like CMIA, will override HIPAA's requirements and penalties. More specifically, CMIA prohibits providers, contractors and health care service plans from disclosing PHI without prior authorization.
These medical privacy laws also set a higher standard for health IT vendors or vendors of an individual's personal health record (PHR) by applying such statutes to vendors, even if they are not business associates of a covered entity. CMIA also outlines penalties for violating the law. These penalties range from liability to the patient (compensatory damages, punitive damages, attorneys’ fees, costs of litigation) to civil and even criminal liability.
Likewise, California's Insurance Information and Privacy Protection Act (IIPPA) protects against unauthorized disclosure of PHI by prohibiting unapproved information sharing for information collected from insurance applications and claims resolution.
New Zealand
In New Zealand, the Health Information Privacy Code (1994) sets specific rules for agencies in the health sector to better ensure the protection of individual privacy. The code addresses the health information collected, used, held and disclosed by health agencies. For the health sector, the code takes the place of the information privacy principles.
Netherlands
The introduction of a nationwide system for the exchange of medical information and access to electronic patient records led to much discussion in the Netherlands.
Privacy for research participants
In the course of having or being part of a medical practice, doctors may obtain information that they wish to share with the medical or research community. If this information is shared or published, the privacy of the patients must be respected. Likewise, participants in medical research that are outside the realm of direct patient care have a right to privacy as well.
See also
STD notifications in dating services
Electronic health record (EHR)
Electronic medical record (EMR)
Exemptions on the GDPR: national security
Genetic privacy
Modesty in medical settings
National Electronic Health Transition Authority (NEHTA)
Personal health record
Personally Controlled Electronic Health Record (PCEHR)
Protected health information
Intentional contagion of infection
References
Further reading
External links
European Standards on Confidentiality and Privacy in Healthcare
Opt out of the NHS Spine, or the NHS Confidentiality campaign
Electronic Frontier Foundation on medical privacy
Medical law
Data laws |
383972 | https://en.wikipedia.org/wiki/Eben%20Moglen | Eben Moglen | Eben Moglen (born 1959) is an American legal scholar who is professor of law and legal history at Columbia University, and is the founder, Director-Counsel and Chairman of Software Freedom Law Center.
Professional biography
Moglen started out as a computer programming language designer and then received his bachelor's degree from Swarthmore College in 1980. In 1985, he received a Master of Philosophy in history and a JD from Yale University. He has held visiting appointments at Harvard University, Tel Aviv University and the University of Virginia since 1987.
He was a law clerk to Justice Thurgood Marshall (1986–87 term). He joined the faculty of Columbia Law School in 1987, and was admitted to the New York bar in 1988. He received a Ph.D. in history from Yale University in 1993. Moglen serves as a director of the Public Patent Foundation.
Moglen was part of Philip Zimmermann's defense team, when Zimmermann was being investigated over the export of Pretty Good Privacy, a public key encryption system, under US export laws.
In 2003 he received the EFF Pioneer Award. In February 2005, he founded the Software Freedom Law Center.
Moglen was closely involved with the Free Software Foundation, serving as general counsel from 1994 to 2016 and as a board member from 2000 to 2007. As counsel, Moglen was tasked with enforcing the GNU General Public License (GPL) on behalf of the FSF, and later became heavily involved with drafting version 3 of the GPL. On April 23, 2007 he announced in a blog post that he would be stepping down from the board of directors of the Free Software Foundation. Moglen stated that after the GPLv3 Discussion Draft 3 had been released, he wanted to devote more time to writing, teaching, and the Software Freedom Law Center.
Freedom Box Foundation
In February 2011, Moglen created the Freedom Box Foundation to design software for a very small server called the FreedomBox. The FreedomBox aims to be an affordable personal server which runs only free software, with a focus on anonymous and secure communication. FreedomBox launched version 0.1 in 2012.
Stances on free software
Moglen says that free software is a fundamental requirement for a democratic and free society in which we are surrounded by and dependent upon technical devices. Only if controlling these devices is open to all via free software, can we balance power equally.
Moglen's Metaphorical Corollary to Faraday's Law is the idea that the information appearance and flow between the human minds connected via the Internet works like electromagnetic induction. Hence Moglen's phrase "Resist the resistance!" (i.e. remove anything that inhibits the flow of information).
Statements and perspectives
Moglen believes the idea of proprietary software is as ludicrous as having "proprietary mathematics" or "proprietary geometry." This would convert the subjects from "something you can learn" into "something you must buy", he has argued. He points out that software is among the "things which can be copied infinitely over and over again, without any further costs."
Moglen has criticized what he calls the "reification of selfishness." He has said, "A world full of computers which you can't understand, can't fix and can't use (because it is controlled by inaccessible proprietary software) is a world controlled by machines."
He has called on lawyers to help the Free Software movement, saying: "Those who want to share their code can make products and share their work without additional legal risks." He urged his legal colleagues, "It's worth giving up a little in order to produce a sounder ecology for all. Think kindly about the idea of sharing."
Moglen has criticized trends which result in "excluding people from knowledge." On the issue of Free Software versus proprietary software, he has argued that "much has been said by the few who stand to lose." Moglen calls for a "sensible respect for both the creators and users" of software code. In general, this concept is a part of what Moglen has termed a "revolution" against the privileged owners of media, distribution channels, and software. On March 13, 2009, in a speech given at Seattle University, Moglen said of the free software movement that, "'When everybody owns the press, then freedom of the press belongs to everybody' seems to be the inevitable inference, and that’s where we are moving, and when the publishers get used to that, they’ll become us, and we’ll become them, and the first amendment will mean: 'Congress shall make no law [...] abridging freedom of speech, or of the press [...].', not – as they have tended to argue in the course of the 20th century – 'Congress shall make no law infringing the sacred right of the Sulzbergers to be different.'"
On the subject of Digital Rights Management, Moglen once said, "We also live in a world in which the right to tinker is under some very substantial threat. This is said to be because movie and record companies must eat. I will concede that they must eat. Though, like me, they should eat less."
References
External links
Eben Moglen's webpage at Columbia University
Law clerks of the Supreme Court of the United States
Members of the Free Software Foundation board of directors
American legal scholars
GNU people
Columbia University faculty
Yale Law School alumni
Swarthmore College alumni
Harvard University staff
University of Virginia School of Law faculty
Copyright scholars
Copyright activists
American lawyers
American bloggers
Living people
Columbia Law School faculty
1959 births
Free software people
Articles containing video clips |
384064 | https://en.wikipedia.org/wiki/Health%20Insurance%20Portability%20and%20Accountability%20Act | Health Insurance Portability and Accountability Act | The Health Insurance Portability and Accountability Act of 1996 (HIPAA or the Kennedy–Kassebaum Act) is a United States federal statute enacted by the 104th United States Congress and signed into law by President Bill Clinton on August 21, 1996. It modernized the flow of healthcare information, stipulates how personally identifiable information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, and addressed some limitations on healthcare insurance coverage. It generally prohibits healthcare providers and healthcare businesses, called covered entities, from disclosing protected information to anyone other than a patient and the patient's authorized representatives without their consent. With limited exceptions, it does not restrict patients from receiving information about themselves. It does not prohibit patients from voluntarily sharing their health information however they choose, nor – if they disclose medical information to family members, friends, or other individuals not a part of a covered entity – legally require them to maintain confidentiality.
The act consists of five titles. Title I of HIPAA protects health insurance coverage for workers and their families when they change or lose their jobs. Title II of HIPAA, known as the Administrative Simplification (AS) provisions, requires the establishment of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Title III sets guidelines for pre-tax medical spending accounts, Title IV sets guidelines for group health plans, and Title V governs company-owned life insurance policies.
Titles
There are five sections to the act, known as titles.
Title I: Health Care Access, Portability, and Renewability
Title I of HIPAA regulates the availability and breadth of group health plans and certain individual health insurance policies. It amended the Employee Retirement Income Security Act, the Public Health Service Act, and the Internal Revenue Code. Furthermore, Title I addresses the issue of "job lock" which is the inability for an employee to leave their job because they would lose their health coverage. To combat the job lock issue, the Title protects health insurance coverage for workers and their families if they lose or change their jobs.
Title I requires the coverage of and also limits restrictions that a group health plan can place on benefits for preexisting conditions. Group health plans may refuse to provide benefits in relation to preexisting conditions for either 12 months following enrollment in the plan or 18 months in the case of late enrollment. Title I allows individuals to reduce the exclusion period by the amount of time that they have had "creditable coverage" before enrolling in the plan and after any "significant breaks" in coverage. "Creditable coverage" is defined quite broadly and includes nearly all group and individual health plans, Medicare, and Medicaid. A "significant break" in coverage is defined as any 63-day period without any creditable coverage. There is also an exception that allows employers to tie premiums or co-payments to tobacco use or body mass index.
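As a rough illustration of the offset rule just described, the following Python sketch computes the remaining exclusion period; the function name and figures are hypothetical and the sketch is not legal guidance.
def remaining_exclusion(exclusion_months: int, creditable_months: int) -> int:
    # The plan's exclusion period (12 months, or 18 for late enrollment) is reduced
    # by prior creditable coverage, counted only if it was not followed by a
    # significant break of 63 or more days.
    return max(0, exclusion_months - creditable_months)

print(remaining_exclusion(12, 8))    # a regular enrollee with 8 creditable months: 4 months remain
print(remaining_exclusion(18, 18))   # a late enrollee with 18 creditable months: no exclusion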
Title I also requires insurers to issue policies without exclusion to those leaving group health plans with creditable coverage (see above) exceeding 18 months, and renew individual policies for as long as they are offered or provide alternatives to discontinued plans for as long as the insurer stays in the market without exclusion regardless of health condition.
Some health care plans are exempted from Title I requirements, such as long-term health plans and limited-scope plans like dental or vision plans offered separately from the general health plan. However, if such benefits are part of the general health plan, then HIPAA still applies to such benefits. For example, if the new plan offers dental benefits, then it must count creditable continuous coverage under the old health plan towards any of its exclusion periods for dental benefits.
An alternate method of calculating creditable continuous coverage is available to the health plan under Title I. That is, 5 categories of health coverage can be considered separately, including dental and vision coverage. Anything not under those 5 categories must use the general calculation (e.g., the beneficiary may be counted with 18 months of general coverage, but only 6 months of dental coverage, because the beneficiary did not have a general health plan that covered dental until 6 months prior to the application date). Since limited-coverage plans are exempt from HIPAA requirements, the odd case exists in which the applicant to a general group health plan cannot obtain certificates of creditable continuous coverage for independent limited-scope plans, such as dental to apply towards exclusion periods of the new plan that does include those coverages.
Hidden exclusion periods are not valid under Title I (e.g., "The accident, to be covered, must have occurred while the beneficiary was covered under this exact same health insurance contract"). Such clauses must not be acted upon by the health plan. Also, they must be re-written so they can comply with HIPAA.
Title II: Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform
Title II of HIPAA establishes policies and procedures for maintaining the privacy and the security of individually identifiable health information, outlines numerous offenses relating to health care, and establishes civil and criminal penalties for violations. It also creates several programs to control fraud and abuse within the health-care system. However, the most significant provisions of Title II are its Administrative Simplification rules. Title II requires the Department of Health and Human Services (HHS) to increase the efficiency of the health-care system by creating standards for the use and dissemination of health-care information.
These rules apply to "covered entities", as defined by HIPAA and the HHS. Covered entities include health plans, health care clearinghouses (such as billing services and community health information systems), and health care providers that transmit health care data in a way regulated by HIPAA.
Per the requirements of Title II, the HHS has promulgated five rules regarding Administrative Simplification: the Privacy Rule, the Transactions and Code Sets Rule, the Security Rule, the Unique Identifiers Rule, and the Enforcement Rule.
Privacy Rule
The HIPAA Privacy Rule is composed of national regulations for the use and disclosure of Protected Health Information (PHI) in healthcare treatment, payment and operations by covered entities.
The effective compliance date of the Privacy Rule was April 14, 2003, with a one-year extension for certain "small plans". The HIPAA Privacy Rule regulates the use and disclosure of protected health information (PHI) held by "covered entities" (generally, health care clearinghouses, employer-sponsored health plans, health insurers, and medical service providers that engage in certain transactions). By regulation, the HHS extended the HIPAA privacy rule to independent contractors of covered entities who fit within the definition of "business associates". PHI is any information that is held by a covered entity regarding health status, provision of health care, or health care payment that can be linked to any individual. This is interpreted rather broadly and includes any part of an individual's medical record or payment history. Covered entities must disclose PHI to the individual within 30 days upon request. Also, they must disclose PHI when required to do so by law such as reporting suspected child abuse to state child welfare agencies.
Covered entities may disclose protected health information to law enforcement officials for law enforcement purposes as required by law (including court orders, court-ordered warrants, subpoenas) and administrative requests; or to identify or locate a suspect, a fugitive, a material witness, or a missing person.
A covered entity may disclose PHI to certain parties to facilitate treatment, payment, or health care operations without a patient's express written authorization. Any other disclosures of PHI require the covered entity to obtain written authorization from the individual for the disclosure. In any case, when a covered entity discloses any PHI, it must make a reasonable effort to disclose only the minimum necessary information required to achieve its purpose.
The Privacy Rule gives individuals the right to request a covered entity to correct any inaccurate PHI. It also requires covered entities to take reasonable steps to ensure the confidentiality of communications with individuals. For example, an individual can ask to be called at their work number instead of home or cell phone numbers.
The Privacy Rule requires covered entities to notify individuals of uses of their PHI. Covered entities must also keep track of disclosures of PHI and document privacy policies and procedures. They must appoint a Privacy Official and a contact person responsible for receiving complaints and train all members of their workforce in procedures regarding PHI.
An individual who believes that the Privacy Rule is not being upheld can file a complaint with the Department of Health and Human Services Office for Civil Rights (OCR). In 2006 the Wall Street Journal reported that the OCR had a long backlog and ignores most complaints. "Complaints of privacy violations have been piling up at the Department of Health and Human Services. Between April of 2003 and November 2006, the agency fielded 23,886 complaints related to medical-privacy rules, but it has not yet taken any enforcement actions against hospitals, doctors, insurers or anyone else for rule violations. A spokesman for the agency says it has closed three-quarters of the complaints, typically because it found no violation or after it provided informal guidance to the parties involved." However, in July 2011, the University of California, Los Angeles agreed to pay $865,500 in a settlement regarding potential HIPAA violations. An HHS Office for Civil Rights investigation showed that from 2005 to 2008, unauthorized employees repeatedly and without legitimate cause looked at the electronic protected health information of numerous UCLAHS patients.
It is a misconception that the Privacy Rule creates a right for any individual to refuse to disclose any health information (such as chronic conditions or immunization records) if requested by an employer or business. HIPAA Privacy Rule requirements merely place restrictions on disclosure by covered entities and their business associates without the consent of the individual whose records are being requested; they do not place any restrictions upon requesting health information directly from the subject of that information.
2013 Final Omnibus Rule Update
In January 2013, HIPAA was updated via the Final Omnibus Rule. The updates included changes to the Security Rule and Breach Notification portions of the HITECH Act. The most significant changes related to the expansion of requirements to include business associates, where only covered entities had originally been held to uphold these sections of the law.
In addition, the definition of "significant harm" to an individual in the analysis of a breach was updated to provide more scrutiny to covered entities with the intent of disclosing breaches that previously were unreported. Previously, an organization needed proof that harm had occurred whereas now organizations must prove that harm had not occurred.
Protection of PHI was changed from indefinite to 50 years after death. More severe penalties for violation of PHI privacy requirements were also approved.
The HIPAA Privacy Rule may be waived during a natural disaster, as was the case with Hurricane Harvey in 2017.
HITECH Act: Privacy Requirements
See the Privacy section of the Health Information Technology for Economic and Clinical Health Act (HITECH Act).
Right to access one's PHI
The Privacy Rule requires medical providers to give individuals access to their PHI. After an individual requests information in writing (typically using the provider's form for this purpose), a provider has up to 30 days to provide a copy of the information to the individual. An individual may request the information in electronic form or hard-copy, and the provider is obligated to attempt to conform to the requested format. For providers using an electronic health record (EHR) system that is certified using CEHRT (Certified Electronic Health Record Technology) criteria, individuals must be allowed to obtain the PHI in electronic form. Providers are encouraged to provide the information expediently, especially in the case of electronic record requests.
Individuals have the broad right to access their health-related information, including medical records, notes, images, lab results, and insurance and billing information. Explicitly excluded are the private psychotherapy notes of a provider, and information gathered by a provider to defend against a lawsuit.
Providers can charge a reasonable amount that relates to their cost of providing the copy, however, no charge is allowable when providing data electronically from a certified EHR using the "view, download, and transfer" feature which is required for certification. When delivered to the individual in electronic form, the individual may authorize delivery using either encrypted or unencrypted email, delivery using media (USB drive, CD, etc., which may involve a charge), direct messaging (a secure email technology in common use in the healthcare industry), or possibly other methods. When using un-encrypted email, the individual must understand and accept the risks to privacy using this technology (the information may be intercepted and examined by others). Regardless of delivery technology, a provider must continue to fully secure the PHI while in their system and can deny the delivery method if it poses additional risk to PHI while in their system.
An individual may also request (in writing) that their PHI is delivered to a designated third party such as a family care provider.
An individual may also request (in writing) that the provider send PHI to a designated service used to collect or manage their records, such as a Personal Health Record application. For example, a patient can request in writing that her ob-gyn provider digitally transmit records of her latest pre-natal visit to a pregnancy self-care app that she has on her mobile phone.
Disclosure to relatives
According to their interpretations of HIPAA, hospitals will not reveal information over the phone to relatives of admitted patients. This has in some instances impeded the location of missing persons. After the Asiana Airlines Flight 214 San Francisco crash, some hospitals were reluctant to disclose the identities of passengers that they were treating, making it difficult for Asiana and the relatives to locate them. In one instance, a man in Washington state was unable to obtain information about his injured mother.
Janlori Goldman, director of the advocacy group Health Privacy Project, said that some hospitals are being "overcautious" and misapplying the law, the Times reports. Suburban Hospital in Bethesda, Md., has interpreted a federal regulation that requires hospitals to allow patients to opt out of being included in the hospital directory as meaning that patients want to be kept out of the directory unless they specifically say otherwise. As a result, if a patient is unconscious or otherwise unable to choose to be included in the directory, relatives and friends might not be able to find them, Goldman said.
Transactions and Code Sets Rule
HIPAA was intended to make the health care system in the United States more efficient by standardizing health care transactions. HIPAA added a new Part C titled "Administrative Simplification" to Title XI of the Social Security Act. This is supposed to simplify healthcare transactions by requiring all health plans to engage in health care transactions in a standardized way.
The HIPAA/EDI (electronic data interchange) provision was scheduled to take effect from October 16, 2003, with a one-year extension for certain "small plans". However, due to widespread confusion and difficulty in implementing the rule, CMS granted a one-year extension to all parties. On January 1, 2012, newer versions, ASC X12 005010 and NCPDP D.0, became effective, replacing the previous ASC X12 004010 and NCPDP 5.1 mandate. The ASC X12 005010 version provides a mechanism allowing the use of ICD-10-CM as well as other improvements.
After July 1, 2005 most medical providers that file electronically had to file their electronic claims using the HIPAA standards in order to be paid.
Under HIPAA, HIPAA-covered health plans are now required to use standardized HIPAA electronic transactions. See, 42 USC § 1320d-2 and 45 CFR Part 162. Information about this can be found in the final rule for HIPAA electronic transaction standards (74 Fed. Reg. 3296, published in the Federal Register on January 16, 2009), and on the CMS website.
Key EDI (X12) transactions used for HIPAA compliance are:
EDI Health Care Claim Transaction set (837) is used to submit health care claim billing information, encounter information, or both, except for retail pharmacy claims (see EDI Retail Pharmacy Claim Transaction). It can be sent from providers of health care services to payers, either directly or via intermediary billers and claims clearinghouses. It can also be used to transmit health care claims and billing payment information between payers with different payment responsibilities where coordination of benefits is required or between payers and regulatory agencies to monitor the rendering, billing, and/or payment of health care services within a specific health care/insurance industry segment.
For example, a state mental health agency may mandate that all healthcare claims be exchanged electronically; providers and health plans that trade professional (medical) health care claims electronically must then use the 837 Health Care Claim: Professional standard to send in claims. Because there are many different business applications for the health care claim, there can be slight variations to cover claims with unique requirements, such as those for institutions, professionals, chiropractors, and dentists.
EDI Retail Pharmacy Claim Transaction (NCPDP Telecommunications Standard version 5.1) is used to submit retail pharmacy claims to payers by health care professionals who dispense medications, either directly or via intermediary billers and claims clearinghouses. It can also be used to transmit claims for retail pharmacy services and billing payment information between payers with different payment responsibilities where coordination of benefits is required or between payers and regulatory agencies to monitor the rendering, billing, and/or payment of retail pharmacy services within the pharmacy health care/insurance industry segment.
EDI Health Care Claim Payment/Advice Transaction Set (835) can be used to make a payment, send an Explanation of Benefits (EOB), send an Explanation of Payments (EOP) remittance advice, or make a payment and send an EOP remittance advice only from a health insurer to a health care provider either directly or via a financial institution.
EDI Benefit Enrollment and Maintenance Set (834) can be used by employers, unions, government agencies, associations or insurance agencies to enroll members to a payer. The payer is a healthcare organization that pays claims, administers insurance or benefit or product. Examples of payers include an insurance company, healthcare professional (HMO), preferred provider organization (PPO), government agency (Medicaid, Medicare etc.) or any organization that may be contracted by one of these former groups.
EDI Payroll Deducted and Other Group Premium Payment for Insurance Products (820) is a transaction set for making a premium payment for insurance products. It can be used to order a financial institution to make a payment to a payee.
EDI Health Care Eligibility/Benefit Inquiry (270) is used to inquire about the health care benefits and eligibility associated with a subscriber or dependent.
EDI Health Care Eligibility/Benefit Response (271) is used to respond to a request inquiry about the health care benefits and eligibility associated with a subscriber or dependent.
EDI Health Care Claim Status Request (276) This transaction set can be used by a provider, recipient of health care products or services or their authorized agent to request the status of a health care claim.
EDI Health Care Claim Status Notification (277) This transaction set can be used by a healthcare payer or authorized agent to notify a provider, recipient or authorized agent regarding the status of a health care claim or encounter, or to request additional information from the provider regarding a health care claim or encounter. This transaction set is not intended to replace the Health Care Claim Payment/Advice Transaction Set (835) and therefore, is not used for account payment posting. The notification is at a summary or service line detail level. The notification may be solicited or unsolicited.
EDI Health Care Service Review Information (278) This transaction set can be used to transmit health care service information, such as subscriber, patient, demographic, diagnosis or treatment data for the purpose of the request for review, certification, notification or reporting the outcome of a health care services review.
EDI Functional Acknowledgement Transaction Set (997) This transaction set can be used to define the control structures for a set of acknowledgments to indicate the results of the syntactical analysis of the electronically encoded documents. Although it is not specifically named in the HIPAA Legislation or Final Rule, it is necessary for X12 transaction set processing. The encoded documents are the transaction sets, which are grouped in functional groups, used in defining transactions for business data interchange. This standard does not cover the semantic meaning of the information encoded in the transaction sets.
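For quick reference, the transaction sets listed above can be kept in a simple lookup table; the following Python sketch is purely illustrative and is not part of any HIPAA or X12 specification.
HIPAA_TRANSACTION_SETS = {
    "837": "Health Care Claim",
    "835": "Health Care Claim Payment/Advice",
    "834": "Benefit Enrollment and Maintenance",
    "820": "Payroll Deducted and Other Group Premium Payment for Insurance Products",
    "270": "Health Care Eligibility/Benefit Inquiry",
    "271": "Health Care Eligibility/Benefit Response",
    "276": "Health Care Claim Status Request",
    "277": "Health Care Claim Status Notification",
    "278": "Health Care Service Review Information",
    "997": "Functional Acknowledgement (replaced by the 999 under version 5010)",
}

def describe_transaction(code: str) -> str:
    # Return the human-readable name for an X12 transaction set code.
    return HIPAA_TRANSACTION_SETS.get(code, "unknown transaction set")

print(describe_transaction("270"))   # Health Care Eligibility/Benefit Inquiry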
Brief 5010 Transactions and Code Sets Rules Update Summary
Transaction Set (997) will be replaced by Transaction Set (999) "acknowledgment report".
The size of many fields (segment elements) will be expanded, causing a need for all IT providers to expand corresponding fields, elements, files, GUIs, paper media, and databases.
Some segments have been removed from existing Transaction Sets.
Many segments have been added to existing Transaction Sets allowing greater tracking and reporting of cost and patient encounters.
Capacity to use both "International Classification of Diseases" versions 9 (ICD-9) and 10 (ICD-10-CM) has been added.
Security Rule
The Final Rule on Security Standards was issued on February 20, 2003. It took effect on April 21, 2003, with a compliance date of April 21, 2005, for most covered entities and April 21, 2006, for "small plans". The Security Rule complements the Privacy Rule. While the Privacy Rule pertains to all Protected Health Information (PHI) including paper and electronic, the Security Rule deals specifically with Electronic Protected Health Information (EPHI). It lays out three types of security safeguards required for compliance: administrative, physical, and technical. For each of these types, the Rule identifies various security standards, and for each standard, it names both required and addressable implementation specifications. Required specifications must be adopted and administered as dictated by the Rule. Addressable specifications are more flexible. Individual covered entities can evaluate their own situation and determine the best way to implement addressable specifications. Some privacy advocates have argued that this "flexibility" may provide too much latitude to covered entities. Software tools have been developed to assist covered entities in the risk analysis and remediation tracking. The standards and specifications are as follows:
Administrative Safeguards – policies and procedures designed to clearly show how the entity will comply with the act
Covered entities (entities that must comply with HIPAA requirements) must adopt a written set of privacy procedures and designate a privacy officer to be responsible for developing and implementing all required policies and procedures.
The policies and procedures must reference management oversight and organizational buy-in to compliance with the documented security controls.
Procedures should clearly identify employees or classes of employees who have access to electronic protected health information (EPHI). Access to EPHI must be restricted to only those employees who have a need for it to complete their job function.
The procedures must address access authorization, establishment, modification, and termination.
Entities must show that an appropriate ongoing training program regarding the handling of PHI is provided to employees performing health plan administrative functions.
Covered entities that out-source some of their business processes to a third party must ensure that their vendors also have a framework in place to comply with HIPAA requirements. Companies typically gain this assurance through clauses in the contracts stating that the vendor will meet the same data protection requirements that apply to the covered entity. Care must be taken to determine if the vendor further out-sources any data handling functions to other vendors and monitor whether appropriate contracts and controls are in place.
A contingency plan should be in place for responding to emergencies. Covered entities are responsible for backing up their data and having disaster recovery procedures in place. The plan should document data priority and failure analysis, testing activities, and change control procedures.
Internal audits play a key role in HIPAA compliance by reviewing operations with the goal of identifying potential security violations. Policies and procedures should specifically document the scope, frequency, and procedures of audits. Audits should be both routine and event-based.
Procedures should document instructions for addressing and responding to security breaches that are identified either during the audit or the normal course of operations.
Physical Safeguards – controlling physical access to protect against inappropriate access to protected data
Controls must govern the introduction and removal of hardware and software from the network. (When equipment is retired it must be disposed of properly to ensure that PHI is not compromised.)
Access to equipment containing health information should be carefully controlled and monitored.
Access to hardware and software must be limited to properly authorized individuals.
Required access controls consist of facility security plans, maintenance records, and visitor sign-in and escorts.
Policies are required to address proper workstation use. Workstations should be removed from high traffic areas and monitor screens should not be in direct view of the public.
If the covered entities utilize contractors or agents, they too must be fully trained on their physical access responsibilities.
Technical Safeguards – controlling access to computer systems and enabling covered entities to protect communications containing PHI transmitted electronically over open networks from being intercepted by anyone other than the intended recipient.
Information systems housing PHI must be protected from intrusion. When information flows over open networks, some form of encryption must be utilized. If closed systems/networks are utilized, existing access controls are considered sufficient and encryption is optional.
Each covered entity is responsible for ensuring that the data within its systems has not been changed or erased in an unauthorized manner.
Data corroboration, including the use of checksums, double-keying, message authentication, and digital signatures, may be used to ensure data integrity (a minimal example appears after this list).
Covered entities must also authenticate entities with which they communicate. Authentication consists of corroborating that an entity is who it claims to be. Examples of corroboration include password systems, two or three-way handshakes, telephone callback, and token systems.
Covered entities must make documentation of their HIPAA practices available to the government to determine compliance.
In addition to policies and procedures and access records, information technology documentation should also include a written record of all configuration settings on the components of the network because these components are complex, configurable, and always changing.
Documented risk analysis and risk management programs are required. Covered entities must carefully consider the risks of their operations as they implement systems to comply with the act. (The requirement of risk analysis and risk management implies that the act's security requirements are a minimum standard and places responsibility on covered entities to take all reasonable precautions necessary to prevent PHI from being used for non-health purposes.)
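The data corroboration item above can be illustrated with a message authentication code. The sketch below uses Python's standard hmac module; the record contents and key are hypothetical, and in practice the key would be generated and stored according to the organization's key-management policy.
import hashlib
import hmac

def integrity_tag(record: bytes, key: bytes) -> str:
    # An HMAC-SHA-256 tag stored or transmitted alongside the record lets the
    # receiver detect unauthorized modification of the EPHI.
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def record_is_intact(record: bytes, key: bytes, tag: str) -> bool:
    return hmac.compare_digest(integrity_tag(record, key), tag)

key = b"randomly-generated-shared-secret"          # hypothetical key material
record = b"patient=12345;test=HbA1c;result=5.4"    # hypothetical EPHI record
tag = integrity_tag(record, key)
assert record_is_intact(record, key, tag)
assert not record_is_intact(record + b"!", key, tag)   # any change invalidates the tag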
Unique Identifiers Rule (National Provider Identifier)
HIPAA covered entities such as providers completing electronic transactions, healthcare clearinghouses, and large health plans must use only the National Provider Identifier (NPI) to identify covered healthcare providers in standard transactions by May 23, 2007. Small health plans must use only the NPI by May 23, 2008.
Effective from May 2006 (May 2007 for small health plans), all covered entities using electronic communications (e.g., physicians, hospitals, health insurance companies, and so forth) must use a single new NPI. The NPI replaces all other identifiers used by health plans, Medicare, Medicaid, and other government programs. However, the NPI does not replace a provider's DEA number, state license number, or tax identification number. The NPI is 10 digits (may be alphanumeric), with the last digit being a checksum. The NPI cannot contain any embedded intelligence; in other words, the NPI is simply a number that does not itself have any additional meaning. The NPI is unique and national, never re-used, and except for institutions, a provider usually can have only one. An institution may obtain multiple NPIs for different "sub-parts" such as a free-standing cancer center or rehab facility.
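The NPI check digit can be validated with the Luhn formula applied to the 9-digit base number prefixed with the constant 80840 (the card-issuer prefix assigned to US health applications). The following Python sketch assumes a purely numeric NPI; the sample values are test numbers, not real identifiers.
def luhn_valid(digits: str) -> bool:
    total = 0
    for position, ch in enumerate(reversed(digits)):
        d = int(ch)
        if position % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def npi_valid(npi: str) -> bool:
    # Prefix the NPI with "80840" before applying the standard Luhn check.
    return len(npi) == 10 and npi.isdigit() and luhn_valid("80840" + npi)

print(npi_valid("1234567893"))   # True  – a commonly used test value
print(npi_valid("1234567890"))   # False – check digit does not match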
Enforcement Rule
On February 16, 2006, HHS issued the Final Rule regarding HIPAA enforcement. It became effective on March 16, 2006. The Enforcement Rule sets civil money penalties for violating HIPAA rules and establishes procedures for investigations and hearings for HIPAA violations. For many years there were few prosecutions for violations.
As of March 2013, the U.S. Dept. of Health and Human Services (HHS) has investigated over 19,306 cases that have been resolved by requiring changes in privacy practice or by corrective action. If noncompliance is determined by HHS, entities must apply corrective measures. Complaints have been investigated against many different types of businesses such as national pharmacy chains, major health care centers, insurance groups, hospital chains and other small providers. There were 9,146 cases where the HHS investigation found that HIPAA was followed correctly. There were 44,118 cases that HHS did not find eligible cause for enforcement; for example, a violation that started before HIPAA started; cases withdrawn by the pursuer; or an activity that does not actually violate the Rules. According to the HHS website, the following lists the issues that have been reported according to frequency:
Misuse and disclosures of PHI
No protection in place of health information
Patient unable to access their health information
Using or disclosing more than the minimum necessary protected health information
No safeguards of electronic protected health information.
The most common entities required to take corrective action to be in voluntary compliance according to HHS are listed by frequency:
Private Practices
Hospitals
Outpatient Facilities
Group plans such as insurance groups
Pharmacies
Title III: Tax-related health provisions governing medical savings accounts
Title III standardizes the amount that may be saved per person in a pre-tax medical savings account. Beginning in 1997, medical savings accounts ("MSAs") are available to employees covered under an employer-sponsored high-deductible plan of a small employer and to self-employed individuals.
Title IV: Application and enforcement of group health insurance requirements
Title IV specifies conditions for group health plans regarding coverage of persons with pre-existing conditions, and modifies continuation of coverage requirements. It also clarifies continuation coverage requirements and includes COBRA clarification.
Title V: Revenue offset governing tax deductions for employers
Title V includes provisions related to company-owned life insurance for employers providing company-owned life insurance premiums, prohibiting the tax-deduction of interest on life insurance loans, company endowments, or contracts related to the company. It also repeals the financial institution rule to interest allocation rules. Finally, it amends provisions of law relating to people who give up United States citizenship or permanent residence, expanding the expatriation tax to be assessed against those deemed to be giving up their U.S. status for tax reasons, and making ex-citizens' names part of the public record through the creation of the Quarterly Publication of Individuals Who Have Chosen to Expatriate.
Effects on research and clinical care
The enactment of the Privacy and Security Rules has caused major changes in the way physicians and medical centers operate. The complex legalities and potentially stiff penalties associated with HIPAA, as well as the increase in paperwork and the cost of its implementation, were causes for concern among physicians and medical centers. An August 2006 article in the journal Annals of Internal Medicine detailed some such concerns over the implementation and effects of HIPAA.
Effects on research
HIPAA restrictions on researchers have affected their ability to perform retrospective, chart-based research as well as their ability to prospectively evaluate patients by contacting them for follow-up. A study from the University of Michigan demonstrated that implementation of the HIPAA Privacy rule resulted in a drop from 96% to 34% in the proportion of follow-up surveys completed by study patients being followed after a heart attack. Another study, detailing the effects of HIPAA on recruitment for a study on cancer prevention, demonstrated that HIPAA-mandated changes led to a 73% decrease in patient accrual, a tripling of time spent recruiting patients, and a tripling of mean recruitment costs.
In addition, informed consent forms for research studies now are required to include extensive detail on how the participant's protected health information will be kept private. While such information is important, the addition of a lengthy, legalistic section on privacy may make these already complex documents even less user-friendly for patients who are asked to read and sign them.
These data suggest that the HIPAA privacy rule, as currently implemented, may be having negative impacts on the cost and quality of medical research. Dr. Kim Eagle, professor of internal medicine at the University of Michigan, was quoted in the Annals article as saying, "Privacy is important, but research is also important for improving care. We hope that we will figure this out and do it right."
Effects on clinical care
The complexity of HIPAA, combined with potentially stiff penalties for violators, can lead physicians and medical centers to withhold information from those who may have a right to it. A review of the implementation of the HIPAA Privacy Rule by the U.S. Government Accountability Office found that health care providers were "uncertain about their legal privacy responsibilities and often responded with an overly guarded approach to disclosing information ... than necessary to ensure compliance with the Privacy rule". Reports of this uncertainty continue.
Costs of implementation
In the period immediately prior to the enactment of the HIPAA Privacy and Security Acts, medical centers and medical practices were charged with getting "into compliance". With an early emphasis on the potentially severe penalties associated with violation, many practices and centers turned to private, for-profit "HIPAA consultants" who were intimately familiar with the details of the legislation and offered their services to ensure that physicians and medical centers were fully "in compliance". In addition to the costs of developing and revamping systems and practices, the increase in paperwork and staff time necessary to meet the legal requirements of HIPAA may impact the finances of medical centers and practices at a time when insurance companies' and Medicare reimbursement is also declining.
Education and training
Education and training of healthcare providers is a requirement for correct implementation of both the HIPAA Privacy Rule and Security Rule.
Although the acronym HIPAA matches the title of the 1996 Public Law 104-191, Health Insurance Portability and Accountability Act, HIPAA is sometimes incorrectly referred to as "Health Information Privacy and Portability Act (HIPPA)."
Violations
According to the US Department of Health and Human Services Office for Civil Rights, between April 2003 and January 2013, it received 91,000 complaints of HIPAA violations, in which 22,000 led to enforcement actions of varying kinds (from settlements to fines) and 521 led to referrals to the US Department of Justice as criminal actions. Examples of significant breaches of protected information and other HIPAA violations include:
The largest loss of data that affected 4.9 million people by Tricare Management of Virginia in 2011
The largest fines of $5.5 million levied against Memorial Healthcare Systems in 2017 for accessing confidential information of 115,143 patients and of $4.3 million levied against Cignet Health of Maryland in 2010 for ignoring patients' requests to obtain copies of their own records and repeated ignoring of federal officials' inquiries
The first criminal indictment was lodged in 2011 against a Virginia physician who shared information with a patient's employer "under the false pretenses that the patient was a serious and imminent threat to the safety of the public, when in fact he knew that the patient was not such a threat."
According to Koczkodaj et al., 2018, the total number of individuals affected since October 2009 is 173,398,820.
The differences between civil and criminal penalties are summarized in the following table:
Legislative information
In 1994, President Clinton had ambitions to renovate the state of the nation's health care. Despite his efforts to revamp the system, he did not receive the support he needed at the time. The Congressional Quarterly Almanac of 1996 explains how two senators, Nancy Kassebaum (R-KS) and Edward Kennedy (D-MA), came together and created a bill called the Health Insurance Reform Act of 1995, more commonly known as the Kassebaum-Kennedy Bill. This bill stalled despite making it out of the Senate. Undeterred by this, Clinton pushed harder for his ambitions, and eventually in 1996, after the State of the Union address, there was some headway as it resulted in bipartisan cooperation. After much debate and negotiation, there was a shift in momentum once a “compromise between Kennedy and Ways and Means Committee Chairman Bill Archer” was accepted after alterations were made to the original Kassebaum-Kennedy Bill. Soon after this, the bill was signed into law by President Clinton and was named the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
H. Rept. 104-469, part 1; H. Rept. 104-736
S. Rept. 104-156
HHS Security Standards
HHS Standards for Privacy of Individually Identifiable Health Information
References
External links
California Office of HIPAA Implementation (CalOHI)
"HIPAA", Centers for Medicare and Medicaid Services
Congressional Research Service (CRS) reports regarding HIPAA, University of North Texas Libraries
Full text of the Health Insurance Portability and Accountability Act (PDF/TXT) U.S. Government Printing Office
Office for Civil Rights page on HIPAA
Acts of the 104th United States Congress
Data erasure
Insurance legislation
Medical privacy legislation
Medicare and Medicaid (United States)
Privacy law in the United States
Security compliance
United States federal health legislation
United States federal privacy legislation |
384081 | https://en.wikipedia.org/wiki/Health%20Level%207 | Health Level 7 | Health Level Seven or HL7 refers to a set of international standards for transfer of clinical and administrative data between software applications used by various healthcare providers. These standards focus on the application layer, which is "layer 7" in the OSI model. The HL7 standards are produced by Health Level Seven International, an international standards organization, and are adopted by other standards issuing bodies such as American National Standards Institute and International Organization for Standardization.
Hospitals and other healthcare provider organizations typically have many different computer systems used for everything from billing records to patient tracking. All of these systems should communicate with each other (or "interface") when they receive new information, or when they wish to retrieve information, but not all do so.
HL7 International specifies a number of flexible standards, guidelines, and methodologies by which various healthcare systems can communicate with each other. Such guidelines or data standards are a set of rules that allow information to be shared and processed in a uniform and consistent manner. These data standards are meant to allow healthcare organizations to easily share clinical information. Theoretically, this ability to exchange information should help to minimize the tendency for medical care to be geographically isolated and highly variable.
HL7 International considers the following standards to be its primary standards – those standards that are most commonly used and implemented:
Version 2.x Messaging Standard – an interoperability specification for health and medical transactions
Version 3 Messaging Standard – an interoperability specification for health and medical transactions
Clinical Document Architecture (CDA) – an exchange model for clinical documents, based on HL7 Version 3
Continuity of Care Document (CCD) – a US specification for the exchange of medical summaries, based on CDA.
Structured Product Labeling (SPL) – the published information that accompanies a medicine, based on HL7 Version 3
Clinical Context Object Workgroup (CCOW) – an interoperability specification for the visual integration of user applications
Other HL7 standards/methodologies include:
Fast Healthcare Interoperability Resources (FHIR) – a standard for the exchange of resources
Arden Syntax – a grammar for representing medical conditions and recommendations as a Medical Logic Module (MLM)
Claims Attachments – a Standard Healthcare Attachment to augment another healthcare transaction
Functional Specification of Electronic Health Record (EHR) / Personal Health Record (PHR) systems – a standardized description of health and medical functions sought for or available in such software applications
GELLO – a standard expression language used for clinical decision support
Primary standards
HL7's primary standards are those standards that Health Level Seven International considers to be most commonly used and implemented.
Version 2 messaging
The HL7 version 2 standard (also known as Pipehat) has the aim to support hospital workflows. It was originally created in 1989.
HL7 version 2 defines a series of electronic messages to support administrative, logistical, financial as well as clinical processes. Since 1987 the standard has been updated regularly, resulting in versions 2.1, 2.2, 2.3, 2.3.1, 2.4, 2.5, 2.5.1, 2.6, 2.7, 2.7.1, 2.8, 2.8.1 and 2.8.2. The v2.x standards are backward compatible (e.g., a message based on version 2.3 will be understood by an application that supports version 2.6).
HL7 v2.x messages use a non-XML encoding syntax based on segments (lines) and one-character delimiters. Segments have composites (fields) separated by the composite delimiter. A composite can have sub-composites (components) separated by the sub-composite delimiter, and sub-composites can have sub-sub-composites (subcomponents) separated by the sub-sub-composite delimiter. The default delimiters are carriage return for the segment separator, vertical bar or pipe (|) for the field separator, caret (^) for the component separator, ampersand (&) for the subcomponent separator, and number sign (#) for the default truncation separator. The tilde (~) is the default repetition separator. Each segment starts with a 3-character string that identifies the segment type. Each segment of the message contains one specific category of information. Every message has MSH as its first segment, which includes a field that identifies the message type. The message type determines the expected segment types in the message. The segment types used in a particular message type are specified by the segment grammar notation used in the HL7 standards.
The following is an example of an admission message. MSH is the header segment, PID the Patient Identification segment, PV1 the Patient Visit information, etc. The 5th field in the PID segment is the patient's name, in the order family name, given name, second name (or their initials), suffix, etc. Depending on the HL7 V2.x standard version, more fields are available in the segment for additional patient information.
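A minimal, hypothetical ADT^A01 admission message and a simplified Python parser illustrate the delimiter structure described above; all application names, identifiers, and patient details below are invented for the example.
SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|ADT_APP|GOOD_HOSP|LAB_APP|GOOD_HOSP|20240101120000||ADT^A01|MSG00001|P|2.3",
    "EVN|A01|20240101120000",
    "PID|1||123456^^^GOOD_HOSP^MR||DOE^JOHN^A||19700101|M",
    "PV1|1|I|2000^2012^01||||004777^ATTEND^AMY",
])

def parse_hl7_v2(message: str) -> dict:
    # Segments are separated by carriage returns, fields by '|', components by '^'.
    # MSH-1 is the field separator itself; this simplified parser does not
    # special-case that quirk, nor does it handle repetition or escape sequences.
    segments = {}
    for line in message.split("\r"):
        if not line:
            continue
        fields = [field.split("^") for field in line.split("|")]
        segments.setdefault(fields[0][0], []).append(fields)
    return segments

parsed = parse_hl7_v2(SAMPLE_ADT)
family_name, given_name = parsed["PID"][0][5][:2]   # PID-5: patient name
print(family_name, given_name)                      # DOE JOHN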
HL7 v2.x has allowed for the interoperability between electronic Patient Administration Systems (PAS), Electronic Practice Management (EPM) systems, Laboratory Information Systems (LIS), Dietary, Pharmacy and Billing systems as well as Electronic Medical Record (EMR) or Electronic Health Record (EHR) systems. Currently, the HL7 v2.x messaging standard is supported by every major medical information systems vendor in the United States.
Version 3 messaging
The HL7 version 3 standard has the aim to support all healthcare workflows. Development of version 3 started around 1995, resulting in an initial standard publication in 2005. The v3 standard, as opposed to version 2, is based on a formal methodology (the HDF) and object-oriented principles.
RIM - ISO/HL7 21731
The Reference Information Model (RIM) is the cornerstone of the HL7 Version 3 development process and an essential part of the HL7 V3 development methodology. RIM expresses the data content needed in a specific clinical or administrative context and provides an explicit representation of the semantic and lexical connections that exist between the information carried in the fields of HL7 messages.
HL7 Development Framework - ISO/HL7 27931
The HL7 Version 3 Development Framework (HDF) is a continuously evolving process that seeks to develop specifications that facilitate interoperability between healthcare systems. The HL7 RIM, vocabulary specifications, and model-driven process of analysis and design combine to make HL7 Version 3 one methodology for development of consensus-based standards for healthcare information system interoperability. The HDF is the most current edition of the HL7 V3 development methodology.
The HDF not only documents messaging, but also the processes, tools, actors, rules, and artifacts relevant to development of all HL7 standard specifications. Eventually, the HDF will encompass all of the HL7 standard specifications, including any new standards resulting from analysis of electronic health record architectures and requirements.
HL7 specifications draw upon codes and vocabularies from a variety of sources. The V3 vocabulary work ensures that the systems implementing HL7 specifications have an unambiguous understanding of the code sources and code value domains they are using.
V3 Messaging
The HL7 version 3 messaging standard defines a series of electronic messages (called interactions) to support all healthcare workflows.
HL7 v3 messages are based on an XML encoding syntax, as shown in this example:
<POLB_IN224200 ITSVersion="XML_1.0" xmlns="urn:hl7-org:v3"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<id root="2.16.840.1.113883.19.1122.7" extension="CNTRL-3456"/>
<creationTime value="200202150930-0400"/>
<!-- The version of the datatypes/RIM/vocabulary used is that of May 2006 -->
<versionCode code="2006-05"/>
<!-- interaction id= Observation Event Complete, w/o Receiver Responsibilities -->
<interactionId root="2.16.840.1.113883.1.6" extension="POLB_IN224200"/>
<processingCode code="P"/>
<processingModeCode nullFlavor="OTH"/>
<acceptAckCode code="ER"/>
<receiver typeCode="RCV">
<device classCode="DEV" determinerCode="INSTANCE">
<id extension="GHH LAB" root="2.16.840.1.113883.19.1122.1"/>
<asLocatedEntity classCode="LOCE">
<location classCode="PLC" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.2" extension="ELAB-3"/>
</location>
</asLocatedEntity>
</device>
</receiver>
<sender typeCode="SND">
<device classCode="DEV" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.1" extension="GHH OE"/>
<asLocatedEntity classCode="LOCE">
<location classCode="PLC" determinerCode="INSTANCE">
<id root="2.16.840.1.113883.19.1122.2" extension="BLDG24"/>
</location>
</asLocatedEntity>
</device>
</sender>
<!-- Trigger Event Control Act & Domain Content -->
</POLB_IN224200>
Clinical Document Architecture (CDA)
The HL7 Clinical Document Architecture (CDA) is an XML-based markup standard intended to specify the encoding, structure and semantics of clinical documents for exchange. The standard was jointly published with ISO as ISO/HL7 27932.
Continuity of Care Document (CCD)
CCD is a US specification for the exchange of medical summaries, based on CDA.
Structured Product Labeling (SPL)
SPL describes the published information that accompanies a medicine, based on HL7 Version 3.
CCOW
CCOW, or "Clinical Context Object Workgroup," is a standard protocol designed to enable disparate applications to share user context and patient context in real-time, and at the user-interface level. CCOW implementations typically require a CCOW vault system to manage user security between applications.
Other standards and methods
Fast Healthcare Interoperability Resources (FHIR)
Fast Healthcare Interoperability Resources is a draft standard from HL7 International designed to be easier to implement, more open and more extensible than version 2.x or version 3. It leverages a modern web-based suite of API technology, including an HTTP-based RESTful protocol, HTML and Cascading Style Sheets for user interface integration, a choice of JSON or XML for data representation, OAuth for authorization and ATOM for query results.
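A hedged sketch of the RESTful pattern follows: it retrieves a single Patient resource as FHIR JSON. The base URL, patient id, and token are hypothetical, and a real deployment would obtain the OAuth token through a proper authorization flow.
import requests  # third-party HTTP client

FHIR_BASE = "https://fhir.example.org/baseR4"   # hypothetical FHIR server endpoint

def get_patient(patient_id: str, token: str) -> dict:
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Accept": "application/fhir+json",      # request the JSON representation
            "Authorization": f"Bearer {token}",      # OAuth 2.0 bearer token
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# patient = get_patient("12345", token="<access token>")
# print(patient["name"][0]["family"])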
Services Aware Interoperability Framework
The HL7 Services-Aware Enterprise Architecture Framework (SAIF) provides consistency between all HL7 artifacts, and enables a standardized approach to Enterprise Architecture (EA) development and implementation, and a way to measure the consistency.
SAIF is a way of thinking about producing specifications that explicitly describe the governance, conformance, compliance, and behavioral semantics that are needed to achieve computable semantic working interoperability. The intended information transmission technology might use a messaging, document exchange, or services approach.
SAIF is the framework that is required to rationalize interoperability of other standards. SAIF is an architecture for achieving interoperability, but it is not a whole-solution design for enterprise architecture management.
Arden syntax
The Arden syntax is a language for encoding medical knowledge. HL7 International adopted and oversees the standard beginning with Arden syntax 2.0. Knowledge is packaged as Medical Logic Modules (MLMs), which are used in the clinical setting as they can contain sufficient knowledge to make single medical decisions. They can produce alerts, diagnoses, and interpretations along with quality assurance functions and administrative support. An MLM must run on a computer that meets the minimum system requirements and has the correct program installed; the MLM can then give advice when and where it is needed.
MLLP
A large portion of HL7 messaging is transported by Minimal Lower Layer Protocol (MLLP), also known as Lower Layer Protocol (LLP) or Minimum Layer Protocol (MLP). For transmitting via TCP/IP, header and trailer characters are added to the message to identify the beginning and ending of the message, because TCP/IP is a continuous stream of bytes. Hybrid Lower Layer Protocol (HLLP) is a variation of MLLP that includes a checksum to help verify message integrity. MLLP is supported by software vendors including Microsoft, Oracle, and Cleo.
MLLP contains no inherent security or encryption but relies on lower layer protocols such as Transport Layer Security (TLS) or IPsec for safeguarding Protected health information outside of a secure network.
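As a minimal sketch of the framing just described, the following Python code wraps an HL7 v2 message in the MLLP start block (0x0B) and end block plus carriage return (0x1C 0x0D) and sends it over a TCP socket; the host, port, and message variable are hypothetical, and any encryption would have to be provided by a lower layer such as TLS.
import socket

VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"   # MLLP start block, end block, trailer

def send_mllp(hl7_message: str, host: str, port: int) -> bytes:
    framed = VT + hl7_message.encode("utf-8") + FS + CR
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(framed)
        ack = sock.recv(65536)           # the receiver normally replies with an HL7 ACK
    return ack.strip(VT + FS + CR)       # strip the MLLP envelope from the acknowledgement

# ack = send_mllp(hl7_message, "interface.example.org", 2575)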
Functional EHR and PHR specifications
Functional specifications for an electronic health record.
Message details
The OBR segment
An OBR segment carries information about an exam or diagnostic study/observation. It is a required segment in an ORM (order message) or an ORU (Observation Result) message.
See also
CDISC
DICOM
DVTk
Electronic medical record
eHealth
EHRcom
European Institute for Health Records (European Union)
Fast Healthcare Interoperability Resources
Health Informatics
Health Informatics Service Architecture (HISA)
Healthcare Services Specification Project (HSSP)
Integrating the Healthcare Enterprise(IHE)
ISO TC 215
LOINC
NextGen Connect
openEHR Foundation
Public Health Information Network
SNOMED, SNOMED CT
Nomenclature for Properties and Units terminology
References
External links
HL7.org site
What does HL7 education mean?
HL7 International is a member of the Joint Initiative on SDO Global Health Informatics Standardization
HL7 Tools Page
Australian Healthcare Messaging Laboratory (AHML) - Online HL7 Message Testing and Certification
Comprehensive Implementation of HL7 v3 Specifications in Java
NIST HL7 Conformance Testing Framework
ICH-HL7 Regulated Product Submissions
HL7 Tutorial Directory
HL7 Programming Tutorials, Short Tutorials on many HL7 concepts for Programmers.
Critical reviews
HL7 RIM: An Incoherent Standard
HL7 RIM Under Scrutiny (attempted rebuttal)
HL7 WATCH
Update 2013: Human Action in the Healthcare Domain: A Critical Analysis of HL7’s Reference Information Model
International standards
Multi-agent systems
American National Standards Institute standards
Standards for electronic health records
Data coding framework |
385892 | https://en.wikipedia.org/wiki/Identity-based%20encryption | Identity-based encryption | ID-based encryption, or identity-based encryption (IBE), is an important primitive of ID-based cryptography. As such it is a type of public-key encryption in which the public key of a user is some unique information about the identity of the user (e.g. a user's email address). This means that a sender who has access to the public parameters of the system can encrypt a message using e.g. the text-value of the receiver's name or email address as a key. The receiver obtains its decryption key from a central authority, which needs to be trusted as it generates secret keys for every user.
ID-based encryption was proposed by Adi Shamir in 1984. He was, however, only able to give an instantiation of identity-based signatures. Identity-based encryption remained an open problem for many years.
The pairing-based Boneh–Franklin scheme and Cocks's encryption scheme based on quadratic residues both solved the IBE problem in 2001.
Usage
Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called the Private Key Generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the corresponding master private key (referred to as master key). Given the master public key, any party can compute a public key corresponding to the identity by combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identity ID contacts the PKG, which uses the master private key to generate the private key for identity ID.
As a result, parties may encrypt messages (or verify signatures) with no prior distribution of keys between individual participants. This is extremely useful in cases where pre-distribution of authenticated keys is inconvenient or infeasible due to technical restraints. However, to decrypt or sign messages, the authorized user must obtain the appropriate private key from the PKG. A caveat of this approach is that the PKG must be highly trusted, as it is capable of generating any user's private key and may therefore decrypt (or sign) messages without authorization. Because any user's private key can be generated through the use of the third party's secret, this system has inherent key escrow. A number of variant systems have been proposed which remove the escrow including certificate-based encryption, secure key issuing cryptography and certificateless cryptography.
The steps involved are described in the protocol framework below.
Protocol framework
Dan Boneh and Matthew K. Franklin defined a set of four algorithms that form a complete IBE system:
Setup: This algorithm is run by the PKG one time for creating the whole IBE environment. The master key is kept secret and used to derive users' private keys, while the system parameters are made public. It accepts a security parameter $k$ (i.e. the binary length of key material) and outputs:
A set of system parameters $\mathcal{P}$, including the message space $\mathcal{M}$ and the ciphertext space $\mathcal{C}$,
a master key $K_m$.
Extract: This algorithm is run by the PKG when a user requests his private key. Note that the verification of the authenticity of the requestor and the secure transport of the private key $d$ are problems with which IBE protocols do not try to deal. It takes as input $\mathcal{P}$, $K_m$ and an identifier $ID \in \{0,1\}^*$ and returns the private key $d$ for user $ID$.
Encrypt: Takes $\mathcal{P}$, a message $m \in \mathcal{M}$ and $ID \in \{0,1\}^*$ and outputs the encryption $c \in \mathcal{C}$.
Decrypt: Accepts $d$, $\mathcal{P}$ and $c \in \mathcal{C}$ and returns $m \in \mathcal{M}$.
Correctness constraint
In order for the whole system to work, one has to postulate that:
Decrypt(P, d, c) = m for every message m in M and every identity ID, whenever c = Encrypt(P, m, ID) and d = Extract(P, Km, ID).
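The division of labour between the PKG and the other parties can be summarised in code. The following Python skeleton is only a structural sketch of the four algorithms and of who runs them; the bodies are placeholders (a concrete scheme such as Boneh–Franklin fills them with pairing-based mathematics), and all names are illustrative rather than taken from any particular library.

```python
# Structural sketch of the four IBE algorithms described above.
# The cryptographic bodies are deliberately left as placeholders; a concrete
# scheme (e.g. pairing-based Boneh-Franklin) supplies the actual mathematics.

class PrivateKeyGenerator:
    def setup(self, security_parameter: int):
        """Run once by the PKG. Publishes the system parameters P and keeps
        the master key Km secret inside the PKG."""
        self.master_key = None   # Km, derived from security_parameter
        self.params = None       # P: groups, hash functions, message/ciphertext spaces
        return self.params

    def extract(self, identity: str):
        """Run by the PKG for an authenticated user: combines Km with the
        identity string to produce that user's private key d."""
        return None              # d for this identity


def encrypt(params, identity: str, message: bytes):
    """Any sender can encrypt to an identity using only the public parameters
    P and the identity string; no per-recipient certificate is required."""
    return None                  # ciphertext c


def decrypt(params, private_key, ciphertext):
    """Only the holder of d = extract(ID) can recover the message m,
    satisfying the correctness constraint above."""
    return None                  # message m
```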
Encryption schemes
The most efficient identity-based encryption schemes are currently based on bilinear pairings on elliptic curves, such as the Weil or Tate pairings. The first of these schemes was developed by Dan Boneh and Matthew K. Franklin (2001), and performs probabilistic encryption of arbitrary plaintexts using an ElGamal-like approach. Though the Boneh-Franklin scheme is provably secure, the security proof rests on relatively new assumptions about the hardness of problems in certain elliptic curve groups.
Another approach to identity-based encryption was proposed by Clifford Cocks in 2001. The Cocks IBE scheme is based on well-studied assumptions (the quadratic residuosity assumption) but encrypts messages one bit at a time with a high degree of ciphertext expansion. Thus it is highly inefficient and impractical for sending all but the shortest messages, such as a session key for use with a symmetric cipher.
A third approach to IBE is through the use of lattices.
Identity-based encryption algorithms
The following is a list of practical identity-based encryption algorithms:
Boneh–Franklin (BF-IBE).
Sakai–Kasahara (SK-IBE).
Boneh–Boyen (BB-IBE).
All these algorithms have security proofs.
Advantages
One of the major advantages of any identity-based encryption scheme is that if there are only a finite number of users, after all users have been issued with keys the third party's secret can be destroyed. This can take place because this system assumes that, once issued, keys are always valid (as this basic system lacks a method of key revocation). The majority of derivatives of this system which have key revocation lose this advantage.
Moreover, as public keys are derived from identifiers, IBE eliminates the need for a public key distribution infrastructure. The authenticity of the public keys is guaranteed implicitly as long as the transport of the private keys to the corresponding user is kept secure (authenticity, integrity, confidentiality).
Apart from these aspects, IBE offers interesting features emanating from the possibility to encode additional information into the identifier. For instance, a sender might specify an expiration date for a message. He appends this timestamp to the actual recipient's identity (possibly using some binary format like X.509). When the receiver contacts the PKG to retrieve the private key for this public key, the PKG can evaluate the identifier and decline the extraction if the expiration date has passed. Generally, embedding data in the ID corresponds to opening an additional channel between sender and PKG with authenticity guaranteed through the dependency of the private key on the identifier.
Drawbacks
If a Private Key Generator (PKG) is compromised, all messages protected over the entire lifetime of the public-private key pair used by that server are also compromised. This makes the PKG a high-value target to adversaries. To limit the exposure due to a compromised server, the master private-public key pair could be updated with a new independent key pair. However, this introduces a key-management problem where all users must have the most recent public key for the server.
Because the Private Key Generator (PKG) generates private keys for users, it may decrypt and/or sign any message without authorization. This implies that IBS systems cannot be used for non-repudiation. This may not be an issue for organizations that host their own PKG and are willing to trust their system administrators and do not require non-repudiation.
The issue of implicit key escrow does not exist with the current PKI system, wherein private keys are usually generated on the user's computer. Depending on the context key escrow can be seen as a positive feature (e.g., within Enterprises). A number of variant systems have been proposed which remove the escrow including certificate-based encryption, secret sharing, secure key issuing cryptography and certificateless cryptography.
A secure channel between a user and the Private Key Generator (PKG) is required for transmitting the private key on joining the system. Here, an SSL-like connection is a common solution for a large-scale system. It is important to observe that users who hold accounts with the PKG must be able to authenticate themselves. In principle, this may be achieved through a username and password or through public key pairs managed on smart cards.
IBE solutions may rely on cryptographic techniques that are insecure against code-breaking quantum computer attacks (see Shor's algorithm).
See also
ID-based cryptography
Identity-based conditional proxy re-encryption
Attribute-based encryption
References
External links
Seminar 'Cryptography and Security in Banking'/'Alternative Cryptology', Ruhr University Bochum, Germany
RFC 5091 - the IETF RFC defining two common IBE algorithms
HP Role-Based Encryption
The Pairing-Based Crypto Lounge
The Voltage Security Network - IBE encryption web service
Analyst report on the cost of IBE versus PKI
Public-key cryptography
Identity-based cryptography
fr:Schéma basé sur l'identité
ko:신원 기반 암호
ja:IDベース暗号 |
386152 | https://en.wikipedia.org/wiki/WPA | WPA | WPA may refer to:
Computing
Wi-Fi Protected Access, a wireless encryption standard
Windows Product Activation, in Microsoft software licensing
Wireless Public Alerting (Alert Ready), emergency alerts over LTE in Canada
Windows Performance Analyzer
Organizations
Wisconsin Philosophical Association
World Pool-Billiard Association
World Psychiatric Association
Western Provident Association, United Kingdom
United States
Works Progress Administration or Work Projects Administration, a former American New Deal agency
Washington Project for the Arts
Western Psychological Association
Women's Prison Association
Other
WPA, a 2009 album by Works Progress Administration (band)
Win probability added, a baseball statistic
Water pinch analysis
Whistleblower Protection Act, a law protecting certain whistleblowers in the United States
Woomera Prohibited Area, a vast tract of land in South Australia covering more than 120,000 sq km of 'outback' arid lands
Waterfowl production area, land protected through easements or purchase to conserve habitat for waterfowl in the United States
An abbreviation for Western Pennsylvania |
386257 | https://en.wikipedia.org/wiki/Operation%20Gold | Operation Gold | Operation Gold (also known as Operation Stopwatch by the British) was a joint operation conducted by the American Central Intelligence Agency (CIA) and the British MI6 Secret Intelligence Service (SIS) in the 1950s to tap into landline communication of the Soviet Army headquarters in Berlin using a tunnel into the Soviet-occupied zone. This was a much more complex variation of the earlier Operation Silver project in Vienna.
The plan was activated in 1954 because of fears that the Soviets might be launching a nuclear attack at any time, having already detonated a hydrogen bomb in August 1953 as part of the Soviet atomic bomb project. Construction of the tunnel began in September 1954 and was completed in eight months. The Americans wanted to hear any warlike intentions being discussed by their military and were able to listen to telephone conversations for nearly a year, eventually recording roughly 90,000 communications. The Soviet authorities were informed about Operation Gold from the very beginning by their mole George Blake but decided not to "discover" the tunnel until 21 April 1956, in order to protect Blake from exposure.
Some details of the project are still classified and whatever authoritative information could be found was scant, until recently. This was primarily because the then-Director of Central Intelligence (DCI), Allen Dulles had ordered "as little as possible" be "reduced to writing" when the project was authorized. In 2019, additional specifics became available.
Background
After the Red Army followed the Soviet diplomatic service in transferring its most secure communications from radio to telephone landlines, the post-World War II Western Allies lost a major Cold War source of information. Operation Gold was thus at least the third tunnel built for intelligence gathering in the post-World War II Cold War period. From 1948 onwards, under Operation Silver, British SIS had undertaken a number of such operations in then still-occupied Vienna, the information from which enabled the restoration of Austrian sovereignty in 1955. The KGB later commissioned the Red Army to construct a tunnel to tap into a cable that served the major US Army garrison for Berlin.
Operational agreement
In early 1951, the CIA undertook an assessment of how to replace the lost Soviet radio communications intelligence. When the CIA revealed its plans to the British, the SIS, having read the report, which included the idea of tapping Soviet telephone lines, disclosed the existence of Operation Silver in Vienna.
On the reassignment of CIA agent Bill Harvey to Berlin to explore available options, Reinhard Gehlen, the head of the Bundesnachrichtendienst, alerted the CIA to the location of a crucial telephone junction, less than underground, where three cables came together close to the border of the American sector of West Berlin. Operation Gold was planned jointly by the SIS and the CIA. Initial planning meetings were held at No. 2 Carlton Gardens, London, from which the West German government were excluded, due to the "highly infiltrated nature" of their service. The resulting agreement was that the US would supply most of the financing and construct the tunnel (as the closest access point was in their sector), whilst the British would use their expertise from Operation Silver to tap the cables and provide the required electronic communications equipment.
One of those who attended the early meetings was George Blake, a mole in the British intelligence apparatus. Blake apparently alerted the KGB immediately, as two of Gehlen's agents were caught trying to get a potential tapping wire across a Berlin canal. The KGB decided to let Operation Gold proceed since, in order to attack the tunnel, the Soviets would have to compromise Blake and they found it preferable to sacrifice some information rather than their valuable agent. According to the author of a 2019 book about the operation, the Soviets "value[d] Blake so much, they fear[ed] his exposure more than they fear[ed] a breach of their secrets".
The KGB did not inform anyone in Germany, including the East Germans or the Soviet users of the cables, about the taps. According to a CIA report, "there were no known attempts to feed disinformation to the CIA". Although the British SIS suspected the opposite, the CIA report states that "the Soviet military continued to use the cables for communications of intelligence value".
Construction
In December 1953 the operation was placed under the direction of William King Harvey, a former U.S. Federal Bureau of Investigation (FBI) official who transferred to the CIA. Captain Williamson of the United States Army Corps of Engineers was placed in charge of construction.
The first project was the construction of a "warehouse", which acted as a disguise for a US Army ELINT station. The warehouse, in the Neukölln/Rudow district of the US sector of Berlin, had an unconventionally deep basement at to serve as the staging area for the tunnel. Digging the initial vertical shaft for the tunnel began on 2 September 1954 and was completed on 25 February the following year.
The covert construction of the tunnel under the world's most heavily patrolled border to intersect a series of cables less than below a busy street was an exceptional engineering challenge. Using a shield method of construction, which pushed forward on hydraulic rams, the resultant space was lined with sand and 1,700 cast-iron lining plates. A wooden railway track acted as a guide for the rubber-wheeled construction vehicles, which by the end of construction had removed of material. The work included a number of evacuations, including one when the diggers broke through into an undocumented pre-World War II cesspool and flooded the tunnel. Throughout all stages of construction and in operational use, the entire tunnel was rigged with explosives, designed to ensure its complete destruction. Once complete, the tunnel ran into the Altglienicke area of the Treptow borough, where British Army Captain Peter Lunn—a former alpine skier, who was actually the head of the SIS in Berlin—personally undertook the tapping of the three cables. The British also installed most of the electronic handling equipment in the tunnel, which was manufactured and badged as British-made.
The final cost of the completed tunnel was over US$6.5M, or equivalent to the final procurement cost of two Lockheed U-2 spy planes.
Operations
The tunnel ran 1,476 feet, was six feet in diameter, and operated for 11 months and 11 days, according to a 2019 book by Washington Post journalist Steve Vogel, who reviewed all of the available documents and interviewed 40 of the project's participants. Betrayal in Berlin: The True Story of the Cold War's Most Audacious Espionage Operation includes a "virtually month-by-month account of the tunnel’s excavation and operation", according to one review. In addition, after that book was published, the CIA released a less redacted version of its documents about the tunnel.
Inside, the British and the Americans listened and recorded the messages flowing to and from Soviet military headquarters in Zossen, near Berlin: conversations between Moscow and the Soviet embassy in East Berlin and conversations between East German and Soviet officials.
The West was unable to break Soviet encryption at this time. Instead, they took advantage of valuable intelligence gained "from unguarded telephone conversations over official channels." "Sixty-seven thousand hours of Russian and German conversations were sent to London for transcription by a special section staffed by 317 Russian emigres and German linguists. Teleprinter signals, many of them multiplexed, were also collected on magnetic tape and forwarded to Frank Rowlett's Staff D for processing."
To protect Blake, the KGB was forced to keep the flow of information as normal as possible with the result that the tunnel was a bonanza of intelligence collection for the US and Britain in a world that had yet to witness the U-2 or satellite imagery. For an overview of the types of intelligence collected by the tunnel taps, see Appendix B in CIA's declassified (in 1977 and further in 2007) history of the Berlin Tunnel [Clandestine Services History Paper (CSHP), number 150, published internally in CIA in 1968]. Also, the book Battleground Berlin reprises in Appendix 5 (1997) the summary of the collection originally compiled in CSHP 150.
According to Budiansky, "The KGB's own high-level communications went on a separate system of overhead lines that could not be tapped without its being obvious, and, concerned above all with protecting Blake as a valuable source inside SIS and unwilling to share its secrets with rival agencies, the KGB had simply left both the GRU and the Stasi in the dark about the tunnel's existence."
Discovery by the Soviets
When Blake received a transfer in 1955, the Soviets were free to "discover" the tunnel. On 21 April 1956, months after the tunnel went into operation, Soviet and East German soldiers broke into the eastern end of the tunnel. One source indicates that the wiretap had been in service for roughly 18 months. The Soviets announced the discovery to the press and called it a "breach of the norms of international law" and "a gangster act". Newspapers around the world ran photographs of the underground partition of the tunnel directly under the inter-German frontier. The wall had a sign in German and Russian reading "Entry is Forbidden by the Commanding General."
In the planning phase, the CIA and SIS had estimated that the Soviets would cover up any discovery of the tunnel, out of embarrassment and to avoid potential repercussions. However, most world media portrayed the tunnel project as a brilliant piece of engineering. The CIA may have gained more than the Soviets did from the "discovery" of the tunnel. In part, this was because the tunnel was discovered during Soviet First Secretary Nikita Khrushchev's state visit to the United Kingdom, and specifically the day before a state banquet with HM Queen Elizabeth II at Windsor Castle. It is suspected that the Soviets and the British agreed to mute media coverage of British participation in the project, even though the equipment shown in most photographs was British-built and clearly labelled as such.
Only in 1961, when Blake was arrested, tried and convicted, did Western officials realize that the tunnel had been compromised long before construction had begun. Although DCI Allen Dulles had publicly celebrated the success of Operation Gold in providing order of battle and other information about Soviet and East Bloc activities behind the Iron Curtain, a declassified NSA history implied that NSA may have thought less of the value of the tunnel collection than did the CIA.
In 1996 the Berlin city government contracted a local construction company to excavate approximately 83 meters of the tunnel from the former American sector of Berlin, to make way for a new housing development.
In 1997 a 12-meter section was excavated under the guidance of William Durie from what had been the Soviet sector of Berlin. This section of tunnel is displayed at the Allied Museum. The museum’s claim that this section was retrieved from the American sector is false.
The CIA Museum received outer tunnel shell elements in 1999, and the International Spy Museum in Washington received elements in 2001.
In fiction
Operation Gold forms the background to the novels The Innocent by Ian McEwan, Voices Under Berlin: The Tale of a Monterey Mary by T.H.E. Hill and to the film The Innocent by John Schlesinger.
Notes
References
David Stafford, Spies Beneath Berlin – the Extraordinary Story of Operation Stopwatch/Gold, the CIA's Spy Tunnel Under the Russian Sector of Cold War Berlin, Overlook Press, 2002.
David E. Murphy, Sergei A. Kondrashev, George Bailey. Battleground Berlin: CIA vs. KGB in the Cold War, Yale University Press, 1999.
CIA Clandestine Services History Paper (CSHP) number 150, "The Berlin Tunnel Operation", 1968
Rory MacLean, Berlin: Imagine a City / Berlin: Portrait of a City Through the Centuries, Weidenfeld & Nicolson / Picador 2014.
External links
Turning a Cold War Scheme into Reality – Engineering the Berlin Tunnel, www.cia.gov
The Berlin Tunnel, article at The Cold War Museum
The International Spy Museum, located in downtown, Washington, DC at 800 F Street, NW, has a Berlin tunnel exhibit
Gold
Operation Gold
Gold
Operation Gold
Soviet Union–United Kingdom relations
Soviet Union–United States relations
1956 in international relations
Clandestine operations
Tunnels in Berlin |
386623 | https://en.wikipedia.org/wiki/Internet%20Security%20Association%20and%20Key%20Management%20Protocol | Internet Security Association and Key Management Protocol | Internet Security Association and Key Management Protocol (ISAKMP) is a protocol defined by RFC 2408 for establishing Security association (SA) and cryptographic keys in an Internet environment. ISAKMP only provides a framework for authentication and key exchange and is designed to be key exchange independent; protocols such as Internet Key Exchange (IKE) and Kerberized Internet Negotiation of Keys (KINK) provide authenticated keying material for use with ISAKMP. For example: IKE describes a protocol using part of Oakley and part of SKEME in conjunction with ISAKMP to obtain authenticated keying material for use with ISAKMP, and for other security associations such as AH and ESP for the IETF IPsec DOI.
Overview
ISAKMP defines the procedures for authenticating a communicating peer, creation and management of Security Associations, key generation techniques and threat mitigation (e.g. denial of service and replay attacks). As a framework, ISAKMP typically utilizes IKE for key exchange, although other methods have been implemented such as Kerberized Internet Negotiation of Keys. A Preliminary SA is formed using this protocol; later a fresh keying is done.
ISAKMP defines procedures and packet formats to establish, negotiate, modify and delete Security Associations. SAs contain all the information required for execution of various network security services, such as the IP layer services (such as header authentication and payload encapsulation), transport or application layer services or self-protection of negotiation traffic. ISAKMP defines payloads for exchanging key generation and authentication data. These formats provide a consistent framework for transferring key and authentication data which is independent of the key generation technique, encryption algorithm and authentication mechanism.
ISAKMP is distinct from key exchange protocols in order to cleanly separate the details of security association management (and key management) from the details of key exchange. There may be many different key exchange protocols, each with different security properties. However, a common framework is required for agreeing to the format of SA attributes and for negotiating, modifying and deleting SAs. ISAKMP serves as this common framework.
ISAKMP can be implemented over any transport protocol. All implementations must include send and receive capability for ISAKMP using UDP on port 500.
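As an illustration of the fixed framework part of the protocol, the sketch below packs the 28-octet ISAKMP header defined in RFC 2408 (cookies, next payload, version, exchange type, flags, message ID and length). The field values in the example are arbitrary and the helper function is hypothetical, not taken from any particular implementation.

```python
import struct

# Fixed ISAKMP header from RFC 2408: initiator cookie (8 octets), responder
# cookie (8), next payload (1), major/minor version (1), exchange type (1),
# flags (1), message ID (4) and total message length (4), 28 octets in all.

def build_isakmp_header(init_cookie: bytes, resp_cookie: bytes,
                        next_payload: int, exchange_type: int,
                        flags: int, message_id: int, length: int) -> bytes:
    version = (1 << 4) | 0  # major version 1, minor version 0
    return struct.pack("!8s8sBBBBII",
                       init_cookie, resp_cookie,
                       next_payload, version, exchange_type, flags,
                       message_id, length)

# Header-only message with illustrative values (exchange type 2 is
# "Identity Protection" in RFC 2408).
header = build_isakmp_header(b"\x01" * 8, b"\x00" * 8,
                             next_payload=0, exchange_type=2,
                             flags=0, message_id=0, length=28)
assert len(header) == 28
```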
Implementation
OpenBSD first implemented ISAKMP in 1998 via its isakmpd(8) software.
The IPsec Services Service in Microsoft Windows handles this functionality.
The KAME project implements ISAKMP for Linux and most other open source BSDs.
Modern Cisco routers implement ISAKMP for VPN negotiation.
Vulnerabilities
Leaked NSA presentations released by Der Spiegel indicate that ISAKMP is being exploited in an unknown manner to decrypt IPsec traffic, as is IKE. The researchers who discovered the Logjam attack state that breaking a 1024-bit Diffie–Hellman group would break 66% of VPN servers, 18% of the top million HTTPS domains, and 26% of SSH servers, which is consistent with the leaks according to the researchers.
See also
Oakley protocol
IPsec
IKE
GDOI
References
External links
RFC 2408 — Internet Security Association and Key Management Protocol
RFC 2407 — The Internet IP Security Domain of Interpretation for ISAKMP
IPsec
Cryptographic protocols
Key management |
389637 | https://en.wikipedia.org/wiki/QTFairUse | QTFairUse | QTFairUse is a software application first released in November 2003 by Jon Lech Johansen. It dumps the raw output of a QuickTime AAC stream to a file, which could bypass the digital rights management (DRM) algorithm called FairPlay used to encrypt music content of media files such as those distributed by the iTunes Store, Apple's online music store. Although these resulting raw AAC files were unplayable by most media players at the time of release, they represented the first attempt at circumventing Apple's encryption. These early versions of QTFairUse would save only the "raw" AAC (not contained in an MPEG-4 (MP4) container), but later incarnations properly supported full conversions.
Released under the terms of the GNU General Public License, QTFairUse is free software.
Technical approach
Functionally, the purpose of QTFairUse is to convert protected audio files (.m4p extension) purchased from Apple's iTunes Store into M4a files, without DRM. To accomplish this task it uses a rather uncommon approach: instead of removing the already present DRM, it waits for iTunes to play back the protected file and intercepts the unencrypted AAC data stream as it is sent to the sound card. During this process, it copies unencrypted data, frame-by-frame, into RAM and then inserts it into a new MP4 container that is free of any DRM.
iTunes versions
The current release (as of December 13, 2007) of QTFairUse6, version 6-2.5, supports iTunes 6.0.2 through 7.0.2. An updated config file compatible with iTunes 7.1.2 was released on May 14, 2007. An iTunes 7.3.1 compatible configuration file was released on June 12, the same day Apple released the new iTunes update. An iTunes 7.4.2.4 compatible configuration file was released on September 18, 2007. This config file is also compatible with iTunes 7.5.0.20. The config file needs to be updated with each new iTunes release; so far the author has released updates the same day as the new iTunes release. As of January 15, 2008, the author has not updated QTFairUse for the latest version of iTunes, and the current revision is not compatible with that version.
Cease and Desist
As of February 20, 2008, the QTFairUse project had received a cease-and-desist letter from Apple Inc.
All files were subsequently removed from the main download site.
References
External links
iTunes Copy Protection 'Cracked' - BBC News (Posted October 25, 2006)
QTFairUse6: Is Hymn Finally Back To Strip FairPlay on iTunes 6? - Engadget (Posted August 29, 2006)
DVD Jon Unlocks iTunes Locked Music - TheRegister (Posted November 22, 2003)
Cease and Desist Order - Forum (Posted February 20, 2008)
Free audio software
ITunes
Windows-only free software
Digital rights management circumvention software |
391321 | https://en.wikipedia.org/wiki/Key%20authentication | Key authentication | Key authentication is used to solve the problem of authenticating the keys of the person (say "person B") with whom some other person ("person A") is talking or trying to talk. In other words, it is the process of assuring that the key of "person A" held by "person B" does in fact belong to "person A" and vice versa.
This is usually done after the keys have been shared among the two sides over some secure channel. However, some algorithms share the keys at the time of authentication.
The simplest solution for this kind of problem is for the two concerned users to communicate and exchange keys. However, for systems in which there are a large number of users or in which the users do not personally know each other (e.g., Internet shopping), this is not practical. There are various algorithms for both symmetric keys and asymmetric public key cryptography to solve this problem.
Authentication using Shared Keys
For key authentication using traditional symmetric key cryptography, this is the problem of assuring that there is no man-in-the-middle attacker trying to read or spoof the communication. Various algorithms are used today to prevent such attacks. The most common among them are Diffie–Hellman key exchange, authentication using a key distribution center, Kerberos, and the Needham–Schroeder protocol. Other methods that can be used include password-authenticated key agreement protocols.
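As a small illustration of the first of these methods, the toy Python exchange below shows the Diffie–Hellman mechanics with a deliberately small modulus; real deployments use standardized groups of 2048 bits or more, and, as noted above, the exchange by itself does not authenticate either party.

```python
import secrets

# Toy Diffie-Hellman exchange (illustration only; the 64-bit prime below is far
# too small for real use). Note that nothing here authenticates Alice or Bob:
# an attacker who can substitute A and B can run one exchange with each side,
# which is exactly the man-in-the-middle problem key authentication addresses.

p = 0xFFFFFFFFFFFFFFC5          # a small prime modulus, for illustration
g = 5                           # generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                # value Alice sends to Bob
B = pow(g, b, p)                # value Bob sends to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both sides derive the same shared secret
```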
Authentication using Public Key Cryptography
Crypto systems using asymmetric key algorithms do not evade the problem either. That a public key can be known by all without compromising the security of an encryption algorithm (for some such algorithms, though not for all) is certainly useful, but does not prevent some kinds of attacks. For example, a spoofing attack in which public key A is claimed publicly to be that of user Alice, but is in fact a public key belonging to man-in-the-middle attacker Mallet, is easily possible. No public key is inherently bound to any particular user, and any user relying on a defective binding (including Alice herself when she sends herself protected messages) will have trouble.
The most common solution to this problem is the use of public key certificates and certificate authorities (CAs) for them in a public-key infrastructure (PKI) system. The certificate authority (CA) acts as a 'trusted third party' for the communicating users and, using cryptographic binding methods (e.g., digital signatures), represents to both parties involved that the public keys each holds, which allegedly belong to the other, actually do so; in effect, it is a digital notary service. Such CAs can be private organizations providing such assurances, or government agencies, or some combination of the two. However, in a significant sense, this merely moves the key authentication problem back one level, since any CA may make a good-faith certification of some key but, through error or malice, be mistaken. Any reliance on a defective key certificate 'authenticating' a public key will cause problems. As a result, many people find all PKI designs unacceptably insecure.
Accordingly, key authentication methods are being actively researched.
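A minimal sketch of the certificate-authority approach described above, using the Python cryptography library's Ed25519 signatures, is shown below; real PKIs use X.509 certificate structures rather than the ad-hoc "name|key" statement used here, which is purely illustrative.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sketch of the CA concept: the CA signs a statement binding a name to a public
# key, and any relying party holding the CA's public key can check that binding.
# The "name|key" encoding is an illustration, not a real certificate format.

ca_key = Ed25519PrivateKey.generate()
ca_public = ca_key.public_key()

alice_key = Ed25519PrivateKey.generate()
alice_public_raw = alice_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

binding = b"alice@example.org|" + alice_public_raw   # statement the CA certifies
certificate_signature = ca_key.sign(binding)

# A relying party that trusts ca_public can authenticate Alice's key;
# verify() raises InvalidSignature if the binding has been tampered with.
ca_public.verify(certificate_signature, binding)
print("binding verified")
```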
See also
Public-key infrastructure (PKI)
Public-key cryptography
Key-agreement protocol
Access control
Certificate authority
ID-based cryptography
Identity based encryption (IBE)
Key escrow
PGP word list
Pretty Good Privacy
Pseudonymity
Public key fingerprint
Quantum cryptography
Secure Shell
Transport Layer Security
Threshold cryptosystem
References
External links
Honest Achmed asks for trust
Kerberos: The Network Authentication Protocol
Kerberos Authentication explained
Key management |
391352 | https://en.wikipedia.org/wiki/Certificate-based%20encryption | Certificate-based encryption | Certificate-based encryption is a system in which a certificate authority uses ID-based cryptography to produce a certificate. This system gives the users both implicit and explicit certification, the certificate can be used as a conventional certificate (for signatures, etc.), but also implicitly for the purpose of encryption.
Example
A user Alice can doubly encrypt a message using another user's (Bob) public key and his (Bob's) identity.
This means that the user (Bob) cannot decrypt it without a currently valid certificate, and also that the certificate authority cannot decrypt the message, as it does not have the user's private key (i.e., there is no implicit escrow as with ID-based cryptography, since the double encryption means the authority cannot decrypt the message solely with the information it has). The certificate thus serves as the basis of trust between the two parties.
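The following Python sketch illustrates the layering only; it uses two symmetric Fernet keys from the cryptography library as stand-ins for Bob's conventional key pair and for the key tied to his currently valid certificate, whereas the real construction uses public-key and identity-based encryption.

```python
from cryptography.fernet import Fernet

# Conceptual sketch of double encryption: two independent keys stand in for
# (1) Bob's long-term key pair and (2) the key tied to his currently valid
# certificate. Both layers must be removed to read the message, so neither a
# revoked/expired Bob nor the certifying authority alone can decrypt it.

bob_long_term_key = Fernet.generate_key()     # stand-in for Bob's key pair
current_period_key = Fernet.generate_key()    # stand-in for the certificate-bound key

inner = Fernet(bob_long_term_key).encrypt(b"message for Bob")
outer = Fernet(current_period_key).encrypt(inner)

# Decryption requires both keys:
recovered = Fernet(bob_long_term_key).decrypt(
    Fernet(current_period_key).decrypt(outer))
assert recovered == b"message for Bob"
```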
Key revocation
Key revocation can be added to the system by requiring a new certificate to be issued as frequently as the level of security requires. Because the certificate is "public information", it does not need to be transmitted over a secret channel. The downside of this is the requirement for regular communication between users and the certificate authority, which means the certificate authority is more vulnerable to electronic attacks (such as denial-of-service attacks) and also that such attacks could effectively stop the system from working. This risk can be partially but not completely reduced by having a hierarchy of multiple certificate authorities.
Practical applications
The best example of practical use of certificate-based encryption is Content Scrambling System (CSS), which is used to encode DVD movies in such a way as to make them playable only in a part of the world where they are sold. However, the fact that the region decryption key is stored on the hardware level in the DVD players substantially weakens this form of protection.
See also
X.509
Certificate server
References
Craig Gentry, Certificate-Based Encryption and the Certificate Revocation Problem, Lecture Notes in Computer Science, pp. 272 – 293, 2003 .
WhatsApp end-to-end data encryption
Public-key cryptography
Identity-based cryptography
Digital rights management systems |
395000 | https://en.wikipedia.org/wiki/Wireless%20gateway | Wireless gateway | A wireless gateway routes packets from a wireless LAN to another network, such as a wired or wireless WAN. It may be implemented as software, hardware, or a combination of both. Wireless gateways combine the functions of a wireless access point and a router, and often provide firewall functions as well. They provide network address translation (NAT) functionality, so multiple users can use the internet with a single public IP address, and they typically act as a Dynamic Host Configuration Protocol (DHCP) server to assign IP addresses automatically to devices connected to the network.
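As a rough illustration of the NAT function just described, the toy Python mapping below rewrites outbound connections from private addresses onto ports of a single public address and remembers the mapping for return traffic; the addresses and port numbers are made up for the example.

```python
# Toy port-based NAT table: many private hosts share one public IP address.
# Addresses and ports below are illustrative only.

public_ip = "203.0.113.7"
nat_table = {}           # (private_ip, private_port) -> public_port
next_public_port = 40000

def translate_outbound(private_ip, private_port):
    """Rewrite an outbound flow onto the shared public address."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return public_ip, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving on the public port back to the private host."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None   # no mapping: drop the packet

print(translate_outbound("192.168.1.20", 51000))   # ('203.0.113.7', 40000)
print(translate_inbound(40000))                     # ('192.168.1.20', 51000)
```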
There are two kinds of wireless gateways. The simpler kind must be connected to a DSL modem or cable modem to connect to the internet via the internet service provider (ISP). The more complex kind has a built-in modem to connect to the internet without needing another device. This converged device saves desk space and simplifies wiring by replacing two electronic packages with one. It has a wired connection to the ISP, at least one jack port for the LAN (usually four jacks), and an antenna for wireless users. A wireless gateway may support 802.11b and 802.11g with speeds up to 54 Mbit/s, 802.11n with speeds up to 300 Mbit/s and, more recently, 802.11ac with speeds up to 1200 Mbit/s. The LAN interface may support 100 Mbit/s (Fast Ethernet) or 1000 Mbit/s (Gigabit Ethernet).
All wireless gateways have the ability to protect the wireless network using security methods such as WEP, WPA, and WPS. WPA2 with WPS disabled is the most secure method. There are many wireless gateway brands with models offering different features and quality. They can differ in wireless range and speed, the number of LAN ports, and extra functionality. Some available brands in the market are Motorola, Netgear, and Linksys. However, most internet providers offer a free wireless gateway with their services, thus limiting the user's choice. On the other hand, the device provided by the ISP has the advantage that it comes pre-configured and ready to be installed. Another advantage of using these devices is the ability of the company to troubleshoot and fix any problem via remote access, which is very convenient for most users.
See also
Wi-Fi
IEEE 802.11
Residential gateway
References
Networking hardware |
395371 | https://en.wikipedia.org/wiki/Screener%20%28promotional%29 | Screener (promotional) | A screener (SCR) is an advance screening of a film or television series sent to critics, awards voters, video stores (for their manager and employees), and other film industry professionals, including producers and distributors. Director John Boorman is credited with creating the first Oscar screeners to promote his film The Emerald Forest in 1985.
Overview
Screeners help critics and awards voters see smaller movies that do not have the marketing advantage or distribution of major studio releases. Positive mentions can result in awards consideration. A screener often has no post-processing. Nowadays, physical DVD copies still appear to be issued, but screeners are also distributed digitally to members of the Academy of Motion Picture Arts and Sciences, and the media/publicity sites of individual television networks for television shows. When screeners leak online, they are often tagged "DVDSCR", and often have an on-screen graphic watermarked with the receiver's name or email address. Another anti-piracy measure includes the encryption of DVD discs so that they will only play in machines given exclusively to voters.
History
According to the Los Angeles Times, Oscar screeners originated with the efforts of director John Boorman to promote his film The Emerald Forest, a 1985 Powers Boothe vehicle about an American child kidnapped by a tribe in the Amazon Rainforest. The film had been lauded by critics, but due to the business troubles of its distributor, Embassy Pictures, received no advertising campaign. Boorman paid for VHS copies of the film to be made available to Academy members for no charge at certain Los Angeles video rental stores. Despite the novelty of his campaign, however, Emerald Forest received no Oscar nominations.
In 2003, the MPAA announced that they would be ceasing distribution of screeners to Academy members, citing fears of copyright infringement. A group of independent film makers sued and won a decision against the MPAA. The MPAA later reinstated the screeners with the implementation of a new policy requiring recipients to sign a binding contract that they would not share the screeners with others.
In January 2004, academy member Carmine Caridi was announced as a person of interest in an ongoing FBI investigation into video piracy. He was subsequently expelled from the academy, after he was found to have sent as many as 60 screeners a year for at least three years to a contact called Russell Sprague in Illinois. Caridi was later ordered to pay Warner Bros. for copyright infringement of two of their films, Mystic River and The Last Samurai, a total of $300,000 ($150,000 per title).
In March 2016, TorrentFreak reported that original screener DVDs appear in dozens of eBay listings. According to eBay seller NoHo Trader, the sale of Emmy screener DVDs is lawful, although studios occasionally still take down Emmy DVD auctions and other lawful promotional materials. The Television Academy indicates the limited license governing the use of these screeners prohibits further distribution.
In 2019, the academy introduced a private video on demand platform known as the "Academy Screening Room", accessible online and via an Apple TV app, which allows distributors to host screeners online for a fee. In April 2020, citing sustainability concerns, the academy announced that physical screeners and other items mailed to voters will be discontinued entirely by the 94th Academy Awards in 2022, upon which films will be made available to voters solely through the Academy Screening Room app.
See also
Test screening
Film screening
Camming
Canary trap
Coded anti-piracy
UMG Recordings, Inc. v. Augusto
References
External links
Indie Film Makers win screener ban battle against MPAA
Warner Bros cancels promo screeners
Copyright law
Warez |
397763 | https://en.wikipedia.org/wiki/Near-field%20communication | Near-field communication | Near-field communication (NFC) is a set of communication protocols that enables communication between two electronic devices over a distance of 4 cm (1 in) or less. NFC offers a low-speed connection through a simple setup that can be used to bootstrap more-capable wireless connections.
NFC devices can act as electronic identity documents and keycards. They are used in contactless payment systems and allow mobile payment replacing or supplementing systems such as credit cards and electronic ticket smart cards. These are sometimes called NFC/CTLS or CTLS NFC, with contactless abbreviated as CTLS. NFC can be used to share small files such as contacts and for bootstrapping fast connections to share larger media such as photos, videos, and other files.
Overview
Near-field communication (NFC) describes a technology which can be used for contactless exchange of data over short distances. Two NFC-capable devices are connected via a point-to-point contact over a distance of 0 to 2 cm. This connection can be used to exchange data (such as process data and maintenance and service information) between the devices. This interface can be used for parameterization of the component as well.
NFC-enabled portable devices can be provided with application software, for example to read electronic tags or make payments when connected to an NFC-compliant system. These are standardized to NFC protocols, replacing proprietary technologies used by earlier systems.
Like other "proximity card" technologies, NFC is based on inductive coupling between two so-called antennas present on NFC-enabled devices—for example a smartphone and a printer—communicating in one or both directions, using a frequency of 13.56 MHz in the globally available unlicensed radio frequency ISM band using the ISO/IEC 18000-3 air interface standard at data rates ranging from 106 to 424 kbit/s.
Every active NFC device can work in one or more of three modes:
NFC card emulation Enables NFC-enabled devices such as smartphones to act like smart cards, allowing users to perform transactions such as payment or ticketing. See Host card emulation
NFC reader/writer Enables NFC-enabled devices to read information stored on inexpensive NFC tags embedded in labels or smart posters.
NFC peer-to-peerEnables two NFC-enabled devices to communicate with each other to exchange information in an ad hoc fashion.
NFC tags are passive data stores which can be read, and under some circumstances written to, by an NFC device. They typically contain data (between 96 and 8,192 bytes) and are read-only in normal use, but may be rewritable. Applications include secure personal data storage (e.g. debit or credit card information, loyalty program data, personal identification numbers (PINs), contacts). NFC tags can be custom-encoded by their manufacturers or use the industry specifications.
The standards were provided by the NFC Forum. The forum was responsible for promoting the technology and setting standards and certifies device compliance. Secure communications are available by applying encryption algorithms as is done for credit cards and if they fit the criteria for being considered a personal area network.
NFC standards cover communications protocols and data exchange formats and are based on existing radio-frequency identification (RFID) standards including ISO/IEC 14443 and FeliCa. The standards include ISO/IEC 18092 and those defined by the NFC Forum. In addition to the NFC Forum, the GSMA group defined a platform for the deployment of GSMA NFC Standards within mobile handsets. GSMA's efforts include Trusted Services Manager, Single Wire Protocol, testing/certification and secure element.
A patent licensing program for NFC is under deployment by France Brevets, a patent fund created in 2011. This program was under development by Via Licensing Corporation, an independent subsidiary of Dolby Laboratories, and was terminated in May 2012. A platform-independent free and open source NFC library, libnfc, is available under the GNU Lesser General Public License.
Present and anticipated applications include contactless transactions, data exchange and simplified setup of more complex communications such as Wi-Fi. In addition, when one of the connected devices has Internet connectivity, the other can exchange data with online services.
History
NFC is rooted in radio-frequency identification technology (known as RFID) which allows compatible hardware to both supply power to and communicate with an otherwise unpowered and passive electronic tag using radio waves. This is used for identification, authentication and tracking. Similar ideas in advertising and industrial applications were not generally successful commercially, outpaced by technologies such as QR codes, barcodes and UHF RFID tags.
May 17, 1983 - The first patent to be associated with the abbreviation "RFID" was granted to Charles Walton.
1997 - Early form patented and first used in Star Wars character toys for Hasbro. The patent was originally held by Andrew White and Marc Borrett at Innovision Research and Technology (Patent WO9723060). The device allowed data communication between two units in close proximity.
March 25, 2002 - Philips and Sony agreed to establish a technology specification and created a technical outline. Philips Semiconductors applied for the six fundamental patents of NFC, invented by the Austrian and French engineers Franz Amtmann and Philippe Maugars who received the European Inventor Award in 2015.
December 8, 2003 - NFC was approved as an ISO/IEC standard and later as an ECMA standard.
2004 - Nokia, Philips and Sony established the NFC Forum
2004 - Nokia launch NFC shell add-on for Nokia 5140 and later Nokia 3220 models, to be shipped in 2005.
2005 - Mobile phone experiments in public transport, with payment in Hanau in May (Nokia), on-board ticket validation in Nice in October with Orange, and payment in shops in Caen in October (Samsung), with the first reception of "Fly Tag" information
2006 - Initial specifications for NFC Tags
2006 - Specification for "SmartPoster" records
2007 - Innovision’s NFC tags used in the first consumer trial in the UK, in the Nokia 6131 handset.
2008 - AirTag launched what it called the first NFC SDK.
2009 - In January, NFC Forum released Peer-to-Peer standards to transfer contacts, URLs, initiate Bluetooth, etc.
2009 - NFC first used in public transport by China Unicom and Yucheng Transportation Card in the tramways and buses of Chongqing on 19 January 2009, then implemented for the first time in a metro network by China Unicom in Beijing on 31 December 2010.
2010 - Innovision released a suite of designs and patents for low cost, mass-market mobile phones and other devices.
2010 - Nokia C7: First NFC-capable smartphone released. NFC feature was enabled by software update in early 2011.
2010 - Samsung Nexus S: First Android NFC phone shown
May 21, 2010 - Nice, France launches, with "Cityzi", the "Nice City of contactless mobile" project, the first in Europe to provide inhabitants with NFC bank cards and mobile phones (like Samsung Player One S5230), and a "bouquet of services" covering transportation (tramways and bus), tourism and student's services
2011 - Google I/O "How to NFC" demonstrates NFC to initiate a game and to share a contact, URL, app or video.
2011 - NFC support becomes part of the Symbian mobile operating system with the release of Symbian Anna version.
2011 - Research In Motion devices are the first ones certified by MasterCard Worldwide for their PayPass service
2012 - UK restaurant chain EAT. and Everything Everywhere (Orange Mobile Network Operator), partner on the UK's first nationwide NFC-enabled smartposter campaign. A dedicated mobile phone app is triggered when the NFC-enabled mobile phone comes into contact with the smartposter.
2012 - Sony introduced NFC "Smart Tags" to change modes and profiles on a Sony smartphone at close range, included with the Sony Xperia P Smartphone released the same year.
2013 - Samsung and VISA announce their partnership to develop mobile payments.
2013 - IBM scientists, in an effort to curb fraud and security breaches, develop an NFC-based mobile authentication security technology. This technology works on similar principles to dual-factor authentication security.
October 2014 - Dinube becomes the first non-card payment network to introduce NFC contactless payments natively on a mobile device, i.e. no need for an external case attached or NFC 'sticker' nor for a card. Based on Host card emulation with its own application identifier (AID), contactless payment was available on Android KitKat upwards and commercial release commenced in June 2015.
2014 - AT&T, Verizon and T-Mobile released Softcard (formerly ISIS mobile wallet). It runs on NFC-enabled Android phones and iPhone 4 and iPhone 5 when an external NFC case is attached. The technology was purchased by Google and the service ended on March 31, 2015.
November 2015 - Swatch and Visa Inc. announced a partnership to enable NFC financial transactions using the "Swatch Bellamy" wristwatch. The system is currently online in Asia, through a partnership with China UnionPay and Bank of Communications. The partnership will bring the technology to the US, Brazil, and Switzerland.
November 2015 - Google’s Android Pay function was launched, a direct rival to Apple Pay, and its roll-out across the US commenced.
Design
NFC is a set of short-range wireless technologies, typically requiring a separation of 10 cm or less. NFC operates at 13.56 MHz on ISO/IEC 18000-3 air interface and at rates ranging from 106 kbit/s to 424 kbit/s. NFC always involves an initiator and a target; the initiator actively generates an RF field that can power a passive target. This enables NFC targets to take very simple form factors such as unpowered tags, stickers, key fobs, or cards. NFC peer-to-peer communication is possible, provided both devices are powered.
NFC tags contain data and are typically read-only, but may be writable. They can be custom-encoded by their manufacturers or use NFC Forum specifications. The tags can securely store personal data such as debit and credit card information, loyalty program data, PINs and networking contacts, among other information. The NFC Forum defines four types of tags that provide different communication speeds and capabilities in terms of configurability, memory, security, data retention and write endurance. Tags currently offer between 96 and 8,192 bytes of memory.
As with proximity card technology, NFC uses inductive coupling between two nearby loop antennas effectively forming an air-core transformer. Because the distances involved are tiny compared to the wavelength of electromagnetic radiation (radio waves) of that frequency (about 22 meters), the interaction is described as near field. Only an alternating magnetic field is involved so that almost no power is actually radiated in the form of radio waves (which are electromagnetic waves, also involving an oscillating electric field); that essentially prevents interference between such devices and any radio communications at the same frequency or with other NFC devices much beyond its intended range. They operate within the globally available and unlicensed radio frequency ISM band of 13.56 MHz. Most of the RF energy is concentrated in the ±7 kHz bandwidth allocated for that band, but the emission's spectral width can be as wide as 1.8 MHz in order to support high data rates.
Working distance with compact standard antennas and realistic power levels could be up to about 20 cm (but practically speaking, working distances never exceed 10 cm). Note that because the pickup antenna may be quenched in an eddy current by nearby metallic surfaces, the tags may require a minimum separation from such surfaces.
The ISO/IEC 18092 standard supports data rates of 106, 212 or 424 kbit/s.
The communication takes place between an active "initiator" device and a target device which may either be:
Passive The initiator device provides a carrier field and the target device, acting as a transponder, communicates by modulating the incident field. In this mode, the target device may draw its operating power from the initiator-provided magnetic field.
Active Both initiator and target device communicate by alternately generating their own fields. A device stops transmitting in order to receive data from the other. This mode requires that both devices include power supplies.
NFC employs two different codings to transfer data. If an active device transfers data at 106 kbit/s, a modified Miller coding with 100% modulation is used. In all other cases Manchester coding is used with a modulation ratio of 10%.
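The sketch below illustrates the Manchester idea only: each bit is expanded into a half-bit pair with a transition in the middle, which keeps the signal self-clocking. The particular 1 → (high, low), 0 → (low, high) mapping is one common convention and is not meant to reproduce the exact ISO/IEC 14443 waveforms or the 10% modulation depth.

```python
# Illustration of Manchester coding: every data bit becomes two half-bits with
# a transition in the middle, so the receiver can recover the clock from the
# data. The bit-to-level mapping here is one common convention, chosen only to
# show the principle, not the exact NFC waveform.

def manchester_encode(bits):
    halves = []
    for bit in bits:
        halves.extend((1, 0) if bit else (0, 1))
    return halves

print(manchester_encode([1, 0, 1, 1]))   # [1, 0, 0, 1, 1, 0, 1, 0]
```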
Standards
NFC standards cover communications protocols and data exchange formats, and are based on existing RFID standards including ISO/IEC 14443 and FeliCa. The standards include ISO/IEC 18092 and those defined by the NFC Forum.
ISO/IEC
NFC is standardized in ECMA-340 and ISO/IEC 18092. These standards specify the modulation schemes, coding, transfer speeds and frame format of the RF interface of NFC devices, as well as initialization schemes and conditions required for data collision-control during initialization for both passive and active NFC modes. They also define the transport protocol, including protocol activation and data-exchange methods. The air interface for NFC is standardized in:
ISO/IEC 18092 / ECMA-340—Near Field Communication Interface and Protocol-1 (NFCIP-1)
ISO/IEC 21481 / ECMA-352—Near Field Communication Interface and Protocol-2 (NFCIP-2)
NFC incorporates a variety of existing standards including ISO/IEC 14443 Type A and Type B, and FeliCa. NFC-enabled phones work at a basic level with existing readers. In "card emulation mode" an NFC device should transmit, at a minimum, a unique ID number to a reader. In addition, NFC Forum defined a common data format called NFC Data Exchange Format (NDEF) that can store and transport items ranging from any MIME-typed object to ultra-short RTD-documents, such as URLs. The NFC Forum added the Simple NDEF Exchange Protocol (SNEP) to the spec that allows sending and receiving messages between two NFC devices.
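As an example of the NDEF framing, the short Python function below hand-builds a single "well-known URI" record (header flags, type length, payload length, type 'U', and a payload whose first byte abbreviates the URI scheme); it is a minimal sketch for illustration rather than a complete NDEF implementation.

```python
# Minimal construction of one NDEF short record of well-known type "U" (URI).
# Header byte: MB | ME | SR flags plus TNF = 0x01 (NFC Forum well-known type).
# The payload starts with a URI identifier code; 0x04 abbreviates "https://".

def ndef_uri_record(uri_without_scheme: str) -> bytes:
    payload = bytes([0x04]) + uri_without_scheme.encode("utf-8")
    header = 0x80 | 0x40 | 0x10 | 0x01      # MB, ME, SR, TNF=well-known
    return bytes([header, 1, len(payload)]) + b"U" + payload

record = ndef_uri_record("en.wikipedia.org/wiki/Near-field_communication")
print(record.hex())
```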
GSMA
The GSM Association (GSMA) is a trade association representing nearly 800 mobile telephony operators and more than 200 product and service companies across 219 countries. Many of its members have led NFC trials and are preparing services for commercial launch.
GSM is involved with several initiatives:
Standards: GSMA is developing certification and testing standards to ensure global interoperability of NFC services.
Pay-Buy-Mobile initiative: Seeks to define a common global approach to using NFC technology to link mobile devices with payment and contactless systems.
On November 17, 2010, after two years of discussions, AT&T, Verizon and T-Mobile launched a joint venture to develop a platform through which point of sale payments could be made using NFC in cell phones. Initially known as Isis Mobile Wallet and later as Softcard, the venture was designed to usher in broad deployment of NFC technology, allowing their customers' NFC-enabled cell phones to function similarly to credit cards throughout the US. Following an agreement with—and IP purchase by—Google, the Softcard payment system was shuttered in March, 2015, with an endorsement for its earlier rival, Google Wallet.
StoLPaN
StoLPaN (Store Logistics and Payment with NFC) is a pan-European consortium supported by the European Commission's Information Society Technologies program. StoLPaN will examine the potential for NFC local wireless mobile communication.
NFC Forum
NFC Forum is a non-profit industry association formed on March 18, 2004, by NXP Semiconductors, Sony and Nokia to advance the use of NFC wireless interaction in consumer electronics, mobile devices and PCs. Standards include the four distinct tag types that provide different communication speeds and capabilities covering flexibility, memory, security, data retention and write endurance. NFC Forum promotes implementation and standardization of NFC technology to ensure interoperability between devices and services. As of January 2020, the NFC Forum had over 120 member companies.
NFC Forum promotes NFC and certifies device compliance and whether it fits in a personal area network.
Other standardization bodies
GSMA defined a platform for the deployment of GSMA NFC Standards within mobile handsets. GSMA's efforts include, Single Wire Protocol, testing and certification and secure element. The GSMA standards surrounding the deployment of NFC protocols (governed by NFC Forum) on mobile handsets are neither exclusive nor universally accepted. For example, Google's deployment of Host Card Emulation on Android KitKat provides for software control of a universal radio. In this HCE Deployment the NFC protocol is leveraged without the GSMA standards.
Other standardization bodies involved in NFC include:
ETSI / SCP (Smart Card Platform) to specify the interface between the SIM card and the NFC chipset.
EMVCo for the impacts on the EMV payment applications
Applications
NFC allows one- and two-way communication between endpoints, suitable for many applications.
Commerce
NFC devices can be used in contactless payment systems, similar to those used in credit cards and electronic ticket smart cards and allow mobile payment to replace/supplement these systems.
In Android 4.4, Google introduced platform support for secure NFC-based transactions through Host Card Emulation (HCE), for payments, loyalty programs, card access, transit passes and other custom services. HCE allows any Android 4.4 app to emulate an NFC smart card, letting users initiate transactions with their device. Apps can use a new Reader Mode to act as readers for HCE cards and other NFC-based transactions.
On September 9, 2014, Apple announced support for NFC-powered transactions as part of Apple Pay. With the introduction of iOS 11, Apple devices allow third-party developers to read data from NFC tags.
Bootstrapping other connections
NFC offers a low-speed connection with simple setup that can be used to bootstrap more capable wireless connections. For example, the Android Beam software uses NFC to enable pairing and establish a Bluetooth connection when doing a file transfer, and then disables Bluetooth on both devices upon completion. Nokia, Samsung, BlackBerry and Sony have used NFC technology to pair Bluetooth headsets, media players and speakers with one tap. The same principle can be applied to the configuration of Wi-Fi networks. Samsung Galaxy devices have a feature named S-Beam—an extension of Android Beam that uses NFC (to share MAC addresses and IP addresses) and then uses Wi-Fi Direct to share files and documents. The advantage of using Wi-Fi Direct over Bluetooth is that it permits much faster data transfers, running at up to 300 Mbit/s.
Social networking
NFC can be used for social networking, for sharing contacts, text messages and forums, links to photos, videos or files and entering multiplayer mobile games.
Identity and access tokens
NFC-enabled devices can act as electronic identity documents, such as passports and ID cards, and as keycards for use in fare cards, transit passes, login cards, car keys and access badges. NFC's short range and encryption support make it more suitable than less private RFID systems.
Smartphone automation and NFC tags
NFC-equipped smartphones can be paired with NFC tags or stickers that can be programmed by NFC apps. These programs can allow a change of phone settings, texting, app launching, or command execution.
Such apps do not rely on a company or manufacturer, but can be utilized immediately with an NFC-equipped smartphone and an NFC tag.
The NFC Forum published the Signature Record Type Definition (RTD) 2.0 in 2015 to add integrity and authenticity to NFC tags. This specification allows an NFC device to verify tag data and identify the tag author.
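Data on such tags is stored as NDEF (NFC Data Exchange Format) records. The following is a minimal sketch of the byte layout, assuming a single short Text record; real applications would normally rely on a library (such as ndeflib) or the platform NFC APIs rather than hand-parsing.

def parse_text_record(record: bytes) -> tuple[str, str]:
    # Header byte: flags (MB, ME, CF, SR, IL) in the high bits, TNF in the low 3 bits.
    flags, type_len, payload_len = record[0], record[1], record[2]
    assert flags & 0x10, "only short records (SR=1) handled here"
    assert flags & 0x07 == 0x01, "expected TNF = well-known type"
    assert record[3:3 + type_len] == b"T", "expected the Text record type"
    payload = record[3 + type_len:3 + type_len + payload_len]
    lang_len = payload[0] & 0x3F                    # low 6 bits: language-code length
    language = payload[1:1 + lang_len].decode("ascii")
    text = payload[1 + lang_len:].decode("utf-8")   # bit 7 set would mean UTF-16 text
    return language, text

raw = bytes([0xD1, 0x01, 0x08, ord("T"), 0x02]) + b"en" + b"hello"
print(parse_text_record(raw))                       # ('en', 'hello')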
Gaming
NFC has been used in video games, starting with Skylanders: Spyro's Adventure, which uses customizable figurines that each store their own data, so no two figures are exactly alike. Nintendo's Wii U was the first system to include NFC technology out of the box via the GamePad. It was later included in the Nintendo 3DS range (built into the New Nintendo 3DS/XL and available via a separately sold reader that uses infrared to communicate with older 3DS family consoles). The amiibo range of accessories utilizes NFC technology to unlock features.
Sports
The Adidas Telstar 18 is a soccer ball that contains an embedded NFC chip. The chip enables users to interact with the ball using a smartphone.
Bluetooth comparison
NFC and Bluetooth are both relatively short-range communication technologies available on mobile phones. NFC operates at slower speeds than Bluetooth and has a much shorter range, but consumes far less power and doesn't require pairing.
NFC sets up more quickly than standard Bluetooth, but has a lower transfer rate than Bluetooth Low Energy. With NFC, instead of performing manual configurations to identify devices, the connection between two NFC devices is automatically established in less than 0.1 second. The maximum data transfer rate of NFC (424 kbit/s) is slower than that of Bluetooth V2.1 (2.1 Mbit/s).
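As a rough worked comparison of these quoted link rates (raw bit rates, ignoring protocol overhead), the time to move a 1 MB file:

size_bits = 1_000_000 * 8                                   # a 1 MB file
for name, bits_per_second in [("NFC (424 kbit/s)", 424_000),
                              ("Bluetooth V2.1 (2.1 Mbit/s)", 2_100_000)]:
    print(f"{name}: {size_bits / bits_per_second:.1f} s")
# NFC (424 kbit/s): 18.9 s
# Bluetooth V2.1 (2.1 Mbit/s): 3.8 s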
NFC's maximum working distance of less than 20 cm reduces the likelihood of unwanted interception, making it particularly suitable for crowded areas that complicate correlating a signal with its transmitting physical device (and by extension, its user).
NFC is compatible with existing passive RFID (13.56 MHz ISO/IEC 18000-3) infrastructures. It requires comparatively low power, similar to the Bluetooth V4.0 low-energy protocol. When NFC works with an unpowered device (e.g. on a phone that may be turned off, a contactless smart credit card, a smart poster), however, the NFC power consumption is greater than that of Bluetooth V4.0 Low Energy, since illuminating the passive tag needs extra power.
Devices
In 2011, handset vendors released more than 40 NFC-enabled handsets with the Android mobile operating system. BlackBerry devices support NFC using BlackBerry Tag on devices running BlackBerry OS 7.0 and greater.
MasterCard added further NFC support for PayPass for the Android and BlackBerry platforms, enabling PayPass users to make payments using their Android or BlackBerry smartphones. A partnership between Samsung and Visa added a 'payWave' application on the Galaxy S4 smartphone.
In 2012, Microsoft added native NFC functionality to its mobile OS with Windows Phone 8, as well as to the Windows 8 operating system. Microsoft provides the "Wallet hub" in Windows Phone 8 for NFC payment, which can integrate multiple NFC payment services within a single application.
In 2014, Apple released the iPhone 6, its first device to support NFC. Since September 2019, with iOS 13, Apple also allows NFC tags to be written as well as read using an NFC app.
Deployments
Hundreds of NFC trials have been conducted. Some firms moved to full-scale service deployments, spanning one or more countries. Multi-country deployments include Orange's rollout of NFC technology to banks, retailers, transport, and service providers in multiple European countries, and Airtel Africa and Oberthur Technologies deploying to 15 countries throughout Africa.
China Telecom (China's third-largest mobile operator) began its NFC rollout in November 2013. The company signed up multiple banks to make their payment apps available on its SIM cards. China Telecom stated that the wallet would support coupons, membership cards, fuel cards and boarding passes. The company planned to roll out 40 NFC phone models and 30 million NFC SIMs by 2014.
Softcard (formerly Isis Mobile Wallet), a joint venture of Verizon Wireless, AT&T and T-Mobile, focused on in-store payments using NFC technology. After pilots in some regions, it launched across the US.
Vodafone launched the NFC-based Vodafone SmartPass mobile payment service in Spain in partnership with Visa. It enables consumers with an NFC-enabled SIM card in a mobile device to make contactless payments via their SmartPass credit balance at any POS.
OTI, an Israeli company that designs and develops contactless microprocessor-based smart card technology, contracted to supply NFC readers to one of its channel partners in the US. The partner was required to buy $10 million worth of OTI NFC readers over three years.
Rogers Communications launched virtual wallet Suretap to enable users to make payments with their phone in Canada in April 2014. Suretap users can load up gift cards and prepaid MasterCards from national retailers.
Sri Lanka's first workforce smart card uses NFC.
As of December 13, 2013, the Tim Hortons TimmyME BlackBerry 10 application allowed users to link their prepaid Tim Card to the app, allowing payment by tapping the NFC-enabled device to a standard contactless terminal.
Google Wallet allows consumers to store credit card and store loyalty card information in a virtual wallet and then use an NFC-enabled device at terminals that also accept MasterCard PayPass transactions.
Germany, Austria, Finland, New Zealand, Italy, Iran, Turkey and Greece trialed NFC ticketing systems for public transport. The Lithuanian capital of Vilnius fully replaced paper tickets for public transportation with ISO/IEC 14443 Type A cards on July 1, 2013.
In Australia, Bankmecu and card issuer Cuscal completed an NFC payment sticker trial, enabling consumers to make contactless payments at Visa payWave terminals using a smart sticker attached to their phone.
India was implementing NFC-based transactions in box offices for ticketing purposes.
A partnership of Google and Equity Bank in Kenya introduced NFC payment systems for public transport in the capital city, Nairobi, under the branding BebaPay.
January 2019 saw the start of a trial using NFC-enabled Android mobile phones to pay public transport fares in Victoria, Australia.
Criticism
Vulnerabilities
Although the range of NFC is limited to a few centimeters, standard plain NFC is not protected against eavesdropping and can be vulnerable to data modifications. Applications may use higher-layer cryptographic protocols to establish a secure channel.
The RF signal for the wireless data transfer can be picked up with antennas. The distance from which an attacker is able to eavesdrop the RF signal depends on multiple parameters, but is typically less than 10 meters. Also, eavesdropping is highly affected by the communication mode. A passive device that doesn't generate its own RF field is much harder to eavesdrop on than an active device. An attacker can typically eavesdrop within 10 m of an active device and 1 m for passive devices.
Because NFC devices usually include ISO/IEC 14443 protocols, relay attacks are feasible. For this attack the adversary forwards the request of the reader to the victim and relays its answer to the reader in real time, pretending to be the owner of the victim's smart card. This is similar to a man-in-the-middle attack. Published proof-of-concept code demonstrates such a relay attack using two stock commercial NFC devices; the attack can be implemented using only two NFC-enabled mobile phones.
Because NFC uses radio waves for data transfer, a number of security attack vectors exist, such as eavesdropping, data corruption, data modification and impostor (man-in-the-middle) attacks.
Limitations
NFC, and RFID, the base technology on which the standard is built, have been criticized for their lack of long-range support, with a maximum working distance of about 20 cm.
Future considerations
Ultra-wideband (UWB), another radio technology, has been suggested as a possible future alternative to NFC because it can transmit data over greater distances, as have Bluetooth and other wireless technologies.
See also
BebaPay
CIPURSE
Device-to-device
EZ-link
Mobile phone accessories
Near and far field
New Nintendo 3DS
Object hyperlinking
Poken
RuBee
Smart keychain
TecTiles
TransferJet
Wii U GamePad
Notes
References
External links
A summary video of near-field communication
Articles containing video clips
Bandplans
Ecma standards
ISO standards
Mobile telecommunications
Wireless |
400414 | https://en.wikipedia.org/wiki/USB%20flash%20drive | USB flash drive | A USB flash drive (also known as a thumb drive) is a data storage device that includes flash memory with an integrated USB interface. It is typically removable, rewritable and much smaller than an optical disc. Most weigh less than 30 grams. Since first appearing on the market in late 2000, as with virtually all other computer memory devices, storage capacities have risen while prices have dropped. Flash drives with anywhere from 8 to 256 gigabytes (GB) have been frequently sold, while 512 GB and 1 terabyte (TB) units are less common. As of 2018, 2 TB flash drives were the largest available in terms of storage capacity. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to physically last between 10 and 100 years under normal circumstances (shelf storage time).
Common uses of USB flash drives are for storage, supplementary back-ups, and transferring of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are less vulnerable to electromagnetic interference than floppy disks, and are unharmed by surface scratches (unlike CDs). However, as with any flash storage, data loss from bit leaking due to prolonged lack of electrical power and the possibility of spontaneous controller failure due to poor manufacturing could make it unsuitable for long-term archival of data. The ability to retain data is affected by the controller's firmware, internal data redundancy, and error correction algorithms.
Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after widespread adoption of USB ports and the larger USB drive capacity compared to the "1.44 megabyte" (1440 kibibyte) 3.5-inch floppy disk.
USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices, due to their standardized form factor, which allows it to be housed inside a device without protruding.
A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. Some are equipped with an I/O indication LED that lights up or blinks upon access. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist (e.g. micro-USB and USB-C ports). USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage; they require a battery only when used to play music on the go.
History
The basis for USB flash drives is flash memory, a type of floating-gate semiconductor memory invented by Fujio Masuoka in the early 1980s. Flash memory uses floating-gate MOSFET transistors as memory cells.
Multiple individuals have staked a claim to being the inventor of the USB flash drive. On April 5, 1999, Amir Ban, Dov Moran, and Oron Ogdan of M-Systems, an Israeli company, filed a patent application entitled "Architecture for a Universal Serial Bus-Based PC Flash Disk". The patent was subsequently granted on November 14, 2000 and these individuals have often been recognized as the inventors of the USB flash drive. Also in 1999, Shimon Shmueli, an engineer at IBM, submitted an invention disclosure asserting that he had invented the USB flash drive. A Singaporean company named Trek 2000 International is the first company known to have sold a USB flash drive, and has also maintained that it is the original inventor of the device. Finally Pua Khein-Seng, a Malaysian engineer, has also been recognized by some as a possible inventor of the device.
Given these competing claims to inventorship, patent disputes involving the USB flash drive have arisen over the years. Both Trek 2000 International and Netac Technology have accused others of infringing their patents on the USB flash drive. However, despite these lawsuits, the question of who was the first to invent the USB flash drive has not been definitively settled and multiple claims persist.
Technology improvements
Flash drives are often measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second (MB/s), megabits per second (Mbit/s), or in optical drive multipliers such as "180X" (180 times 150 KiB/s). File transfer rates vary considerably among devices. Second-generation flash drives were claimed to read at up to 30 MB/s and write at about half that rate, roughly 20 times faster than the theoretical transfer rate of the previous standard, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) before accounting for overhead. The effective transfer rate of a device is significantly affected by the data access pattern.
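For reference, the optical-drive multiplier is simply a multiple of the 150 KiB/s single-speed CD-ROM rate; a quick conversion (illustrative arithmetic only):

def optical_multiplier_to_mb_per_s(multiplier: float) -> float:
    # 1x optical speed = 150 KiB/s; result in decimal megabytes per second
    return multiplier * 150 * 1024 / 1_000_000

print(optical_multiplier_to_mb_per_s(180))   # ~27.6 MB/s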
By 2002, USB flash drives had USB 2.0 connectivity, which has a transfer rate upper bound of 480 Mbit/s; after accounting for protocol overhead, that translates to an effective throughput of about 35 MB/s. That same year, Intel sparked widespread use of second-generation USB by including it within its laptops.
By 2010, the maximum available storage capacity for the devices had reached upwards of 128 GB. USB 3.0 was slow to appear in laptops. Through 2010, the majority of laptop models still contained only USB 2.0.
In January 2013, tech company Kingston released a flash drive with 1 TB of storage. The first USB 3.1 Type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015. By July 2016, flash drives with 8 to 256 GB capacity were sold more frequently than those with capacities between 512 GB and 1 TB. In 2017, Kingston Technology announced the release of a 2 TB flash drive. In 2018, SanDisk announced a 1 TB USB-C flash drive, the smallest of its kind.
Technology
On a USB flash drive, one end of the device is fitted with a single USB plug; some flash drives additionally offer a micro USB plug, facilitating data transfers between different devices.
Inside the plastic casing is a small printed circuit board, which has some power circuitry and a small number of surface-mounted integrated circuits (ICs). Typically, one of these ICs provides an interface between the USB connector and the onboard memory, while the other is the flash memory. Drives typically use the USB mass storage device class to communicate with the host.
Flash memory
Flash memory combines a number of older technologies, with lower cost, lower power consumption and small size made possible by advances in semiconductor device fabrication technology. The memory storage was based on earlier EPROM and EEPROM technologies. These had limited capacity, were slow for both reading and writing, required complex high-voltage drive circuitry, and could be re-written only after erasing the entire contents of the chip.
Hardware designers later developed EEPROMs with the erasure region broken up into smaller "fields" that could be erased individually without affecting the others. Altering the contents of a particular memory location involved copying the entire field into an off-chip buffer memory, erasing the field, modifying the data as required in the buffer, and re-writing it into the same field. This required considerable computer support, and PC-based EEPROM flash memory systems often carried their own dedicated microprocessor system. Flash drives are more or less a miniaturized version of this.
The development of high-speed serial data interfaces such as USB made semiconductor memory systems with serially accessed storage viable, and the simultaneous development of small, high-speed, low-power microprocessor systems allowed this to be incorporated into extremely compact systems. Serial access requires far fewer electrical connections for the memory chips than does parallel access, which has simplified the manufacture of multi-gigabyte drives.
Computers access flash memory systems very much like hard disk drives, where the controller system has full control over where information is actually stored. The actual EEPROM writing and erasure processes are, however, still very similar to the earlier systems described above.
Many low-cost MP3 players simply add extra software and a battery to a standard flash memory control microprocessor so it can also serve as a music playback decoder. Most of these players can also be used as a conventional flash drive, for storing files of any type.
Essential components
There are typically five parts to a flash drive:
USB plug – provides a physical interface to the host computer. Some USB flash drives use a plug that does not protect the contacts, with the possibility of plugging it into the USB port in the wrong orientation if the connector type is not symmetrical.
USB mass storage controller – a small microcontroller with a small amount of on-chip ROM and RAM.
NAND flash memory chip(s) – stores data (NAND flash is typically also used in digital cameras).
Crystal oscillator – produces the device's main clock signal and controls the device's data output through a phase-locked loop.
Cover – typically made of plastic or metal, protecting the electronics against mechanical stress and even possible short circuits.
Additional components
The typical device may also include:
Jumpers and test pins – for testing during the flash drive's manufacturing or loading code into its microcontroller.
LEDs – indicate data transfers or data reads and writes.
Write-protect switches – Enable or disable writing of data into memory.
Unpopulated space – provides space to include a second memory chip. Having this second space allows the manufacturer to use a single printed circuit board for more than one storage size device.
USB connector cover or cap – reduces the risk of damage, prevents the entry of dirt or other contaminants, and improves overall device appearance. Some flash drives use retractable USB connectors instead. Others have a swivel arrangement so that the connector can be protected without removing anything.
Transport aid – the cap or the body often contains a hole suitable for connection to a key chain or lanyard. Connecting the cap, rather than the body, can allow the drive itself to be lost.
Some drives offer expandable storage via an internal memory card slot, much like a memory card reader.
Size and style of packaging
Most USB flash drives weigh less than 30 grams. While some manufacturers are competing for the smallest size, with the biggest memory, offering drives only a few millimeters larger than the USB plug itself, some manufacturers differentiate their products by using elaborate housings, which are often bulky and make the drive difficult to connect to the USB port. Because the USB port connectors on a computer housing are often closely spaced, plugging a flash drive into a USB port may block an adjacent port. Such devices may carry the USB logo only if sold with a separate extension cable. Such cables are USB-compatible but do not conform to the USB standard.
USB flash drives have been integrated into other commonly carried items, such as watches, pens, laser pointers, and even the Swiss Army Knife; others have been fitted with novelty cases such as toy cars or Lego bricks. USB flash drives with images of dragons, cats or aliens are very popular in Asia. The small size, robustness and cheapness of USB flash drives make them an increasingly popular peripheral for case modding.
File system
Most flash drives ship preformatted with the FAT32 or exFAT file systems. The ubiquity of the FAT32 file system allows the drive to be accessed on virtually any host device with USB support. Also, standard FAT maintenance utilities (e.g., ScanDisk) can be used to repair or retrieve corrupted data. However, because a flash drive appears as a USB-connected hard drive to the host system, the drive can be reformatted to any file system supported by the host operating system.
Defragmenting
Flash drives can be defragmented. There is a widespread opinion that defragmenting brings little advantage (as there is no mechanical head that moves from fragment to fragment), and that defragmenting shortens the life of the drive by making many unnecessary writes. However, some sources claim that defragmenting a flash drive can improve performance (mostly due to improved caching of the clustered data), and the additional wear on flash drives may not be significant.
Even distribution
Some file systems are designed to distribute usage over an entire memory device without concentrating usage on any part (e.g., for a directory) to prolong the life of simple flash memory devices. Some USB flash drives have this 'wear leveling' feature built into the software controller to prolong device life, while others do not, so it is not necessarily helpful to install one of these file systems.
Hard disk drive
Sectors are 512 bytes long, for compatibility with hard disk drives, and the first sector can contain a master boot record and a partition table. Therefore, USB flash units can be partitioned just like hard disk drives.
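As an illustrative sketch of that layout, the following reads the partition table from the first 512-byte sector of a raw drive image in Python, assuming the classic MBR format (446 bytes of boot code, four 16-byte partition entries, and a 0x55AA signature); "drive.img" is a placeholder file name.

import struct

with open("drive.img", "rb") as f:           # placeholder: a raw image of the drive
    sector = f.read(512)

assert sector[510:512] == b"\x55\xaa", "no MBR boot signature found"
for i in range(4):
    entry = sector[446 + 16 * i: 446 + 16 * (i + 1)]
    ptype = entry[4]                         # partition type byte
    lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
    if ptype != 0:                           # 0x00 marks an unused entry
        print(f"partition {i}: type=0x{ptype:02x}, start LBA={lba_start}, "
              f"size={num_sectors * 512 // 1_000_000} MB")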
Longevity
The memory in flash drives was commonly engineered with multi-level cell (MLC) memory that is good for around 3,000–5,000 program-erase cycles. Nowadays triple-level cell (TLC) memory is also often used, which has up to 500 write cycles per physical sector, while some high-end flash drives have single-level cell (SLC) memory that is good for around 30,000 writes. There is virtually no limit to the number of reads from such flash memory, so a well-worn USB drive may be write-protected to help ensure the life of individual cells.
Estimation of flash memory endurance is a challenging subject that depends on the SLC/MLC/TLC memory type, size of the flash memory chips, and actual usage pattern. As a result, a USB flash drive can last from a few days to several hundred years.
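As a rough, illustrative back-of-envelope estimate, assuming ideal wear leveling spreads writes evenly and using the ballpark cycle counts quoted above:

capacity_gb = 64                              # example drive size
daily_writes_gb = 2                           # example workload
for cell_type, cycles in [("TLC", 500), ("MLC", 3000), ("SLC", 30000)]:
    total_writable_gb = capacity_gb * cycles  # total data the cells can absorb
    years = total_writable_gb / daily_writes_gb / 365
    print(f"{cell_type}: roughly {years:,.0f} years at {daily_writes_gb} GB/day")
# Real lifetimes are far shorter under heavy, small, or uneven writes.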
Regardless of the endurance of the memory itself, the USB connector hardware is specified to withstand only around 1,500 insert-removal cycles.
Counterfeit products
Counterfeit USB flash drives are sometimes sold with claims of having higher capacities than they actually have. These are typically low-capacity USB drives whose flash memory controller firmware is modified so that they emulate larger capacity drives (for example, a 2 GB drive being marketed as a 64 GB drive). When plugged into a computer, they report themselves as being the larger capacity they were sold as, but when data is written to them, either the write fails, the drive freezes up, or it overwrites existing data. Software tools exist to check and detect fake USB drives, and in some cases it is possible to repair these devices to remove the false capacity information and use their real storage limit.
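A simplified sketch of the check such tools perform: write seeded pseudo-random blocks across the mounted drive, read them back, and verify; on a fake-capacity drive the later blocks come back corrupted or overwrite earlier ones. The path below is a placeholder for a file on the flash drive.

import random

PATH = "E:/capacity_test.bin"                 # placeholder path on the mounted drive
BLOCK = 1024 * 1024                           # 1 MiB per block
BLOCKS = 200                                  # test the first 200 MB

def block(i: int) -> bytes:
    return random.Random(i).randbytes(BLOCK)  # block content derived from its index

with open(PATH, "wb") as f:
    for i in range(BLOCKS):
        f.write(block(i))

with open(PATH, "rb") as f:
    bad = [i for i in range(BLOCKS) if f.read(BLOCK) != block(i)]
print("all blocks verified" if not bad else f"corrupt blocks: {bad[:5]} ...")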
File transfer speeds
Transfer speeds are technically determined by the slowest of three factors: the USB version used, the speed in which the USB controller device can read and write data onto the flash memory, and the speed of the hardware bus, especially in the case of add-on USB ports.
USB flash drives usually specify their read and write speeds in megabytes per second (MB/s); read speed is usually faster. These speeds are for optimal conditions; real-world speeds are usually slower. In particular, circumstances that often lead to speeds much lower than advertised are transfer (particularly writing) of many small files rather than a few very large ones, and mixed reading and writing to the same device.
In a typical well-conducted review of a number of high-performance USB 3.0 drives, a drive that could read large files at 68 MB/s and write at 46 MB/s could only manage 14 MB/s and 0.3 MB/s with many small files. When combining streaming reads and writes, the speed of another drive, which could read at 92 MB/s and write at 70 MB/s, was 8 MB/s. These figures vary radically from one drive to another; some drives could write small files at over 10% of the speed for large ones. The examples given are chosen to illustrate extremes.
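A minimal sketch of the kind of measurement such reviews make, timing one large sequential write against many small files on the mounted drive; the directory path is a placeholder, and the resulting figures depend heavily on the drive and host.

import os, time

TARGET = "E:/bench"                           # placeholder directory on the flash drive
os.makedirs(TARGET, exist_ok=True)

def mb_per_s(nbytes: int, seconds: float) -> float:
    return nbytes / seconds / 1e6

# Sequential: one 64 MiB file written in 1 MiB chunks.
chunk = os.urandom(1024 * 1024)
start = time.perf_counter()
with open(f"{TARGET}/big.bin", "wb") as f:
    for _ in range(64):
        f.write(chunk)
    f.flush(); os.fsync(f.fileno())           # ensure data actually reaches the device
print("large file:", round(mb_per_s(64 * len(chunk), time.perf_counter() - start), 1), "MB/s")

# Many small files: 1,000 files of 4 KiB each.
small = os.urandom(4096)
start = time.perf_counter()
for i in range(1000):
    with open(f"{TARGET}/small_{i}.bin", "wb") as f:
        f.write(small)
        f.flush(); os.fsync(f.fileno())
print("small files:", round(mb_per_s(1000 * len(small), time.perf_counter() - start), 2), "MB/s")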
Uses
Personal data transport
The most common use of flash drives is to transport and store personal files, such as documents, pictures and videos. Individuals also store medical information on flash drives for emergencies and disaster preparation.
Secure storage of data, application and software files
With the wide deployment of flash drives in various environments (secured or otherwise), the issue of data and information security remains important. The use of biometrics and encryption is becoming the norm with the need for increased security for data; on-the-fly encryption systems are particularly useful in this regard, as they can transparently encrypt large amounts of data. In some cases a secure USB drive may use a hardware-based encryption mechanism that uses a hardware module instead of software for strongly encrypting data. IEEE 1667 is an attempt to create a generic authentication platform for USB drives. It is supported in Windows 7 and Windows Vista (Service Pack 2 with a hotfix).
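For purely software-based protection, a minimal sketch of encrypting a file before copying it to the drive, using the third-party Python "cryptography" package's Fernet recipe (symmetric, authenticated encryption); the file names and drive path are placeholders, and this illustrates generic file encryption rather than any vendor's hardware mechanism.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # keep the key somewhere safe, not on the drive
with open("report.pdf", "rb") as f:           # placeholder source file
    token = Fernet(key).encrypt(f.read())
with open("E:/report.pdf.enc", "wb") as f:    # placeholder path on the flash drive
    f.write(token)

# Later, on any machine that has the key:
with open("E:/report.pdf.enc", "rb") as f:
    plaintext = Fernet(key).decrypt(f.read())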
Computer forensics and law enforcement
One development in the use of a USB flash drive as an application carrier is to carry the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects. Forensic software is required not to alter, in any way, the information stored on the computer being examined. Other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices, such as external drives or memory sticks).
Updating motherboard firmware
Motherboard firmware (including BIOS and UEFI) can be updated using USB flash drives. Usually, a new firmware image is downloaded and placed onto a FAT16- or FAT32-formatted USB flash drive connected to the system to be updated, and the path to the new firmware image is selected within the update component of the system's firmware. Some motherboard manufacturers also allow such updates to be performed without the need to enter the system's firmware update component, making it possible to easily recover systems with corrupted firmware.
Also, HP has introduced a USB floppy drive key, which is an ordinary USB flash drive with the additional ability to perform floppy drive emulation, allowing its use for updating system firmware where direct use of USB flash drives is not supported. The desired mode of operation (either regular USB mass storage device or floppy drive emulation) is selected by a sliding switch on the device's housing.
Booting operating systems
Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB.
Original flash memory designs had very limited estimated lifetimes. The failure mechanism for flash memory cells is analogous to a metal fatigue mode; the device fails by refusing to write new data to specific cells that have been subject to many read-write cycles over the device's lifetime. Premature failure of a "live USB" could be circumvented by using a flash drive with a write-lock switch as a WORM device, identical to a live CD. Originally, this potential failure mode limited the use of "live USB" system to special-purpose applications or temporary tasks, such as:
Loading a minimal, hardened kernel for embedded applications (e.g., network router, firewall).
Bootstrapping an operating system install or disk cloning operation, often across a network.
Maintenance tasks, such as virus scanning or low-level data repair, without the primary host operating system loaded.
Newer flash memory designs have much higher estimated lifetimes. Several manufacturers are now offering warranties of 5 years or more. Such warranties should make the device more attractive for more applications. By reducing the probability of the device's premature failure, flash memory devices can now be considered for use where a magnetic disk would normally have been required. Flash drives have also experienced an exponential growth in their storage capacity over time (following the Moore's Law growth curve). As of 2013, single-packaged devices with capacities of 1 TB are readily available, and devices with 16 GB capacity are very economical. Storage capacities in this range have traditionally been considered to offer adequate space, because they allow enough space for both the operating system software and some free space for the user's data.
Operating system installation media
Installers of some operating systems can be stored to a flash drive instead of a CD or DVD, including various Linux distributions, Windows 7 and newer versions, and macOS. In particular, Mac OS X 10.7 is distributed only online, through the Mac App Store, or on flash drives; for a MacBook Air with Boot Camp and no external optical drive, a flash drive can be used to run installation of Windows or Linux.
However, for installation of Windows 7 and later versions, using a USB flash drive with hard disk drive emulation, as detected by the PC's firmware, is recommended in order to boot from it. Transcend is the only manufacturer of USB flash drives containing such a feature.
Furthermore, for installation of Windows XP, using a USB flash drive with a storage capacity of at most 2 GB is recommended in order to boot from it.
Windows ReadyBoost
In Windows Vista and later versions, the ReadyBoost feature allows flash drives (from 4 GB in the case of Windows Vista) to augment operating system memory.
Application carriers
Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer.
The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform.
Ceedo is an alternative product, with the key difference that it does not require Windows applications to be modified in order for them to be carried and run on the drive.
Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux) can be used to run software from a flash drive without installation.
In October 2010, Apple Inc. released their newest iteration of the MacBook Air, which had the system's restore files contained on a USB flash drive rather than the traditional install CDs, due to the Air not coming with an optical drive.
A wide range of portable applications which are all free of charge, and able to run off a computer running Windows without storing anything on the host computer's drives or registry, can be found in the list of portable software.
Backup
Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g., point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite.
This is simple for the end-user, and more likely to be done.
The drive is small and convenient, and more likely to be carried off-site for safety.
The drives are less fragile mechanically and magnetically than tapes.
The capacity is often large enough for several backup images of critical data.
Flash drives are cheaper than many other backup systems.
Flash drives also have disadvantages. They are easy to lose and facilitate unauthorized backups. A lesser setback for flash drives is that they have only one tenth the capacity of hard drives manufactured around their time of distribution.
Password Reset Disk
Password Reset Disk is a feature of the Windows operating system. If a user sets up a Password Reset Disk, it can be used to reset the password on the computer it was set up on.
Audio players
Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the first generation iPod shuffle. Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage. Other applications requiring storage, such as digital voice or sound recording, can also be combined with flash drive functionality.
Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface. Fancier devices that function as a digital audio player have a USB host port (type A female typically).
Media storage and marketing
Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format. Some LCD monitors for consumer HDTV viewing have a dedicated USB port through which music and video files can also be played without use of a personal computer.
Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German punk band Wizo released the Stick EP, only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature. Subsequently, artists including Nine Inch Nails and Kylie Minogue have released music and promotional material on USB flash drives. The first USB album to be released in the UK was Kiss Does... Rave, a compilation album released by the Kiss Network in April 2007.
Brand and product promotion
The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g., technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product.
Usually, such drives will be custom-stamped with a company's logo, as a form of advertising. The drive may be blank, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only, while others are configured with both read-only and user-writable segments. Such dual-partition drives are more expensive.
Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature. Autorunning software this way does not work on all computers, and it is normally disabled by security-conscious users.
Arcades
In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible.
In the arcade games Pump it Up NX2 and Pump it Up NXA, a specially produced flash drive is used as a "save file" for unlocked songs, as well as for progressing in the WorldMax and Brain Shower sections of the game.
In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature from its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game.
Conveniences
Flash drives use little power, have no fragile moving parts, and for most capacities are small and light. Data stored on flash drives is impervious to mechanical shock, magnetic fields, scratches and dust. These properties make them suitable for transporting data from place to place and keeping the data readily at hand.
Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, with the ability to hold many times more data than a DVD (54 DVDs) or even a Blu-ray (10 BDs).
Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives.
Specially manufactured flash drives are available that have a tough rubber or metal casing designed to be waterproof and virtually "unbreakable". These flash drives retain their memory after being submerged in water, and even through a machine wash. Leaving such a flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked one of these flash drives with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive. All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed.
Comparison with other portable storage
Tape
The applications of current data tape cartridges hardly overlap those of flash drives: on tape, cost per gigabyte is very low for large volumes, but the individual drives and media are expensive. Media have a very high capacity and very fast transfer speeds, but store data sequentially and are very slow for random access of data. While disk-based backup is now the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios and for very large volumes (more than a few hundreds of TB). See LTO tapes.
Floppy disk
Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers without USB, or for booting from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage like older Yamaha music keyboards are also dependent on floppy disks, which require computers to process them. Newer devices are built with USB flash drive support.
Floppy disk hardware emulators exist which use the internal connections and physical form of a floppy disk drive to house a device in which a USB flash drive emulates the storage space of a floppy disk in solid-state form, and can be divided into a number of individual virtual floppy disk images using individual data channels.
Optical media
The various writable and re-writable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting.
Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 120 mm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 80 mm recordable MiniCD and Mini DVD. The small discs are more expensive than the standard size, and do not work in all drives.
Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs like sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media.
Flash memory cards
Flash memory cards, e.g., Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one.
Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g., Kingston MobileLite, SanDisk MobileMate). These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity. The ubiquity of SD cards is such that, circa 2011, due to economies of scale, their price is now less than an equivalent-capacity USB flash drive, even with the added cost of a USB SD card reader.
An additional advantage of memory cards is that many consumer devices (e.g., digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port), whereas the memory cards used by the devices can be read by PCs with a card reader.
External hard disk
Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g., Thunderbolt, FireWire and eSATA). For consecutive sector writes and reads (for example, from an unfragmented file), most hard drives can provide a much higher sustained data rate than current NAND flash memory, though mechanical latencies seriously impact hard drive performance.
Unlike solid-state memory, hard drives are susceptible to damage by shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and although they are shielded by their casings, they are vulnerable when exposed to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Like flash drives, hard disks also suffer from file fragmentation, which can reduce access speed.
Obsolete devices
Audio tape cassettes and high-capacity floppy disks (e.g., Imation SuperDisk), and other forms of drives with removable magnetic media, such as the Iomega Zip drive and Jaz drives, are now largely obsolete and rarely used. There are products in today's market that will emulate these legacy drives for both tape and disk (SCSI1/SCSI2, SASI, Magneto optic, Ricoh ZIP, Jaz, IBM3590/ Fujitsu 3490E and Bernoulli for example) in state-of-the-art Compact Flash storage devices – CF2SCSI.
Encryption and security
As highly portable media, USB flash drives are easily lost or stolen. All USB flash drives can have their contents encrypted using third-party disk encryption software, which can often be run directly from the USB drive without installation (for example, FreeOTFE), although some, such as BitLocker, require the user to have administrative rights on every computer it is run on.
Archiving software can achieve a similar result by creating encrypted ZIP or RAR files.
Some manufacturers have produced USB flash drives which use hardware-based encryption as part of the design, removing the need for third-party encryption software. In limited circumstances these drives have been shown to have security problems, and are typically more expensive than software-based systems, which are available for free.
A minority of flash drives support biometric fingerprinting to confirm the user's identity. This has been an expensive alternative to the standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication.
Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines.
Controversies
Criticisms
Failures
Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails. This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB) or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), i.e. in ignorance of their technology, USB drives' failure is more likely to be sudden: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure. Furthermore, when internal wear-leveling is applied to prolong life of the flash drive, once failure of even part of the memory occurs it can be difficult or impossible to use the remainder of the drive, which differs from magnetic media, where bad sectors can be marked permanently not to be used.
Most USB flash drives do not include a write protection mechanism. This feature, which gradually became less common, consists of a switch on the housing of the drive itself, that prevents the host computer from writing or modifying data on the drive. For example, write protection makes a device suitable for repairing virus-contaminated host computers without the risk of infecting a USB flash drive itself. In contrast to SD cards, write protection on USB flash drives (when available) is connected to the drive circuitry, and is handled by the drive itself instead of the host (on SD cards handling of the write-protection notch is optional).
A drawback to the small physical size of flash drives is that they are easily misplaced or otherwise lost. This is a particular problem if they contain sensitive data (see data security). As a consequence, some manufacturers have added encryption hardware to their drives, although software encryption systems which can be used in conjunction with any mass storage medium will achieve the same result. Most drives can be attached to keychains or lanyards. The USB plug is usually retractable or fitted with a removable protective cap.
Robustness
Most USB-based flash technology integrates a printed circuit board with a metal tip, which is simply soldered on. As a result, the stress point is where the two pieces join. The quality control of some manufacturers does not ensure a proper solder temperature, further weakening the stress point. Since many flash drives stick out from computers, they are likely to be bumped repeatedly and may break at the stress point. Most of the time, a break at the stress point tears the joint from the printed circuit board and results in permanent damage. However, some manufacturers produce discreet flash drives that do not stick out, and others use a solid metal or plastic uni-body that has no easily discernible stress point. SD cards serve as a good alternative to USB drives since they can be inserted flush.
Security threats
USB killer
In appearance similar to a USB flash drive, a USB killer is a circuit that charges up capacitors to a high voltage using the power supply pins of a USB port then discharges high voltage pulses onto the data pins. This completely standalone device can instantly and permanently damage or destroy any host hardware that it is connected to.
"Flash Drives for Freedom"
The New York-based Human Rights Foundation collaborated with Forum 280 and USB Memory Direct to launch the "Flash Drives for Freedom" program. The program was created in 2016 to smuggle flash drives with American and South Korean movies and television shows, as well as a copy of the Korean Wikipedia, into North Korea to spread pro-Western sentiment.
Current and future developments
Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost.
Flash drive capacities on the market increase continually. High speed has become a standard for modern flash drives. Capacities exceeding 256 GB were available on the market as early as 2009.
Lexar is attempting to introduce a USB FlashCard, which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into any USB port, but is just one quarter the thickness of the Lexar model. Until 2008, SanDisk manufactured a product called SD Plus, which was a SecureDigital card with a USB connector.
SanDisk has also introduced a new technology to allow controlled storage and usage of copyrighted materials on flash drives, primarily for use by students. This technology is termed FlashCP.
See also
List of computer hardware
Memristor
Microdrive
Nonvolatile BIOS memory
Portable application
ReadyBoost
Sneakernet
Solid-state drive (SSD)
USB dead drop
USB Flash Drive Alliance
Windows To Go
Disk enclosure
External storage
Notes
References
2000 in computing
2000 in technology
Computer-related introductions in 2000
Solid-state computer storage
Flash drive
21st-century inventions
Office equipment |
403355 | https://en.wikipedia.org/wiki/Indian%20Air%20Force | Indian Air Force | The Indian Air Force (IAF) is the air arm of the Indian Armed Forces. Its complement of personnel and aircraft assets ranks fourth amongst the air forces of the world. Its primary mission is to secure Indian airspace and to conduct aerial warfare during armed conflict. It was officially established on 8 October 1932 as an auxiliary air force of the British Empire, which honoured India's aviation service during World War II with the prefix Royal. After India gained independence from the United Kingdom in 1947, the name Royal Indian Air Force was kept and the service operated in the name of the Dominion of India. With the government's transition to a republic in 1950, the prefix Royal was removed.
Since 1950, the IAF has been involved in four wars with neighbouring Pakistan. Other major operations undertaken by the IAF include Operation Vijay, Operation Meghdoot, Operation Cactus and Operation Poomalai. The IAF's mission expands beyond engagement with hostile forces, with the IAF participating in United Nations peacekeeping missions.
The President of India holds the rank of Supreme Commander of the IAF. Approximately 139,576 personnel are in service with the Indian Air Force. The Chief of the Air Staff, an air chief marshal, is a four-star officer and is responsible for the bulk of operational command of the Air Force. There is never more than one serving ACM at any given time in the IAF. The rank of Marshal of the Air Force has been conferred by the President of India on one occasion in history, to Arjan Singh. On 26 January 2002, Singh became the first and, so far, only five-star rank officer of the IAF.
Mission
The IAF's mission is defined by the Armed Forces Act of 1947, the Constitution of India, and the Air Force Act of 1950. It decrees that in the aerial battlespace:
Defence of India and every part thereof including preparation for defence and all such acts as may be conducive in times of war to its prosecution and after its termination to effective demobilisation.
In practice, this is taken as a directive meaning the IAF bears the responsibility of safeguarding Indian airspace and thus furthering national interests in conjunction with the other branches of the armed forces. The IAF provides close air support to the Indian Army troops on the battlefield as well as strategic and tactical airlift capabilities. The Integrated Space Cell is operated by the Indian Armed Forces, the civilian Department of Space, and the Indian Space Research Organisation. By uniting the civilian run space exploration organisations and the military faculty under a single Integrated Space Cell the military is able to efficiently benefit from innovation in the civilian sector of space exploration, and the civilian departments benefit as well.
The Indian Air Force, with highly trained crews and pilots and access to modern military assets, gives India the capacity to carry out rapid-response evacuation, search-and-rescue (SAR) operations, and delivery of relief supplies to affected areas via cargo aircraft. The IAF provided extensive assistance to relief operations during natural calamities such as the Gujarat cyclone in 1998, the tsunami in 2004, and the North India floods in 2013. The IAF has also undertaken relief missions such as Operation Rainbow in Sri Lanka.
History
Formation and early pilots
The Indian Air Force was established on 8 October 1932 in British India as an auxiliary air force of the Royal Air Force. The enactment of the Indian Air Force Act 1932 stipulated its auxiliary status and enforced the adoption of Royal Air Force uniforms, badges, brevets and insignia. On 1 April 1933, the IAF commissioned its first squadron, No. 1 Squadron, with four Westland Wapiti biplanes and five Indian pilots. The Indian pilots were led by British RAF Commanding Officer Flight Lieutenant (later Air Vice Marshal) Cecil Bouchier.
World War II (1939–1945)
During World War II, the IAF played an instrumental role in halting the advance of the Japanese army in Burma, where the first IAF air strike was executed. The target for this first mission was the Japanese military base in Arakan, after which IAF strike missions continued against the Japanese airbases at Mae Hong Son, Chiang Mai and Chiang Rai in northern Thailand.
The IAF was mainly involved in strike, close air support, aerial reconnaissance, bomber escort and pathfinding missions for RAF and USAAF heavy bombers. RAF and IAF pilots would train by flying with their non-native air wings to gain combat experience and communication proficiency. Besides operations in the Burma theatre, IAF pilots participated in air operations in North Africa and Europe.
In addition to the IAF, many native Indians and some 200 Indians resident in Britain volunteered to join the RAF and the Women's Auxiliary Air Force. One such volunteer was Sergeant Shailendra Eknath Sukthankar, who served as a navigator with No. 83 Squadron. Sukthankar was commissioned as an officer, and on 14 September 1943 received the DFC. Squadron Leader Sukthankar eventually completed 45 operations, 14 of them on board the RAF Museum's Avro Lancaster R5868. Another volunteer was Assistant Section Officer Noor Inayat Khan, a Muslim pacifist and Indian nationalist who joined the WAAF in November 1940 to fight against Nazism. Noor Khan served bravely as a secret agent with the Special Operations Executive (SOE) in France, but was eventually betrayed and captured. Many of these Indian airmen were seconded or transferred to the expanding IAF, such as Squadron Leader Mohinder Singh Pujji DFC, who led No. 4 Squadron IAF in Burma.
During the war, the IAF experienced a phase of steady expansion. New aircraft added to the fleet included the US-built Vultee Vengeance, Douglas Dakota, the British Hawker Hurricane, Supermarine Spitfire, and Westland Lysander.
In recognition of the valiant service by the IAF, King George VI conferred the prefix "Royal" in 1945. Thereafter the IAF was referred to as the Royal Indian Air Force. In 1950, when India became a republic, the prefix was dropped and it reverted to being the Indian Air Force.
First years of independence (1947–1950)
After it became independent from the British Empire in 1947, British India was partitioned into the new states of the Dominion of India and the Dominion of Pakistan. Along the lines of the geographical partition, the assets of the air force were divided between the new countries. India's air force retained the name of the Royal Indian Air Force, but three of the ten operational squadrons and facilities, located within the borders of Pakistan, were transferred to the Royal Pakistan Air Force. The RIAF Roundel was changed to an interim 'Chakra' roundel derived from the Ashoka Chakra.
Around the same time, conflict broke out between them over the control of the princely state of Jammu & Kashmir. With Pakistani forces moving into the state, its Maharaja decided to accede to India in order to receive military help. The day after the Instrument of Accession was signed, the RIAF was called upon to transport troops into the war zone, a task that demanded effective management of logistics. This led to the eruption of full-scale war between India and Pakistan, though there was no formal declaration of war. During the war, the RIAF did not engage the Pakistan Air Force in air-to-air combat; however, it did provide effective transport and close air support to the Indian troops.
When India became a republic in 1950, the prefix 'Royal' was dropped from the Indian Air Force. At the same time, the current IAF roundel was adopted.
Congo crisis and Annexation of Goa (1960–1961)
The IAF saw significant conflict in 1960, when Belgium's 75-year rule over Congo ended abruptly, engulfing the nation in widespread violence and rebellion. The IAF activated No. 5 Squadron, equipped with English Electric Canberra, to support the United Nations Operation in the Congo. The squadron started undertaking operational missions in November. The unit remained there until 1966, when the UN mission ended. Operating from Leopoldville and Kamina, the Canberras soon destroyed the rebel Air Force and provided the UN ground forces with its only long-range air support force.
In late 1961, the Indian government decided to attack the Portuguese colony of Goa after years of disagreement between New Delhi and Lisbon. The Indian Air Force was requested to provide support elements to the ground force in what was called Operation Vijay. Probing flights by some fighters and bombers were carried out from 8–18 December to draw out the Portuguese Air Force, but to no avail. On 18 December, two waves of Canberra bombers bombed the runway of Dabolim airfield, taking care not to bomb the terminals and the ATC tower. Two Portuguese transport aircraft (a Super Constellation and a DC-6) found on the airfield were left alone so that they could be captured intact; however, the Portuguese pilots managed to fly the aircraft out of the still-damaged airfield and made their getaway to Portugal. Hunters attacked the wireless station at Bambolim. Vampires were used to provide air support to the ground forces. In Daman, Mystères were used to strike Portuguese gun positions. Ouragans (called Toofanis in the IAF) bombed the runways at Diu and destroyed the control tower, wireless station and the meteorological station. After the Portuguese surrendered, the former colony was integrated into India.
Border disputes and changes in the IAF (1962–1971)
In 1962, border disagreements between China and India escalated to a war when China mobilised its troops across the Indian border. During the Sino-Indian War, India's military planners failed to deploy and effectively use the IAF against the invading Chinese forces. This resulted in India losing a significant amount of advantage to the Chinese, especially in Jammu and Kashmir.
On 24 April 1965, an Indian Ouragan strayed over the Pakistani border and was forced to land by a Pakistani Lockheed F-104 Starfighter. The pilot was returned to India; however, the captured aircraft was kept by the Pakistan Air Force (PAF) and ended up on display at the PAF Museum in Peshawar.
Three years after the Sino-Indian conflict, in 1965, Pakistan launched Operation Gibraltar, a strategy to infiltrate Jammu and Kashmir and start a rebellion against Indian rule. This came to be known as the Second Kashmir War. This was the first time the IAF actively engaged an enemy air force. However, instead of providing close air support to the Indian Army, the IAF carried out independent raids against PAF bases. These bases were situated deep inside Pakistani territory, making IAF fighters vulnerable to anti-aircraft fire. During the course of the conflict, the PAF enjoyed technological superiority over the IAF and had achieved substantial strategic and tactical advantage due to the suddenness of the attack and the advanced state of its air force. The IAF was restrained by the government from retaliating to PAF attacks in the eastern sector, while a substantive part of its combat force was deployed there and could not be transferred to the western sector, against the possibility of Chinese intervention. Moreover, international (UN) stipulations and norms did not permit military force to be introduced into the Indian state of J&K beyond what was agreed during the 1949 ceasefire. Despite this, the IAF was able to prevent the PAF from gaining air superiority over conflict zones. The small and nimble IAF Folland Gnats proved effective against the F-86 Sabres of the PAF, earning the type the nickname "Sabre Slayers". By the time the conflict had ended, the IAF had lost 60–70 aircraft, while the PAF lost 43 aircraft. More than 60% of the IAF's aircraft losses took place in ground attack missions to enemy ground fire, since fighter-bomber aircraft would carry out repeated dive attacks on the same target. According to Air Chief Marshal Arjan Singh of the Indian Air Force, despite being qualitatively inferior, the IAF achieved air superiority in three days in the 1965 war.
After the 1965 war, the IAF underwent a series of changes to improve its capabilities. In 1966, the Para Commandos regiment was created. To increase its logistics supply and rescue operations ability, the IAF inducted 72 HS 748s, which were built by Hindustan Aeronautics Limited (HAL) under licence from Avro. India started to put more stress on indigenous manufacture of fighter aircraft. As a result, the HAL HF-24 Marut, designed by the famed German aerospace engineer Kurt Tank, was inducted into the air force. HAL also started developing an improved version of the Folland Gnat, known as the HAL Ajeet. At the same time, the IAF also started inducting Mach 2 capable Soviet MiG-21 and Sukhoi Su-7 fighters.
Bangladesh Liberation War (1971)
By late 1971, the intensification of the independence movement in East Pakistan led to the Bangladesh Liberation War between India and Pakistan. On 22 November 1971, 10 days before the start of a full-scale war, four PAF F-86 Sabre jets attacked Indian and Mukti Bahini positions at Garibpur, near the international border. Two of the four PAF Sabres were shot down and one damaged by the IAF's Folland Gnats. On 3 December, India formally declared war against Pakistan following massive preemptive strikes by the PAF against Indian Air Force installations in Srinagar, Ambala, Sirsa, Halwara and Jodhpur. However, the IAF did not suffer significantly because the leadership had anticipated such a move and precautions were taken. The Indian Air Force was quick to respond to Pakistani air strikes, following which the PAF carried out mostly defensive sorties.
Within the first two weeks, the IAF had carried out almost 12,000 sorties over East Pakistan and also provided close air support to the advancing Indian Army. The IAF also assisted the Indian Navy in its operations against the Pakistani Navy and Maritime Security Agency in the Bay of Bengal and Arabian Sea. On the western front, the IAF destroyed more than 20 Pakistani tanks, 4 APCs and a supply train during the Battle of Longewala. The IAF undertook strategic bombing of West Pakistan by carrying out raids on oil installations in Karachi, the Mangla Dam and a gas plant in Sindh. A similar strategy was also deployed in East Pakistan and, as the IAF achieved complete air superiority on the eastern front, the ordnance factories, runways, and other vital areas of East Pakistan were severely damaged. By the time Pakistani forces surrendered, the IAF had destroyed 94 PAF aircraft.
The IAF was able to conduct a wide range of missions – troop support; air combat; deep penetration strikes; para-dropping behind enemy lines; feints to draw enemy fighters away from the actual target; bombing; and reconnaissance. In contrast, the Pakistan Air Force, which was solely focused on air combat, was blown out of the subcontinent's skies within the first week of the war. Those PAF aircraft that survived took refuge at Iranian air bases or in concrete bunkers, refusing to offer a fight. Hostilities officially ended at 14:30 GMT on 17 December, after the fall of Dacca on 15 December. India claimed large gains of territory in West Pakistan (although pre-war boundaries were recognised after the war), and the independence of Pakistan's East wing as Bangladesh was confirmed. The IAF had flown over 16,000 sorties on both East and West fronts, including sorties by transport aircraft and helicopters, while the PAF flew about 30 and 2,840 sorties in the East and West respectively. More than 80 per cent of the IAF's sorties were close-support and interdiction, and according to neutral assessments about 45 IAF aircraft were lost, while Pakistan lost 75 aircraft, not including any F-6s, Mirage IIIs, or the six Jordanian F-104s which failed to return to their donors. The imbalance in air losses is explained by the IAF's considerably higher sortie rate and its emphasis on ground-attack missions. On the ground, Pakistan suffered most, with 9,000 killed and 25,000 wounded, while India lost 3,000 dead and 12,000 wounded. The loss of armoured vehicles was similarly imbalanced. This represented a major defeat for Pakistan. Towards the end of the war, IAF transport planes dropped leaflets over Dhaka urging the Pakistani forces to surrender, demoralising Pakistani troops in East Pakistan.
Incidents before Kargil (1984–1988)
In 1984, India launched Operation Meghdoot to capture the Siachen Glacier in the contested Kashmir region. In Op Meghdoot, IAF's Mi-8, Chetak and Cheetah helicopters airlifted hundreds of Indian troops to Siachen. Launched on 13 April 1984, this military operation was unique because of Siachen's inhospitable terrain and climate. The military action was successful, given the fact that under a previous agreement, neither Pakistan nor India had stationed any personnel in the area. With India's successful Operation Meghdoot, it gained control of the Siachen Glacier. India has established control over all of the long Siachen Glacier and all of its tributary glaciers, as well as the three main passes of the Saltoro Ridge immediately west of the glacier—Sia La, Bilafond La, and Gyong La. Pakistan controls the glacial valleys immediately west of the Saltoro Ridge. According to the TIME magazine, India gained more than of territory because of its military operations in Siachen.
Following the inability to negotiate an end to the Sri Lankan Civil War, and to provide humanitarian aid through an unarmed convoy of ships, the Indian Government decided to carry out an airdrop of humanitarian supplies on the evening of 4 June 1987, designated Operation Poomalai (Tamil: Garland) or Eagle Mission 4. Five An-32s escorted by four Mirage 2000s of No. 7 Squadron, 'The Battleaxes', carried out the supply drop, which faced no opposition from the Sri Lankan Armed Forces. Another Mirage 2000 orbited 150 km away, acting as an airborne relay of messages to the entire fleet, since they would be outside radio range once they descended to low levels. The Mirage 2000 escort formation was led by Wg Cdr Ajit Bhavnani, with Sqn Ldrs Bakshi, NA Moitra and JS Panesar as his team members and Sqn Ldr KG Bewoor as the relay pilot. Sri Lanka accused India of "blatant violation of sovereignty". India insisted that it was acting only on humanitarian grounds.
In 1987, the IAF supported the Indian Peace Keeping Force (IPKF) in northern and eastern Sri Lanka in Operation Pawan. About 70,000 sorties were flown by the IAF's transport and helicopter force in support of nearly 100,000 troops and paramilitary forces without a single aircraft lost or mission aborted. IAF An-32s maintained a continuous air link between air bases in South India and Northern Sri Lanka transporting men, equipment, rations and evacuating casualties. Mi-8s supported the ground forces and also provided air transportation to the Sri Lankan civil administration during the elections. Mi-25s of No. 125 Helicopter Unit were utilised to provide suppressive fire against militant strong points and to interdict coastal and clandestine riverine traffic.
On the night of 3 November 1988, the Indian Air Force mounted special operations to airlift a parachute battalion group from Agra, non-stop over to the remote Indian Ocean archipelago of the Maldives in response to Maldivian president Gayoom's request for military help against a mercenary invasion in Operation Cactus. The IL-76s of No. 44 Squadron landed at Hulhule at 0030 hours and the Indian paratroopers secured the airfield and restored Government rule at Male within hours. Four Mirage 2000 aircraft of 7 Sqn, led by Wg Cdr AV 'Doc' Vaidya, carried out a show of force early that morning, making low-level passes over the islands.
Kargil War (1999)
On 11 May 1999, the Indian Air Force was called in to provide close air support, using helicopters, to the Indian Army at the height of the ongoing Kargil conflict. The IAF strikes were code-named Operation Safed Sagar. The first strikes were launched on 26 May, when the Indian Air Force struck infiltrator positions with fighter aircraft and helicopter gunships. The initial strikes saw MiG-27s carrying out offensive sorties, with MiG-21s and later MiG-29s providing fighter cover. The IAF also deployed its radars and MiG-29 fighters in vast numbers to keep check on Pakistani military movements across the border. Srinagar Airport was at this time closed to civilian air traffic and dedicated to the Indian Air Force.
On 27 May, the Indian Air Force suffered its first fatality when it lost a MiG-21 and a MiG-27 in quick succession. The following day, while on an offensive sortie, a Mi-17 was shot down by three Stinger missiles and lost its entire crew of four. Following these losses, the IAF immediately withdrew helicopters from offensive roles as a measure against the threat of man-portable air-defence systems (MANPADS). On 30 May, the Mirage 2000s were introduced into offensive operations, as they were deemed to perform better under the high-altitude conditions of the conflict zone. The Mirage 2000s were not only better equipped to counter the MANPADS threat than the MiGs, but also gave the IAF the ability to carry out aerial raids at night. The MiG-29s were used extensively to provide fighter escort to the Mirage 2000s. Radar transmissions of Pakistani F-16s were picked up repeatedly, but these aircraft stayed away. The Mirages successfully targeted enemy camps and logistic bases in Kargil and severely disrupted their supply lines. Mirage 2000s were used for strikes on Muntho Dhalo and the heavily defended Tiger Hill and paved the way for their early recapture. At the height of the conflict, the IAF was conducting over forty sorties daily over the Kargil region. By 26 July, the Indian forces had successfully repulsed the Pakistani forces from Kargil.
Post Kargil incidents (1999–present)
Since the late 1990s, the Indian Air Force has been modernising its fleet to counter challenges in the new century. The fleet size of the IAF has decreased to 33 squadrons during this period because of the retirement of older aircraft. Still, India maintains the fourth largest air force in the world. The IAF plans to raise its strength to 42 squadrons. Self-reliance is the main aim that is being pursued by the defence research and manufacturing agencies.
On 10 August 1999, IAF MiG-21s intercepted a Pakistan Navy Breguet Atlantique which was flying over Sir Creek, a disputed territory. The aircraft was shot down, killing all 16 Pakistani Navy personnel on board. India claimed that the Atlantique was on a mission to gather information on IAF air defences, a charge emphatically rejected by Pakistan, which argued that the unarmed aircraft was on a training mission.
On 2 August 2002, the Indian Air Force bombed Pakistani posts along the Line of Control in the Kel sector, following inputs about Pakistani military buildup near the sector.
On 20 August 2013, the Indian Air Force created a world record by performing the highest landing of a C-130J at the Daulat Beg Oldi airstrip in Ladakh at the height of . The medium-lift aircraft will be used to deliver troops, supplies and improve communication networks. The aircraft belonged to the Veiled Vipers squadron based at Hindon Air Force Station.
On 13 July 2014, two MiG-21s were sent from Jodhpur Air Base to investigate a Turkish Airlines aircraft over Jaisalmer when it repeated an identification code provided by another commercial passenger plane that had already entered Indian airspace before it. The flights were on their way to Mumbai and Delhi, and the planes were later allowed to proceed after their credentials were verified.
2019 Balakot airstrike
Following heightened tensions between India and Pakistan after the 2019 Pulwama attack, carried out by Jaish-e-Mohammed (JeM), which killed forty servicemen of the Central Reserve Police Force, a group of twelve Mirage 2000 fighter planes from the Indian Air Force carried out air strikes on alleged JeM bases in Chakothi and Muzaffarabad in Pakistan-administered Kashmir. Furthermore, the Mirage 2000s targeted an alleged JeM training camp in Balakot, a town in the Pakistani province of Khyber Pakhtunkhwa. Pakistan claimed that the Indian aircraft had only dropped bombs in a forested area, demolishing pine trees near the Jaba village, which is away from Balakot, while Indian officials claimed to have bombed and killed a large number of terrorists in the airstrike.
2019 India–Pakistan standoff
On 27 February 2019, in retaliation for the IAF bombing of an alleged terrorist hideout in Balakot, a group of PAF Mirage-5 and JF-17 fighters allegedly conducted an airstrike against certain ground targets across the Line of Control. They were intercepted by a group of IAF fighters consisting of Su-30MKI and MiG-21 jets, and a dogfight ensued. According to India, one PAF F-16 was shot down by an IAF MiG-21 piloted by Abhinandan Varthaman, while Pakistan denied the use of F-16s in the operation. According to Pakistan, a MiG-21 and a Su-30MKI were shot down, while India claims that only the MiG-21 was shot down. Indian officials rejected the Pakistani claim of shooting down an Su-30MKI, stating that it is impossible to hide an aircraft crash in a populated area like Kashmir, and called the claim a cover-up for the loss of the F-16. The downed MiG-21's pilot ejected successfully but landed in Pakistan-administered Kashmir and was captured by the Pakistani military; before his capture he was assaulted by a few locals. After a couple of days of captivity, the pilot was released by Pakistan per its Third Geneva Convention obligations. While Pakistan denied the involvement of any of its F-16 aircraft in the strike, the IAF presented remnants of AMRAAM missiles, which within the PAF are carried only by F-16s, as proof of their involvement. The US-based Foreign Policy magazine, quoting unnamed US officials, reported in April 2019 that an audit did not find any Pakistani F-16s missing; however, this has not been confirmed by US officials, who cited it as a bilateral matter between the US and Pakistan.
Structure
The President of India is the Supreme Commander of all Indian armed forces and by virtue of that fact is the national Commander-in-Chief of the Air Force. The Chief of the Air Staff, with the rank of Air Chief Marshal, is the commander of the Indian Air Force.
In January 2002, the government conferred the rank of Marshal of the Indian Air Force on Arjan Singh, making him the first and only five-star officer of the Indian Air Force and the ceremonial chief of the air force.
Commands
The Indian Air Force is divided into five operational and two functional commands. Each Command is headed by an Air Officer Commanding-in-Chief with the rank of Air Marshal. The purpose of an operational command is to conduct military operations using aircraft within its area of responsibility, whereas the responsibility of functional commands is to maintain combat readiness. Aside from the Training Command at Bangalore, the primary flight training is done at the Air Force Academy (located in Hyderabad), followed by operational training at various other schools. Advanced officer training for command positions is also conducted at the Defence Services Staff College; specialised advanced flight training schools are located at Bidar, Karnataka and Hakimpet, Telangana (also the location for helicopter training). Technical schools are found at a number of other locations.
Name | Headquarters | Commander
Central Air Command (CAC) | Prayagraj, Uttar Pradesh | Air Marshal Richard John Duckworth, AVSM, VSM
Eastern Air Command (EAC) | Shillong, Meghalaya | Air Marshal Dilip Kumar Patnaik, AVSM, VM
Southern Air Command (SAC) | Thiruvananthapuram, Kerala | Air Marshal Jonnalagedda Chalapati, AVSM, VSM
South Western Air Command (SWAC) | Gandhinagar, Gujarat | Air Marshal Vikram Singh, AVSM, VSM
Western Air Command (WAC) | New Delhi | Air Marshal Amit Dev, PVSM, AVSM, VM
Training Command (TC)+ | Bangalore, Karnataka | Air Marshal Manavendra Singh, PVSM, AVSM, VrC, VSM, ADC
Maintenance Command (MC)+ | Nagpur, Maharashtra | Air Marshal Shashiker Choudhary, PVSM, AVSM, VSM, ADC
Note: + = Functional Command
Wings
A wing is a formation intermediate between a command and a squadron. It generally consists of two or three IAF squadrons and helicopter units, along with forward base support units (FBSU). FBSUs do not have or host any squadrons or helicopter units but act as transit airbases for routine operations. In times of war, they can become fully fledged air bases playing host to various squadrons. In all, about 47 wings and 19 FBSUs make up the IAF. Wings are typically commanded by an air commodore.
Stations
Within each operational command are anywhere from nine to sixteen bases or stations. Smaller than wings, but similarly organised, stations are static units commanded by a group captain. A station typically has one wing and one or two squadrons assigned to it.
Squadrons and units
Squadrons are the field units and formations attached to static locations. Thus, a flying squadron or unit is a sub-unit of an air force station which carries out the primary task of the IAF. A fighter squadron consists of 18 aircraft; all fighter squadrons are headed by a commanding officer with the rank of wing commander. Some transport squadrons and helicopter units are headed by a commanding officer with the rank of group captain.
Flights
Flights are sub-divisions of squadrons, commanded by a squadron leader. Each flight consists of two sections.
Sections
The smallest unit is the section, led by a flight lieutenant. Each section consists of three aircraft.
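Taken together, these figures allow a simple arithmetic check: a section of three aircraft and a flight of two sections give six aircraft per flight, so an 18-aircraft fighter squadron corresponds to three flights. Actual compositions may vary by aircraft type and role.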
Within this formation structure, the IAF has several service branches for day-to-day operations. They are:
Garud Commando Force
The Garud commandos are the special forces of the Indian Air Force (IAF). Their tasks include counter-terrorism, hostage rescue, providing security to the IAF's vulnerably located assets, and various air force-specific special operations. First conceived in 2002, the unit was officially established on 6 February 2004.
All Garuds are volunteers who undergo 52 weeks of basic training, which includes a three-month probation followed by special operations training, basic airborne training and other warfare and survival skills. In the last phase of basic training, Garuds are deployed to gain combat experience. Advanced training follows, which includes specialised weapons training.
The mandated tasks of the Garuds include direct action, special reconnaissance, rescuing downed pilots in hostile territory, establishing airbases in hostile territory and providing air-traffic control at these airbases. The Garuds also undertake the suppression of enemy air defences, the destruction of other enemy assets such as radars, the evaluation of the outcomes of Indian airstrikes, and the use of laser designators to guide Indian airstrikes.
The security of IAF installations and assets is usually provided by the Air Force Police and the Defence Security Corps, although some critical assets are protected by the Garuds.
Integrated Space Cell
An Integrated Space Cell, which will be jointly operated by all three services of the Indian armed forces, the civilian Department of Space and the Indian Space Research Organisation (ISRO), has been set up to utilise the country's space-based assets for military purposes more effectively. This command will leverage space technology including satellites. Unlike an aerospace command, where the air force controls most of its activities, the Integrated Space Cell envisages co-operation and co-ordination between the three services as well as civilian agencies dealing with space.
India currently has 10 remote sensing satellites in orbit. Though most are not meant to be dedicated military satellites, some have a spatial resolution of or below which can also be used for military applications. Noteworthy satellites include the Technology Experiment Satellite (TES), which has a panchromatic camera (PAN) with a resolution of , the RISAT-2, which is capable of imaging in all-weather conditions and has a resolution of , and the CARTOSAT-2, CARTOSAT-2A and CARTOSAT-2B, which carry a panchromatic camera with a resolution of (black and white only).
Display teams
The Surya Kiran Aerobatic Team (SKAT) (Surya Kiran is Sanskrit for Sun Rays) is an aerobatics demonstration team of the Indian Air Force. The team was formed in 1996 and is the successor to the Thunderbolts. It has a total of 13 pilots (selected from the fighter stream of the IAF) and operates 9 HAL HJT-16 Kiran Mk.2 trainer aircraft painted in a "day-glo orange" and white colour scheme. The Surya Kiran team was conferred squadron status in 2006, and presently has the designation of 52 Squadron ("The Sharks"). The team is based at the Indian Air Force Station at Bidar. The IAF has begun the process of converting the Surya Kirans to BAE Hawks.
Sarang (Sanskrit for Peacock) is the helicopter display team of the Indian Air Force. The team was formed in October 2003 and gave its first public performance at the Asian Aerospace Show in Singapore in 2004. The team flies four HAL Dhruvs painted in red and white with a peacock figure on each side of the fuselage. The team is based at the Sulur Air Force Station, Coimbatore.
Personnel
Over the years reliable sources provided notably divergent estimates of the personnel strength of the Indian Air Force after analysing open-source intelligence. The public policy organisation GlobalSecurity.org had estimated that the IAF had an estimated strength of 110,000 active personnel in 1994. In 2006, Anthony Cordesman estimated that strength to be 170,000 in the International Institute for Strategic Studies (IISS) publication "The Asian Conventional Military Balance in 2006". In 2010, James Hackett revised that estimate to an approximate strength of 127,000 active personnel in the IISS publication "Military Balance 2010".
The Indian Air Force has a sanctioned strength of 12,550 officers (12,404 serving, with 146 under strength), and 142,529 airmen (127,172 serving, with 15,357 under strength).
Rank structure
The rank structure of the Indian Air Force is based on that of the Royal Air Force. The highest rank attainable in the IAF is Marshal of the Indian Air Force, conferred by the President of India after exceptional service during wartime. MIAF Arjan Singh is the only officer to have achieved this rank. The head of the Indian Air Force is the Chief of the Air Staff, who holds the rank of Air Chief Marshal.
Officers
Anyone holding Indian citizenship can apply to be an officer in the Air Force as long as they satisfy the eligibility criteria. There are four entry points to become an officer. Male applicants, who are between the ages of 16 and 19 and have passed high school graduation, can apply at the Intermediate level. Men and women applicants, who have graduated from college (three-year course) and are between the ages of 18 and 28, can apply at the Graduate level entry. Graduates of engineering colleges can apply at the Engineer level if they are between the ages of 18 and 28 years. The age limit for the flying and ground duty branch is 23 years of age and for technical branch is 28 years of age. After completing a master's degree, men and women between the ages of 18 and 28 years can apply at the Post Graduate level. Post graduate applicants do not qualify for the flying branch. For the technical branch the age limit is 28 years and for the ground duty branch it is 25. At the time of application, all applicants below 25 years of age must be single. The IAF selects candidates for officer training from these applicants. After completion of training, a candidate is commissioned as a Flying Officer.
Airmen
The duty of an airman is to make sure that all air and ground operations run smoothly. From operating air defence systems to fitting missiles, airmen are involved in all activities of an air base and support various technical and non-technical jobs. Airmen of the technical trades are responsible for the maintenance, repair and preparation for use of aircraft propulsion systems, airborne weapon delivery systems, radar, voice/data transmission and reception equipment, all types of light mechanical, hydraulic and pneumatic systems of airborne missiles, aero engines, aircraft fuelling equipment, and heavy-duty mechanical vehicles, cranes and loading equipment. Competent and qualified airmen from the technical trades also participate in flying as flight engineers, flight signallers and flight gunners. The recruitment of personnel below officer rank is conducted through All India Selection Tests and recruitment rallies. All India Selection Tests are conducted at 15 Airmen Selection Centres (ASCs) located all over India. These centres are under the direct functional control of the Central Airmen Selection Board (CASB), with administrative control and support provided by the respective commands. The role of the CASB is to carry out the selection and enrolment of airmen from the Airmen Selection Centres for their respective commands. Candidates initially take a written test at the time of application. Those passing the written test undergo a physical fitness test, an interview conducted in English, and a medical examination. Candidates for training are selected from individuals passing this battery of tests, on the basis of their performance. Upon completion of training, an individual becomes an airman. Some MWOs and WOs are granted an honorary commission in the last year of their service as an honorary flying officer or flight lieutenant before retiring from the service.
Honorary officers
Sachin Tendulkar was the first sportsperson and the first civilian without an aviation background to be awarded the honorary rank of group captain by the Indian Air Force.
Non combatants enrolled and civilians
Non combatants enrolled (NCs(E)) were established in British India as personal assistants to the officer class, and are equivalent to the orderly or sahayak of the Indian Army.
Almost all the commands have some percentage of civilian strength which are central government employees. These are regular ranks which are prevalent in ministries. They are usually not posted outside their stations and are employed in administrative and non-technical work.
Training and education
The Indian Armed Forces have set up numerous military academies across India for training its personnel, such as the National Defence Academy (NDA). Besides the tri-service institutions, the Indian Air Force has a Training Command and several training establishments. While technical and other support staff are trained at various Ground Training Schools, the pilots are trained at the Air Force Academy, Dundigul (located in Hyderabad). The Pilot Training Establishment at Allahabad, the Air Force Administrative College at Coimbatore, the Institute of Aerospace Medicine at Bangalore, the Air Force Technical College, Bangalore at Jalahalli, the Tactics and Air Combat and Defence Establishment at Gwalior, and the Paratrooper's Training School at Agra are some of the other training establishments of the IAF.
Aircraft inventory
The Indian Air Force has aircraft and equipment of Russian (erstwhile Soviet Union), British, French, Israeli, US and Indian origins with Russian aircraft dominating its inventory. HAL produces some of the Russian and British aircraft in India under licence. The exact number of aircraft in service with the Indian Air Force cannot be determined with precision from open sources. Various reliable sources provide notably divergent estimates for a variety of high-visibility aircraft. Flight International estimates there to be around 1,750 aircraft in service with the IAF, while the International Institute for Strategic Studies provides a similar estimate of 1,850 aircraft. Both sources agree there are approximately 900 combat capable (fighter, attack etc.) aircraft in the IAF.
Multi-role fighters and strike aircraft
Dassault Rafale: the latest addition to India's aircraft arsenal; India has signed a deal for 36 Dassault Rafale multirole fighter aircraft. As of February 2022, 35 Rafale fighters are in service with the Indian Air Force.
Sukhoi Su-30MKI: the IAF's primary air superiority fighter, with additional air-to-ground (strike) mission capability, is the Sukhoi Su-30MKI. 272 Su-30MKIs are in service, with 12 more on order from HAL.
Mikoyan MiG-29: the MiG-29, known as Baaz (Hindi for Hawk), is a dedicated air superiority fighter, constituting the IAF's second line of defence after the Su-30MKI. There are 69 MiG-29s in service, all of which have been recently upgraded to the MiG-29UPG standard, after the decision was made in 2016 to upgrade the remaining 21 MiG-29s to the UPG standard.
Dassault Mirage 2000: the Mirage 2000 is known as Vajra (Sanskrit for diamond or thunderbolt) in Indian service. The IAF currently operates 49 Mirage 2000Hs and 8 Mirage 2000THs, all of which are being upgraded to the Mirage 2000-5 Mk2 standard with India-specific modifications, and 2 Mirage 2000-5 Mk2s are in service. The IAF's Mirage 2000s are scheduled to be phased out by 2030.
HAL Tejas: IAF MiG-21s are to be replaced by the domestically built HAL Tejas. The first Tejas IAF unit, No. 45 Squadron IAF Flying Daggers, was formed on 1 July 2016, followed by No. 18 Squadron IAF "Flying Bullets" on 27 May 2020. Initially stationed at Bangalore, the first squadron was then to be transferred to its home base in Sulur, Tamil Nadu. In February 2021, the Indian Air Force ordered 83 more Tejas, comprising 73 single-seat Mark 1As and 10 two-seat Mark 1 trainers; together with the 40 Mark 1s already ordered, this brings the total order to 123 aircraft.
SEPECAT Jaguar: the Jaguar, known as the Shamsher, serves as the IAF's primary ground attack force. The IAF currently operates 139 Jaguars. The first batch of DARIN-1 Jaguars is now going through a DARIN-3 upgrade, being equipped with EL/M-2052 AESA radars, an improved jamming suite and new avionics. These aircraft are scheduled to be phased out by 2030.
Mikoyan-Gurevich MiG-21: the MiG-21 serves as an interceptor aircraft in the IAF, which phased out most of its MiG-21s and planned to keep only the 125 aircraft upgraded to the MiG-21 Bison standard. The phase-out date for these interceptors has been postponed several times. Initially set for 2014–2017, it was later postponed to 2019. Current phase-out is scheduled for 2021–2022.
Airborne early warning and control system
The IAF is currently training crews in the operation of indigenously developed DRDO AEW&CS, conducting the training on Embraer ERJ 145 aircraft. The IAF also operates the EL/W-2090 Phalcon AEW&C incorporated in a Beriev A-50 platform. A total of three such systems are currently in service, with two further potential orders. The two additional Phalcons are currently in negotiation to settle price differences between Russia and India. India is also going ahead with Project India, an in-house AWACS program to develop and deliver six Phalcon-class AWACS, based on DRDO work on the smaller AEW&CS.
Aerial refuelling
The IAF currently operates six Ilyushin Il-78MKIs in the aerial refueling (tanker) role.
Transport aircraft
For strategic airlift operations, the IAF uses the Ilyushin Il-76, known as Gajraj (Hindi for King Elephant) in Indian service. The IAF operated 17 Il-76s in 2010, which are in the process of being replaced by C-17 Globemaster IIIs.
IAF C-130Js are used by special forces for combined Army-Air Force operations. India purchased six C-130Js; however, one crashed at Gwalior on 28 March 2014 while on a training mission, killing all 5 on board and destroying the aircraft. The Antonov An-32, known in Indian service as the Sutlej (named after the Sutlej River), serves as a medium transport aircraft in the IAF. The aircraft is also used in bombing roles and paradropping operations. The IAF currently operates 105 An-32s, all of which are being upgraded. The IAF operates 53 Dornier 228s to fulfil its light transport duties. The IAF also operates Boeing 737s and Embraer ECJ-135 Legacy aircraft as VIP transports and passenger airliners for troops. Other VIP transport aircraft are used for both the Indian President and Prime Minister under the call sign Air India One.
The Hawker Siddeley HS 748 once formed the backbone of the IAF's transport fleet, but are now used mainly for training and communication duties. A replacement is under consideration.
Trainer aircraft
The HAL HPT-32 Deepak is the IAF's basic flight training aircraft for cadets. The HPT-32 was grounded in July 2009 following a crash that killed two senior flight instructors, but was revived in May 2010 and is to be fitted with a parachute recovery system (PRS) to enhance survivability during an emergency in the air and to bring the trainer down safely. The HPT-32 is to be phased out soon and has been replaced by the Swiss-built Pilatus PC-7 Mk II. The IAF uses the HAL HJT-16 Kiran Mk.I for intermediate flight training of cadets, while the HJT-16 Kiran Mk.II provides advanced flight and weapons training. The HAL HJT-16 Kiran Mk.2 is also operated by the Surya Kiran Aerobatic Team (SKAT) of the IAF. The Kiran is to be replaced by the HAL HJT-36 Sitara. The BAE Hawk Mk 132 serves as an advanced jet trainer in the IAF and is progressively replacing the Kiran Mk.II. The IAF has begun the process of converting the Surya Kiran display team to Hawks. A total of 106 BAE Hawk trainers have been ordered by the IAF, of which 39 have entered service. The IAF has also ordered 72 Pipistrel Virus SW 80 microlight aircraft for basic training purposes.
Helicopters
The HAL Dhruv serves primarily as a light utility helicopter in the IAF. In addition to transport and utility roles, newer Dhruvs are also used as attack helicopters. Four Dhruvs are also operated by the Indian Air Force Sarang Helicopter Display Team. The HAL Chetak is a light utility helicopter and is used primarily for training, rescue and light transport roles in the IAF. The HAL Chetak is being gradually replaced by HAL Dhruv. The HAL Cheetah is a light utility helicopter used for high altitude operations. It is used for both transport and search-and-rescue missions in the IAF.
The Mil Mi-8 and the Mil Mi-17, Mi-17-1V and Mi-17V-5 are operated by the IAF for medium-lift strategic and utility roles. The Mi-8 is being progressively replaced by the Mi-17 series of helicopters. The IAF has ordered 22 Boeing AH-64E Apache attack helicopters, 68 HAL Light Combat Helicopters (LCH), 35 HAL Rudra attack helicopters, 15 CH-47F Chinook heavy-lift helicopters and 150 Mi-17V-5s to replace and augment its existing fleet of Mi-8s, Mi-17s, and Mi-24s. The Mil Mi-26 serves as a heavy-lift helicopter in the IAF. It can also be used to transport troops or as a flying ambulance. The IAF currently operates three Mi-26s.
The Mil Mi-35 serves primarily as an attack helicopter in the IAF. The Mil Mi-35 can also act as a low-capacity troop transport. The IAF currently operates two squadrons (No. 104 Firebirds and No. 125 Gladiators) of Mi-25/35s.
Unmanned aerial vehicles
The IAF currently uses the IAI Searcher II and IAI Heron for reconnaissance and surveillance purposes. The IAI Harpy serves as an unmanned combat aerial vehicle (UCAV) designed to attack radar systems. The IAF also operates the DRDO Lakshya, which serves as a realistic towed aerial sub-target for live-fire training.
Land-based missile systems
Surface-to-air missiles
The air force operates twenty-five squadrons of the S-125 Pechora, six squadrons of the 9K33 Osa-AK, ten flights of the 9K38 Igla-1 and thirteen squadrons of the Akash, along with eighteen squadrons of SPYDER, for air defence. Two further squadrons of Akash are on order. The IAF and the Indian Army have placed orders for 1,000 MRSAM kits.
Ballistic missiles
The IAF currently operates the Prithvi-II short-range ballistic missile (SRBM). The Prithvi-II is an IAF-specific variant of the Prithvi ballistic missile.
Future
The number of aircraft in the IAF has been decreasing from the late 1990s due to the retirement of older aircraft and several crashes. To deal with the depletion of force levels, the IAF has started to modernise its fleet. This includes both the upgrade of existing aircraft, equipment and infrastructure as well as induction of new aircraft and equipment, both indigenous and imported. As new aircraft enter service and numbers recover, the IAF plans to have a fleet of 42 squadrons.
Expected future acquisitions
Single-engined fighter
On 3 January 2017, Minister of Defence Manohar Parrikar addressed a media conference and announced plans for a competition to select a Strategic Partner to deliver "... 200 new single engine fighters to be made in India, which will easily cost around (USD)$45 million apiece without weaponry", with an expectation that Lockheed Martin (USA) and Saab (Sweden) would pitch the F-16 Block 70 and Gripen, respectively. An MoD official said that a global tender would be put to market in the first quarter of 2018, with a private company nominated as the strategic partner's production agency, followed by a process of two or more years to evaluate technical and financial bids and conduct trials, before a final government-to-government deal in 2021. This represents 11 squadrons of aircraft plus several 'attrition' aircraft. India is also planning to set up an assembly line for the American Lockheed Martin F-16 Fighting Falcon Block 70 in Bengaluru. It is not yet confirmed whether the IAF will induct these aircraft.
In 2018, the defence minister Nirmala Sitharaman gave the go-ahead to scale up the manufacturing of the Tejas at HAL and also to export the Tejas. She was quoted as saying, "We are not ditching the LCA. We have not gone for anything instead of Tejas. We are very confident that Tejas Mark II will be a big leap forward to fulfil the single engine fighter requirement of the forces." The IAF committed to buying 201 of the Mark 2 variant of the Tejas, taking the total Tejas order to 324. The government also scrapped the plan to import single-engine fighters, reducing reliance on imports and thereby strengthening the domestic defence industry.
The IAF has also submitted a request for information to international suppliers for a stealth unmanned combat air vehicle (UCAV).
Current acquisitions
The IAF has placed orders for 123 HAL Tejas (40 Mark 1, 73 Mark 1A fighters and 10 Mark 1 trainers), 36 Dassault Rafale multi-role fighters, 106 HAL HTT-40 basic trainer aircraft, 112 Pilatus PC-7 MkII basic trainers, 72 HAL HJT-36 Sitara trainers, 65 HAL Light Combat Helicopters, 6 Airbus A330 MRTTs, 56 EADS CASA C-295 aircraft and IAI Harop UCAVs.
DRDO and HAL projects
The Indian defence company HAL and the Defence Research and Development Organisation (DRDO) are developing several aircraft for the IAF, such as the HAL Tejas Mk2, HAL TEDBF (naval aircraft), HAL AMCA (5th generation aircraft), DRDO AEW&CS (revived from the Airavat Project), NAL Saras, HAL HJT-36 Sitara, HAL HTT-40, HAL Light Utility Helicopter (LUH), DRDO Rustom and DRDO Ghatak UCAV. DRDO has developed the Akash missile system for the IAF and has also developed the Prithvi II ballistic missile.
HAL is also close to developing its own fifth-generation fighter aircraft, the HAL AMCA, which is expected to be inducted by 2028. DRDO has entered into a joint venture with Israel Aerospace Industries (IAI) to develop the Barak 8 SAM. The Akash-NG, which will have a range similar to that of the Barak 8, is also being developed by DRDO. DRDO is developing the air-launched version of the BrahMos cruise missile in a joint venture with Russia's NPO Mashinostroyeniya. DRDO has also successfully developed the nuclear-capable Nirbhay cruise missile.
DRDO and HAL are also engaged in developing unmanned combat systems; under this effort, HAL is to develop a whole family of unmanned aircraft by the end of 2024–25.
Network-centric warfare
The Air Force Network (AFNET), a robust digital information grid that enabled quick and accurate threat responses, was launched in 2010, helping the IAF become a truly network-centric air force. AFNET is a secure communication network linking command and control centres with offensive aircraft, sensor platforms and ground missile batteries. The Integrated Air Command and Control System (IACCS), an automated system for air defence operations, will ride the AFNET backbone, integrating ground and airborne sensors, weapon systems and command and control nodes. Subsequent integration with civil radar and other networks is expected to provide an integrated Air Situation Picture, and the system reportedly acts as a force multiplier for intelligence analysis, mission control, and support activities such as maintenance and logistics. The design features multiple layers of security measures, including encryption and intrusion prevention technologies, to hinder and deter espionage efforts.
See also
List of Indian Air Force Gallantry Award Winners
List of Indian Army Gallantry Award Winners
List of historical aircraft of the Indian Air Force
References
Bibliography
External links
Official website of The Indian Air Force
1965, IAF Claimed its First Air-to-Air Kill documentary published by IAF
Indian Air Force on bharat-rakshak.com
Global Security article on Indo-Pakistani Wars
Designators Batches of Indian Air Force
Career Air Force Government of India
Defence agencies of India
Military units and formations established in 1932
1931 establishments in India
Military history of India during World War II |
405646 | https://en.wikipedia.org/wiki/EarthStation%205 | EarthStation 5 | Earth Station 5 (ES5) was a peer-to-peer network active between 2003 and 2005, operated by a company of the same name. The user client application also shared this name. Earth Station 5 was notable for its strong, if overstated, emphasis on user anonymity, and for its bold advocacy of piracy and copyright infringement. ES5's highly antagonistic position toward copyright advocacy and enforcement organizations garnered the group significant attention and peaked with an ES5 press release announcing a "declaration of war" against the Motion Picture Association of America. ES5 claimed to operate out of the Jenin refugee camp in the Palestinian Authority-controlled West Bank, a region where it argued that copyright laws were unenforceable. Investigative journalism cast serious doubts on the company's Palestinian origin as well as many of its other claims. To this day, much about the company and its leadership remains uncertain or unknown.
Peer-to-peer services
Earth Station 5 was based around a peer-to-peer (P2P) file sharing service and a standalone Earth Station 5 file-sharing client. Initial versions of the software could only share or download files by using the ES5 network.
ES5's P2P network and client were announced on June 9, 2003. People associated with ES5 claimed in media reports that the network had more than 16,000,000 participants at its peak, but these numbers were unsupported and viewed very skeptically. The actual number of participants was probably several orders of magnitude smaller.
Largely due to the low availability of files on the small ES5 network, later versions of the ES5 client included the free software/open source giFT daemon which provided ES5 users access to the larger Gnutella and FastTrack networks. While Gnutella and FastTrack offered access to many more files, the functionality that let users access these networks did not take advantage of any of ES5's anonymity features, which decreased the advantages of ES5 over other P2P clients — in particular other FastTrack or Gnutella clients.
ES5 software
The first version of the ES5 client used a space and spaceship motif and provided many options. In addition to several filesharing options, it provided links to chat, news, forums and dating functionality. The client interface was derided by reviewers as "clunky" and "a busy affair" for its plethora of features.
The second version of the client, released a year later, garnered better reviews. However, users still felt overwhelmed by the "bundled" features that included a dating service and audio-visual chat. ES5 claimed it planned to capitalize on these features in order to become profitable.
Claims to anonymity
ES5 became well known for its strong claims that file-sharing on its network was entirely anonymous — a feature it billed as its most important and revolutionary — and that its users could share files while remaining undetectable and thus invulnerable to lawsuits by the RIAA's member companies, which had recently begun suing P2P users. ES5 president Ras Kabir claimed that on ES5, "users no longer have to be concerned about what they are sharing, or with whom they are sharing because there is complete anonymity."
Many groups countered ES5's claims about its users' anonymity. RIAA vice president Matt Oppenheim described it as "marketing hype of the worst kind. It is playing on the fears of others, encouraging them to engage in behavior that will get them into a boatload of trouble." Even many participating in or sympathetic to the file-sharing community were skeptical, believing that anonymous communication on P2P networks was technically impossible without critically compromising quality of service, and as a result they considered ES5's claims to be snake oil.
ES5's claims to anonymity were based on its use of several security technologies. The first version of the software used Secure Sockets Layer (SSL) encryption, encrypted searches over UDP, and integration with PGPDisk. However, none of these layers of security prevented RIAA member companies from detecting and gathering information about ES5 users' file trading activities. In later versions of its client, ES5 added the ability to use a network of proxies to obscure the source of requests or shared files. ES5 staffers maintained a frequently-updated list of proxy servers. List updates were posted regularly to the ES5 forums and then updated into the clients by hand. ES5 also added the ability to "spoof" IP addresses in a way that ES5 claimed made it more difficult to track down file sharers. While ES5's claim to anonymity continued to be viewed skeptically by both P2P advocates and RIAA representatives, no ES5 users were ever sued by the RIAA.
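To make the proxy idea concrete, the following minimal Python sketch shows how a client might relay an encrypted search request through an intermediary so that other peers only ever see the proxy's address. It is purely illustrative: the host name, port, and query format are invented for the example, and ES5's actual wire protocol was never publicly documented.

import socket
import ssl

# Hypothetical proxy address and query; ES5's real protocol is undocumented.
PROXY_HOST, PROXY_PORT = "proxy.example.net", 8443
SEARCH_QUERY = b"SEARCH example_file_name\n"

def search_via_proxy(host, port, query):
    """Relay a search query over a TLS-wrapped TCP connection to a proxy.

    The proxy forwards the query onward, so the peers that answer it see
    the proxy's IP address rather than the requesting client's.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(query)
            return tls_sock.recv(4096)

if __name__ == "__main__":
    print(search_via_proxy(PROXY_HOST, PROXY_PORT, SEARCH_QUERY))

As critics pointed out at the time, a single proxy hop of this kind only shifts trust to the proxy operator; it does not by itself make the requester untraceable.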
Copyright infringement by ES5
After the ruling in A&M Records, Inc. v. Napster, Inc., which held Napster liable for contributory infringement by its users, most P2P networks were careful to adopt a strategy of "turning a blind eye" to copyright infringement on their networks in order to escape or minimize liability. ES5 distinguished itself by openly supporting its users' copyright infringement over its network and by actively participating in sharing movies with its users.
Some within the file-sharing community speculated that ES5 shared media in order to "seed" the network with content, a step necessary because ES5's user base was very small compared to other networks and it did not require quotas. ES5 shared media by downloading content from other networks (e.g., Kazaa and Gnutella), vetting it for quality, and then connecting a version of its client to its network from one of its servers. This step was important in assuring that ES5's network could offer a sufficient amount of content to users.
In the early stages of the network, ES5 tried to attract users by streaming movies in addition to in-network sharing. To do this, ES5 created a website at es5.org which provided links to dozens of Hollywood films immediately available for streaming.
Antagonistic relationship to media industry
While few P2P networks enjoy friendly relationships with the media and content industries, ES5 displayed a famously antagonistic relationship to them—most notably the US-based Motion Picture Association of America and Recording Industry Association of America. This position culminated in a famous press release where ES5 formally declared war on the MPAA:
In response to the email received today from the Motion Picture Association of America (MPAA) to Earthstation 5 for copyright violations for streaming FIRST RUN movies over the internet for FREE, this is our official response!
Earthstation 5 is at war with the Motion Picture Association of America (MPAA) and the Record Association of America (RIAA), and to make our point very clear that their governing laws and policys [sic] have absolutely no meaning to us here in Palestine, we will continue to add even more movies for FREE.
In the same release, ES5 claimed that, "unlike Kazaa and other P2P programs who subsequently deny building their P2P program for illegal file-sharing, ES5 is the only P2P application and portal to actually join its users in doing P2P."
Because of its antagonistic relationship to media companies, its highly outspoken stance, and its claims to be based in a refugee camp in the West Bank, ES5 became the center of a large amount of media attention. Investigative articles ultimately exposed many of the lies and much of the misinformation behind the site and its operators.
People and leadership
ES5 intentionally obscured the details of its company leadership. While ES5 claimed that it had over one thousand employees (most based in the West Bank), the evidence indicates that there were never more than a handful.
Publicly, ES5 was run by President Ras Kabir, Lead Programmer "File Hoover", and Forums Administrator and Programmer "SharePro". Faced with evidence that the company had a relationship with Stephen M. Cohen, then a fugitive because of his involvement in the sex.com scandal, the company announced its retention of Cohen's services as an executive consultant.
Ras Kabir
Most ES5 press releases quoted company president Ras Kabir who, like most of the company, was nominally based in Jenin in the West Bank.
Stephen M. Cohen
While his involvement in the project was publicly hidden early on, Stephen Michael Cohen was closely involved in all stages of ES5 and, in most opinions, was the founder, primary architect, and primary participant in most of its actions. Cohen is best known as the person involved in fraudulently obtaining control of the sex.com domain name.
ES5's business registration papers, filed with the Palestinian Authority, listed Cohen as the "sole director" of ES5. Both in the forum and through their spokesperson, ES5 officials claimed that Cohen played the role of a consultant. Kabir claimed that, "We offered Mr. Cohen an executive job with our company. He initially turned us down, however after several telephone calls, he finally gave in and agreed to help us in the capacity of a consultant."
Fueling speculation about Cohen's close involvement, ES5's unanticipated closing coincided closely with Cohen's arrest for activity related to sex.com.
Downfall
ES5 suffered a precipitous downfall that ultimately ended with the closing of the network in 2005. Analysts have seen several factors as contributing to its downfall and ultimate closing.
Accusations of malicious code
In September 2003, Shaun "Random Nut" Garriok posted a message to the Full Disclosure mailing list claiming to have uncovered "malicious code" in the ES5 client. By sending a specially crafted request, a remote user could use a facility of the ES5 client to delete arbitrary files on the computer of anybody running the ES5 client. Garriok concluded that "the people behind ES5 have intentionally added malicious code to ES5" and speculated that:
They could be working for the RIAA, MPAA, or a similar organization. Once they have enough users on their ES5 network, they would start deleting all copyrighted files they own which their users are sharing. The users wouldn't know what hit them.
ES5 representatives replied that this ability was an unintended side effect of the program's automatic upgrade functionality, and patched it through a software update soon after Garriok's revelation. While there was no public evidence of collusion between ES5 and the MPAA or RIAA, speculation about it dogged ES5, sowing seeds of distrust.
Untrue claims exposed
The anonymous authors claimed they were based in a refugee camp in Jenin, though investigative attempts to locate their headquarters there were fruitless.
Many of the claims behind ES5 were unsubstantiated and untrue. With increased media analysis, especially from The Washington Post, it became clear that ES5 was not what it claimed to be. ES5 was not based in Jenin or elsewhere in Palestine. Most of the people claimed to be behind ES5 were found to be fabricated, many of ES5's technical claims were debunked, and ES5 was not nearly as large as it had claimed.
ES5 had claimed that the encryption around its system made identifying and blocking traffic from the site impossible. A P2P Watch Dog post demonstrated a method by which packets from ES5 could be identified and blocked. This was quickly put into action in several anti-P2P systems on the market at the time.
Other contributing factors
There were several other reasons that many in the P2P and file-sharing community distrusted ES5. Many speculated that practices such as seeding, streaming films from ES5's own servers, and proxying traffic would not scale.
As the apparently secretive and untruthful acts of ES5 came to light, the core fan base began to rebel in the main ES5 forum, leading to many users being banned and topics being deleted. Eventually, the forum collapsed and a new forum was started by the admin "SharePro". The new forum never gained the popularity that the old one had.
Closing
In February 2005, ES5 quietly closed its doors. On January 24, 2005, Stephen Cohen had posted a strange message on the ES5 forums asking users to work with him to reinvent the platform.
Within a month, ES5 was completely shut down and dismantled. The ES5 website and forums were taken offline permanently and are today only accessible through the Internet Archive's Wayback Machine.
Notes and references
External links
Softpedia page which includes links to screenshots and the ability to download the (now useless) client
Story on P2P.Net
File sharing software
File sharing networks |
406756 | https://en.wikipedia.org/wiki/New%20Zealand%20Police | New Zealand Police | The New Zealand Police () is the national police service of New Zealand, responsible for enforcing criminal law, enhancing public safety, and maintaining order. With about 12,000 personnel it is the largest law enforcement agency in New Zealand and, with few exceptions, has primary jurisdiction over the majority of New Zealand criminal law. The New Zealand Police also has responsibility for traffic and commercial vehicle enforcement as well as other key responsibilities including protection of dignitaries, firearms licensing and matters of national security.
Policing in New Zealand was introduced in 1840, modelled on similar constabularies that existed in Britain at that time. The constabulary was initially part police and part militia. By the end of the 19th century policing by consent was the goal. The New Zealand Police has generally enjoyed a reputation for mild policing, but there have been cases when the use of force was criticised, such as during the 1981 Springbok tour.
The current Minister of Police is Poto Williams. While the New Zealand Police is a government department with a minister responsible for it, the commissioner and sworn members swear allegiance directly to the Sovereign and, by convention, have constabulary independence from the government of the day. The New Zealand Police is perceived to have a minimal level of institutional corruption.
Origins and history
Policing in New Zealand started in 1840 with the arrival of six constables accompanying Lt. Governor Hobson's official landing party to form the colony of New Zealand. Early policing arrangements were along similar lines to the UK and British colonial police forces, in particular the Royal Irish Constabulary and the New South Wales Police Force. Many of its first officers had seen prior service in either Ireland or Australia. The early force was initially part police and part militia.
The Constabulary Act 1846 was aimed at 'preserving the peace, and preventing robberies and other felonies, and apprehending offenders against the peace.' The Armed Constabulary Act 1867 focused the force on dealing with unrest between the indigenous Māori and the encroaching European settlers, and the force grew to 200 musket-trained men. The Armed Constabulary took part in military actions against Māori opponents Riwha Titokowaru in Taranaki and Te Kooti in the central North Island in the dying stages of the New Zealand Wars.
From the police force's beginnings in 1840 through the next forty years, policing arrangements varied around New Zealand. Whilst the nationally organised Armed Constabulary split its efforts between regular law enforcement functions and militia support to the land wars, some provinces desired local police forces of their own. This led to a separate Provincial Police Force Act being passed by the parliament. However, provincial policing models lasted only two decades, as economic depression in the 1870s saw some provinces stop paying their police as they ran out of money. Eventually, the government decided a single nationally organised police force would be the best and most efficient policing arrangement.
The New Zealand Police Force was established as a single national force under the Police Force Act of 1886. The change in name was significant: provincial policing arrangements were disestablished and their staff largely absorbed into the newly created New Zealand Police Force. At the same time, the government took the important step of hiving off the militia functions of the old Armed Constabulary to form the genesis of today's New Zealand Defence Force, initially called the New Zealand Permanent Militia.
A decade later, policing in New Zealand was given a significant overhaul. In 1898 a Royal Commission of Enquiry into the New Zealand Police was held. The Royal Commission, which included the reforming Commissioner Tunbridge, who had come from the Metropolitan Police in London, produced a far-reaching report which laid the basis for reform of the New Zealand Police over the following decades. A complete review of police legislation in 1908 built significantly on the Royal Commission's work.
A further Police Force Act in 1947 reflected some changes of a growing New Zealand, and a country coming out of World War II. The most significant change in the structure and arrangement of the police came in 1955, after the departure of Commissioner Compton under a cloud of government and public concern over his management of the Police. The appointment of a caretaker civilian leader of police, specially titled "Controller General" to recognise his non-operational background, opened the windows on the organisation and allowed a period of positive and constructive development to take place.
In 1958, the word "Force" was removed from the name when legislation was significantly revised.
On 1 July 1992, the Traffic Safety Service of the Ministry of Transport was merged with the police. Up until that time, the Ministry of Transport and local councils had been responsible for traffic law enforcement. In 2001, the Police re-established a specialist road policing branch known as the Highway Patrol. Today the police are mainly responsible for enforcing traffic law, while local councils can appoint parking wardens, who can enforce traffic rules regarding parking and special vehicle lanes. In 2010, after some calls to split traffic enforcement again from standard police duties, it was decided that it would remain part of their duties, partly due to the public having shown "enormous support" for it remaining this way.
The Police Act 1958 was extensively reviewed starting in 2006; after a two-and-a-half-year consultative process, the Policing Act 2008 came into effect on 1 October 2008. The process included the world's first use of a wiki to allow the public to submit or propose amendments. The wiki was open for less than two weeks, but drew international attention.
More recently, the New Zealand Police has been involved in international policing and peacekeeping missions to East Timor and the Solomon Islands, to assist these countries with establishing law and order after civil unrest. They have also been involved in community police training in Bougainville, in conjunction with Australian Federal Police. Other overseas deployments for regional assistance and relief have been to Afghanistan as part of the reconstruction effort, the Kingdom of Tonga, Thailand for the tsunami disaster and Indonesia after terrorist bombings. New Zealand Police maintains an international policing support network in eight foreign capitals, and has about 80 staff deployed in differing international missions.
Female officers
In 1936, there was 'a proposal to establish a women police branch in New Zealand', and the former principal of the women's section of the South Australia Police, Kate Cocks (1875–1954), attended to speak to members of the government, the Commissioner of Police, and a gathering of women's societies. Cocks had become one of the first two female officers in the SA Police in December 1915 and, by her retirement in 1935, led the largest women's section of any Australian state law enforcement agency.
Women were first admitted to the police in 1941 but were not issued with uniforms. One of the first intake was Edna Pearce, who received the badge number S1 when she was finally issued a uniform in 1952. Pearce made the first arrest by a woman police officer in New Zealand. By January 1949, officer Miss R. M. Hadfield had undertaken a cross-Tasman interchange, working for two months in Sydney, a month in Melbourne, and in Tasmania. At the time, female officers wore only a small badge under the coat lapel.
In 1992 less than 10 per cent of the New Zealand Police Force were women.
Organisation
There is a Police National Headquarters that provides policy and planning advice as well as national oversight and management of the organisation. Although headed by a Commissioner, the New Zealand Police is a decentralised organisation divided into twelve districts.
Each district has a geographical area of responsibility and a central station from which subsidiary and suburban stations are managed. As of March 2019, there are 327 police stations around the country with nearly 12,000 staff who respond to more than 600,000 emergency 111 calls each year.
The Commissioner is in overall charge of the New Zealand Police. Assisting the Commissioner are two chief officers in the rank of Deputy Commissioner: Deputy Commissioner-Resource Management; and Deputy Commissioner-Operations.
Five chief officers in the rank of Assistant Commissioner and the Director of Intelligence report to the Deputy Commissioner-Operations. The Assistant Commissioner-Investigations/International is responsible for the National Criminal Investigations Group, the Organised and Financial Crime Agency New Zealand (OFCANZ), Financial Crime Group, International Services Group and Pacific Islands Chiefs of Police Secretariat. The Investigations and International Group leads the prevention, investigation, disruption and prosecution of serious and transnational crime. It also leads liaison, overseas deployment and capacity building with international policing partners. The Assistant Commissioner-Operations is responsible for Community Policing, Youth, Communications Centres, Operations Group, Prosecutions and Road Policing. The remaining three Assistant Commissioners command geographical policing areas – Upper North, Lower North and South. Each area is divided into three to five districts.
District Commanders hold the rank of Superintendent, as do sworn National Managers, the road policing manager in the Waitemata District, responsible for the motorway network and traffic alcohol group, and the commandant of the Royal New Zealand Police College. Area Commanders hold the rank of Inspector as do Shift Commanders based in each of the three Communications Centres. District Section Commanders are typically Senior Sergeants.
The New Zealand Police is a member of Interpol and has close relationships with the Australian police forces, at both the state and federal level. Several New Zealand Police representatives are posted overseas in key New Zealand diplomatic missions.
It is acknowledged, by both Police and legislation, that important and valuable roles in the performance of the functions of the Police are played by: public agencies or bodies (for example, local authorities and state sectors), persons who hold certain statutory offices (for example, Maori Wardens), and parts of the private sector, especially the private security industry. It is also acknowledged that it is often appropriate, or even necessary, for Police to perform some of its functions by working in co-operation with citizens, or other agencies or bodies.
Districts
The New Zealand Police is organised into twelve districts: nine in the North Island and three in the South Island. Each district is subdivided into between two and four areas:
Northland – based in Whangārei; divided into two areas: Far North (Kerikeri) and Whangarei-Kaipara (Whangārei).
Waitematā – based in Henderson; divided into three areas: North (Orewa), West (Henderson), and East (Rosedale).
Auckland City – based in Auckland Central; divided into three areas: West (Avondale), Central (Auckland Central), and East (Mount Wellington).
Counties-Manukau – based in Manukau; divided into four areas: West (Ōtāhuhu), Central (Manurewa), East (Flat Bush), and South (Papakura).
Waikato – based in Hamilton; divided into three areas: Hamilton City, Waikato West (Huntly), and Waikato East (Thames).
Bay of Plenty – based in Rotorua; divided into four areas: Western Bay of Plenty (Tauranga), Eastern Bay of Plenty (Whakatāne), Rotorua, and Taupō.
Eastern – based in Hastings; divided into two areas: Hawke's Bay (Hastings) and Tairāwhiti (Gisborne).
Central – based in Palmerston North; divided into three areas: Taranaki (New Plymouth), Whanganui, and Manawatū (Palmerston North).
Wellington – based in Wellington; divided into four areas: Wellington City, Kapi-Mana (Porirua), Hutt Valley (Lower Hutt), and Wairarapa (Masterton).
Tasman – based in Nelson; divided into three areas: Nelson Bays (Nelson), Marlborough (Blenheim), and West Coast (Greymouth).
Canterbury – based in Christchurch; divided into three areas: Christchurch Metro, Canterbury Rural (Rangiora) and Aoraki (Timaru).
Southern – based in Dunedin; divided into three areas: Otago Coastal (Dunedin), Otago Lakes-Central (Queenstown), and Southland (Invercargill).
Communications centres
New Zealand Police operate three communications centres that are responsible for receiving 111 emergency calls, *555 traffic calls and general calls for service and dispatching the relevant response. The centres are:
Northern Communications Centre, based in Auckland and responsible for the northern half of the North Island, down to Hicks Bay, Desert Road south of Turangi, and Awakino
Central Communications Centre, based in Wellington and responsible for the southern half of the North Island, from Mokau, Taumarunui, the Desert Road north of Waiouru, and Te Araroa in the north
Southern Communications Centre, based in the Christchurch Central Police Station, responsible for the South Island
The Police Digital Services Centre, a new digital services and communications centre, opened in Paraparaumu in November 2018.
Ranks
A police employee becomes a constable by swearing the oath under section 22 of the New Zealand Policing Act 2008. Upon doing so the constable receives certain statutory powers and responsibilities, including the power of arrest. While constables make up the majority of the workforce, non-sworn staff and volunteers provide a wide range of support services where a constable's statutory powers are not required. Rank insignia are worn on epaulettes. Officers of Inspector rank and higher are commissioned by the Governor-General, but are still promoted from the ranks of non-commissioned officers. A recently graduated constable is considered a Probationary Constable for up to two years, until he or she has passed ten workplace assessment standards. The completion of the above is known as obtaining permanent appointment.
Detective ranks somewhat parallel the street ranks up to Detective Superintendent. Trainee Detectives spend a minimum of six months as a Constable on Trial after completing an intensive Selection and Induction course. During these initial six months they are required to pass four module-based exams before progressing to Detective Constable. They are then required to continue studying with another six exam-based modules as well as a number of workplace assessments. Once the Detective Constable has completed all of these, they are required to sit a prerequisite exam covering all of the exam-based modules they have previously sat. If they pass, they attend the Royal New Zealand Police College, where they complete their training with the Detective Qualification course before receiving the final designation of Detective. All of these requirements are expected to be completed within two to three years.
The rank of Senior Constable is granted to Constables after 14 years of service and the Commissioner of Police is satisfied with their conduct. Senior Constables are well regarded within the New Zealand Police for their extensive policing experience, and are often used to train and mentor other police officers.
Detective and Detective Constable are considered designations and not specific ranks. That is, Detectives do not outrank uniformed constables. Nevertheless, a police officer with a Detective designation will generally assume control of a serious crime scene rather than a uniform staff member regardless of rank.
To be promoted to the rank of Sergeant, Constables must have at least five years of service, have a good understanding of general policing, and pass the Core Policing Knowledge examination. Once these requirements are met, they are eligible for promotion.
Insignia and uniform
New Zealand police uniforms formerly followed the British model closely but, since the 1970s, a number of changes have been implemented. These include the adoption of a medium blue shade in place of dark blue, the abolition of custodian helmets and the substitution of synthetic leather jackets for silver buttoned tunics when on ordinary duty. The normal headdress is a peaked cap with blue and white Sillitoe tartan band and silver badge. Baseball caps and Akubra wide-brimmed hats are authorised for particular duties or climatic conditions. Stab resistant and high visibility vests are normally worn on duty. The body vests are also marked with Sillitoe tartan markings.
AOS and STG members, when deployed, wear the usual charcoal-coloured clothing used by armed-response and counter-terror units around the world. In 2008, a survey found strong staff support for the re-introduction of the white custodian helmets worn until 1995, to reinforce the police's professional image.
Equipment
Communications
Police officers communicate with each other via Apple iPhones. For short, fast communications, front-line police officers also use radios.
In 2009 New Zealand Police began moving from using analogue two-way radios, to trialling digital encrypted radios in the Wellington region. The trial was perceived as having been successful and New Zealand Police planned to roll out digital encrypted radios to all regions. However, this has not progressed as planned and only the main centres of Auckland, Wellington and Christchurch have digital encryption.
Fleet
Drones
In 2012, the police began using drones, also known as unmanned aerial vehicles (UAVs). By 2013, drones had been used only twice; in one case a drone was used in a criminal investigation and led to charges being laid in court. Privacy Commissioner Marie Shroff said "organisations using drones needed good privacy policies – or possibly a warrant".
Helicopters
The Air Support Unit, commonly known as Eagle, is based in Auckland at Auckland Heliport, Pikes Point, Onehunga and operates three Bell 429 GlobalRanger helicopters. In October 2017, the Eagle became a 24/7 service and in July 2019 the Bell 429 helicopters entered service to replace the AS355 Squirrels. In February 2020, an Eagle helicopter was based in Christchurch at Christchurch Airport for a five week trial.
Maritime units
Two maritime units are also operated – the launch Deodar III in Auckland and the launch Lady Elizabeth IV in Wellington, supported by various smaller vessels.
Road vehicles
The Holden Commodore is the current generic road vehicle of choice for the police. In the past they have used Ford Falcons and the Nissan Maxima. The highway patrol mainly uses the Holden Commodore S variant along with the Holden VF Commodore. The police also use unmarked models of the Holden Cruze and Holden Commodore. Liveries are chequered Battenburg markings in orange-blue (older models: VT, VX and VZ Commodores) or yellow-blue (newer models: Captiva, Commodore VE and VF, trucks and vans); cars in standard factory colours are commonly referred to as unmarked or undercover.
Since 2008 the orange-blue livery has been progressively phased out, and all marked patrol vehicles were expected to have the yellow-blue livery as well as LED light bars by 2014. Both Commodore sedan and wagon bodies are used – normally in V6 form. The older Holden Commodores (VE, VT, VX and VZ) began to be phased out in 2013 and were slowly replaced with Holden VF Commodores, with ZB Commodores joining the fleet in 2018. The Holden Cruze is currently only used for Youth Aid, both marked and unmarked. With Holden's announcement that it would cease operations in 2021, new pursuit vehicles were investigated. A request for proposal was issued in July 2020 and 27 different vehicle models were evaluated. In November 2020, the police announced that the Skoda Superb would supersede the Holden Commodore, with the first cars being introduced in April 2021. Dog handlers have fully enclosed utility or station wagon vehicles, which may be liveried or unmarked, with cages in the rear and remotely operated canopy doors to allow the handler to release their dog when away from the vehicle.
The police also use vans and trucks as Team Policing Units, command centres, mobile police stations, and for the Riot Squad and Armed Offenders Squad (AOS). The AOS also has its own vehicles, most commonly the Nissan X-Trail and the newly introduced Toyota Highlander (all unmarked and equipped with bullbars). The AOS also uses the new Holden Acadia, with unique markings, in the upper and middle North Island.
The police use SUV-type vehicles mainly in rural New Zealand, though they are also used in urban areas (mainly at airports). The vehicles used are the Holden Captiva, the Colorado, and its predecessor the Rodeo.
The police and Ministry of Transport (see history above) have used a wide range of different cars and motorbikes over the years.
Previous police vehicles
Humber Super Snipe 1938 – 1960
Vauxhall Velox 1950 – 1962
Ford Zephyr 1954 – 1967
Holden Standard/Special 1958 – 1968
Vauxhall Velox PB 1962 – 1965
Ford Falcon 1962 – 2000
Holden Kingswood 1968 – 1982
Vauxhall 3.5 Cresta 1969
Holden Belmont 1969 – 1987
Leyland P76 1976 – 1978
Holden Commodore 1980 – 2021
Ford Sierra 1984 – 1988
Mitsubishi V3000 1986 – 1989
Holden Colorado 2008 – 2021
Holden Captiva 2009 – 2018
Toyota Camry 2006 – 2021
Holden Cruze 2014 – 2021
Holden Equinox 2020 - 2021
Skoda Superb 2021 – present
Previous police motorcycles
BSA 650 Police Special 1969–1971
Triumph Trophy 650 1970s
Norton Commando 750 1970s
Honda ST1300 2014–current
Various Japanese and European motorcycles including Yamaha, Suzuki, Kawasaki, BMW 1979–2000
BMW R1150, BMW R1200 2000–current
Weapons
New Zealand Police officers carry OC spray (pepper spray), batons and Tasers (stun guns). The only officers who routinely carry firearms are members of the Diplomatic Protection Squad, and those with dog and airport units. Most officers are trained to use Glock 17 pistols and Bushmaster XM15 M4A3 Patrolman AR-15-type, military-style semi-automatic rifles, and wear a holster attachment in case they do need a pistol. Since 2012, frontline vehicles have had a locked box in the passenger foot-well containing two loaded and holstered Glock 17s and, in the rear of the vehicle, a locked case with two Bushmaster rifles and ballistic vests. Officers must tell their supervisor or communications staff if they are accessing a firearm from their vehicle. The vehicles are fitted with alarms in case windows are broken. Each officer carries vehicle keys and safe keys.
The Police Association claims the carrying of handguns is inevitable. In January 2013, a Waikato officer was attacked by at least five men after he deployed his OC spray and Taser. His radio was taken from him and his pistol was 'misplaced' during the attack. The Police Association's request for routine carrying of firearms for all officers after this incident was dismissed by the Police Commissioner. The current firearm training and issuing policy has been criticised. Not all police officers receive regular firearm training and not all vehicles contain a firearm. In October 2015, unarmed officers at a routine police checkpoint at Te Atatū South who pursued a vehicle that sped off from the checkpoint were shot at from the offender's vehicle. In December 2015, the Police Association referred to the incident while requesting that all frontline officers receive firearm training and that their vehicles contain a secured firearm. This was rejected.
In July 2015, the Police Commissioner announced that Tasers would be routinely carried by police officers. Tasers were first trialled in 2006 and in 2010 were rolled out throughout New Zealand with all frontline vehicles containing an X26 or X2 Taser in a locked box. In 2012, figures showed that a 'disproportionate number of people' targeted by police Tasers were mental health patients.
Police officers receive regular Police Integrated Tactical Training (PITT) with different levels of training, depending upon an officer's role and responsibilities. In 2017, a training model was introduced, and the number of officers trained as so-called 'Level 1 responders' increased to 79%. Level 1 includes training with pistols, rifles, tasers, defensive tactics, handcuffs, OC spray and batons. In 2019, Level 1 responder live-fire training and simunitions training increased by 50%. Police annually release a report of their use of force including OC spray, Tasers and firearms.
Since 2019, all officers wear a Body Armour System (BAS). This is a stab-resistant vest with equipment pouches that can be fitted with ballistic hard armour plates. The BAS replaced the stab resistant body armour (SRBA) introduced in 2006 and the ballistic Hard Armour Plate (HAP) which could be worn over the SRBA.
Notable incidents
On 8 October 1941, four police officers were killed by South Island farmer Stanley Graham, 40, who fired at them as they attempted to seize arms from his West Coast home at Kowhitirangi. After widespread searches, two policemen and a local civilian saw Graham carrying his rifle and ammunition belts on 20 October. He was shot by Constable James D'Arcy Quirke with a .303 rifle, from a distance of 25 meters, while crawling through a patch of scrub. He died early the next morning in Westland Hospital, Hokitika.
The police investigation into the murders of Harvey and Jeanette Crewe in 1970 was a turning point in the public's perception of the police. A royal commission subsequently found that the police had planted evidence and framed Arthur Allan Thomas for the murder. Writer Keith Hunter believes this introduced "a cynicism (in attitudes towards the police) that infects us today."
During the 1981 Springbok tour, the police formed three riot squads known as Red Squad, Blue Squad and White Squad to control anti-apartheid protesters who laid siege to rugby union fields where the touring team was playing. Police were described as being heavy-handed with their batons as they tried to 'subdue' protesters opposed to the Springbok tour. The tour had a significant effect on public perceptions of the police who since this time "have never been viewed with the same general benign approval".
In July 1985, the New Zealand Police arrested two French Action Service operatives after the Rainbow Warrior was bombed and sunk in Auckland harbour. The rapid arrest was attributed to the high level of public support for the investigation.
In October 2007 at least 17 people were arrested in a series of raids under the Suppression of Terrorism Act and the Arms Act 1983. The raids targeted a range of political activists allegedly involved in illegal firearms activity. The case dragged on for nearly four years and cost taxpayers millions of dollars. Much of the surveillance evidence was found to have been gained illegally and charges against all but four defendants were dropped. The remaining four were charged with firearms offences, found guilty and sentenced to terms of imprisonment and home detention.
On 20 January 2012, the police flew in by helicopter and arrested Kim Dotcom and three others in Coatesville, Auckland, in an armed raid on Dotcom's house following United States cybercrime indictments against him for on-line piracy via his internet file sharing company, Megaupload. Assets worth $17 million were seized including eighteen luxury cars, giant screen TVs and works of art. According to Dotcom, about 80 police officers were involved in the operation; the New Zealand police claimed it was between 20 and 30. The incident became controversial when a district court judge ruled that the warrants issued for the property seizures were invalid and it turned out the Government Communications Security Bureau (GCSB) had broken the law when asked by police to spy on Dotcom.
Police and civilian deaths
Police killed on duty
Since 1 September 1886, 33 police officers have been killed by criminals.
A member of the New Zealand Police, Sergeant Stewart Graeme Guthrie, was the last New Zealand civilian recipient of the George Cross, which is awarded for conspicuous gallantry. He fired a warning shot near a gunman at Aramoana on 13 November 1990, but was killed by a return shot from the gunman, who also killed twelve others. In total, 29 police officers have been killed by criminal acts, and about 17 by accident, while in the performance of their official duties. The most recent policeman to die was Constable Matthew Dennis Hunt, who was shot and killed during a routine traffic stop.
Civilian deaths involving police
In June 2012 the Independent Police Conduct Authority (IPCA) released a comprehensive report on deaths in police custody. There were 27 deaths in the last ten years – ten of which were suicides. Seven deaths occurred when police were overly vigorous in the use of restraint. Another seven were "caused by the detainee's medical condition" which got dramatically worse in police custody, and three deaths were drug related when police failed to ascertain the detainees were on drugs. Of the 27 deaths, the IPCA said only four "involved serious neglect of duty or breaches of policy by police". On top of deaths in custody, police have shot and killed seven people in the last ten years. One was an innocent bystander, and another two were not carrying firearms but were carrying other weapons. The police were exonerated in all seven cases.
Numerous people have also died in collisions during or shortly after police car chases. In the five years after December 2003, 24 people died and 91 received serious injuries in police pursuits. Over this period, the IPCA made numerous recommendations to change police protocols, but the death rate continued to climb. In 2010, 18 drivers fleeing police were killed. Fourteen of the deaths were triggered by pursuits over minor offences rather than serious crimes. That year police conducted the fourth review of pursuit policy in six years and ignored key recommendations of the Independent Police Conduct Authority, making only minor changes to the policy. Over the next 12 months, 15 drivers died in the course of police pursuits. 14% of pursuits result in a crash by either the police or the offender, but police guidelines do not provide a predetermined speed at which officers should pull out of a pursuit. The IPCA has since recommended that pursuit policy should require officers to "state a reason for beginning a pursuit," and recommended compulsory alcohol and drug testing of police officers involved in fatal incidents.
Counter-terrorism and military assistance
Since 2005 the NZ Police's main counterterrorism and threat assessment group is the National Security Investigations Team, previously known as the Special Investigation Group. The NSIT is composed of four teams in regional centres, with a remit that covers early intervention in cases of extremism, soliciting informants, and building relationships with communities. Public information on the NSIT was released in relation to criticism of its handling of right wing terrorism in the lead up to the Christchurch terror attack.
The NZ Police are accountable for the operational response to threats to national security, including terrorism. If an incident escalates to a level where their internal resources are unable to adequately deal with the issue (for example, a major arms encounter or a significant terrorist threat), the Police Incident Controller may call on extra assistance from the New Zealand Defence Force and in particular NZ's Special Forces, the military focused New Zealand Special Air Service and terrorism focused Commando Squadron (D Squadron). Control of the incident remains with police throughout. As of 2009, the two military counter terrorist units have never been deployed in a domestic law-enforcement operation. Military resources such as Light Armoured Vehicles have been used and requested before, such as during the Napier shootings, and Royal New Zealand Air Force helicopters from No. 3 Squadron are often used to assist in search and rescue and cannabis eradication operations.
In 1964, the Armed Offenders Squad (AOS) was created to provide a specialist armed response unit, similar to the Metropolitan Police Service's SC&O19 in the United Kingdom. In addition to the AOS, the New Zealand Police maintain a full-time counter-terrorist unit, the Special Tactics Group (STG). Similar to the FBI's Hostage Rescue Team, the STG train in dynamic entry and other tactics vital in high-risk situations. The STG train with the SAS and are the last line of law enforcement response available before a police Incident Controller calls in support from the Defence Force.
Crime statistics
Crime statistics are documented in the police annual report. The police also publish bi-yearly statistical summaries of crime for both New Zealand as a whole and each police district. In early 2005, crime statistics for both recorded crime and recorded apprehensions for the last 10 years were published by Statistics New Zealand. These statistics provide offence statistics related to individual sections of legislation and appear to be the most detailed national crime statistics available today.
Controversies
During the early years of the present century several controversies put the Police under close scrutiny. Some have been investigated by the Independent Police Conduct Authority; others have received significant publicity.
INCIS
The Integrated National Crime Information System (INCIS) was a computer software package developed by IBM in the early 1990s to provide improved information, investigation and analysis capabilities to the police. Deputy Police Commissioner, Barry Matthews, was responsible for its implementation and acknowledged that police requested 'hundreds and hundreds of changes' to the system as the programme was being developed. It never worked as required and ended up costing $130 million before it was finally abandoned in 2000.
The wasted resources and on-going problems surrounding the failure of the project were a huge distraction for the police. When it was about to be scrapped, Police Association president Greg O'Connor said "The reality of it is that the sooner ... the huge distraction that is Incis is gone, the better." Funding wasted on INCIS subsequently led to budget cuts in other areas so that infrastructure such as cars and communications centres were poorly resourced.
Use of facial recognition technology
In 2021, police were accused of racially profiling Māori and young people by taking photos of any youth apprehended during the course of patrols or considered "suspicious" on a mobile app called "OnDuty" connected to the National Intelligence Application (NIA) system. Police claim the photos were a necessary part of combatting crime through more effective intelligence sharing.
Communications centres
In 2004 and 2005, the police were criticised over several incidents in which callers to the Police Communications Centres, particularly those using the 111 emergency telephone number, received inadequate responses. In October 2004, under sustained political scrutiny after the Iraena Asher incident received widespread publicity and a whistle-blowing employee resigned, the Commissioner of Police ordered an independent review into the communications centres. On 11 May 2005, the Review Panel released its report, which criticised the service for systemic failures and inadequate management. The report expressed ongoing concerns for public safety.
Police acted on the recommendations of the review with a number of initiatives, including increasing communications centre staff numbers and then initiating a demonstration project for a single non-emergency number centre, to reduce the load on the 111 service. The single non-emergency number 105 was launched on 10 May 2019.
Historical sexual misconduct by police
In 2004, a number of historical sexual misconduct allegations dating from the 1980s were made against both serving and former police officers. In March 2006 assistant police commissioner Clinton Rickards and former police officers Brad Shipton and Bob Schollum were charged with raping and sexually abusing Louise Nicholas in Rotorua during the 1980s. The defendants claimed all sex was consensual and were found not guilty on 31 March 2006. In February 2007 the same three men faced historical charges of kidnapping and indecent assault for the pack rape of a 16-year-old girl with a whisky bottle that took place in the early 1980s, and again they were acquitted. Throughout both trials, the jury were unaware that Brad Shipton and Bob Schollum had been convicted of a previous pack rape in 2005 and were already serving prison sentences for this crime.
Rickards was forced to resign from the police but was paid $300,000 as part of his termination package. Complaints about inappropriate sexual behaviour by police officers led to a three-year inquiry conducted by Dame Margaret Bazley. Her highly critical report was released in 2007.
Poor prosecution of sexual abuse cases
In 2008 there was a public scandal regarding the failure of police to investigate a backlog of sexual abuse cases in the Wairarapa. The then head of the Masterton Criminal Investigation Bureau, Detective Senior Sergeant Mark McHattie, received an unspecified disciplinary "outcome" and has since been promoted to head of the Auckland CIB's serious crime unit.
Spying on community, union and activist groups
In 2008, the police's Special Investigation Group came under considerable media scrutiny after it was revealed that Christchurch man Rob Gilchrist had been hired by officers to spy on individuals and organisations including Greenpeace, Iraq war protestors, student associations, unions, and animal rights and climate change campaigners.
Detention of youth in police cells
The Independent Police Conduct Authority launched a wider investigation into the treatment of young people in police cells and in October 2012 issued a report which found that the number of young people being held had more than doubled since 2009. It said that "youths in crisis are being locked up in police cells and denied their human rights." Practices that "are, or risk being, inconsistent with accepted human rights" include: being held in solitary confinement; having cell lights on 24 hours a day; family members being prevented access; and not being allowed to see the doctor when they have medical or mental health problems. The IPCA made 24 recommendations on how police can improve the detention and treatment of young people in custody.
Bullying
In 2019, it was reported that there had been claims of bullying within New Zealand Police.
Taranaki death in custody, June 2020
On 3 June 2020, three police officers in the town of Hāwera in the Taranaki region were charged with manslaughter in relation to the death of a 55-year-old man who died in police custody in early June 2019. The man's death had been investigated by the Independent Police Conduct Authority.
Armed response teams
On 9 June 2020, Police Commissioner Andrew Coster announced that the police would be scrapping their armed response teams after public feedback and consultation with the Māori and Pasifika communities. Public discussion around the armed response teams was influenced by concerns about police-community relations in light of the murder of George Floyd, which sparked protests around the world including New Zealand.
Alo Ngata's death
On 27 August 2020, the Independent Police Conduct Authority criticised the Police's handling of the detention of Alo Ngata, who died in police custody in July 2018 after he had been incorrectly fitted with a spit hood. Ngata had been arrested for assaulting an elderly pensioner named Mike Reilly in Auckland's Freemans Bay and had violently resisted arrest. While the IPCA considered the Police's use of force to be reasonable, they found that the police had failed to assess his well-being while in custody. Both Ngata's and Reilly's families have asked the police to release footage from the Police helicopter showing Ngata assaulting Reilly.
See also
Armed Offenders Squad
Cook Islands Police Service
Corruption in New Zealand
Crime in New Zealand
Crimes Act 1961
Diplomatic Protection Service
Gangs in New Zealand
Independent Police Conduct Authority
Institute of Environmental Science and Research – provider of forensic services to NZ police
New Zealand Police Negotiation Team
Organised Crime Agency
Royal New Zealand Police College
Special Tactics Group
Notes
References
External links
New Zealand Police Association
Police
New Zealand Police
1842 establishments in New Zealand |
409665 | https://en.wikipedia.org/wiki/Clipper%20chip | Clipper chip | The Clipper chip was a chipset that was developed and promoted by the United States National Security Agency (NSA) as an encryption device that secured "voice and data messages" with a built-in backdoor that was intended to "allow Federal, State, and local law enforcement officials the ability to decode intercepted voice and data transmissions." It was intended to be adopted by telecommunications companies for voice transmission. Introduced in 1993, it was entirely defunct by 1996.
Key escrow
The Clipper chip used a data encryption algorithm called Skipjack to transmit information and the Diffie–Hellman key-exchange algorithm to distribute the cryptographic keys between the peers. Skipjack was invented by the National Security Agency of the U.S. Government; this algorithm was initially classified SECRET, which prevented it from being subjected to peer review from the encryption research community. The government did state that it used an 80-bit key, that the algorithm was symmetric, and that it was similar to the DES algorithm. The Skipjack algorithm was declassified and published by the NSA on June 24, 1998. The initial cost of the chips was said to be $16 (unprogrammed) or $26 (programmed), with its logic designed by Mykotronx, and fabricated by VLSI Technology, Inc.
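For readers unfamiliar with the key-agreement step, the sketch below shows textbook finite-field Diffie–Hellman in Python. It is illustrative only: the toy modulus is far too small for real security, and it does not reproduce the chip's actual implementation or parameters.

```python
# Textbook Diffie-Hellman key agreement with deliberately tiny parameters.
import secrets

P = 4294967291  # toy prime modulus (insecure; real systems use 2048+ bit primes)
G = 5           # generator

def keypair():
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

a_priv, a_pub = keypair()  # each peer keeps its private value secret
b_priv, b_pub = keypair()  # and publishes its public value

# Both sides derive the same shared secret from the other's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```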
At the heart of the concept was key escrow. In the factory, any new telephone or other device with a Clipper chip would be given a cryptographic key that would then be provided to the government in escrow. If government agencies "established their authority" to listen to a communication, then the key would be given to those government agencies, who could then decrypt all data transmitted by that particular telephone. The newly formed Electronic Frontier Foundation preferred the term "key surrender" to emphasize what it alleged was really occurring.
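The escrow idea can be sketched as a simple key-splitting scheme: the device key is divided into two shares so that neither escrow holder alone learns anything about it, and both shares must be released before the key can be reconstructed. This is a simplified illustration of the general concept, not the chip's exact procedure.

```python
# Simplified split-key escrow: neither share alone reveals the device key.
import secrets

def split_key(device_key: bytes):
    share_1 = secrets.token_bytes(len(device_key))
    share_2 = bytes(k ^ s for k, s in zip(device_key, share_1))
    return share_1, share_2  # deposited with two separate escrow holders

def recover_key(share_1: bytes, share_2: bytes) -> bytes:
    # Only when both holders release their shares can the key be rebuilt.
    return bytes(a ^ b for a, b in zip(share_1, share_2))

unit_key = secrets.token_bytes(10)  # 80 bits, matching Skipjack's key size
s1, s2 = split_key(unit_key)
assert recover_key(s1, s2) == unit_key
```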
Clinton Administration
The Clinton Administration argued that the Clipper chip was essential for law enforcement to keep up with the constantly progressing technology in the United States. While many believed that the device would act as an additional way for terrorists to receive information, the Clinton Administration said it would actually increase national security. They argued that because "terrorists would have to use it to communicate with outsiders — banks, suppliers, and contacts — the Government could listen in on those calls."
Other proponents
There were several advocates of the Clipper chip who argued that the technology was safe to implement and effective for its intended purpose of providing law enforcement with the ability to intercept communications when necessary and with a warrant to do so. One proponent, Howard S. Dakoff, writing in the John Marshall Law Review voiced support for the Clipper chip stating that the technology was secure and the legal rationale for its implementation was sound. Another proponent, Stewart Baker, wrote an opinion in Wired magazine debunking a series of purported myths surrounding the technology.
Backlash
Organizations such as the Electronic Privacy Information Center and the Electronic Frontier Foundation challenged the Clipper chip proposal, saying that it would have the effect not only of subjecting citizens to increased and possibly illegal government surveillance, but that the strength of the Clipper chip's encryption could not be evaluated by the public as its design was classified secret, and that therefore individuals and businesses might be hobbled with an insecure communications system. Further, it was pointed out that while American companies could be forced to use the Clipper chip in their encryption products, foreign companies could not, and presumably phones with strong data encryption would be manufactured abroad and spread throughout the world and into the United States, negating the point of the whole exercise, and, of course, materially damaging U.S. manufacturers en route. Then-Senators John Ashcroft and John Kerry were opponents of the Clipper chip proposal, arguing in favor of the individual's right to encrypt messages and export encryption software.
The release and development of several strong cryptographic software packages such as Nautilus, PGP and PGPfone was in response to the government push for the Clipper chip. The thinking was that if strong cryptography was freely available on the internet as an alternative, the government would be unable to stop its use.
Technical vulnerabilities
In 1994, Matt Blaze published the paper Protocol Failure in the Escrowed Encryption Standard. It pointed out that the Clipper's escrow system had a serious vulnerability: the chip transmitted a 128-bit "Law Enforcement Access Field" (LEAF) that contained the information necessary to recover the encryption key. To prevent the software that transmitted the message from tampering with the LEAF, a 16-bit hash was included. The Clipper chip would not decode messages with an invalid hash; however, the 16-bit hash was too short to provide meaningful security. A brute-force attack would quickly produce another LEAF value that would give the same hash but not yield the correct keys after the escrow attempt. This would allow the Clipper chip to be used as an encryption device, while disabling the key escrow capability. In 1995 Yair Frankel and Moti Yung published another attack which is inherent to the design and which shows that the key escrow device tracking and authenticating capability (namely, the LEAF) of one device can be attached to messages coming from another device and will nevertheless be received, thus bypassing the escrow in real time. In 1997, a group of leading cryptographers published a paper, "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption", analyzing the architectural vulnerabilities of implementing key escrow systems in general, including but not limited to the Clipper chip's Skipjack protocol. The technical flaws described in this paper were instrumental in the demise of the Clipper chip as a public policy option. While many leading voices in the computer science community expressed opposition to the Clipper chip and key recovery in general, some supported the concept, including Dorothy E. Denning.
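The scale of Blaze's observation is easy to demonstrate: a 16-bit checksum has only 65,536 possible values, so randomly generated candidates collide with any given target after roughly 2^16 attempts on average. The sketch below uses a stand-in hash purely to show how small that search space is; it does not reproduce the real LEAF format or checksum.

```python
# Rough demonstration of why a 16-bit check is trivially brute-forceable.
import hashlib
import os

def checksum16(candidate: bytes) -> int:
    # Stand-in for the LEAF checksum: any function mapping to 16 bits will do.
    return int.from_bytes(hashlib.sha256(candidate).digest()[:2], "big")

target = checksum16(os.urandom(16))  # value the receiving chip would accept

attempts = 0
while True:
    attempts += 1
    forged = os.urandom(16)  # bogus LEAF carrying no usable escrow data
    if checksum16(forged) == target:
        break

print(f"matching 16-bit checksum found after {attempts} random attempts")
```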
Lack of adoption
The Clipper chip was not embraced by consumers or manufacturers and the chip itself was no longer relevant by 1996; the only significant purchaser of phones with the chip was the United States Department of Justice. The U.S. government continued to press for key escrow by offering incentives to manufacturers, allowing more relaxed export controls if key escrow were part of cryptographic software that was exported. These attempts were largely made moot by the widespread use of strong cryptographic technologies, such as PGP, which were not under the control of the U.S. government.
However, strongly encrypted voice channels are still not the predominant mode for current cell phone communications. Secure cell phone devices and smartphone apps exist, but may require specialized hardware, and typically require that both ends of the connection employ the same encryption mechanism. Such apps usually communicate over secure Internet pathways (e.g. ZRTP) instead of through phone voice data networks.
Later debates
Following the Snowden disclosures from 2013, Apple and Google stated that they would lock down all data stored on their smartphones with encryption, in such a way that Apple and Google themselves could not break the encryption even if ordered to do so with a warrant. This prompted a strong reaction from the authorities, including the chief of detectives for Chicago's police department stating that "Apple['s iPhone] will become the phone of choice for the pedophile". An editorial in the Washington Post argued that "smartphone users must accept that they cannot be above the law if there is a valid search warrant", and after claiming to agree that backdoors would be undesirable, then suggested implementing a "golden key" backdoor which would unlock the data with a warrant. The authors of the 1997 paper "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption", as well as other researchers at MIT, wrote a follow-up article in response to the revival of this debate, arguing that mandated government access to private conversations would be an even worse problem now than twenty years earlier.
See also
Bullrun (decryption program)
Cryptoprocessor
Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age by Steven Levy
Trusted Platform Module
Hardware backdoor
References
External links
Clipper chip Q&A
Clipper chip White House Statement
The Evolution of US Government Restrictions on Using and Exporting Encryption Technologies (U), Michael Schwartzbeck, Encryption Technologies, circa 1997, formerly Top Secret, approved for release by NSA with redactions September 10, 2014, C06122418
Oral history interview with Martin Hellman Oral history interview 2004, Palo Alto, California. Charles Babbage Institute, University of Minnesota, Minneapolis. Hellman describes his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s. He also relates his subsequent work in cryptography with Steve Pohlig (the Pohlig-Hellman system) and others. Hellman addresses key escrow (the so-called Clipper chip). He also touches on the commercialization of cryptography with RSA Data Security and VeriSign.
History of cryptography
Kleptography
National Security Agency encryption devices
Encryption debate
Mass surveillance
de:Escrowed Encryption Standard |
411102 | https://en.wikipedia.org/wiki/Winny | Winny | Winny (also known as WinNY) is a Japanese peer-to-peer (P2P) file-sharing program developed by Isamu Kaneko, a research assistant at the University of Tokyo, in 2002. Like Freenet, a user must add an encrypted node list in order to connect to other nodes on the network. Users choose three cluster words which symbolize their interests, and then Winny connects to other nodes which share these cluster words, downloading and storing encrypted data from the caches of these neighbors in a distributed data store. If users want a particular file, they set up triggers (keywords), and Winny will download files marked by these triggers. The encryption was meant to provide anonymity, but Winny also included bulletin boards where users would announce uploads, and the IP address of posters could be discovered through these boards. While Freenet was implemented in Java, Winny was implemented as a Windows C++ application.
The software takes its name from WinMX, where the M and the X are each advanced one letter in the Latin alphabet, to N and Y. Netagent published a survey in June 2018 suggesting that Winny was still the most popular p2p network in Japan ahead of Perfect Dark (P2P) and Share (P2P) with approximately 45,000 nodes connecting each day over Golden Week. The number of nodes on Winny appears to be holding steady compared with 2015.
Kaneko first announced Winny on the Download Software board of the 2channel (2ch for short) Japanese bulletin board site. Since 2channel users often refer to anonymous users by their post numbers, Kaneko came to be known as "Mr. 47" ("47-Shi", or 47氏 in Japanese), or just "47".
After Winny's development stopped, a new peer-to-peer application, Share, was developed to be a successor.
Antinny
Since August 2003, several worms called "Antinny" have spread on the Winny network.
Some versions of Antinny work as follows:
Upload files from the host computer onto the Winny network.
Upload screenshots onto an image board.
Denial-of-service attack to a copyright protecting agency web site.
Some people have uploaded their information unwittingly from their computers because of Antinny. That information includes governmental documents, information about customers, and people's private files. Once the information is uploaded, it is difficult to delete.
Recently, highly publicized cases of sensitive file uploading have come to light in Japan's media. In particular, a defense agency was forced to admit that classified information from the Maritime Self Defense Force was uploaded by a computer with Winny software installed on it.
Following this, All Nippon Airways suffered an embarrassing leak of passwords for security-access areas in 29 airports across Japan. A similar incident occurred with JAL Airlines on 17 December 2005, after a virus originating from Winny affected the computer of a co-pilot.
Perhaps the largest Winny-related leak was that of the Okayama Prefectural Police Force, whose computer leaked data about around 1,500 investigations. This information included sensitive data such as the names of sex crime victims, and is the largest amount of information held by Japanese police to have ever leaked online.
Arrests and court cases
On November 28, 2003, two Japanese users of Winny, Yoshihiro Inoue, a 41-year-old self-employed businessman from Takasaki, Gunma Prefecture and an unemployed 19-year-old from Matsuyama, were arrested by the Kyoto Prefectural Police. They were accused of sharing copyrighted material via Winny and admitted to their crimes.
Shortly following the two users' arrests, Kaneko also had his home searched and had the source code of Winny confiscated by the Kyoto Police. On May 10, 2004, Kaneko was arrested for suspected conspiracy to encourage copyright infringement by the High-tech Crime Taskforce of the Kyoto Prefectural Police. Kaneko was released on bail on June 1, 2004. The court hearings started in September 2004 at Kyoto district court. On December 13, 2006, Kaneko was convicted of assisting copyright violations and sentenced to pay a fine of ¥1.5 million (about US$13,200). He appealed the ruling. On October 8, 2009, the guilty verdict was overturned by the Osaka High Court. On December 20, 2011, Kaneko was cleared of all charges after a panel of judges agreed that the prosecution could not prove that he had any intention to promote the software for illegal use.
See also
Anonymous P2P
File sharing in Japan
Perfect Dark
Share
WinMX
Winny copyright infringement case
References
External links
Winny?
Download and nodes for Winny, Share, Perfect Dark
A post on 2ch in which, critics claim, Kaneko stated that his aim in developing the software was to push the tide toward a world filled with copyright law violation
Japanese power plant secrets leaked by virus, The Register, 17 May 2006
Anonymous file sharing networks
File sharing networks
file sharing software
Windows file sharing software
Windows-only software |
412627 | https://en.wikipedia.org/wiki/Challenge%E2%80%93response%20authentication | Challenge–response authentication | In computer security, challenge–response authentication is a family of protocols in which one party presents a question ("challenge") and another party must provide a valid answer ("response") to be authenticated.
The simplest example of a challenge–response protocol is password authentication, where the challenge is asking for the password and the valid response is the correct password.
Clearly an adversary who can eavesdrop on a password authentication can then authenticate itself in the same way. One solution is to issue multiple passwords, each of them marked with an identifier. The verifier can ask for any of the passwords, and the prover must have that correct password for that identifier. Assuming that the passwords are chosen independently, an adversary who intercepts one challenge–response message pair has no clues to help with a different challenge at a different time.
For example, when other communications security methods are unavailable, the U.S. military uses the AKAC-1553 TRIAD numeral cipher to authenticate and encrypt some communications. TRIAD includes a list of three-letter challenge codes, which the verifier is supposed to choose randomly from, and random three-letter responses to them. For added security, each set of codes is only valid for a particular time period which is ordinarily 24 hours.
A more interesting challenge–response technique works as follows. Say Bob is controlling access to some resource. Alice comes along seeking entry. Bob issues a challenge, perhaps "52w72y". Alice must respond with the one string of characters which "fits" the challenge Bob issued. The "fit" is determined by an algorithm "known" to Bob and Alice. (The correct response might be as simple as "63x83z", with the algorithm changing each character of the challenge using a Caesar cipher. In the real world, the algorithm would be much more complex.) Bob issues a different challenge each time, and thus knowing a previous correct response (even if it is not "hidden" by the means of communication used between Alice and Bob) is of no use.
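The toy algorithm in this example can be written out directly. The Python sketch below is purely illustrative: it simply advances each character of the challenge by one position, wrapping digits and lowercase letters; a real system would use a keyed cryptographic function instead.

import string

def respond(challenge: str) -> str:
    # Advance digits 0-9 and letters a-z by one position, wrapping around.
    out = []
    for ch in challenge:
        if ch.isdigit():
            out.append(string.digits[(int(ch) + 1) % 10])
        elif ch.islower():
            out.append(string.ascii_lowercase[(ord(ch) - ord("a") + 1) % 26])
        else:
            out.append(ch)          # leave any other character unchanged
    return "".join(out)

assert respond("52w72y") == "63x83z"   # matches Bob and Alice's exchange above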
Other non-cryptographic protocols
Challenge–response protocols are also used to assert things other than knowledge of a secret value. CAPTCHAs, for example, are a sort of variant on the Turing test, meant to determine whether a viewer of a Web application is a real person. The challenge sent to the viewer is a distorted image of some text, and the viewer responds by typing in that text. The distortion is designed to make automated optical character recognition (OCR) difficult, thereby preventing a computer program from passing as a human.
Cryptographic techniques
Non-cryptographic authentication methods were generally adequate in the days before the Internet, when the user could be sure that the system asking for the password was really the system they were trying to access, and that nobody was likely to be eavesdropping on the communication channel to observe the password being entered. To address the insecure channel problem, a more sophisticated approach is necessary. Many cryptographic solutions involve two-way authentication, where both the user and the system must each convince the other that they know the shared secret (the password), without this secret ever being transmitted in the clear over the communication channel, where eavesdroppers might be lurking.
One way this is done involves using the password as the encryption key to transmit some randomly generated information as the challenge, whereupon the other end must return as its response a similarly encrypted value which is some predetermined function of the originally offered information, thus proving that it was able to decrypt the challenge. For instance, in Kerberos, the challenge is an encrypted integer N, while the response is the encrypted integer N + 1, proving that the other end was able to decrypt the integer N. A hash function can also be applied to a password and a random challenge value to create a response value. Another variation uses a probabilistic model to provide randomized challenges conditioned on model input.
Such encrypted or hashed exchanges do not directly reveal the password to an eavesdropper. However, they may supply enough information to allow an eavesdropper to deduce what the password is, using a dictionary attack or brute-force attack. The use of information which is randomly generated on each exchange (and where the response is different from the challenge) guards against the possibility of a replay attack, where a malicious intermediary simply records the exchanged data and retransmits it at a later time to fool one end into thinking it has authenticated a new connection attempt from the other.
Authentication protocols usually employ a cryptographic nonce as the challenge to ensure that every challenge–response sequence is unique. This protects against eavesdropping with a subsequent replay attack. If it is impractical to implement a true nonce, a strong cryptographically secure pseudorandom number generator and cryptographic hash function can generate challenges that are highly unlikely to occur more than once. It is sometimes important not to use time-based nonces, as these can weaken servers in different time zones and servers with inaccurate clocks. It can also be important to use time-based nonces and synchronized clocks if the application is vulnerable to a delayed message attack. This attack occurs where an attacker copies a transmission whilst blocking it from reaching the destination, allowing them to replay the captured transmission after a delay of their choosing. This is easily accomplished on wireless channels. The time-based nonce can be used to limit the attacker to resending the message but restricted by an expiry time of perhaps less than one second, likely having no effect upon the application and so mitigating the attack.
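As a rough illustration of this nonce handling, the following Python sketch issues single-use challenges from a cryptographically secure generator and rejects responses that arrive after a short expiry window; the one-second limit is only a placeholder for the "perhaps less than one second" figure mentioned above.

import secrets, time

issued = {}        # nonce -> time it was handed out (server-side state)
MAX_AGE = 1.0      # expiry window in seconds (placeholder value)

def new_challenge() -> str:
    nonce = secrets.token_hex(16)          # 128 bits from a CSPRNG
    issued[nonce] = time.monotonic()
    return nonce

def accept_response(nonce: str) -> bool:
    started = issued.pop(nonce, None)      # single use: a replayed nonce fails
    if started is None:
        return False
    return time.monotonic() - started <= MAX_AGE   # delayed replies are rejected

c = new_challenge()
print(accept_response(c))   # True when answered promptly
print(accept_response(c))   # False: the nonce has already been consumed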
Mutual authentication is performed using a challenge–response handshake in both directions; the server ensures that the client knows the secret, and the client also ensures that the server knows the secret, which protects against a rogue server impersonating the real server.
Challenge–response authentication can help solve the problem of exchanging session keys for encryption. Using a key derivation function, the challenge value and the secret may be combined to generate an unpredictable encryption key for the session. This is particularly effective against a man-in-the-middle attack, because the attacker will not be able to derive the session key from the challenge without knowing the secret, and therefore will not be able to decrypt the data stream.
Simple example mutual authentication sequence
Server sends a unique challenge value sc to the client
Client sends a unique challenge value cc to the server
Server computes sr = hash(cc + secret) and sends to the client
Client computes cr = hash(sc + secret) and sends to the server
Server calculates the expected value of cr and ensures the client responded correctly
Client calculates the expected value of sr and ensures the server responded correctly
where
sc is the server-generated challenge
cc is the client-generated challenge
cr is the client response
sr is the server response
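A minimal Python sketch of this exchange is shown below, with SHA-256 standing in for the unspecified hash and byte concatenation for the "+" operation; a production protocol would use an HMAC and a proper key-derivation step rather than a bare hash.

import hashlib, secrets

def response(challenge: bytes, secret: bytes) -> str:
    # hash(challenge + secret), with SHA-256 as the placeholder hash
    return hashlib.sha256(challenge + secret).hexdigest()

server_secret = client_secret = b"shared-secret"   # provisioned out of band

sc = secrets.token_bytes(16)   # server-generated challenge, sent to the client
cc = secrets.token_bytes(16)   # client-generated challenge, sent to the server

sr = response(cc, server_secret)   # server answers the client's challenge
cr = response(sc, client_secret)   # client answers the server's challenge

# Each side recomputes the expected value using its own copy of the secret.
assert cr == response(sc, server_secret)   # server verifies the client
assert sr == response(cc, client_secret)   # client verifies the server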
Password storage
To avoid storage of passwords, some operating systems (e.g. Unix-type) store a hash of the password rather than storing the password itself. During authentication, the system need only verify that the hash of the password entered matches the hash stored in the password database. This makes it more difficult for an intruder to get the passwords, since the password itself is not stored, and it is very difficult to determine a password that matches a given hash. However, this presents a problem for many (but not all) challenge–response algorithms, which require both the client and the server to have a shared secret. Since the password itself is not stored, a challenge–response algorithm will usually have to use the hash of the password as the secret instead of the password itself. In this case, an intruder can use the actual hash, rather than the password, which makes the stored hashes just as sensitive as the actual passwords. SCRAM is a challenge–response algorithm that avoids this problem.
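The storage scheme described here can be sketched in a few lines of Python; the salt size, iteration count, and choice of PBKDF2 below are illustrative assumptions rather than a prescription from the text. Note that if this stored digest also served as the challenge–response secret, stealing the database would be as damaging as stealing the passwords themselves, which is the weakness SCRAM addresses.

import hashlib, hmac, os

def store(password: str) -> tuple:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                 # only the salt and digest are kept

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

salt, stored = store("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))   # True
print(verify("wrong guess", salt, stored))                    # False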
Examples
Examples of more sophisticated challenge-response algorithms are:
Zero-knowledge password proof and key agreement systems (such as Secure Remote Password (SRP))
Challenge-Handshake Authentication Protocol (CHAP)
CRAM-MD5, OCRA: OATH Challenge-Response Algorithm
Salted Challenge Response Authentication Mechanism (SCRAM)
ssh's challenge-response system based on RSA
Some people consider a CAPTCHA a kind of challenge-response authentication that blocks spambots.
See also
Challenge-handshake authentication protocol
Challenge–response spam filtering
CRAM-MD5
Cryptographic hash function
Cryptographic nonce
Kerberos
Otway–Rees protocol
Needham–Schroeder protocol
Wide Mouth Frog protocol
Password-authenticated key agreement
Salted Challenge Response Authentication Mechanism
SQRL
Distance-bounding protocol
Reflection attack
Replay attack
Man-in-the-middle attack
WebAuthn
References
Authentication methods
Computer access control |
414413 | https://en.wikipedia.org/wiki/Internet%20Printing%20Protocol | Internet Printing Protocol | The Internet Printing Protocol (IPP) is a specialized Internet protocol for communication between client devices (computers, mobile phones, tablets, etc.) and printers (or print servers). It allows clients to submit one or more print jobs to the printer or print server, and perform tasks such as querying the status of a printer, obtaining the status of print jobs, or cancelling individual print jobs.
Like all IP-based protocols, IPP can run locally or over the Internet. Unlike other printing protocols, IPP also supports access control, authentication, and encryption, making it a much more capable and secure printing mechanism than older ones.
IPP is the basis of several printer logo certification programs including AirPrint, IPP Everywhere, and Mopria Alliance, and is supported by over 98% of printers sold today.
History
IPP began as a proposal by Novell for the creation of an Internet printing protocol project in 1996. The result was a draft written by Novell and Xerox called the Lightweight Document Printing Application (LDPA), derived from ECMA-140: Document Printing Application (DPA). At about the same time, Lexmark publicly proposed something called the HyperText Printing Protocol (HTPP), and both HP and Microsoft had started work on new print services for what became Windows 2000. Each of the companies chose to start a common Internet Printing Protocol project in the Printer Working Group (PWG) and negotiated an IPP birds-of-a-feather (or BOF) session with the Application Area Directors in the Internet Engineering Task Force (IETF). The BOF session in December 1996 showed sufficient interest in developing a printing protocol, leading to the creation of the IETF Internet Printing Protocol (ipp) working group, which concluded in 2005.
Work on IPP continues in the PWG Internet Printing Protocol workgroup with the publication of 23 candidate standards, 1 new and 3 updated IETF RFCs, and several registration and best practice documents providing extensions to IPP and support for different services including 3D Printing, scanning, facsimile, cloud-based services, and overall system and resource management.
IPP/1.0 was published as a series of experimental documents (RFC 2565, RFC 2566, RFC 2567, RFC 2568, RFC 2569, and RFC 2639) in 1999.
IPP/1.1 followed as a draft standard in 2000 with support documents in 2001, 2003, and 2015 (RFC 2910, RFC 2911, RFC 3196, RFC 3510, RFC 7472). IPP/1.1 was updated as a proposed standard in January 2017 (RFC 8010, RFC 8011) and then adopted as Internet Standard 92 (STD 92) in June 2018.
IPP 2.0 was published as a PWG Candidate Standard in 2009 (PWG 5100.10-2009) and defined two new IPP versions (2.0 for printers and 2.1 for print servers) with additional conformance requirements beyond IPP 1.1. A subsequent Candidate Standard replaced it in 2011, defining an additional 2.2 version for production printers (PWG 5100.12-2011). This specification was updated and approved as a full PWG Standard (PWG 5100.12-2015) in 2015.
IPP Everywhere was published in 2013 and provides a common baseline for printers to support so-called "driverless" printing from client devices. It builds on IPP and specifies additional rules for interoperability, such as a list of document formats printers need to support. A corresponding self-certification manual and tool suite was published in 2016 allowing printer manufacturers and print server implementors to certify their solutions against the published specification and be listed on the IPP Everywhere printers page maintained by the PWG.
Implementation
IPP is implemented using the Hypertext Transfer Protocol (HTTP) and inherits all of the HTTP streaming and security features. For example, authorization can take place via HTTP's Digest access authentication mechanism, GSSAPI, or any other HTTP authentication methods. Encryption is provided using the TLS protocol-layer, either in the traditional always-on mode used by HTTPS or using the HTTP Upgrade extension to HTTP (RFC 2817). Public key certificates can be used for authentication with TLS. Streaming is supported using HTTP chunking. The document to be printed is usually sent as a data stream.
IPP accommodates various formats for documents to be printed. The PWG defined an image format called PWG Raster specifically for this purpose. Other formats include PDF or JPEG, depending on the capabilities of the destination printer.
IPP uses the traditional client–server model, with clients sending IPP request messages with the MIME media type "application/ipp" in HTTP POST requests to an IPP printer. IPP request messages consist of key–value pairs using a custom binary encoding followed by an "end of attributes" tag and any document data required for the request (such as the document to be printed). The IPP response is sent back to the client in the HTTP POST response, again using the "application/ipp" MIME media type.
Among other things, IPP allows a client to:
query a printer's capabilities (such as supported character sets, media types and document formats)
submit print jobs to a printer
query the status of a printer
query the status of one or more print jobs
cancel previously submitted jobs
IPP uses TCP with port 631 as its well-known port.
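A minimal sketch of the wire format described above: the Python fragment below hand-encodes a Get-Printer-Attributes request (binary key–value attributes followed by an end-of-attributes tag, with no document data) and posts it as application/ipp over port 631. The printer address is a placeholder, and a real printer may require the exact resource path it advertises; this is not a full IPP client.

import struct
import urllib.request

def attr(value_tag: int, name: str, value: str) -> bytes:
    # value-tag (1 byte), name-length (2 bytes), name, value-length (2 bytes), value
    n, v = name.encode(), value.encode()
    return bytes([value_tag]) + struct.pack(">H", len(n)) + n + struct.pack(">H", len(v)) + v

PRINTER_URL = "http://printer.local:631/ipp/print"   # placeholder address

body = (
    b"\x02\x00"              # IPP version 2.0
    b"\x00\x0b"              # operation-id: Get-Printer-Attributes
    b"\x00\x00\x00\x01"      # request-id
    b"\x01"                  # operation-attributes-tag
    + attr(0x47, "attributes-charset", "utf-8")
    + attr(0x48, "attributes-natural-language", "en")
    + attr(0x45, "printer-uri", PRINTER_URL.replace("http://", "ipp://"))
    + b"\x03"                # end-of-attributes-tag
)

request = urllib.request.Request(PRINTER_URL, data=body,
                                 headers={"Content-Type": "application/ipp"})
with urllib.request.urlopen(request) as reply:
    raw = reply.read()
print("status-code:", hex(struct.unpack(">H", raw[2:4])[0]))   # 0x0000 means successful-ok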
Products using the Internet Printing Protocol include CUPS (which is part of Apple macOS and many BSD and Linux distributions and is the reference implementation for most versions of IPP), Novell iPrint, and Microsoft Windows versions starting from MS Windows 2000. Windows XP and Windows Server 2003 offer IPP printing via HTTPS. Windows Vista, Windows 7, Windows Server 2008 and 2008 R2 also support IPP printing over RPC in the "Medium-Low" security zone.
See also
CUPS
Job Definition Format
Line Printer Daemon protocol
T.37 (ITU-T recommendation)
References
Printing protocols
Computer printing |
415882 | https://en.wikipedia.org/wiki/OTR | OTR | OTR or Otr may refer to:
Science and technology
Off-the-Record Messaging, an instant messaging encryption protocol
Oxygen transmission rate, of a substance
Otaara, a plant genus
Entertainment
Old-time radio, a broadcasting era
Off the Record with Michael Landsberg, a sports talk show
Music
On the Rocks (band)
Otierre, an Italian hip hop band
Other uses
Coto 47 Airport, in Costa Rica
Occupational Therapist, Registered, Licensed
On the Run (convenience store), in Australia
Ótr, a mythical dwarf
Over-the-Rhine, a neighborhood in Cincinnati, Ohio, US
Over the road, truck drivers without schedule or route |
415946 | https://en.wikipedia.org/wiki/USS%20Pueblo%20%28AGER-2%29 | USS Pueblo (AGER-2) | USS Pueblo (AGER-2) is a Banner-class environmental research ship, attached to Navy intelligence as a spy ship, which was attacked and captured by North Korean forces on 23 January 1968, in what was later known as the "Pueblo incident" or, alternatively, the "Pueblo crisis".
The seizure of the U.S. Navy ship and her 83 crew members, one of whom was killed in the attack, came less than a week after President Lyndon B. Johnson's State of the Union address to the United States Congress, a week before the start of the Tet Offensive in South Vietnam during the Vietnam War and three days after 31 men of North Korea's KPA Unit 124 had crossed the Korean Demilitarized Zone (DMZ) and killed 26 South Koreans in an attempt to attack the South Korean Blue House (executive mansion) in the capital Seoul. The taking of Pueblo and the abuse and torture of her crew during the subsequent eleven months became a major Cold War incident, raising tensions between western and eastern powers.
North Korea stated that Pueblo deliberately entered its territorial waters near Ryo Island, and that the logbook shows that she intruded several times. However, the United States maintains that the vessel was in international waters at the time of the incident and that any purported evidence supplied by North Korea to support its statements was fabricated. Pueblo, still held by North Korea today, officially remains a commissioned vessel of the United States Navy. Since early 2013, the ship has been moored along the Pothonggang Canal in Pyongyang and used there as a museum ship at the Victorious War Museum. Pueblo is the only ship of the U.S. Navy still on the commissioned roster currently being held captive.
Initial operations
The ship was launched at the Kewaunee Shipbuilding and Engineering Company in Kewaunee, Wisconsin, on 16 April 1944, as the United States Army Freight and Passenger (FP) FP-344. The Army later redesignated the FP vessels as Freight and Supply changing the designation to FS-344. The ship, commissioned at New Orleans on 7 April 1945, served as a Coast Guard–manned Army vessel used for training civilians for the Army. Her first commanding officer was Lt. J. R. Choate, USCGR, succeeded by Lt. J.G. Marvin B. Barker, USCGR, on 12 September 1945. FS-344 was placed out of service in 1954.
In 1964 the Department of Defense became interested in having smaller, less expensive, more flexible and responsive signals intelligence collection vessels than the existing AGTR and T-AG vessels. The mothballed light cargo ships were the most suitable existing DOD ships, and one was converted to USS Banner (AGER-1) in 1964 and began operations in 1965. Banner's mission was to surveil high-frequency electronic emissions with line-of-sight propagation, which required operating closer to shore than previous intelligence gathering missions. Banner was unarmed, but the crew were issued five M1911 pistols and three M1 Garand rifles. Banner was confronted by Soviet Navy ships while operating off the Pacific coast of the Soviet Union. These ships would sometimes display international signal flags meaning: "Heave to or I will fire," but Banner kept steaming with scrupulous attention to International Regulations for Preventing Collisions at Sea. Soviet recognition of possible American reciprocity against Soviet ships on similar missions discouraged attacks.
FS-344 was transferred to the United States Navy on 12 April 1966 and was renamed USS Pueblo (AKL-44) after Pueblo and Pueblo County, Colorado on 18 June. Initially, she was classified as a light cargo ship for basic refitting at Puget Sound Naval Shipyard during 1966. As Pueblo was prepared under a non-secret cover as a light cargo ship, the general crew staffing and training was on this basis, with 44% having never been to sea when first assigned. Installation of signals intelligence equipment, at a cost of $1.5 million, was delayed to 1967 for budgetary reasons; she resumed service as what is colloquially known as a "spy ship" and was redesignated AGER-2 on 13 May 1967. The limited budget for conversion caused disapproval of several improvements requested by the prospective commanding officer, Lloyd Bucher. A requested engine overhaul was denied despite Banner's experience of drifting for two days, unable to communicate, following failure of both engines on patrol. A requested emergency scuttling system was denied, and Bucher was subsequently unable to obtain explosives for demolition charges. Replacement of burn barrels with a fuel-fed incinerator to allow speedy destruction of classified documents was denied. After Bucher's subsequent request to reduce the ship's library of classified publications was similarly denied, he was able to purchase a less capable incinerator using some discretionary funds intended for crew comfort. Following the USS Liberty incident on 8 June, Vice Chief of Naval Operations (VCNO) Horacio Rivero Jr. ordered that no Navy ship would operate without adequate means of defending itself. VCNO staff directed the shipyard to install a 3-inch/50-caliber gun on Pueblo's main deck with provisions for ammunition storage, but Bucher successfully argued against such installation because of reduced ship stability by addition of weight above the main deck. After testing and deficiency rework, Pueblo sailed from the shipyard on 11 September 1967 to San Diego for shake-down training.
When the unarmed Pueblo reached the U.S. Navy base at Yokosuka, Japan, the commander of United States Naval Forces Japan directed the ship to take two M2 Browning .50 caliber machine guns as a substitute for the missing deck gun. In the limited time available for training, ten of the ship's crew fired five rounds each. Bucher opted to mount these guns in exposed positions on the bow and stern to keep them as far as possible from his position on the bridge. These positions eliminated possible use of the ship's superstructure to protect the gunners and conceal the guns and ammunition service lockers. The exposed guns, with no nearby ammunition supply, were disguised under canvas covers which became rigid with frozen spray.
Pueblo incident
On 5 January 1968, Pueblo left Yokosuka in transit to the U.S. naval base at Sasebo, Japan; from there she left on 11 January 1968, headed northward through the Tsushima Strait into the Sea of Japan. She left with specific orders to intercept and conduct surveillance of Soviet Navy activity in the Tsushima Strait and to gather signal and electronic intelligence from North Korea. Mission planners failed to recognize that the absence of similar North Korean missions around the United States would free North Korea from the possibility of retribution in kind which had restrained Soviet response. The declassified SIGAD for the National Security Agency (NSA) Direct Support Unit (DSU) from the Naval Security Group (NSG) on Pueblo during the patrol involved in the incident was USN-467Y. AGER (Auxiliary General Environmental Research) denoted a joint Naval and National Security Agency (NSA) program. Aboard were the ship's crew of five officers and 38 enlisted men, one officer and 37 enlisted men of the NSG, and two civilian oceanographers to provide a cover story.
On 16 January 1968, Pueblo arrived at the 42°N parallel in preparation for the patrol, which was to transit down the North Korean coast from 41°N to 39°N, and then back, without getting closer than the prescribed minimum distance from the North Korean coast, moving farther out to sea at night. This was challenging, as only two sailors had good navigational experience, with the captain later reporting, "I did not have a highly professional group of seamen to do my navigational chores for me."
At 17:30 on 20 January 1968, a North Korean modified SO-1-class Soviet-style submarine chaser passed near Pueblo, which was southeast of Mayang-do at a position of 39°47'N and 128°28.5'E.
In the afternoon of 22 January 1968, the two North Korean fishing trawlers Rice Paddy 1 and Rice Paddy 2 passed close by Pueblo. That day, a North Korean KPA Special Operations Force unit made an assassination attempt at the Blue House executive mansion against South Korean president Park Chung-hee, but the crew of Pueblo was not informed.
According to the American account, the following day, 23 January, Pueblo was approached by a submarine chaser and her nationality was challenged; Pueblo responded by raising the U.S. flag and directing the civilian oceanographers to commence water sampling procedures with their deck winch. The North Korean vessel then ordered Pueblo to stand down or be fired upon. Pueblo attempted to maneuver away, but was considerably slower than the submarine chaser. Several warning shots were fired. Additionally, three torpedo boats appeared on the horizon and then joined in the chase and subsequent attack.
The attackers were soon joined by two Korean People's Air Force MiG-21 fighters. A fourth torpedo boat and a second submarine chaser appeared on the horizon a short time later. The ammunition on Pueblo was stored below decks, and her machine guns were wrapped in cold-weather tarpaulins. The machine guns were unmanned, and no attempt was made to man them. An NSA report quotes the sailing order:
and notes:
U.S. Navy authorities and the crew of Pueblo insist that before the capture, Pueblo was miles outside North Korean territorial waters. North Korea claims that the vessel was well within North Korean territory. Pueblo's mission statement allowed her to approach within a nautical mile (1,852 m) of that limit. However, North Korea claims a sea boundary that extends well beyond the limits recognized under international standards at the time.
The North Korean vessels attempted to board Pueblo, but she was maneuvered to prevent this for over two hours. The submarine chaser then opened fire with a 57 mm cannon and the smaller vessels fired machine guns, injuring Signalman Leach in his left calf and upper right side. Captain Bucher, too, received slight shrapnel wounds, but they were not incapacitating. The crew of Pueblo then began destroying sensitive material. The volume of material on board was so great that it was impossible to destroy it all. An NSA report quotes Lieutenant Steve Harris, the officer in charge of Pueblo's Naval Security Group Command detachment:
and concludes:
Radio contact between Pueblo and the Naval Security Group in Kamiseya, Japan had been ongoing during the incident. As a result, Seventh Fleet command was fully aware of Pueblo's situation. Air cover was promised but never arrived. The Fifth Air Force had no aircraft on strip alert, and estimated a two-to-three-hour delay in launching aircraft. The aircraft carrier USS Enterprise was located south of Pueblo, yet her four F-4B aircraft on alert were not equipped for an air-to-surface engagement. Enterprise's captain estimated that 90 minutes were required to get the converted aircraft into the air.
Eventually the shelling forced Pueblo to stop, signal compliance and follow the North Korean vessels as ordered. Pueblo stopped again immediately outside North Korean waters in an attempt to obtain more time for destroying sensitive material, but was immediately fired upon by the submarine chaser, and a sailor, fireman Duane Hodges, was killed, after which Pueblo resumed following the North Korean vessels. The ship was finally boarded at 05:55 UTC (2:55 pm local) by men from a torpedo boat and the submarine chaser. Crew members had their hands tied and were blindfolded, beaten, and prodded with bayonets. Once Pueblo was in North Korean territorial waters, she was boarded again, this time by high-ranking North Korean officials.
The first official confirmation that the ship was in North Korean hands came five days later, 28 January 1968. Two days earlier, a flight by a CIA A-12 Oxcart aircraft from the Project Black Shield squadron at Kadena, Okinawa, flown by pilot Jack Weeks, made three high-altitude, high-speed flights over North Korea. When the aircraft's films were processed in the United States, they showed Pueblo to be in the Wonsan harbor area surrounded by two North Korean vessels.
There was dissent among government officials in the United States regarding the nation's response to the situation. Congressman Mendel Rivers suggested that President Johnson issue an ultimatum for the return of Pueblo under penalty of nuclear attack, while Senator Gale McGee said that the United States should wait for more information and not make "spasmodic response[s] to aggravating incidents." According to Horace Busby, Special Assistant to President Johnson, the president's "reaction to the hostage taking was to work very hard here to keep down any demands for retaliation or any other attacks upon North Koreans", worried that rhetoric might result in the hostages being killed.
On Wednesday, 24 January 1968, the day following the incident, after extensive cabinet meetings Washington decided that its initial response should be to:
Deploy air and naval forces to the immediate area.
Make reconnaissance flights over the location of the Pueblo.
Call up military reserves and extend terms of military service.
Protest the incident within the framework of the United Nations.
Have President Johnson personally cable Soviet premier Alexei Kosygin.
The Johnson Administration also considered a blockade of North Korean ports, air strikes on military targets and an attack across the Demilitarized Zone separating the two Koreas.
Although American officials at the time assumed that the seizure of Pueblo had been directed by the Soviet Union, declassified Soviet archives later showed that the Soviet leadership was caught by surprise, and became fearful of the possibility of war on the Korean peninsula. Eastern Bloc ambassadors actively cautioned North Korea to exercise caution in the aftermath of the incident. Several documents suggest that the aggressive action may have been an attempt by North Korea to signal a tilt towards the Chinese Communist Party in the aftermath of the Sino-Soviet split in 1966.
Aftermath
Pueblo was taken into port at Wonsan and the crew was moved twice to prisoner-of-war (POW) camps. The crew members reported upon release that they were starved and regularly tortured while in North Korean custody. This treatment turned worse when the North Koreans realized that crewmen were secretly giving them "the finger" in staged propaganda photos.
Commander Lloyd M. Bucher was psychologically tortured, including being put through a mock firing squad in an effort to make him confess. Eventually the North Koreans threatened to execute his men in front of him, and Bucher relented and agreed to "confess to his and the crew's transgression." Bucher wrote the confession since a "confession" by definition needed to be written by the confessor himself. They verified the meaning of what he wrote, but failed to catch the pun when he said "We paean the DPRK [North Korea]. We paean their great leader Kim Il Sung". (Bucher pronounced "paean" as "pee on.")
Negotiations for the release of the crew took place at Panmunjom. At the same time, U.S. officials were concerned with conciliating the South Koreans, who expressed discontent about being left out of the negotiations. Richard A. Ericson, a political counselor for the American embassy in Seoul and operating officer for the Pueblo negotiations, notes in his oral history:
The South Koreans were absolutely furious and suspicious of what we might do. They anticipated that the North Koreans would try to exploit the situation to the ROK's disadvantage in every way possible, and they were rapidly growing distrustful of us and losing faith in their great ally. Of course, we had this other problem of how to ensure that the ROK would not retaliate for the Blue House Raid and to ease their growing feelings of insecurity. They began to realize that the DMZ was porous and they wanted more equipment and aid. So, we were juggling a number of problems.
He also noted how the meetings at Panmunjom were usually unproductive because of the particular negotiating style of the North Koreans:
As one example, we would go up with a proposal of some sort on the release of the crew and they would be sitting there with a card catalog ... If the answer to the particular proposal we presented wasn't in the cards, they would say something that was totally unresponsive and then go off and come back to the next meeting with an answer that was directed to the question. But there was rarely an immediate answer. That happened all through the negotiations. Their negotiators obviously were never empowered to act or speak on the basis of personal judgment or general instructions. They always had to defer a reply and presumably they went over it up in Pyongyang and passed it around and then decided on it. Sometimes we would get totally nonsensical responses if they didn't have something in the card file that corresponded to the proposal at hand.
Ericson and George Newman, the Deputy Chief of Mission in Seoul, wrote a telegram for the State Department in February 1968, predicting how the negotiations would play out:
What we said in effect was this: If you are going to do this thing at Panmunjom, and if your sole objective is to get the crew back, you will be playing into North Korea's hands and the negotiations will follow a clear and inevitable path. You are going to be asked to sign a document that the North Koreans will have drafted. They will brook no changes. It will set forth their point of view and require you to confess to everything they accuse you of ... If you allow them to, they will take as much time as they feel they need to squeeze every damn thing they can get out of this situation in terms of their propaganda goals, and they will try to exploit this situation to drive a wedge between the U.S. and the ROK. Then when they feel they have accomplished all they can, and when we have agreed to sign their document of confession and apology, they will return the crew. They will not return the ship. This is the way it is going to be because this is the way it has always been.
Following an apology, a written admission by the U.S. that Pueblo had been spying, and an assurance that the U.S. would not spy in the future, the North Korean government decided to release the 82 remaining crew members, although the written apology was preceded by an oral statement that it was done only to secure the release. On 23 December 1968, the crew was taken by buses to the Korean Demilitarized Zone (DMZ) border with South Korea and crossing at the "Bridge of No Return", carrying with them the body of Fireman Duane D. Hodges, who was killed during the capture. Exactly 11 months after being taken prisoner, the captain led the long line of crewmen, followed at the end by the executive officer, Lieutenant Ed Murphy, the last man across the bridge.
Bucher and all of the officers and crew subsequently appeared before a Navy Court of Inquiry. A court-martial was recommended for Bucher and the officer in charge of the research department, Lieutenant Steve Harris, for surrendering without a fight and for failing to destroy classified material, but Secretary of the Navy John Chafee rejected the recommendation, stating, "They have suffered enough." Commander Bucher was never found guilty of any indiscretions and continued his Navy career until retirement.
In 1970, Bucher published an autobiographical account of the USS Pueblo incident entitled Bucher: My Story. Bucher died in San Diego on 28 January 2004, at the age of 76. James Kell, a former sailor under his command, suggested that the injuries that Bucher suffered during his time in North Korea contributed to his death.
Along with the Battle of Khe Sanh and the Tet Offensive, the Pueblo incident was a key factor in turning U.S. public opinion against the Vietnam War and influencing Lyndon B. Johnson into withdrawing from the 1968 presidential election.
USS Pueblo is still held by North Korea. In October 1999, she was towed from Wonsan on the east coast, around the Korean Peninsula, to the port of Nampo on the west coast. This required moving the vessel through international waters, and was undertaken just before the visit of U.S. presidential envoy James Kelly to Pyongyang. After the stop at the Nampo shipyard, Pueblo was relocated to Pyongyang and moored on the Taedong River near the spot where the General Sherman incident is believed to have taken place. In late 2012, Pueblo was moved again to the Pothonggang Canal in Pyongyang, next to a new addition to the Fatherland Liberation War Museum.
Today, Pueblo remains the second-oldest commissioned ship in the U.S. Navy, behind USS Constitution ("Old Ironsides"). Pueblo is one of only a few American ships to have been captured since the First Barbary War.
Breach of U.S. communications security
Reverse engineering of communications devices on Pueblo allowed the North Koreans to share knowledge with the Soviet Union that led to the replication of those communications devices. This allowed the two nations access to the US Navy's communication systems until the US Navy revised those systems. The seizure of Pueblo followed soon after US Navy warrant officer John Anthony Walker introduced himself to Soviet authorities, setting up the Walker spy ring. It has been argued that the seizure of Pueblo was executed specifically to capture the encryption devices aboard.
Without them, it was difficult for the Soviets to make full use of Walker's information (Laura J. Heath, Analysis of the Systemic Security Weaknesses of the U.S. Navy Fleet Broadcasting System, 1967–1974, as Exploited by CWO John Walker, U.S. Army Command and General Staff College master's thesis, 2005). Mitchell Lerner and Jong-Dae Shin argue that Soviet-bloc Romanian dossiers demonstrate that the Soviets had no knowledge of the capture of the ship and were taken by surprise when it happened.
After debriefing the released crew, the U.S. prepared a "Cryptographic Damage Assessment" that was declassified in late 2006. The report concluded that, while the crew made a diligent effort to destroy sensitive material, most of them were not familiar with cryptographic equipment and publications, had not received training in their proper destruction, and that their efforts were not sufficient to prevent the North Koreans from recovering most of the sensitive material. The crew itself thought the North Koreans would be able to rebuild much of the equipment.
Cryptographic equipment on board at the time of capture included "one KL-47 for off-line encryption, two KW-7s for on-line encryption, three KWR-37s for receiving the Navy Operational Intelligence Broadcast, and four KG-14s which are used in conjunction with the KW-37 for receiving the Fleet Broadcasts." Additional tactical systems and one-time pads were captured, but they were considered of little significance since most messages sent using them would be of value for only a short time.
The ship's cryptographic personnel were subject to intense interrogation by what they felt were highly knowledgeable electronics experts. When crew members attempted to withhold details, they were later confronted with pages from captured manuals and told to correct their earlier accounts. The report concluded that the information gained from the interrogations saved the North Koreans three to six months of effort, but that they would have eventually understood everything from the captured equipment and accompanying technical manuals alone. The crew members were also asked about many U.S. cryptographic systems that were not on board the Pueblo, but only supplied superficial information.
The Pueblo carried key lists for January, February and March 1968, but immediately after the Pueblo was captured, instructions were sent to other holders of those keys not to use them, so damage was limited. However it was discovered in the debriefing that the Pueblo had onboard superseded key lists for November and December 1967 which should have been destroyed by January 15, well before the Pueblo arrived on station, according to standing orders. The report considered the capture of the superseded keys for November and December the most damaging cryptographic loss. The capture of these keys likely allowed North Korea and its allies to read more than 117,000 classified messages sent during those months. The North Koreans would also have gained a thorough knowledge of the workings of the captured systems but that would only have been of use if additional key material was compromised in the future. The existence of the Walker spy ring was, of course, not known at the time of the report.
The report noted that "the North Koreans did not display any of the captured cryptographic material to the crew, except for some equipment diagrams, or otherwise publicize the material for propaganda purposes. When contrasted with the international publicity given to the capture of other highly classified Special Intelligence documents, the fact that this material was not displayed or publicized would indicate that they thoroughly understood its significance and the importance of concealing from the United States the details of the information they had acquired."
In the communist camp
Documents released from National Archives of Romania suggest it was the Chinese rather than the Soviets who actively encouraged the reopening of hostilities in Korea during 1968, promising North Korea vast material support should hostilities in Korea resume. Together with Blue House Raid, the Pueblo incident turned out to be part of an increasing divergence between the Soviet leadership and North Korea. Fostering a resumption of hostilities in Korea, allegedly, was seen in Beijing as a way to mend relations between North Korea and China, and pull North Korea back in the Chinese sphere of influence in the context of the Sino-Soviet split. After the (then secret) diplomatic efforts of the Soviets to have the American crew released fell on deaf ears in Pyongyang, Leonid Brezhnev publicly denounced North Korea's actions at the 8th plenary session of the 23rd Congress of the Communist Party of the Soviet Union. In contrast, the Chinese (state controlled) press published declarations supportive of North Korea's actions in the Pueblo incident.
Furthermore, Soviet archives reveal that the Soviet leadership was particularly displeased that North Korean leader Kim Il-sung had contradicted the assurances he previously gave Moscow that he would avoid a military escalation in Korea. Previously secret documents suggest the Soviets were surprised by the Pueblo incident, first learning of it in the press. The same documents reveal that the North Koreans also kept the Soviets completely in the dark regarding ongoing negotiations with the Americans for the crew's release, which was another bone of contention. The Soviet reluctance at a reopening of hostilities in Korea was partly motivated by the fact that they had a 1961 treaty with North Korea that obliged them to intervene in case the latter got attacked. Brezhnev however had made it clear in 1966 that just as in the case of the similar treaty they had with China, the Soviets were prepared to ignore it rather than go to all-out war with the United States.
Given that Chinese and North Korean archives surrounding the incident remain secret, Kim Il-sung's intentions cannot be known with certainty. The Soviets revealed however that Kim Il-sung sent a letter to Alexei Kosygin on 31 January 1968 demanding further military and economic aid, which was interpreted by the Soviets as the price they would have to pay to restrain Kim Il-sung's bellicosity. Consequently, Kim Il-sung was invited to Moscow, but he refused to go in person owing to "increased defense preparations" he had to attend to, sending instead his defense minister, Kim Chang-bong, who arrived on 26 February 1968. During a long meeting with Brezhnev, the Soviet leader made it clear that they were not willing to go to war with the United States, but agreed to an increase in subsidies for North Korea, which did happen in subsequent years.
Timeline of negotiations
Major General Pak Chung-kuk represented North Korea (DPRK), and U.S. Navy Rear Admiral John Victor Smith represented the United States until April 1968, when he was replaced by U.S. Army Major General Gilbert H. Woodward. The timeline and quotations are taken from A Matter of Accountability by Trevor Armbrister.
{| class="wikitable"
|-
! Date !! Chief Negotiator !! Event / Position of respective government
|- style="background:lightgrey"
| 23 January 1968 (around noon local time)
|
| Pueblo is intercepted by North Korean forces close to the North Korean port city of Wonsan.
|- style="background:lightgrey"
| rowspan="4" | 24 January 1968 (11am local time)
| rowspan="2" | Admiral Smith
| Protests the "heinous" Blue House raid and subsequently plays a tape of a captured North Korean soldier's "confession" ...
|- style="background:#6699CC"
| I want to tell you, Pak, that the evidence against you North Korean Communists is overwhelming ... I now have one more subject to raise which is also of an extremely serious nature. It concerns the criminal boarding and seizure of ... Pueblo in international waters. It is necessary that your regime do the following: one, return the vessel and crew immediately; two, apologize to the Government of the United States for this illegal action. You are advised that the United States reserves the right to ask for compensation under international law.
|- style="background:lightgrey"
| rowspan = "2" | General Pak
| style="background:#FE6F5E" | Our saying goes, 'A mad dog barks at the moon', ... At the two hundred and sixtieth meeting of this commission held four days ago, I again registered a strong protest with your side against having infiltrated into our coastal waters a number of armed spy boats ... and demanded you immediately stop such criminal acts ... this most overt act of the U.S. imperialist aggressor forces was designed to aggravate tension in Korea and precipitate another war of aggression ...
|- style="background:#F88379"
| The United States must admit that Pueblo entered North Korean waters, must apologize for this intrusion, and must assure the Democratic People's Republic of Korea that such intrusions will never happen again. Admit, Apologize and Assure (the "Three As").
|- style="background:lightgrey"
| 4 March 1968
|
| Names of dead and wounded prisoners are provided by the DPRK.
|- style="background:lightgrey"
| late April 1968
|
| Admiral Smith is replaced by U.S. Army Major General Gilbert H. Woodward as chief negotiator.
|- style="background:lightgrey"
| 8 May 1968
|
| General Pak presents General Woodward with the document by which the United States would admit that Pueblo had entered the DPRK's waters, would apologize for the intrusion and assure the DPRK that such an intrusion would never happen again. It cited the Three As the only basis for a settlement and went on to denounce the United States for a whole host of other "crimes".
|- style="background:lightgrey"
| rowspan = "3" | 29 August 1968
| rowspan = "2" | General Woodward
| A proposal drafted by U.S. Under Secretary of State Nicholas Katzenbach [the "overwrite" strategy] is presented.
|- style="background:#6699CC"
| If I acknowledge receipt of the crew on a document satisfactory to you as well as to us, would you then be prepared to release all of the crew?
|- style="background:lightgrey"
| General Pak
| style="background:#FE6F5E" | Well, we have already told you what you must sign ...
|- style="background:lightgrey"
| 17 September 1968
| General Pak
| style="background:#FE6F5E" | If you will sign our document, something might be worked out ...
|- style="background:lightgrey"
| rowspan = "2" | 30 September 1968
| General Pak
| style="background:#FE6F5E" | If you will sign the document, we will at the same time turn over the men.
|- style="background:lightgrey"
| General Woodward
| style="background:#6699CC" | We do not feel it is just to sign a paper saying we have done something we haven't done. However, in the interest of reuniting the crew with their families, we might consider an 'acknowledge receipt'.
|- style="background:lightgrey"
| rowspan = "3" | 10 October 1968
| rowspan = "2" | General Woodward
| (demonstrating to General Pak the nature of the 'signing')
|- style="background:#6699CC"
| I will write here that I hereby acknowledge receipt of eighty-two men and one corpse ...
|- style="background:lightgrey"
| General Pak
| style="background:#FE6F5E" | You are employing sophistries and petty stratagems to escape responsibility for the crimes which your side committed ...
|- style="background:lightgrey"
| 23 October 1968
|
| The "overwrite" proposal is again set out by General Woodward and General Pak again denounces it as a "petty strategem".
|- style="background:lightgrey"
| rowspan = "2" | 31 October 1968
| General Woodward
| style="background:#6699CC" | If I acknowledge receipt of the crew on a document satisfactory to you as well as to us, would you then be prepared to release all of the crew?
|- style="background:lightgrey"
| General Pak
| style="background:#F88379" | The United States must admit that Pueblo had entered North Korean waters, must apologize for this intrusion, and must assure the Democratic People's Republic of Korea that this will never happen again.
|- style="background:lightgrey"
| rowspan = "3" | 17 December 1968
| General Woodward
| Explains a proposal by State Department Korea chief James Leonard: the "prior refutation" scheme. The United States would agree to sign the document but General Woodward would then verbally denounce it once the prisoners had been released.
|- style="background:lightgrey"
| rowspan = "2" | General Pak
| [following a 50min recess]
|- style="background:#FE6F5E"
| I note that you will sign my document ... we have reached agreement.
|- style="background:lightgrey"
| 23 December 1968
|
| General Woodward on behalf of the United States signs the Three As document and the DPRK at the same time allows Pueblo's prisoners to return to U.S. custody.
|}
Tourist attraction
Pueblo has been a tourist attraction in Pyongyang, North Korea, since being moved to the Taedong River. Pueblo used to be anchored at the spot where it is believed the General Sherman incident took place in 1866. In late November 2012, Pueblo was moved from the Taedong River dock to a casement on the Pothong River next to the new Fatherland War of Liberation Museum. The ship was renovated and opened to tourists, with an accompanying video of the North Korean perspective, in late July 2013. To commemorate the anniversary of the Korean War, the ship was given a new layer of paint. Visitors are allowed to board the ship and see its secret code room and crew artifacts.
Offer to repatriate
During an August 2005 diplomatic session in North Korea, former U.S. Ambassador to South Korea Donald Gregg received verbal indications from high-ranking North Korean officials that the state would be willing to repatriate Pueblo to United States authorities, on the condition that a prominent U.S. government official, such as the Secretary of State, come to Pyongyang for high level talks. While the U.S. government has publicly stated on several occasions that the return of the still commissioned Navy vessel is a priority, there has been no indication that the matter was brought up by U.S. Secretary of State Mike Pompeo on his April 2018 visit.
Lawsuits
Former Pueblo crew members William Thomas Massie, Dunnie Richard Tuck, Donald Raymond McClarren, and Lloyd Bucher sued the North Korean government for the abuse they suffered at its hands during their captivity. North Korea did not respond to the suit. In December 2008, U.S. District Judge Henry H. Kennedy, Jr., in Washington, D.C., awarded the plaintiffs $65 million in damages, describing their ill treatment by North Korea as "extensive and shocking." The plaintiffs, as of October 2009, were attempting to collect the judgement from North Korean assets frozen by the U.S. government.
In February 2021, a US court awarded the survivors and their families $2.3 billion. It is uncertain whether they will be able to collect the money from North Korea.
Awards
Pueblo has earned the following awards:
As for the crew members, they did not receive full recognition for their involvement in the incident until decades later. In 1988, the military announced it would award Prisoner of War medals to those captured in the nation's conflicts. While thousands of American prisoners of war were awarded medals, the crew members of Pueblo did not receive them. Instead, they were classified as "detainees". It was not until Congress passed a law overturning this decision that the medals were awarded; the crew finally received the medals at San Diego in May 1990.
Representation in popular culture
The 1968 Star Trek episode "The Enterprise Incident" was very loosely based upon the Pueblo incident. In the episode written by D. C. Fontana, Captain Kirk takes the Federation starship USS Enterprise, apparently without authorization, into enemy Romulan space.
The Pueblo incident was dramatically depicted in the 1973 ABC Theater televised production Pueblo. Hal Holbrook starred as Captain Lloyd Bucher. The two-hour drama was nominated for three Emmy Awards, winning two.
See also
1969 EC-121 shootdown incident
Korean DMZ Conflict (1966–1969)
List of museums in North Korea
Other conflicts:
Gulf of Tonkin incident
Hainan Island incident
Mayaguez incident
USS Liberty incident
General:
Technical research ship
List of hostage crises
References
Sources
NKIDP: Crisis and Confrontation on the Korean Peninsula: 1968–1969, A Critical Oral History
USS Pueblo Today usspueblo.org
Further reading
Armbrister, Trevor. A Matter of Accountability: The True Story of the Pueblo Affair. Guilford, Conn: Lyon's Press, 2004.
Brandt, Ed. The Last Voyage of USS Pueblo. New York: Norton, 1969.
Bucher, Lloyd M., and Mark Rascovich. Pueblo and Bucher. London: M. Joseph, 1971.
Cheevers, Jack. Act of War: Lyndon Johnson, North Korea, and the Capture of the Spy Ship Pueblo. New York : NAL Caliber, 2013.
Crawford, Don. Pueblo Intrigue; A Journey of Faith. Wheaton, Ill: Tyndale House Publishers, 1969.
Gallery, Daniel V. The Pueblo Incident. Garden City, N.Y.: Doubleday, 1970.
Harris, Stephen R., and James C. Hefley. My Anchor Held. Old Tappan, N.J.: F.H. Revell Co, 1970.
Hyland, John L., and John T. Mason. Reminiscences of Admiral John L. Hyland, USN (Ret.). Annapolis, MD: U.S. Naval Institute, 1989.
Lerner, Mitchell B. The Pueblo Incident: A Spy Ship and the Failure of American Foreign Policy. Lawrence, Kan: University Press of Kansas, 2002.
Liston, Robert A. The Pueblo Surrender: A Covert Action by the National Security Agency. New York: M. Evans, 1988.
Michishita, Narushige. North Korea's Military-Diplomatic Campaigns, 1966–2008. London: Routledge, 2010.
Mobley, Richard A. Flash Point North Korea: The Pueblo and EC-121 Crises. Annapolis, Md: Naval Institute Press, 2003.
Murphy, Edward R., and Curt Gentry. Second in Command; The Uncensored Account of the Capture of the Spy Ship Pueblo. New York: Holt, Rinehart and Winston, 1971.
Newton, Robert E. The Capture of the USS Pueblo and Its Effect on SIGINT Operations. [Fort George G. Meade, Md.]: Center for Cryptologic History, National Security Agency, 1992.
External links
"The Pueblo Incident" briefing and analysis by the US Navy (1968)
YouTube video taken of and aboard the USS Pueblo in Korea
Official website by former USS Pueblo crew members
Complaint and court judgment from crew members' lawsuit against North Korea
Pueblo on Google Maps satellite image
Naval Vessel Register listing
Pueblo – a 1973 TV movie about the Pueblo incident
North Korean International Documentation Project
A North Korean video on the issue
A Navy and Marine Corps report of investigation of the "USS Pueblo seizure" conducted pursuant to chapter II of the Manual of the Judge Advocate General (JAGMAN), published as six PDF files: 1 2 3 4 5 6
Pueblo Court of Inquiry Scrapbook, 1969–1976, MS 237 held by Special Collection & Archives, Nimitz Library at the United States Naval Academy
"USS Pueblo Crisis," Wilson Center Digital Archive
Reactions to Pueblo Incident (1968), Texas Archive of the Moving Image
1944 ships
1968 in North Korea
1968 in the United States
Cold War auxiliary ships of the United States
USS Pueblo
Conflicts in 1968
Ships of the United States Army
Design 381 coastal freighters
Espionage scandals and incidents
International maritime incidents
Maritime incidents in 1968
Military history of North Korea
United States Navy in the 20th century
1960s in the United States
1970s in the United States
Museum ships in North Korea
North Korea–United States relations
Ships built in Kewaunee, Wisconsin
United States Navy Colorado-related ships
Vessels captured from the United States Navy
Tourist attractions in Pyongyang
History of cryptography
National Security Agency
Signals intelligence
1968 in military history
Banner-class environmental research ships |
416776 | https://en.wikipedia.org/wiki/Timeline%20of%20algorithms | Timeline of algorithms | The following timeline of algorithms outlines the development of algorithms (mainly "mathematical recipes") since their inception.
Antiquity and Medieval Period
Before – writing about "recipes" (on cooking, rituals, agriculture and other themes)
c. 1700–2000 BC – Egyptians develop earliest known algorithms for multiplying two numbers
c. 1600 BC – Babylonians develop earliest known algorithms for factorization and finding square roots
c. 300 BC – Euclid's algorithm
c. 200 BC – the Sieve of Eratosthenes (both of these are sketched in modern code at the end of this section)
263 AD – Gaussian elimination described by Liu Hui
628 – Chakravala method described by Brahmagupta
c. 820 – Al-Khawarizmi described algorithms for solving linear equations and quadratic equations in his Algebra; the word algorithm comes from his name
825 – Al-Khawarizmi described the algorism, algorithms for using the Hindu–Arabic numeral system, in his treatise On the Calculation with Hindu Numerals, which was translated into Latin as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name gave rise to the word algorithm (Latin algorithmus) with a meaning "calculation method"
c. 850 – cryptanalysis and frequency analysis algorithms developed by Al-Kindi (Alkindus) in A Manuscript on Deciphering Cryptographic Messages, which contains algorithms on breaking encryptions and ciphers
c. 1025 – Ibn al-Haytham (Alhazen), was the first mathematician to derive the formula for the sum of the fourth powers, and in turn, he develops an algorithm for determining the general formula for the sum of any integral powers, which was fundamental to the development of integral calculus
c. 1400 – Ahmad al-Qalqashandi gives a list of ciphers in his Subh al-a'sha which include both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter; he also gives an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which can not occur together in one word
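The two classical entries flagged above remain easy to state in a modern programming language. The following Python sketch is purely illustrative — the function names are choices made for this example — and renders Euclid's algorithm and the Sieve of Eratosthenes in contemporary notation rather than reconstructing any historical formulation.

def euclid_gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
    # until the remainder is zero; the last non-zero value is the gcd.
    while b != 0:
        a, b = b, a % b
    return a

def sieve_of_eratosthenes(limit):
    # Sieve of Eratosthenes: starting from 2, mark the multiples of each prime;
    # the numbers left unmarked are exactly the primes up to the limit.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(euclid_gcd(252, 198))          # 18
print(sieve_of_eratosthenes(30))     # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]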
Before 1940
1540 – Lodovico Ferrari discovered a method to find the roots of a quartic polynomial
1545 – Gerolamo Cardano published Cardano's method for finding the roots of a cubic polynomial
1614 – John Napier develops method for performing calculations using logarithms
1671 – Newton–Raphson method developed by Isaac Newton
1690 – Newton–Raphson method independently developed by Joseph Raphson
1706 – John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places
1789 – Jurij Vega improves Machin's formula and computes π to 140 decimal places,
1805 – FFT-like algorithm known by Carl Friedrich Gauss
1842 – Ada Lovelace writes the first algorithm for a computing engine
1903 – A fast Fourier transform algorithm presented by Carle David Tolmé Runge
1926 – Borůvka's algorithm
1926 – Primary decomposition algorithm presented by Grete Hermann
1927 – Hartree–Fock method developed for simulating a quantum many-body system in a stationary state.
1934 – Delaunay triangulation developed by Boris Delaunay
1936 – Turing machine, an abstract machine developed by Alan Turing, with others developed the modern notion of algorithm.
1940s
1942 – A fast Fourier transform algorithm developed by G.C. Danielson and Cornelius Lanczos
1945 – Merge sort developed by John von Neumann
1947 – Simplex algorithm developed by George Dantzig
1950s
1952 – Huffman coding developed by David A. Huffman
1953 – Simulated annealing introduced by Nicholas Metropolis
1954 – Radix sort computer algorithm developed by Harold H. Seward
1964 – Box–Muller transform for fast generation of normally distributed numbers published by George Edward Pelham Box and Mervin Edgar Muller; independently pre-discovered by Raymond E. A. C. Paley and Norbert Wiener in 1934 (a code sketch appears at the end of this section)
1956 – Kruskal's algorithm developed by Joseph Kruskal
1956 – Ford–Fulkerson algorithm developed and published by R. Ford Jr. and D. R. Fulkerson
1957 – Prim's algorithm developed by Robert Prim
1957 – Bellman–Ford algorithm developed by Richard E. Bellman and L. R. Ford, Jr.
1959 – Dijkstra's algorithm developed by Edsger Dijkstra
1959 – Shell sort developed by Donald L. Shell
1959 – De Casteljau's algorithm developed by Paul de Casteljau
1959 – QR factorization algorithm developed independently by John G.F. Francis and Vera Kublanovskaya
1959 – Rabin–Scott powerset construction for converting NFA into DFA published by Michael O. Rabin and Dana Scott
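The Box–Muller transform listed above is compact enough to show in full. The sketch below is an illustrative Python rendering of its basic, non-polar form; the function name and the small sampling check are choices made for this example, not part of the original publication.

import math
import random

def box_muller():
    # Basic Box–Muller transform: two independent uniform samples are mapped
    # to two independent standard normal samples.
    u1 = random.random() or 1e-12  # guard against log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return z0, z1

samples = [z for _ in range(5000) for z in box_muller()]
print(round(sum(samples) / len(samples), 2))  # sample mean, close to 0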
1960s
1960 – Karatsuba multiplication
1961 – CRC (Cyclic redundancy check) invented by W. Wesley Peterson
1962 – AVL trees
1962 – Quicksort developed by C. A. R. Hoare
1962 – Bresenham's line algorithm developed by Jack E. Bresenham
1962 – Gale–Shapley 'stable-marriage' algorithm developed by David Gale and Lloyd Shapley
1964 – Heapsort developed by J. W. J. Williams
1964 – multigrid methods first proposed by R. P. Fedorenko
1965 – Cooley–Tukey algorithm rediscovered by James Cooley and John Tukey
1965 – Levenshtein distance developed by Vladimir Levenshtein
1965 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Tadao Kasami
1965 – Buchberger's algorithm for computing Gröbner bases developed by Bruno Buchberger
1965 – LR parsers invented by Donald Knuth
1966 – Dantzig algorithm for shortest path in a graph with negative edges
1967 – Viterbi algorithm proposed by Andrew Viterbi
1967 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Daniel H. Younger
1968 – A* graph search algorithm described by Peter Hart, Nils Nilsson, and Bertram Raphael
1968 – Risch algorithm for indefinite integration developed by Robert Henry Risch
1969 – Strassen algorithm for matrix multiplication developed by Volker Strassen
1970s
1970 – Dinic's algorithm for computing maximum flow in a flow network by Yefim (Chaim) A. Dinitz
1970 – Knuth–Bendix completion algorithm developed by Donald Knuth and Peter B. Bendix
1970 – BFGS method of the quasi-Newton class
1970 – Needleman–Wunsch algorithm published by Saul B. Needleman and Christian D. Wunsch
1972 – Edmonds–Karp algorithm published by Jack Edmonds and Richard Karp, essentially identical to Dinic's algorithm from 1970
1972 – Graham scan developed by Ronald Graham
1972 – Red–black trees and B-trees discovered
1973 – RSA encryption algorithm discovered by Clifford Cocks
1973 – Jarvis march algorithm developed by R. A. Jarvis
1973 – Hopcroft–Karp algorithm developed by John Hopcroft and Richard Karp
1974 – Pollard's p − 1 algorithm developed by John Pollard
1974 – Quadtree developed by Raphael Finkel and J.L. Bentley
1975 – Genetic algorithms popularized by John Holland
1975 – Pollard's rho algorithm developed by John Pollard
1975 – Aho–Corasick string matching algorithm developed by Alfred V. Aho and Margaret J. Corasick
1975 – Cylindrical algebraic decomposition developed by George E. Collins
1976 – Salamin–Brent algorithm independently discovered by Eugene Salamin and Richard Brent
1976 – Knuth–Morris–Pratt algorithm developed by Donald Knuth and Vaughan Pratt and independently by J. H. Morris
1977 – Boyer–Moore string search algorithm for searching the occurrence of a string into another string.
1977 – RSA encryption algorithm rediscovered by Ron Rivest, Adi Shamir, and Len Adleman
1977 – LZ77 algorithm developed by Abraham Lempel and Jacob Ziv
1977 – multigrid methods developed independently by Achi Brandt and Wolfgang Hackbusch
1978 – LZ78 algorithm developed from LZ77 by Abraham Lempel and Jacob Ziv
1978 – Bruun's algorithm proposed for powers of two by Georg Bruun
1979 – Khachiyan's ellipsoid method developed by Leonid Khachiyan
1979 – ID3 decision tree algorithm developed by Ross Quinlan
1980s
1980 – Brent's algorithm for cycle detection developed by Richard P. Brent
1981 – Quadratic sieve developed by Carl Pomerance
1981 – Smith–Waterman algorithm developed by Temple F. Smith and Michael S. Waterman
1983 – Simulated annealing developed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi
1983 – Classification and regression tree (CART) algorithm developed by Leo Breiman, et al.
1984 – LZW algorithm developed from LZ78 by Terry Welch
1984 – Karmarkar's interior-point algorithm developed by Narendra Karmarkar
1984 – ACORN PRNG discovered by Roy Wikramaratna and used privately
1985 – Simulated annealing independently developed by V. Cerny
1985 – Car–Parrinello molecular dynamics developed by Roberto Car and Michele Parrinello
1985 – Splay trees discovered by Sleator and Tarjan
1986 – Blum Blum Shub proposed by L. Blum, M. Blum, and M. Shub
1986 – Push relabel maximum flow algorithm by Andrew Goldberg and Robert Tarjan
1986 – Barnes–Hut tree method developed by Josh Barnes and Piet Hut for fast approximate simulation of n-body problems
1987 – Fast multipole method developed by Leslie Greengard and Vladimir Rokhlin
1988 – Special number field sieve developed by John Pollard
1989 – ACORN PRNG published by Roy Wikramaratna
1989 – Paxos protocol developed by Leslie Lamport
1990s
1990 – General number field sieve developed from SNFS by Carl Pomerance, Joe Buhler, Hendrik Lenstra, and Leonard Adleman
1990 – Coppersmith–Winograd algorithm developed by Don Coppersmith and Shmuel Winograd
1990 – BLAST algorithm developed by Stephen Altschul, Warren Gish, Webb Miller, Eugene Myers, and David J. Lipman from National Institutes of Health
1991 – Wait-free synchronization developed by Maurice Herlihy
1992 – Deutsch–Jozsa algorithm proposed by D. Deutsch and Richard Jozsa
1992 – C4.5 algorithm, a descendant of ID3 decision tree algorithm, was developed by Ross Quinlan
1993 – Apriori algorithm developed by Rakesh Agrawal and Ramakrishnan Srikant
1993 – Karger's algorithm to compute the minimum cut of a connected graph by David Karger
1994 – Shor's algorithm developed by Peter Shor
1994 – Burrows–Wheeler transform developed by Michael Burrows and David Wheeler
1994 – Bootstrap aggregating (bagging) developed by Leo Breiman
1995 – AdaBoost algorithm, the first practical boosting algorithm, was introduced by Yoav Freund and Robert Schapire
1995 – soft-margin support vector machine algorithm was published by Vladimir Vapnik and Corinna Cortes. It adds a soft-margin idea to the 1992 algorithm by Boser, Guyon and Vapnik, and is the algorithm that people usually refer to when saying SVM
1995 – Ukkonen's algorithm for construction of suffix trees
1996 – Bruun's algorithm generalized to arbitrary even composite sizes by H. Murakami
1996 – Grover's algorithm developed by Lov K. Grover
1996 – RIPEMD-160 developed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel
1997 – Mersenne Twister, a pseudorandom number generator, developed by Makoto Matsumoto and Takuji Nishimura
1998 – PageRank algorithm was published by Larry Page
1998 – rsync algorithm developed by Andrew Tridgell
1999 – gradient boosting algorithm developed by Jerome H. Friedman
1999 – Yarrow algorithm designed by Bruce Schneier, John Kelsey, and Niels Ferguson
2000s
2000 – Hyperlink-induced topic search, a hyperlink analysis algorithm, developed by Jon Kleinberg
2001 – Lempel–Ziv–Markov chain algorithm for compression developed by Igor Pavlov
2001 – Viola–Jones algorithm for real-time face detection was developed by Paul Viola and Michael Jones.
2001 – DHT (Distributed hash table) is invented by multiple people from academia and application systems
2001 – BitTorrent, the first fully decentralized peer-to-peer file distribution system, is published
2001 – LOBPCG Locally Optimal Block Preconditioned Conjugate Gradient method finding extreme eigenvalues of symmetric eigenvalue problems by Andrew Knyazev
2002 – AKS primality test developed by Manindra Agrawal, Neeraj Kayal and Nitin Saxena
2002 – Girvan–Newman algorithm to detect communities in complex systems
2002 – Packrat parser developed for generating a parser that parses PEG (Parsing expression grammar) in linear time parsing developed by Bryan Ford
2009 – Bitcoin, the first trustless decentralized cryptocurrency system, is published
2010s
2013 – Raft consensus protocol published by Diego Ongaro and John Ousterhout
2015 – YOLO (“You Only Look Once”) is an effective real-time object recognition algorithm, first described by Joseph Redmon et al.
References
Algorithms
Algorithms
Algorithms |
420166 | https://en.wikipedia.org/wiki/Ed%20Davey | Ed Davey | Sir Edward Jonathan Davey (born 25 December 1965) is a British politician who has served as Leader of the Liberal Democrats since 2020. He served in the Cameron–Clegg coalition as Secretary of State for Energy and Climate Change from 2012 to 2015 and as Deputy Leader to Jo Swinson from 2019 to 2020. An "Orange Book" liberal, he has been the Member of Parliament (MP) for Kingston and Surbiton since 2017, and from 1997 to 2015.
Davey was born in Mansfield, Nottinghamshire, where he attended Nottingham High School. He then went on to study at Jesus College, Oxford, and Birkbeck, University of London. He was an economics researcher and financial analyst before being elected to the House of Commons. He served as a Liberal Democrat Spokesperson to Charles Kennedy, Menzies Campbell and Nick Clegg from 2005 to 2010, in various portfolios including Education and Skills, Trade and Industry, and Foreign and Commonwealth Affairs.
In 2010, after the Liberal Democrats entered into a coalition government with the Conservative Party, Davey served as Parliamentary Under-Secretary of State for Employment Relations, Consumer and Postal Affairs from 2010 to 2012, and in David Cameron’s Cabinet as Secretary of State for Energy and Climate Change from 2012 to 2015, following Chris Huhne's resignation. Davey focused on increasing competition in the energy market by removing barriers to entry for smaller companies, and streamlining the customer switching process. He also approved the construction of Hinkley Point C nuclear power station.
He lost his seat in the 2015 general election, but regained it in the snap election held two years later. He served as the Liberal Democrat Home Affairs spokesperson from 2017 to 2019. In July 2019, after the retirement of Vince Cable, Davey unsuccessfully ran against Jo Swinson in a leadership election. He was later appointed Liberal Democrat Treasury spokesperson and elected unopposed as Deputy Leader of the Liberal Democrats. After Swinson lost her seat at the 2019 general election, Davey served as Acting Leader alongside the Liberal Democrat Presidents Baroness Sal Brinton and Mark Pack from December 2019 to August 2020. Davey stood in the 2020 leadership election, where he defeated Layla Moran with 63.5% of the vote.
Early life
Davey was born in Mansfield, Nottinghamshire on Christmas Day 1965. His father John died when Davey was four years old, and his mother Nina Davey (née Stanbrook) died 11 years later, after which he was brought up by his maternal grandparents. After attending the private Nottingham High School where Davey was head boy in 1984, he attended Jesus College, Oxford, where he was awarded a first class BA degree in Philosophy, Politics and Economics in 1988. He was JCR President.
During his adolescence, he worked at Pork Farms pork pie factory and at Boots. In 1989, he became an economics researcher for the Liberal Democrats, principally to Alan Beith, the party's then-Treasury spokesman, whilst studying at Birkbeck College, London, for a master's degree (MSc) in Economics. He was closely involved in the development of Liberal Democrat policies such as an additional penny on income tax to fund education, and central bank independence, for the 1992 general election. From 1993 to 1997, he worked in business forecasting and market analysis for management consultancy firm Omega Partners.
Parliamentary career (1997–2015)
Edward Davey was elected to the House of Commons, at his first attempt, in the 1997 general election, where he defeated Richard Tracey, the sitting Conservative MP for the former constituency of Surbiton, with a majority of just 56 votes, and remained the seat's MP for 18 years. In his maiden speech, on 6 June 1997, he gave his support for the setting up of the London Assembly, but was against the idea of a directly elected Mayor of London; he also spoke of the effects governmental cuts were having on education delivery in the Royal Borough of Kingston upon Thames.
In 1998 he was the primary sponsor of an Early Day Motion supporting the repeal of the Greenwich Judgement, which prevents Local Authorities from giving their own residents priority access to school places.
In 2001, he opposed government proposals for restrictions on gambling machines, which he described as a “silly bit of nanny state politics.”
In January 2003, Davey publicly backed local constituent and NHS whistleblower Ian Perkin, who alleged he had been sacked from his director of finance role for exposing statistics manipulation at St George's NHS healthcare trust. Davey condemned the NHS bureaucracy as “Stalinist” and called for an inquiry into Perkin's case, while personally meeting trust executives to discuss the case on behalf of Perkin.
In February 2003 Davey introduced the clause which repealed the prohibition of "promotion of homosexuality" under Section 28 of the Local Government Act 1988. The legislation was successfully repealed in March. He was one of the contributors to the Orange Book (2004).
In 2006 Davey was one of eight Liberal Democrat MPs, including Jeremy Browne and Mark Oaten, who opposed a total ban on smoking in clubs and pubs. He called the ban “a bit too nanny state”.
In an article for the Financial Times in 2007, Davey and LSE economist Tim Leunig proposed replacing the current system of local council planning permissions with community land auctions through sealed bids. They suggested that councils could take in tax the difference between the land owner's asking price and the highest bidder's offer, claiming this would stimulate development and the revenue then used to lower other taxation.
Lib Dem spokesperson
In Parliament, Davey was given a job immediately by Paddy (later Lord) Ashdown and became the party's spokesman on Treasury Affairs, adding the post of Whip in 1998, and a third job to hold as the spokesman on London from 2000.
Davey was re-elected in the 2001 general election with an increased majority over former Conservative MP, David Shaw. He joined the Liberal Democrat frontbench under Leader Charles Kennedy in the same year by becoming Liberal Democrat spokesperson for Treasury matters. In 2002, he became the Liberal Democrat spokesperson for the Office of the Deputy Prime Minister. He was appointed Liberal Democrat spokesperson for Education and Skills in 2005, before becoming Liberal Democrats spokesperson for Trade and Industry in March 2006. In December 2006, he succeeded Norman Lamb as Chief of Staff to Menzies Campbell, the new party leader. Davey was Chair of the party's Campaigns and Communications Committee. Following Nick Clegg's election as Leader of the Liberal Democrats, Davey was awarded the Foreign Affairs brief, and continued to retain his chairmanship of the party's Campaigns and Communications Committee.
On 26 February 2008, Davey was suspended from parliament for the day for ignoring a warning from the Deputy Speaker. He was protesting about the exclusion by the Speaker of a Liberal Democrat motion to debate and vote on whether the UK should have a referendum on staying in the EU.
At the 2009 Liberal Democrat conference, Davey caused controversy calling for dialogue with the Taliban, through declaring that it was 'time for tea with the Taliban', a comment echoed by Malala Yousafzai four years later to the BBC.
Ministerial career (2010–2015)
Parliamentary Under-Secretary of State for Business (2010–2012)
Following the Conservative – Liberal Democrat Coalition Agreement, after the 2010 general election, Davey was appointed Parliamentary Under Secretary of State in the Department for Business, Innovation and Skills with responsibility for Employment Relations, Consumer and Postal Affairs. In addition, he held responsibilities for trade as a Minister for Trade Policy. As a Parliamentary Under Secretary, Davey led the establishment of an unofficial 'like-minded group for growth' ginger group within the European Union, convening several economically liberal European governments behind an agenda of deregulation, free trade, liberalisation of services and a digital single market. He was involved in the provisional application phase of the Free Trade Agreement between the EU and South Korea.
In January 2011 he faced protests by postal workers in his Kingston and Surbiton constituency for his role in the privatisation of Royal Mail. Also in 2011, Davey announced several reforms to the labour market, mainly aimed at improving labour market flexibility. These reforms included cuts to red tape and easing dismissal laws, and were accompanied by reviews from the Institute of Economic Affairs into compensation payments and the TUPE. Davey also announced that the government would abolish the default retirement age.
Secretary of State for Energy and Climate Change (2012–2015)
On 3 February 2012, following the resignation of Secretary of State for Energy and Climate Change Chris Huhne due to his prosecution for perverting the course of justice, Davey was appointed Energy Secretary, and appointed to the Privy Council of the United Kingdom on 8 February. As Secretary of State Davey also became a member of the National Security Council. In late 2012, the Daily Mail published an article questioning Davey's loyalty to Deputy Prime Minister and Leader of the Liberal Democrats Nick Clegg. Responding in an interview, Davey rejected the claims of the article, saying instead he thought Clegg was "the best leader" the Liberal Democrats had ever had and that he personally was a member of Clegg's "Praetorian Guard".
In 2013, Davey set up the Green Growth Group, bringing together environmental and climate ministers from across the European Union in an effort to promote growth, investment in renewable and nuclear energy, liberalisation of the European energy market, a global carbon market, trade in energy, carbon capture technology, energy efficiency, and competition. Domestically, Davey focused on increasing competition in the energy market by removing barriers to entry for smaller companies, and streamlining the customer switching process, declaring in 2013 that "competition works". He also approved the construction of Hinkley Point C nuclear power station. Abroad, Davey promoted investment in the British energy sector by foreign companies including from Japan, South Korea, and China, making significant diplomatic trips to the latter two countries in order to highlight investment opportunities.
In October 2013, during a BBC Newsnight segment on energy bills, in a controversy that was termed by some media as "Jumpergate", Davey was asked by BBC presenter Jeremy Paxman whether or not he wore a jumper (to stay warm) at home, to which Davey replied that he did but stressed that competition and energy efficiency were the solutions to lowering energy bills. The following day, various media outlets reported that Davey had advised for people to wear jumpers at home to save on energy bills, although he had not. The controversy then spread when Prime Minister David Cameron's official spokesman told a reporter that people may wish to "consider" advice by charities to wrap up warmly, leading to media outlets reporting that Number 10 was also suggesting wearing jumpers to cut energy bills, with the supposed suggestion being seized upon by the opposition Labour Party. Number 10 later issued a statement rebutting the media reports. In April 2014, Davey called for the G7 to begin reduction of dependency on Russian energy following the 2014 Ukrainian revolution and commencement of the War in Donbass. Davey argued the benefits of investment in onshore wind energy from companies such as Siemens was a key part of the push to reduce dependence on Russian energy, while "more diversified supplies of gas" including from the US and domestic shale gas would also help. In May 2014 at a meeting in Rome, G7 energy ministers including Davey agreed formally to a process for reducing dependency on Russian energy; "Putin has crossed a line", Davey declared.
Throughout and after the Cameron–Clegg coalition, Davey's ministerial career came under scrutiny from political figures and the media. On the right, Conservatives Nigel Lawson and Peter Lilley were critical of Davey's environmental stances, while journalist Christopher Booker, who does not accept the scientific consensus on anthropogenic global warming, questioned his policy on wind turbines, and he was lampooned by The Telegraph sketch writer Michael Deacon. He was also criticised by left-wing figures such as Green MP for Brighton Pavilion Caroline Lucas over his support of fracking, and by the Leader of the Opposition and Leader of the Labour Party Ed Miliband for Davey's warning that Labour's price control policy would cause blackouts. Luxembourgish MEP and environmentalist Claude Turmes alleged in his 2017 book Energy Transformation that Davey's Green Growth Group was actually a front for British nuclear interests. Conversely, Davey's promotion to the role of Energy Secretary was hailed by The Economist, which viewed him favourably as a "pragmatic" and "free-market liberal". In "The Liberal Democrats and supply-side economics", published in an issue of the Institute of Economic Affairs' Economic Affairs journal, Davey was identified as the Liberal Democrat who had achieved the most in terms of supply-side reforms. Conservative MP and former Chancellor of the Duchy of Lancaster and Minister of State for Government Policy Sir Oliver Letwin credited Davey and his aforementioned "like-minded" group of economically liberal governments as having helped to curb regulatory enthusiasm within the European Union.
Leading up to the 2015 general election, Davey was viewed by various sources as a potential successor to Liberal Democrat leader Nick Clegg. Political commentator Gary Gibbon speculated that due to Davey's association with the Orange Book wing of the party, the tenuousness of Danny Alexander's parliamentary seat, and David Laws' unwillingness, the role of "heir" would naturally fall to Davey.
Parliamentary career (2017–present)
2015 and 2017 elections
At the 2015 general election, Davey was defeated by Conservative candidate James Berry by 2,834 votes after the Liberal Democrat vote fell by over 15% in Kingston & Surbiton. Davey regained the seat for the Liberal Democrats at the 2017 general election, with a majority of 4,124 votes over Berry.
Return to Parliament
Upon returning to Parliament in 2017, Davey was considered a possible candidate for the Liberal Democrat leadership election following the resignation of Tim Farron. However, he ruled out standing over family concerns, but called on the Liberal Democrats to be "the party of reform" and "super-ambitious – just like radical centrists in Canada, France and the Netherlands". Davey was then the Liberal Democrat Spokesperson for HM Treasury, having previously served as Liberal Democrat Spokesperson for the Home Office from 2017 to 2019.
He is the Chair of the All-Party Britain-Republic of Korea Parliamentary Group (APPG). He is also the Chair of the APPG on Charity Retail, the Vice Chair of the APPG for the Ahmadiyya Muslim Community, and the Vice Chair of the APPG on Land Value Capture.
Leader of the Liberal Democrats (2019–present)
2019 leadership bid
Following the 2019 European Parliament election, Liberal Democrat leader Sir Vince Cable announced his intention "to hand over a bigger, stronger party" to a new leader, triggering a party leadership contest. Davey announced his candidacy for the role on 30 May, stating his belief that action must be taken in Parliament to prevent a "no deal" Brexit, and highlighting his support for stronger action to limit global warming. Davey lost this race to Jo Swinson, with 36.9% of the vote to Swinson's 63.1%. On 3 September 2019, Davey was elected as Swinson's deputy leader.
2019 general election and acting co-leadership
Following Jo Swinson's resignation as a result of losing her seat in the December 2019 general election, due to his position as deputy leader, Davey became interim co-leader alongside the party president (at first Baroness Sal Brinton, and then Mark Pack).
2020 leadership bid
In June 2020, acting leader Davey launched his bid to become leader saying that his "experience as a carer can help rebuild Britain after coronavirus". He proposed the establishment of a basic income to support carers, and said that the Liberal Democrats should be "the party of social care". Davey ruled out a formal electoral agreement with the Labour Party, but said that he would prioritise defeating the Conservatives, and ruled out working with the Conservatives following the next election. He proposed a plan to reduce carbon emissions from domestic flights to zero by 2030 through investment in research and technology. In a hustings event with Welsh members, he said that the 2021 Senedd election was a priority and he expected success for the Liberal Democrats.
Davey was one of two candidates running for leader in the Liberal Democrats leadership election, competing with Layla Moran. One recurring theme of the leadership campaign was Davey's record in the Conservative and Liberal Democrat coalition government, and the policies that government had enacted. Moran is considered to be more left-wing than Davey and representing a break from the coalition years. Alongside former leader Nick Clegg and many of the Lib Dems who served in the governing Conservative-Lib Dem coalition of 2010–2015, Davey is associated with the party's right-wing Orange Booker branch. The record of the coalition, which caused a decline in popularity of the Lib Dems after 2015, has been defended by Davey.
On 27 August, Davey won the leadership election with 42,756 votes, which translated to 63.5% of total votes. In his victory speech, Davey said that the Liberal Democrats must "wake up and smell the coffee" and "start listening" to ordinary people and those who "don't believe we share their values." He also stressed his experience in the coalition government, and his commitments to tackle climate change. Moran later congratulated Davey on Twitter, saying "I look forward to working with him to campaign for a better future for Britain."
Views
Davey identifies as a liberal politically, telling magazine Total Politics: “I personally think liberalism is the strongest political philosophy in the modern world. Socialism has failed. I think even social democracy, the watered down version which Labour sort of understand depending on which day of the week it is, is not very convincing, and I don’t really understand where the Conservatives are coming from because they have so many philosophies within one party. There’s no philosophy of the modern Conservative Party.” He has said that he believes "in the free market and in competition", and during a parliamentary public bill committee debate in November 2010 argued in defence of privatisation, deregulation, and the private sector against Labour MP Gregg McClymont.
Davey also describes himself as a "strong free-trader", rejecting reciprocity in trade tariffs as "the classic protectionist argument". He believes Britain should be open to foreign investment, except for investment tainted by “smells that you have from Putin." He dismisses worries over foreign ownership and investment in the British economy such as that of Chinese and French companies' involvement in the British energy market. Davey describes himself as "an economist by trade."
He was a supporter of the coalition government, writing in a 2011 column for London newspaper Get West London that the coalition would "restore liberty to the people" and that "Labour's nanny state will be cut back" in reference to the coalition's policies on civil liberties. In 2012, Davey predicted the coalition government would be more pro-European Union than Tony Blair's Labour government, praising Conservative ministers and the then Prime Minister David Cameron for relations they had developed with European counterparts. Retrospectively, Davey said of the coalition in 2017: “I think the coalition government, when history looks at it, will go down as actually a pretty good government.” In 2017, Davey warned against a Conservative Party proposal for fines on large internet companies who fail to remove extremist and terrorist material from their platforms within 24 hours, which he claimed could lead to censorship if companies are forced to rush and pointed to Germany as an example of where this approach has the potential to lead to censorship. He thinks technology giants must not be treated as the "enemy" and accused the Conservatives of declaring an "all-out war" on the internet. Similarly he is critical of Conservative proposals to weaken encryption because, according to Davey, encryption is important for individual security and helping businesses to thrive.
In 2018, after the government's Investigatory Powers Act mass surveillance law was declared to be in breach of EU law, Davey commented that UK surveillance needed a “major overhaul” which puts “our freedoms and civil liberties at its very core” (Davey's party opposes the mass surveillance law and had voted against it). Since the 2000s, Davey has been vocal on the issue of detention without trial, in particular Guantanamo and Bagram, which he believed required transparency and formal investigation of torture allegations. He has opposed indefinite detention for illegal immigrants.
Davey is supportive of market solutions in the conventional energy sector, The Guardian describing him as a 'zealot' for markets. He has been highly critical of price controls such as those proposed by former Labour leader Ed Miliband; he considers them to be detrimental to competition and lowering prices for consumers. He has promoted removal of barriers to entry to encourage new entrants into the energy market; “We began with deregulation. This stimulated a doubling of smaller firms” he wrote of his policy as Energy Secretary in 2014. Additionally he welcomed the rise of consumer switching websites. He has also been in support of trade to import natural gas from countries including the USA and Qatar, and importation of green energy via new interconnectors from Norway and Ireland. He has, however, supported “properly designed and carefully targeted” short-term subsidies for some emerging green energy technologies in order to meet climate change targets.
When cutting green energy subsidies as Energy and Climate Change Secretary, Davey said he “tended to try and marketise the reduction so people were competing for any remaining subsidies” through Contracts for Difference (CfDs). After leaving the office of Energy Secretary in 2015 he explained that he had planned to “eliminate subsidies over the coming years” and had previously stated, “ultimately I don’t want the government—the Secretary of State—to decide what that low carbon mix is . . . I want the markets and technology development and innovation to decide what that mix is.”
He has argued in favour of both nuclear power and shale gas fracking as potential energy sources, and natural gases as transitional fuels, though he has warned that there should not be an over-reliance on them. Davey previously had argued against nuclear power but in 2013 he urged fellow Lib Dem members to support nuclear power, stating, "I've changed my mind because of climate change."
Ed Davey does not support the United Kingdom rejoining the European Union in the short-term, in 2020 stating that the idea that people would want to consider re-joining the EU in two or three years time as "being for the birds". In January 2021 he clarified this position, stating that he is "determined the Liberal Democrats remain a pro-European party committed to the UK being members of the European Union again", adding that his party is "practical" about the matter.
In 2013, Davey supported Operation Shader, and the overthrow of Bashar al-Assad.
Following the death of Sarah Everard, Davey said that "Men have got to change" and suggested that we "educate boys and men to show more respect". In May 2021, alongside celebrities and other public figures, Davey was a signatory to an open letter from Stylist magazine which called on the government to address what it described as an "epidemic of male violence" by funding an "ongoing, high-profile, expert-informed awareness campaign on men’s violence against women and girls".
As a supporter of trans rights, Davey believes that trans women should be given the same rights as cisgender women, which he made clear in a series of interviews on the day that a report into violence against women, commissioned in the wake of the Everard affair, was published.
Davey criticised Boris Johnson after the North Shropshire by-election, where a Lib Dem candidate, Helen Morgan, overturned a Conservative majority of nearly 23,000 to win the seat. Davey said it was a "watershed moment in our politics. Millions of people are fed up with Boris Johnson and his failure to provide leadership throughout the pandemic and last night the voters of North Shropshire spoke for all of them."
Business appointments
Davey took up several business appointments after leaving his role as Secretary of State for Energy and Climate Change in May 2015.
Mongoose Energy appointed Davey as chairman in September 2015.
Davey set up an independent consultancy in September 2015 to provide advice on energy and climate change.
In January 2016 Davey was appointed as a part-time consultant to MHP Communications, the public relations and lobbying firm representing EDF Energy. Davey was criticised by press commentators for the potential conflict of interest between his previous role as Secretary of State for Energy and Climate Change and his role at MHP. As Secretary of State Davey awarded EDF the contract to build a new nuclear plant at Hinkley Point in Somerset.
Davey's appointment as Global Partner and non-Executive director of private equity investor Nord Engine Capital was announced in February 2016.
In July 2016 he became non-paid patron of the Sustainable Futures Foundation, a charity promoting environmental sustainability for the public benefit.
Personal life
In the summer of 2005, Davey married Emily Gasson, who was the Lib Dem candidate for North Dorset at the general election that year. Their first child, a son, was born in December 2007. Their son has speech difficulties, leading to Davey's interest in speech therapy. They live in Surbiton, London; Davey lived there before his election to Parliament in 1997. Emily had the number two position on the Lib Dem London-wide candidate list for the 2016 London Assembly elections, but was not elected. Emily then stood for election as a councillor for the three-seat Norbiton Ward in 2018, as part of the Royal Borough of Kingston Council, and topped the poll with 20% of the vote.
Davey speaks English, French, German, and Spanish.
In December 2021, Davey tested positive for COVID-19.
Honours
In 1995, Davey won a Royal Humane Society bravery award and commendation from the Chief Constable of the British Transport Police for rescuing a woman who had fallen onto the railway line in the face of an oncoming train at Clapham Junction railway station.
In 2001 he was elected a Fellow of the Royal Society of Arts (FRSA).
He was sworn in as a member of Her Majesty's Most Honourable Privy Council on 8 February 2012, giving him the Honorific Prefix "The Right Honourable" for life.
Davey was knighted in the 2016 New Year Honours for 'political and public service'.
Publications
Davey, Edward (2000), Making MPs Work For Our Money: Reforming Parliament's Role In Budget Scrutiny by 2000, Centre for Reform,
Davey, Edward. "Liberalism and localism", Chapter 2 in The Orange Book: Reclaiming Liberalism by David Laws and Paul Marshall (contributions et al.), 2004, Profile Books,
Davey, Edward; Hunter, Rebecca. "People Who Help Us: Member of Parliament", 2004, Cherrytree Books,
See also
Liberal Democrat Federal Conference
Liberal Democrat Frontbench Team
Notes
References
External links
Profile at the Liberal Democrats
Edward Davey MP (BIS archive)
Profile: Edward Davey BBC News profile, 17 October 2007
Debrett's People of Today
1965 births
Alumni of Jesus College, Oxford
Knights Bachelor
Leaders of the Liberal Democrats (UK)
Liberal Democrats (UK) MPs for English constituencies
Living people
Members of the Privy Council of the United Kingdom
Fellows of the Royal Society of Arts
Politicians awarded knighthoods
Politicians from Nottingham
UK MPs 1997–2001
UK MPs 2001–2005
UK MPs 2005–2010
UK MPs 2010–2015
UK MPs 2017–2019
UK MPs 2019–present |
420373 | https://en.wikipedia.org/wiki/%CE%A0-calculus | Π-calculus | In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus. The π-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation.
The π-calculus has few terms and is a small, yet expressive language. Functional programs can be encoded into the π-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the π-calculus, such as the spi calculus and the applied π-calculus, have been successful in reasoning about cryptographic protocols. Beside the original use in describing concurrent systems, the π-calculus has also been used to reason about business processes and molecular biology.
Informal definition
The π-calculus belongs to the family of process calculi, mathematical formalisms for describing and analyzing properties of concurrent computation. In fact, the π-calculus, like the λ-calculus, is so minimal that it does not contain primitives such as numbers, booleans, data structures, variables, functions, or even the usual control flow statements (such as if-then-else, while).
Process constructs
Central to the π-calculus is the notion of name. The simplicity of the calculus lies in the dual role that names play as communication channels and variables.
The process constructs available in the calculus are the following (a precise definition is given in the following section); an illustrative rendering of these constructs in code follows the list.
concurrency, written P | Q, where P and Q are two processes or threads executed concurrently.
communication, where
input prefixing c(x).P is a process waiting for a message that was sent on a communication channel named c before proceeding as P, binding the name received to the name x. Typically, this models either a process expecting a communication from the network or a label c usable only once by a goto c operation.
output prefixing c̄⟨y⟩.P describes that the name y is emitted on channel c before proceeding as P. Typically, this models either sending a message on the network or a goto c operation.
replication, written !P, which may be seen as a process which can always create a new copy of P. Typically, this models either a network service or a label c waiting for any number of goto c operations.
creation of a new name, written (νx)P, which may be seen as a process allocating a new constant x within P. The constants of the π-calculus are defined by their names only and are always communication channels. Creation of a new name in a process is also called restriction.
the nil process, written 0, is a process whose execution is complete and has stopped.
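To make the grammar concrete, here is a minimal sketch of how these six constructs could be represented as a datatype in Python. It is purely illustrative: the class names and field names are choices made for this example, not part of the calculus.

from dataclasses import dataclass

@dataclass
class Nil:            # 0 — the stopped process
    pass

@dataclass
class Par:            # P | Q — parallel composition
    left: object
    right: object

@dataclass
class Input:          # c(x).P — receive a name on channel c and bind it to x
    channel: str
    bound: str
    cont: object

@dataclass
class Output:         # c̄⟨y⟩.P — send the name y on channel c
    channel: str
    message: str
    cont: object

@dataclass
class Replicate:      # !P — unboundedly many copies of P
    body: object

@dataclass
class New:            # (νx)P — create a fresh name x, private to P
    name: str
    body: object

# Example term: (νx)( x̄⟨z⟩.0 | x(y).0 )
example = New("x", Par(Output("x", "z", Nil()), Input("x", "y", Nil())))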
Although the minimalism of the π-calculus prevents us from writing programs in the normal sense, it is easy to extend the calculus. In particular, it is easy to define both control structures such as recursion, loops and sequential composition and datatypes such as first-order functions, truth values, lists and integers. Moreover, extensions of the π-calculus have been proposed which take into account distribution or public-key cryptography. The applied π-calculus due to Abadi and Fournet put these various extensions on a formal footing by extending the π-calculus with arbitrary datatypes.
A small example
Below is a tiny example of a process which consists of three parallel components. The channel name x is only known by the first two components.
(νx)( x̄⟨z⟩.0 | x(y).ȳ⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0
The first two components are able to communicate on the channel x, and the name z becomes bound to y. The next step in the process is therefore
(νx)( 0 | z̄⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0
Note that the remaining y (in the inner x(y).0) is not affected because it is bound in an inner scope.
The second and third parallel components can now communicate on the channel name z, and the name x becomes bound to v. The next step in the process is now
(νx)( 0 | x(y).0 | x̄⟨x⟩.0 )
Note that since the local name x has been output, the scope of x is extended to cover the third component as well. Finally, the channel x can be used for sending the name x. After that all concurrently executing processes have stopped:
(νx)( 0 | 0 | 0 )
Formal definition
Syntax
Let Χ be a set of objects called names. The abstract syntax for the π-calculus is built from the following BNF grammar (where x and y are any names from Χ):
P ::= 0   |   x(y).P   |   x̄⟨y⟩.P   |   P | P   |   (νx)P   |   !P
In the concrete syntax below, the prefixes bind more tightly than the parallel composition (|), and parentheses are used to disambiguate.
Names are bound by the restriction and input prefix constructs. Formally, the set of free names of a process in the π-calculus is defined inductively as follows: fn(0) = ∅; fn(x(y).P) = {x} ∪ (fn(P) \ {y}); fn(x̄⟨y⟩.P) = {x, y} ∪ fn(P); fn(P | Q) = fn(P) ∪ fn(Q); fn((νx)P) = fn(P) \ {x}; fn(!P) = fn(P). The set of bound names of a process is defined as the names of a process that are not in the set of free names.
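Using the illustrative Python datatype sketched earlier, the inductive definition of free names can be written as a small recursive function. Again, this is only a sketch; the constructor names are the ones assumed in the earlier example.

def free_names(p):
    # Free names of a process term, following the inductive definition above.
    if isinstance(p, Nil):
        return set()
    if isinstance(p, Input):
        return {p.channel} | (free_names(p.cont) - {p.bound})
    if isinstance(p, Output):
        return {p.channel, p.message} | free_names(p.cont)
    if isinstance(p, Par):
        return free_names(p.left) | free_names(p.right)
    if isinstance(p, New):
        return free_names(p.body) - {p.name}
    if isinstance(p, Replicate):
        return free_names(p.body)
    raise TypeError("unknown process form")

# free_names(example) evaluates to {"z"} for the example term built in the earlier sketch.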
Structural congruence
Central to both the reduction semantics and the labelled transition semantics is the notion of structural congruence. Two processes are structurally congruent if they are identical up to structure. In particular, parallel composition is commutative and associative.
More precisely, structural congruence is defined as the least equivalence relation preserved by the process constructs and satisfying:
Alpha-conversion:
P ≡ Q if Q can be obtained from P by renaming one or more bound names in P.
Axioms for parallel composition:
P | Q ≡ Q | P, (P | Q) | R ≡ P | (Q | R), and P | 0 ≡ P.
Axioms for restriction:
(νx)(νy)P ≡ (νy)(νx)P and (νx)0 ≡ 0.
Axiom for replication:
!P ≡ P | !P
Axiom relating restriction and parallel composition:
(νx)(P | Q) ≡ ((νx)P) | Q, if x is not a free name of Q.
This last axiom is known as the "scope extension" axiom. This axiom is central, since it describes how a bound name x may be extruded by an output action, causing the scope of x to be extended. In cases where x is a free name of Q, alpha-conversion may be used to allow extension to proceed.
Reduction semantics
We write P → P′ if P can perform a computation step, following which it is now P′.
This reduction relation is defined as the least relation closed under a set of reduction rules.
The main reduction rule which captures the ability of processes to communicate through channels is the following:
x̄⟨z⟩.P | x(y).Q  →  P | Q[z/y]
where Q[z/y] denotes the process Q in which the free name z has been substituted for the free occurrences of y. If a free occurrence of y occurs in a location where z would not be free, alpha-conversion may be required.
There are three additional rules:
If P → Q then also P | R → Q | R.
This rule says that parallel composition does not inhibit computation.
If P → Q, then also (νx)P → (νx)Q.
This rule ensures that computation can proceed underneath a restriction.
If P ≡ P′ and P′ → Q′ and Q′ ≡ Q, then also P → Q.
The latter rule states that processes that are structurally congruent have the same reductions. A minimal code sketch of the communication rule is given below.
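As an illustration of the communication rule, the sketch below extends the earlier Python datatype with capture-naive name substitution and a function that performs one communication step when an output on the left and an input on the right of a parallel composition use the same channel. It is a deliberately simplified sketch: it ignores alpha-conversion, restriction, replication and structural congruence, all of which a faithful implementation would have to handle.

def substitute(p, old, new):
    # Replace free occurrences of the name `old` by `new` (no capture avoidance).
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Output):
        return Output(new if p.channel == old else p.channel,
                      new if p.message == old else p.message,
                      substitute(p.cont, old, new))
    if isinstance(p, Input):
        if p.bound == old:  # `old` is rebound here, so do not substitute inside
            return Input(new if p.channel == old else p.channel, p.bound, p.cont)
        return Input(new if p.channel == old else p.channel, p.bound,
                     substitute(p.cont, old, new))
    if isinstance(p, Par):
        return Par(substitute(p.left, old, new), substitute(p.right, old, new))
    if isinstance(p, New):
        if p.name == old:   # `old` is restricted here, so it is not free inside
            return p
        return New(p.name, substitute(p.body, old, new))
    if isinstance(p, Replicate):
        return Replicate(substitute(p.body, old, new))
    raise TypeError("unknown process form")

def communicate(p):
    # One step of the rule  x̄⟨z⟩.P | x(y).Q  →  P | Q[z/y], or None if it does not apply.
    if isinstance(p, Par) and isinstance(p.left, Output) and isinstance(p.right, Input) \
            and p.left.channel == p.right.channel:
        out, inp = p.left, p.right
        return Par(out.cont, substitute(inp.cont, inp.bound, out.message))
    return None

# communicate(Par(Output("x", "z", Nil()), Input("x", "y", Output("y", "y", Nil()))))
# yields Par(Nil(), Output("z", "z", Nil())), i.e. 0 | z̄⟨z⟩.0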
The example revisited
Consider again the process
(νx)( x̄⟨z⟩.0 | x(y).ȳ⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0
Applying the definition of the reduction semantics, we get the reduction
(νx)( x̄⟨z⟩.0 | x(y).ȳ⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0  →  (νx)( 0 | z̄⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0
Note how, applying the reduction substitution axiom, the free occurrence of y is now labelled as z.
Next, we get the reduction
(νx)( 0 | z̄⟨x⟩.x(y).0 ) | z(v).v̄⟨v⟩.0  →  (νx)( 0 | x(y).0 | x̄⟨x⟩.0 )
Note that since the local name x has been output, the scope of x is extended to cover the third component as well. This was captured using the scope extension axiom.
Next, using the reduction substitution axiom, we get
(νx)( 0 | x(y).0 | x̄⟨x⟩.0 )  →  (νx)( 0 | 0 | 0 )
Finally, using the axioms for parallel composition and restriction, we get 0.
Labelled semantics
Alternatively, one may give the pi-calculus a labelled transition semantics (as has been done with the Calculus of Communicating Systems).
In this semantics, a transition from a state P to some other state P′ after an action α is notated as
P —α→ P′
where states P and P′ represent processes and α is either an input action x(y), an output action x̄⟨y⟩, or a silent action τ.
A standard result about the labelled semantics is that it agrees with the reduction semantics up to structural congruence, in the sense that
P → P′ if and only if P —τ→ P″ for some P″ with P″ ≡ P′.
Extensions and variants
The syntax given above is a minimal one. However, the syntax may be modified in various ways.
A nondeterministic choice operator P + Q can be added to the syntax.
A test for name equality can be added to the syntax. This match operator [x = y]P can proceed as P if and only if x and y are the same name.
Similarly, one may add a mismatch operator for name inequality. Practical programs which can pass names (URLs or pointers) often use such functionality: for directly modeling such functionality inside the calculus, this and related extensions are often useful.
The asynchronous π-calculus
allows only outputs with no suffix, i.e. output atoms of the form x̄⟨y⟩, yielding a smaller calculus. However, any process in the original calculus can be represented by the smaller asynchronous π-calculus using an extra channel to simulate explicit acknowledgement from the receiving process. Since a continuation-free output can model a message-in-transit, this fragment shows that the original π-calculus, which is intuitively based on synchronous communication, has an expressive asynchronous communication model inside its syntax. However, the nondeterministic choice operator defined above cannot be expressed in this way, as an unguarded choice would be converted into a guarded one; this fact has been used to demonstrate that the asynchronous calculus is strictly less expressive than the synchronous one (with the choice operator).
The polyadic π-calculus allows communicating more than one name in a single action: x̄⟨y1, …, yn⟩.P (polyadic output) and x(y1, …, yn).P (polyadic input). This polyadic extension, which is useful especially when studying types for name passing processes, can be encoded in the monadic calculus by passing the name of a private channel through which the multiple arguments are then passed in sequence. The encoding is defined recursively by the clauses
x̄⟨y1, …, yn⟩.P is encoded as (νw) x̄⟨w⟩.w̄⟨y1⟩. … .w̄⟨yn⟩.[P]
x(y1, …, yn).P is encoded as x(w).w(y1). … .w(yn).[P]
All other process constructs are left unchanged by the encoding.
In the above, [P] denotes the encoding of all prefixes in the continuation P in the same way; a worked instance of the encoding is given below.
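As an illustrative instance of these clauses (the fresh private name w and the other names here are arbitrary choices made for the example), a two-name output and the matching two-name input are encoded as
x̄⟨a, b⟩.0 is encoded as (νw) x̄⟨w⟩.w̄⟨a⟩.w̄⟨b⟩.0
x(u, v).0 is encoded as x(w).w(u).w(v).0
Running the two encoded processes in parallel first exchanges the private channel w on x, and then passes a and b over w in sequence, so that u becomes bound to a and v to b, just as the single polyadic communication would do.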
The full power of replication is not needed. Often, one only considers replicated input !x(y).P, whose structural congruence axiom is !x(y).P ≡ x(y).P | !x(y).P.
Replicated input processes such as !x(y).P can be understood as servers, waiting on channel x
to be invoked by clients. Invocation of a server spawns a new copy of
the process P[a/y], where a is the name passed by the client to the
server, during the latter's invocation.
A higher order π-calculus can be defined where not only names but processes are sent through channels.
The key reduction rule for the higher order case is
x̄⟨R⟩.P | x(Y).Q  →  P | Q[R/Y]
Here, Y denotes a process variable which can be instantiated by a process term. Sangiorgi
established that the ability to pass processes does not
increase the expressivity of the π-calculus: passing a process P can be
simulated by just passing a name that points to P instead.
Properties
Turing completeness
The -calculus is a universal model of computation. This was first observed by Milner in his paper "Functions as Processes", in which he presents two encodings of the lambda-calculus in the -calculus. One encoding simulates the eager (call-by-value) evaluation strategy, the other encoding simulates the normal-order (call-by-name) strategy. In both of these, the crucial insight is the modeling of environment bindings – for instance, " is bound to term " – as replicating agents that respond to requests for their bindings by sending back a connection to the term .
The features of the π-calculus that make these encodings possible are name-passing and replication (or, equivalently, recursively defined agents). In the absence of replication/recursion, the π-calculus ceases to be Turing-complete. This can be seen by the fact that bisimulation equivalence becomes decidable for the recursion-free calculus and even for the finite-control π-calculus where the number of parallel components in any process is bounded by a constant.
Bisimulations in the π-calculus
As with other process calculi, the π-calculus allows for a definition of bisimulation equivalence. In the π-calculus, the definition of bisimulation equivalence (also known as bisimilarity) may be based either on the reduction semantics or on the labelled transition semantics.
There are (at least) three different ways of defining labelled bisimulation equivalence in the π-calculus: early, late and open bisimilarity. This stems from the fact that the π-calculus is a value-passing process calculus.
In the remainder of this section, we let P and Q denote processes and R denote binary relations over processes.
Early and late bisimilarity
Early and late bisimilarity were both formulated by Milner, Parrow and Walker in their original paper on the -calculus.
A binary relation R over processes is an early bisimulation if for every pair of processes (P, Q) in R,
whenever P --a(x)--> P′ then for every name u there exists some Q′ such that Q --a(x)--> Q′ and (P′{u/x}, Q′{u/x}) is in R;
for any non-input action α, if P --α--> P′ then there exists some Q′ such that Q --α--> Q′ and (P′, Q′) is in R;
and symmetric requirements with P and Q interchanged.
Processes P and Q are said to be early bisimilar, written P ∼ₑ Q, if the pair (P, Q) is contained in some early bisimulation R.
In late bisimilarity, the transition match must be independent of the name being transmitted.
A binary relation R over processes is a late bisimulation if for every pair of processes (P, Q) in R,
whenever P --a(x)--> P′ then for some Q′ it holds that Q --a(x)--> Q′ and (P′{u/x}, Q′{u/x}) is in R for every name u;
for any non-input action α, if P --α--> P′ then there exists some Q′ such that Q --α--> Q′ and (P′, Q′) is in R;
and symmetric requirements with P and Q interchanged.
Processes P and Q are said to be late bisimilar, written P ∼ₗ Q, if the pair (P, Q) is contained in some late bisimulation R.
Both ∼ₑ and ∼ₗ suffer from the problem that they are not congruence relations in the sense that they are not preserved by all process constructs. More precisely, there exist processes P and Q such that P ∼ₑ Q but a(x).P ≁ₑ a(x).Q (and similarly for ∼ₗ), because bisimilarity is not closed under the substitutions introduced by an input prefix. One may remedy this problem by considering the maximal congruence relations included in ∼ₑ and ∼ₗ, known as early congruence and late congruence, respectively.
Open bisimilarity
Fortunately, a third definition is possible, which avoids this problem, namely that of open bisimilarity, due to Sangiorgi.
A binary relation R over processes is an open bisimulation if for every pair of elements (P, Q) in R, for every name substitution σ and every action α, whenever Pσ --α--> P′ then there exists some Q′ such that Qσ --α--> Q′ and (P′, Q′) is in R.
Processes P and Q are said to be open bisimilar, written P ∼ₒ Q, if the pair (P, Q) is contained in some open bisimulation R.
Early, late and open bisimilarity are distinct
Early, late and open bisimilarity are distinct, and the containments are proper: ∼ₒ is strictly contained in ∼ₗ, which is strictly contained in ∼ₑ, so ∼ₒ ⊊ ∼ₗ ⊊ ∼ₑ.
In certain subcalculi such as the asynchronous pi-calculus, late, early and open bisimilarity are known to coincide. However, in this setting a more appropriate notion is that of asynchronous bisimilarity.
In the literature, the term open bisimulation usually refers to a more sophisticated notion, where processes and relations are indexed by distinction relations; details are in Sangiorgi's paper cited above.
Barbed equivalence
Alternatively, one may define bisimulation equivalence directly from the reduction semantics. We write P ↓ₐ if process P immediately allows an input or an output on name a.
A binary relation R over processes is a barbed bisimulation if it is a symmetric relation which satisfies that for every pair of elements (P, Q) in R we have that
(1) P ↓ₐ if and only if Q ↓ₐ, for every name a,
and
(2) for every reduction P → P′ there exists a reduction Q → Q′
such that (P′, Q′) is in R.
We say that P and Q are barbed bisimilar if there exists a barbed bisimulation R where (P, Q) is in R.
Defining a context C[] as a term with a hole [], we say that two processes P and Q are barbed congruent, written P ≃ Q, if for every context C[] we have that C[P] and C[Q] are barbed bisimilar. It turns out that barbed congruence coincides with the congruence induced by early bisimilarity.
Applications
The π-calculus has been used to describe many different kinds of concurrent systems. In fact, some of the most recent applications lie outside the realm of traditional computer science.
In 1997, Martin Abadi and Andrew Gordon proposed an extension of the π-calculus, the Spi-calculus, as a formal notation for describing and reasoning about cryptographic protocols. The spi-calculus extends the π-calculus with primitives for encryption and decryption. In 2001, Martin Abadi and Cedric Fournet generalised the handling of cryptographic protocols to produce the applied π-calculus. There is now a large body of work devoted to variants of the applied π-calculus, including a number of experimental verification tools. One example is the tool ProVerif due to Bruno Blanchet, based on a translation of the applied π-calculus into Blanchet's logic programming framework. Another example is Cryptyc, due to Andrew Gordon and Alan Jeffrey, which uses Woo and Lam's method of correspondence assertions as the basis for type systems that can check for authentication properties of cryptographic protocols.
Around 2002, Howard Smith and Peter Fingar became interested in the π-calculus as a description tool for modeling business processes. As of July 2006, there was still discussion in the community about how useful this would be. Most recently, the π-calculus has formed the theoretical basis of the Business Process Modeling Language (BPML), and of Microsoft's XLANG.
The π-calculus has also attracted interest in molecular biology. In 1999, Aviv Regev and Ehud Shapiro showed that one can describe a cellular signaling pathway (the so-called RTK/MAPK cascade) and in particular the molecular "lego" which implements these tasks of communication in an extension of the π-calculus. Following this seminal paper, other authors described the whole metabolic network of a minimal cell. In 2009, Anthony Nash and Sara Kalvala proposed a π-calculus framework to model the signal transduction that directs Dictyostelium discoideum aggregation.
History
The π-calculus was originally developed by Robin Milner, Joachim Parrow and David Walker in 1992, based on ideas by Uffe Engberg and Mogens Nielsen. It can be seen as a continuation of Milner's work on the process calculus CCS (Calculus of Communicating Systems). In his Turing lecture, Milner describes the development of the π-calculus as an attempt to capture the uniformity of values and processes in actors.
Implementations
The following programming languages implement the π-calculus or one of its variants:
Business Process Modeling Language (BPML)
occam-π
Pict
JoCaml (based on the Join-calculus)
RhoLang
Notes
References
Process calculi
Theoretical computer science |
421422 | https://en.wikipedia.org/wiki/List%20of%20security%20hacking%20incidents | List of security hacking incidents | The list of security hacking incidents covers important or noteworthy events in the history of security hacking and cracking.
1900
1903
Magician and inventor Nevil Maskelyne disrupts John Ambrose Fleming's public demonstration of Guglielmo Marconi's purportedly secure wireless telegraphy technology, sending insulting Morse code messages through the auditorium's projector.
1930s
1932
Polish cryptologists Marian Rejewski, Henryk Zygalski and Jerzy Różycki broke the Enigma machine code.
1939
Alan Turing, Gordon Welchman and Harold Keen worked together to develop the Bombe (on the basis of Rejewski's works on Bomba). The Enigma machine's relatively small key space made it vulnerable to brute force.
1940s
1943
René Carmille, comptroller general of the Vichy French Army, hacked the punched card system used by the Nazis to locate Jews.
1949
The theory that underlies computer viruses was first made public in 1949, when computer pioneer John von Neumann presented a paper titled "Theory and Organization of Complicated Automata". In the paper von Neumann speculated that computer programs could reproduce themselves.
1950s
1955
At MIT, "hack" first came to mean fussing with machines. The minutes of an April 1955 meeting of the Tech Model Railroad Club state that "Mr. Eccles requests that anyone working or hacking on the electrical system turn the power off to avoid fuse blowing."
1957
Joe "Joybubbles" Engressia, a blind seven-year-old boy with perfect pitch, discovered that whistling the fourth E above middle C (a frequency of 2600 Hz) would interfere with AT&T's automated telephone systems, thereby inadvertently opening the door for phreaking.
1960s
Various phreaking boxes are used to interact with automated telephone systems.
1963
The first ever reference to malicious hacking is "telephone hackers" in MIT's student newspaper, The Tech, which described hackers tying up the lines with Harvard, configuring the PDP-1 to make free calls, war dialing and accumulating large phone bills.
1965
William D. Mathews from MIT found a vulnerability in CTSS running on an IBM 7094. The standard text editor on the system was designed to be used by one user at a time, working in one directory, and so created a temporary file with a constant name for all instantiations of the editor. The flaw was discovered when two system programmers were editing at the same time and the temporary files for the message-of-the-day and the password file became swapped, causing the contents of the system CTSS password file to be displayed to any user logging into the system.
1967
The first known incidence of network penetration hacking took place when members of a computer club at a suburban Chicago area high school were provided access to IBM's APL network. In the Fall of 1967, IBM (through Science Research Associates) approached Evanston Township High School with the offer of four 2741 Selectric teletypewriter based terminals with dial-up modem connectivity to an experimental computer system which implemented an early version of the APL programming language. The APL network system was structured in Workspaces which were assigned to various clients using the system. Working independently, the students quickly learned the language and the system. They were free to explore the system, often using existing code available in public Workspaces as models for their own creations. Eventually, curiosity drove the students to explore the system's wider context. This first informal network penetration effort was later acknowledged as helping harden the security of one of the first publicly accessible networks.
1970s
1971
John T. Draper (later nicknamed Captain Crunch), his friend Joe Engressia (also known as Joybubbles), and blue box phone phreaking hit the news with an Esquire magazine feature story.
1979
Kevin Mitnick breaks into his first major computer system, the Ark, the computer system Digital Equipment Corporation (DEC) used for developing their RSTS/E operating system software.
1980s
1980
The FBI investigates a breach of security at National CSS (NCSS). The New York Times, reporting on the incident in 1981, describes hackers as
The newspaper describes white hat activities as part of a "mischievous but perversely positive 'hacker' tradition". When a National CSS employee revealed the existence of his password cracker, which he had used on customer accounts, the company chastised him not for writing the software but for not disclosing it sooner. The letter of reprimand stated that "The Company realizes the benefit to NCSS and in fact encourages the efforts of employees to identify security weaknesses to the VP, the directory, and other sensitive software in files".
1981
Chaos Computer Club forms in Germany.
Ian Murphy aka Captain Zap, was the first cracker to be tried and convicted as a felon. Murphy broke into AT&T's computers in 1981 and changed the internal clocks that metered billing rates. People were getting late-night discount rates when they called at midday. Of course, the bargain-seekers who waited until midnight to call long distance were hit with high bills.
1983
The 414s break into 60 computer systems at institutions ranging from the Los Alamos National Laboratory to Manhattan's Memorial Sloan-Kettering Cancer Center. The incident appeared as the cover story of Newsweek with the title "Beware: Hackers at play". As a result, the U.S. House of Representatives held hearings on computer security and passed several laws.
The group KILOBAUD is formed in February, kicking off a series of other hacker groups which form soon after.
The movie WarGames introduces the wider public to the phenomenon of hacking and creates a degree of mass paranoia of hackers and their supposed abilities to bring the world to a screeching halt by launching nuclear ICBMs.
The U.S. House of Representatives begins hearings on computer security hacking.
In his Turing Award lecture, Ken Thompson mentions "hacking" and describes a security exploit that he calls a "Trojan horse".
1984
Someone calling himself Lex Luthor founds the Legion of Doom. Named after a Saturday morning cartoon, the LOD had the reputation of attracting "the best of the best"—until one of the most talented members called Phiber Optik feuded with Legion of Doomer Erik Bloodaxe and got 'tossed out of the clubhouse'. Phiber's friends formed a rival group, the Masters of Deception.
The Comprehensive Crime Control Act gives the Secret Service jurisdiction over computer fraud.
Cult of the Dead Cow forms in Lubbock, Texas, and begins publishing its ezine.
The hacker magazine 2600 begins regular publication, right when TAP was putting out its final issue. The editor of 2600, "Emmanuel Goldstein" (whose real name is Eric Corley), takes his handle from the leader of the resistance in George Orwell's 1984. The publication provides tips for would-be hackers and phone phreaks, as well as commentary on the hacker issues of the day. Today, copies of 2600 are sold at most large retail bookstores.
The Chaos Communication Congress, the annual European hacker conference organized by the Chaos Computer Club, is held in Hamburg, Germany.
William Gibson's groundbreaking science fiction novel Neuromancer, about "Case", a futuristic computer hacker, is published. Considered the first major cyberpunk novel, it brought into hacker jargon such terms as "cyberspace", "the matrix", "simstim", and "ICE".
1985
KILOBAUD is re-organized into The P.H.I.R.M. and begins sysopping hundreds of BBSs throughout the United States, Canada, and Europe.
The online 'zine Phrack is established.
The Hacker's Handbook is published in the UK.
The FBI, Secret Service, Middlesex County NJ Prosecutor's Office and various local law enforcement agencies execute seven search warrants concurrently across New Jersey on July 12, 1985, seizing equipment from BBS operators and users alike for "complicity in computer theft", under a newly passed, and yet untested criminal statute. This is famously known as the Private Sector Bust, or the 2600 BBS Seizure, and implicated the Private Sector BBS sysop, Store Manager (also a BBS sysop), Beowulf, Red Barchetta, The Vampire, the NJ Hack Shack BBS sysop, and the Treasure Chest BBS sysop.
1986
After more and more break-ins to government and corporate computers, Congress passes the Computer Fraud and Abuse Act, which makes it a crime to break into computer systems. The law, however, does not cover juveniles.
Robert Schifreen and Stephen Gold are convicted of accessing the Telecom Gold account belonging to the Duke of Edinburgh under the Forgery and Counterfeiting Act 1981 in the United Kingdom, the first conviction for illegally accessing a computer system. On appeal, the conviction is overturned as hacking is not within the legal definition of forgery.
Arrest of a hacker who calls himself The Mentor. He published a now-famous treatise shortly after his arrest that came to be known as the Hacker Manifesto in the e-zine Phrack. This still serves as the most famous piece of hacker literature and is frequently used to illustrate the mindset of hackers.
Astronomer Clifford Stoll plays a pivotal role in tracking down hacker Markus Hess, events later covered in Stoll's 1990 book The Cuckoo's Egg.
1987
The Christmas Tree EXEC "worm" causes major disruption to the VNET, BITNET and EARN networks.
1988
The Morris Worm. Graduate student Robert T. Morris, Jr. of Cornell University launches a worm on the government's ARPAnet (precursor to the Internet). The worm spreads to 6,000 networked computers, clogging government and university systems. Robert Morris is dismissed from Cornell, sentenced to three years' probation, and fined $10,000.
First National Bank of Chicago is the victim of $70 million computer theft.
The Computer Emergency Response Team (CERT) is created by DARPA to address network security.
The Father Christmas (computer worm) spreads over DECnet networks.
1989
Jude Milhon (aka St Jude) and R. U. Sirius launch MONDO 2000, a major '90s tech-lifestyle magazine, in Berkeley, California.
The politically motivated WANK worm spreads over DECnet.
Dutch magazine Hack-Tic begins.
The Cuckoo's Egg by Clifford Stoll is published.
The detection of AIDS (Trojan horse) is the first instance of a ransomware detection.
1990s
1990
Operation Sundevil is introduced. After a prolonged sting investigation, Secret Service agents swoop down on organizers and prominent members of BBSs in 14 U.S. cities including the Legion of Doom, conducting early-morning raids and arrests. The arrests are aimed at cracking down on credit-card theft and telephone and wire fraud. The result is a breakdown in the hacking community, with members informing on each other in exchange for immunity. The offices of Steve Jackson Games are also raided, and the role-playing sourcebook GURPS Cyberpunk is confiscated, possibly because the government fears it is a "handbook for computer crime". Legal battles arise that prompt the formation of the Electronic Frontier Foundation, including the trial of Knight Lightning.
Australian federal police tracking Realm members Phoenix, Electron and Nom are the first in the world to use a remote data intercept to gain evidence for a computer crime prosecution.
The Computer Misuse Act 1990 is passed in the United Kingdom, criminalising any unauthorised access to computer systems.
1992
Release of the movie Sneakers, in which security experts are blackmailed into stealing a universal decoder for encryption systems.
One of the first ISPs, MindVox, opens to the public.
Bulgarian virus writer Dark Avenger wrote 1260, the first known use of polymorphic code, used to circumvent the type of pattern recognition used by antivirus software, and nowadays also intrusion detection systems.
Publication of a hacking instruction manual for penetrating TRW credit reporting agency by Infinite Possibilities Society (IPS) gets Dr. Ripco, the sysop of Ripco BBS mentioned in the IPS manual, arrested by the United States Secret Service.
1993
The first DEF CON hacking conference takes place in Las Vegas. The conference is meant to be a one-time party to say good-bye to BBSs (now replaced by the Web), but the gathering was so popular it became an annual event.
AOL gives its users access to Usenet, precipitating Eternal September.
1994
Summer: Russian crackers siphon $10 million from Citibank and transfer the money to bank accounts around the world. Vladimir Levin, the 30-year-old ringleader, used his work laptop after hours to transfer the funds to accounts in Finland and Israel. Levin stands trial in the United States and is sentenced to three years in prison. Authorities recover all but $400,000 of the stolen money.
Hackers adapt to emergence of the World Wide Web quickly, moving all their how-to information and hacking programs from the old BBSs to new hacker web sites.
AOHell is released, a freeware application that allows a burgeoning community of unskilled script kiddies to wreak havoc on America Online. For days, hundreds of thousands of AOL users find their mailboxes flooded with multi-megabyte email bombs and their chat rooms disrupted with spam messages.
December 27: After experiencing an IP spoofing attack by Kevin Mitnick, computer security expert Tsutomu Shimomura started to receive prank calls that popularized the phrase "My kung fu is stronger than yours".
1995
The movies The Net and Hackers are released.
The Canadian ISP dlcwest.com is hacked and website replaced with a graphic and the caption "You've been hacked MOFO"
February 22: The FBI raids the "Phone Masters".
1996
Hackers alter Web sites of the United States Department of Justice (August), the CIA (October), and the U.S. Air Force (December).
Canadian hacker group, Brotherhood, breaks into the Canadian Broadcasting Corporation.
Arizona hacker John Sabo, a.k.a. FizzleB/Peanut, was arrested for hacking the Canadian ISP dlcwest.com, claiming the company was defrauding customers through overbilling.
The U.S. General Accounting Office reports that hackers attempted to break into Defense Department computer files some 250,000 times in 1995 alone, with a success rate of about 65% and the number of attempts doubling annually.
Cryptovirology is born with the invention of the cryptoviral extortion protocol that would later form the basis of modern ransomware.
1997
A 16-year-old Croatian youth penetrates computers at a U.S. Air Force base in Guam.
June: Eligible Receiver 97 tests the American government's readiness against cyberattacks.
December: Information Security publishes first issue.
First high-profile attacks on Microsoft's Windows NT operating system
1998
January: Yahoo! notifies Internet users that anyone visiting its site in the past month might have downloaded a logic bomb and worm planted by hackers claiming a "logic bomb" will go off if computer hacker Kevin Mitnick is not released from prison.
February: The Internet Software Consortium proposes the use of DNSSEC (Domain Name System Security Extensions) to secure DNS servers.
May 19: The seven members of the hacker think tank known as L0pht testify in front of the US congressional Government Affairs committee on "Weak Computer Security in Government".
June: Information Security publishes its first annual Industry Survey, finding that nearly three-quarters of organizations suffered a security incident in the previous year.
September: Electronic Disturbance Theater, an online political performance-art group, attacks the websites of The Pentagon, Mexican president Ernesto Zedillo, and the Frankfurt Stock Exchange, calling it conceptual art and claiming it to be a protest against the suppression of the Zapatista Army of National Liberation in southern Mexico. EDT uses the FloodNet software to bombard its opponents with access requests.
October: "U.S. Attorney General Janet Reno announces National Infrastructure Protection Center."
1999
Software security goes mainstream. In the wake of Microsoft's Windows 98 release, 1999 becomes a banner year for security (and hacking). Hundreds of advisories and patches are released in response to newfound (and widely publicized) bugs in Windows and other commercial software products. A host of security software vendors release anti-hacking products for use on home computers.
U.S. President Bill Clinton announces a $1.46 billion initiative to improve government computer security. The plan would establish a network of intrusion detection monitors for certain federal agencies and encourage the private sector to do the same.
January 7: The "Legion of the Underground" (LoU) declares "war" against the governments of Iraq and the People's Republic of China. An international coalition of hackers (including Cult of the Dead Cow, 2600s staff, Phracks staff, L0pht, and the Chaos Computer Club) issued a joint statement (CRD 990107 - Hackers on planet earth against infowar) condemning the LoU's declaration of war. The LoU responded by withdrawing its declaration.
March: The Melissa worm is released and quickly becomes the most costly malware outbreak to date.
July: Cult of the Dead Cow releases Back Orifice 2000 at DEF CON.
August: Kevin Mitnick is sentenced to 5 years, of which over 4 years had already been spent pre-trial, including 8 months in solitary confinement.
September: Level Seven Crew hacks the U.S. Embassy in China's website and places racist, anti-government slogans on embassy site in regards to 1998 U.S. embassy bombings.
September 16: The United States Department of Justice sentences the "Phone Masters".
October: American Express introduces the "Blue" smart card, the industry's first chip-based credit card in the US.
November 17: A hacker interviewed by Hilly Rose during the radio show Coast to Coast AM (then hosted by Art Bell) exposes a plot by al-Qaeda to derail Amtrak trains. This results in all trains being forcibly stopped over Y2K as a safety measure.
2000s
2000
May: The ILOVEYOU worm, also known as VBS/Loveletter and Love Bug worm, is a computer worm written in VBScript. It infected millions of computers worldwide within a few hours of its release. It is considered to be one of the most damaging worms ever. It originated in the Philippines and was made by AMA Computer College student Onel de Guzman for his thesis.
September: Computer hacker Jonathan James became the first juvenile to serve jail time for hacking.
2001
Microsoft becomes the prominent victim of a new type of hack that attacks the domain name server. In these denial-of-service attacks, the DNS paths that take users to Microsoft's websites are corrupted.
February: A Dutch cracker releases the Anna Kournikova virus, initiating a wave of viruses that tempts users to open the infected attachment by promising a sexy picture of the Russian tennis star.
April: FBI agents trick two Russian crackers into coming to the U.S. and revealing how they were hacking U.S. banks.
July: Russian programmer Dmitry Sklyarov is arrested at the annual DEF CON hacker convention. He was the first person criminally charged with violating the Digital Millennium Copyright Act (DMCA).
August: Code Red worm, infects tens of thousands of machines.
The National Cyber Security Alliance (NCSA) is established in response to the September 11 attacks on the World Trade Center.
2002
January: Bill Gates decrees that Microsoft will secure its products and services, and kicks off a massive internal training and quality control campaign.
March: Gary McKinnon is arrested following unauthorized access to US military and NASA computers.
May: Klez.H, a variant of the worm discovered in November 2001, becomes the biggest malware outbreak in terms of machines infected, but causes little monetary damage.
June: The Bush administration files a bill to create the Department of Homeland Security, which, among other things, will be responsible for protecting the nation's critical IT infrastructure.
August: Researcher Chris Paget publishes a paper describing "shatter attacks", detailing how Windows' unauthenticated messaging system can be used to take over a machine. The paper raises questions about how securable Windows could ever be. It is however largely derided as irrelevant as the vulnerabilities it described are caused by vulnerable applications (placing windows on the desktop with inappropriate privileges) rather than an inherent flaw within the Operating System.
October: The International Information Systems Security Certification Consortium—(ISC)²—confers its 10,000th CISSP certification.
2003
The hacktivist group Anonymous was formed.
March: Cult of the Dead Cow and Hacktivismo are given permission by the United States Department of Commerce to export software utilizing strong encryption.
2004
March: New Zealand's Government (National Party) website defaced by hacktivist group BlackMask
July: North Korea claims to have trained 500 hackers who successfully crack South Korean, Japanese, and their allies' computer systems.
October: National Cyber Security Awareness Month was launched by the National Cyber Security Alliance and U.S. Department of Homeland Security.
2005
April 2: Rafael Núñez (aka RaFa), a notorious member of the hacking group World of Hell, is arrested following his arrival at Miami International Airport for breaking into the Defense Information Systems Agency computer system in June 2001.
September 13: Cameron Lacroix is sentenced to 11 months for gaining access to T-Mobile's network and exploiting Paris Hilton's Sidekick.
November 3: Jeanson James Ancheta, whom prosecutors say was a member of the "Botmaster Underground", a group of script kiddies mostly noted for their excessive use of bot attacks and propagating vast amounts of spam, was taken into custody after being lured to FBI offices in Los Angeles.
2006
January: One of the few worms to take after the old form of malware, destruction of data rather than the accumulation of zombie networks to launch attacks from, is discovered. It had various names, including Kama Sutra (used by most media reports), Black Worm, Mywife, Blackmal, Nyxem version D, Kapser, KillAV, Grew and CME-24. The worm would spread through e-mail client address books, and would search for documents and fill them with garbage, instead of deleting them to confuse the user. It would also hit a web page counter when it took control, allowing the programmer who created it as well as the world to track the progress of the worm. It would replace documents with random garbage on the third of every month. It was hyped by the media but actually affected relatively few computers, and was not a real threat for most users.
May: Jeanson James Ancheta receives a 57-month prison sentence, and is ordered to pay damages amounting to $15,000 to the Naval Air Warfare Center in China Lake and the Defense Information Systems Agency, for damage done due to DDoS attacks and hacking. Ancheta also had to forfeit his gains to the government, which include $60,000 in cash, a BMW, and computer equipment.
May: The largest defacement in web history as of that time is performed by the Turkish hacker iSKORPiTX, who successfully hacked 21,549 websites in one shot.
July: Robert Moore and Edwin Pena were the first people to be charged by U.S. authorities for VoIP hacking. Robert Moore served 2 years in federal prison and was given $152,000 restitution. Once Edwin Pena was caught after fleeing the country, evading authorities for almost 2 years, he was sentenced to 10 years and given $1 million restitution.
September: Viodentia releases FairUse4WM tool which would remove DRM information off Windows Media Audio (WMA) files downloaded from music services such as Yahoo! Unlimited, Napster, Rhapsody Music and Urge.
2007
May 17: Estonia recovers from massive denial-of-service attack
June 13: FBI Operation Bot Roast finds over 1 million botnet victims
June 21: A spear phishing incident at the Office of the Secretary of Defense steals sensitive U.S. defense information, leading to significant changes in identity and message-source verification at OSD.
August 11: United Nations website hacked by Indian Hacker Pankaj Kumar Singh.
November 14: Panda Burning Incense which is known by several other names, including Fujacks and Radoppan.T lead to the arrest of eight people in China. Panda Burning Incense was a parasitic virus that infected executable files on a PC. When infected, the icon of the executable file changes to an image of a panda holding three sticks of incense. The arrests were the first for virus writing in China.
2008
January 17: Project Chanology; Anonymous attacks Scientology website servers around the world. Private documents are stolen from Scientology computers and distributed over the Internet.
March 7: Around 20 Chinese hackers claim to have gained access to the world's most sensitive sites, including the Pentagon. They operated from an apartment on a Chinese Island.
March 14: Trend Micro website successfully hacked by Turkish hacker Janizary (aka Utku).
2009
April 4: Conficker worm infiltrated millions of PCs worldwide including many government-level top-security computer networks.
2010s
2010
January 12: Operation Aurora: Google publicly reveals that it has been on the receiving end of a "highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google".
June: Stuxnet: The Stuxnet worm is found by VirusBlokAda. Stuxnet was unusual in that while it spread via Windows computers, its payload targeted just one specific model and type of SCADA system. It slowly became clear that it was a cyber attack on Iran's nuclear facilities—with most experts believing that Israel was behind it—perhaps with US help.
December 3: The first Malware Conference, MALCON took place in India. Founded by Rajshekhar Murthy, malware coders are invited to showcase their skills at this annual event supported by the Government of India. An advanced malware for Symbian OS is released by hacker A0drul3z.
2011
The hacker group Lulz Security is formed.
April 9: Bank of America website got hacked by a Turkish hacker named JeOPaRDY. An estimated 85,000 credit card numbers and accounts were reported to have been stolen due to the hack. Bank officials say no personal customer bank information is available on that web-page. Investigations are being conducted by the FBI to trace down the incriminated hacker.
April 17: An "external intrusion" sends the PlayStation Network offline, and compromises personally identifying information (possibly including credit card details) of its 77 million accounts, in what is claimed to be one of the five largest data breaches ever.
Computer hacker sl1nk releases information of his penetration in the servers of the Department of Defense (DoD), Pentagon, NASA, NSA, US Military, Department of the Navy, Space and Naval Warfare System Command and other UK/US government websites.
September: Bangladeshi hacker TiGER-M@TE made a world record in defacement history by hacking 700,000 websites in a single shot.
October 16: The YouTube channel of Sesame Street was hacked, streaming pornographic content for about 22 minutes.
November 1: The main phone and Internet networks of the Palestinian territories sustained a hacker attack from multiple locations worldwide.
November 7: The forums for Valve's Steam service were hacked. Redirects for a hacking website, Fkn0wned, appeared on the Steam users' forums, offering "hacking tutorials and tools, porn, free giveaways and much more."
December 14: Five members of the Norwegian hacker group, Noria, were arrested, allegedly suspected for hacking into the email account of the militant extremist Anders Behring Breivik (who perpetrated the 2011 attacks in the country).
2012
A hacker, Big-Smoke, published over 400,000 credit cards online, and threatened Israel that he would release 1 million credit cards in the future. In response to that incident, an Israeli hacker published over 200 Albanian credit cards online.
Gottfrid Svartholm Warg, the co-founder of Pirate Bay, was convicted in Denmark of hacking a mainframe computer, what was then Denmark's biggest hacking case.
January 7: "Team Appunity", a group of Norwegian hackers, were arrested for breaking into Norway's largest prostitution website then publishing the user database online.
February 3: Marriott was hacked by a New Age ideologist, Attila Nemeth who was resisting against the New World Order where he said that corporations are allegedly controlling the world. As a response Marriott reported him to the United States Secret Service.
February 8: Foxconn is hacked by a hacker group, "Swagg Security", releasing a massive amount of data including email and server logins, and even more alarming—bank account credentials of large companies like Apple and Microsoft. Swagg Security stages the attack just as a Foxconn protest ignites against terrible working conditions in southern China.
May 4: The websites of several Turkish representative offices of international IT-companies are defaced within the same day by F0RTYS3V3N (Turkish Hacker), including the websites of Google, Yandex, Microsoft, Gmail, MSN, Hotmail, PayPal.
May 24: WHMCS is hacked by UGNazi; they claim that the reason for this is the illegal sites that are using its software.
May 31: MyBB is hacked by the newly founded hacker group UGNazi. The website was defaced for about a day; they claimed their reasoning was that they were upset that the forum board Hackforums.net uses MyBB's software.
June 5: The social networking website LinkedIn has been hacked and the passwords for nearly 6.5 million user accounts are stolen by cybercriminals. As a result, a United States grand jury indicted Nikulin and three unnamed co-conspirators on charges of aggravated identity theft and computer intrusion.
August 15: Saudi Aramco, the most valuable company in the world, is crippled for months by a cyber warfare attack using malware called Shamoon. Considered the biggest hack in history in terms of cost and destructiveness, it was carried out by an Iranian attacker group called Cutting Sword of Justice; Iranian hackers retaliated against Stuxnet by releasing Shamoon. The malware destroyed over 35,000 Saudi Aramco computers, affecting business operations for months.
December 17: Computer hacker sl1nk announced that he has hacked a total of 9 countries' SCADA systems. The proof includes 6 countries: France, Norway, Russia, Spain, Sweden and the United States.
2013
The social networking website Tumblr is attacked by hackers. Consequently, 65,469,298 unique emails and passwords were leaked from Tumblr. The data breach's legitimacy is confirmed by computer security researcher Troy Hunt.
August: Yahoo! data breaches occurred. More than 1 billion users' data were leaked.
2014
February 7: The bitcoin exchange Mt. Gox filed for bankruptcy after $460 million was apparently stolen by hackers due to "weaknesses in [their] system" and another $27.4 million went missing from its bank accounts.
October: The White House computer system was hacked. It was said that the FBI, the Secret Service, and other U.S. intelligence agencies categorized the attacks "among the most sophisticated attacks ever launched against U.S. government systems."
November 24: In response to the release of the film The Interview, the servers of Sony Pictures are hacked by a hacker group calling itself "Guardian of Peace".
November 28: The website of the Philippine telecommunications company Globe Telecom was hacked in response to the poor internet service it was providing.
2015
June: the records of 21.5 million people, including social security numbers, dates of birth, addresses, fingerprints, and security clearance-related information, are stolen from the United States Office of Personnel Management (OPM). Most of the victims are employees of the United States government and unsuccessful applicants to it. The Wall Street Journal and The Washington Post report that government sources believe the hacker is the government of China.
July: The servers of extramarital affairs website Ashley Madison were breached.
2016
February: The 2016 Bangladesh Bank heist attempted to steal US$951 million from Bangladesh Bank, and succeeded in getting $101 million—although some of this was later recovered.
July 22: WikiLeaks published the documents from the 2016 Democratic National Committee email leak.
July 29: A group suspected to be from China launched hacker attacks on the website of Vietnam Airlines.
August 13: The Shadow Brokers (TSB) started publishing several leaks containing hacking tools from the National Security Agency (NSA), including several zero-day exploits. Ongoing leaks until April 2017 (The Shadow Brokers)
September: Hacker Ardit Ferizi is sentenced to 20 years in prison after being arrested for hacking U.S. servers and passing the leaked information to members of ISIL terrorist group back in 2015.
October: The 2016 Dyn cyberattack is conducted with a botnet consisting of IoT devices infected with Mirai by the hacktivist groups SpainSquad, Anonymous, and New World Hackers, reportedly in retaliation for Ecuador's rescinding Internet access to WikiLeaks founder Julian Assange at their embassy in London, where he has been granted asylum.
Late 2016: Hackers steal international personal user data from the company Uber, including phone numbers, email addresses, and names, of 57 million people and 600,000 driver's license numbers of drivers for the company. Uber's GitHub account was accessed through Amazon's cloud-based service. Uber paid the hackers $100,000 for assurances the data was destroyed.
December 2016: Yahoo! data breaches reported and affected more than 1 billion users. The data leakage includes user names, email addresses, telephone numbers, encrypted or unencrypted security questions and answers, dates of birth, and hashed passwords
2017
April: A hacker group calling itself "The Dark Overlord" posted unreleased episodes of Orange Is the New Black TV series online after failing to extort the online entertainment company Netflix.
May: WannaCry ransomware attack started on Friday, May 12, 2017, and has been described as unprecedented in scale, infecting more than 230,000 computers in over 150 countries. A hacked unreleased Disney film is held for ransom, to be paid in Bitcoin.
May: 25,000 digital photos and ID scans relating to patients of the Grozio Chirurgija cosmetic surgery clinic in Lithuania were obtained and published without consent by an unknown group demanding ransoms. Thousands of clients from more than 60 countries were affected. The breach brought attention to weaknesses in Lithuania's information security.
June: 2017 Petya cyberattack.
June: TRITON (TRISIS), a malware framework designed to reprogram Triconex safety instrumented systems (SIS) of industrial control systems (ICS), discovered in Saudi Arabian Petrochemical plant.
August: Hackers demand $7.5 million in Bitcoin to stop pre-releasing HBO shows and scripts, including Ballers, Room 104 and Game of Thrones.
May–July 2017: The Equifax breach.
September 2017: Deloitte breach.
December: Mecklenburg County, North Carolina computer systems were hacked. They did not pay the ransom.
2018
March: Computer systems in the city of Atlanta, in the U.S. state of Georgia, are seized by hackers with ransomware. They did not pay the ransom, and two Iranians were indicted by the FBI on cyber crime charges for the breach.
The computer systems of the town of Wasaga Beach in Ontario, Canada, are seized by hackers with ransomware.
September: Facebook was hacked, exposing to hackers the personal information of an estimated 30 million Facebook users (initially estimated at 50 million) when the hackers "stole" the "access tokens" of 400,000 Facebook users. The information accessible to the hackers included users' email addresses, phone numbers, their lists of friends, Groups they are members of, users' search information, posts on their timelines, and names of recent Messenger conversations.
October: The computer systems of West Haven, Connecticut, USA are seized by hackers with ransomware; they paid $2,000 in ransom.
November:
The first U.S. indictment of individual people for ransomware attacks occurs. The U.S. Justice Department indicted two men Faramarz Shahi Savandi and Mohammad Mehdi Shah Mansouri who allegedly used the SamSam ransomware for extortion, netting them more than $6 million in ransom payments. The companies infected with the ransomware included Allscripts, Medstar Health, and Hollywood Presbyterian Medical Center. Altogether, the attacks caused victims to lose more than $30 million, in addition to the ransom payments.
Marriott disclosed that its Starwood Hotel brand had been subject to a security breach.
2019
March: Jackson County computer systems in the U.S. state of Georgia are seized by hackers with ransomware; they paid $400,000 in ransom. The city of Albany in the U.S. state of New York experiences a ransomware cyber attack.
April: Computer systems in the city of Augusta, in the U.S. state of Maine, are seized by hackers using ransomware. The City of Greenville (North Carolina)'s computer systems are seized by hackers using ransomware known as RobbinHood. The computer systems of Imperial County, in the U.S. state of California, are seized by hackers using Ryuk ransomware.
May: Computer systems belonging to the City of Baltimore are seized by hackers using ransomware known as RobbinHood, which encrypts files with a "file-locking" virus, as well as the tool EternalBlue.
June: The city of Riviera Beach, Florida paid roughly $600,000 ransom in Bitcoin to hackers who seized their computers using ransomware. Hackers stole 18 hours of unreleased music from the band Radiohead demanding $150,000 ransom. Radiohead released the music to the public anyway and did not pay the ransom.
November: The Anonymous hacktivist collective announced that they had hacked into four Chinese computer databases and donated those to the data breach indexing/notification service vigilante.pw. The hack was conducted in order to support the 2019 Hong Kong protests, amidst the Hong Kong police's siege of the city's Polytechnic University. They also brought up a possible peace plan, first proposed by a professor at Inha University, in hopes of having Korean reunification and the five key demands of the Hong Kong protests fulfilled at once.
2020s
2020
February: Anonymous hacked the United Nations website and created a page for Taiwan, a country which had not had a seat at the UN since 1971. The hacked page featured the Flag of Taiwan, the KMT emblem, a Taiwan Independence flag, the Anonymous logo, embedded YouTube videos such as the Taiwanese national anthem and the closing score for the 2019 film Avengers: Endgame titled "It's Been a Long, Long Time", and a caption. The hacked server belonged to the United Nations Department of Economic and Social Affairs.
May: Anonymous declared a large hack on May 28, three days after the murder of George Floyd. An individual claiming to represent Anonymous stated that "We are Legion. We do not forgive. We do not forget. Expect us." in a now-deleted video. Anonymous addressed police brutality and said they "will be exposing [their] many crimes to the world". It was suspected that Anonymous were the cause for the downtime and public suspension of the Minneapolis Police Department website and its parent site, the website of the City of Minneapolis.
May: Indian national Shubham Upadhyay posed as Superintendent of Police and, using social engineering, used a free caller-identification app to call the officer in charge of the Kotwali police station, K. K. Gupta, in order to threaten him into getting his phone repaired amidst the COVID-19 lockdown. The attempt was foiled.
June: Anonymous claimed responsibility for stealing and leaking a trove of documents collectively nicknamed 'BlueLeaks'. The 269-gigabyte collection was published by a leak-focused activist group known as Distributed Denial of Secrets. Furthermore, the collective took down Atlanta Police Department's website via DDoS, and defaced websites such as a Filipino governmental webpage and that of Brookhaven National Labs. They expressed support for Julian Assange and press freedom, while briefly "taking a swing" against Facebook, Reddit and Wikipedia for having 'engaged in shady practices behind our prying eyes'. In the case of Reddit, they posted a link to a court document describing the possible involvement of a moderator of a large traffic subreddit (/r/news) in an online harassment-related case.
June: The Buffalo, NY police department's website was supposedly hacked by Anonymous. While the website was up and running after a few minutes, Anonymous tweeted again on Twitter urging that it be taken down. A few minutes later, the Buffalo NY website was brought down again. They also hacked Chicago police radios to play N.W.A's "Fuck tha Police".
June: Over 1,000 accounts on multiplayer online game Roblox were hacked to display that they supported U.S. President Donald Trump.
July: The 2020 Twitter bitcoin scam occurred.
July: User credentials of writing website Wattpad were stolen and leaked on a hacker forum. The database contained over 200 million records.
August: A large number of subreddits were hacked to post materials endorsing Donald Trump. The affected subreddits included r/BlackPeopleTwitter, r/3amJokes, r/NFL, r/PhotoshopBattles. An entity with the name of "calvin goh and Melvern" had purportedly claimed responsibility for the massive defacement, and also made violent threats against a Chinese embassy.
August: The US Air Force's Hack-A-Sat event was hosted at DEF CON's virtual conference where groups such as Poland Can Into Space, FluxRepeatRocket, AddVulcan, Samurai, Solar Wine, PFS, 15 Fitty Tree, and 1064CBread competed in order to control a satellite in space. The Poland Can Into Space team stood out for having successfully manipulated a satellite to take a picture of the Moon.
August: The website of Belarusian company "BrestTorgTeknika" was defaced by a hacker nicknaming herself "Queen Elsa", in order to support the 2020–21 Belarusian protests. In it, the page hacker exclaimed "Get Iced Iced already" and "Free Belarus, revolution of our times" with the latter alluding to the famous slogan used by 2019 Hong Kong protests. The results of the hack were then announced on Reddit's /r/Belarus subreddit by a poster under the username "Socookre".
August: Multiple DDoS attacks forced New Zealand's stock market to temporarily shut down.
September: The first suspected death from a cyberattack was reported after cybercriminals hit a hospital in Düsseldorf, Germany with ransomware.
October: A wave of botnet-coordinated ransomware attacks against hospital infrastructure occurred in the United States. State security officials and American corporate security officers were concerned that these attacks might be a prelude to hacking of election infrastructure during the elections of the subsequent month, like similar incidents during the 2016 United States elections and other attacks; there was, however, no evidence that they performed attacks on election infrastructure in 2020.
December: A supply chain attack targeting upstream dependencies from Texas IT service provider "SolarWinds" results in serious, wide-ranging security breaches at the U.S. Treasury and Commerce departments. White House officials did not immediately publicly identify a culprit; Reuters, citing sources "familiar with the investigation", pointed toward the Russian government. An official statement shared by Senate Finance Committee ranking member Ron Wyden said: "Hackers broke into systems in the Departmental Offices division of Treasury, home to the department’s highest-ranking officials."
December: A bomb threat was posted from a Twitter account that was seemingly hacked by persons with the aliases "Omnipotent" and "choonkeat", against Aeroflot Flight 102, a passenger flight with the plane tail number VQ-BIL flying from Moscow to New York City. Due to this, a runway of New York's John F. Kennedy International Airport was temporarily closed, resulting in the delay of Aeroflot Flight 103, a return flight back to Moscow.
December: The Anonymous group initiated 'Christmas gift' defacements against multiple Russian portals including a municipal website in Tomsk and that of a regional football club. Inside the defacements, they made multiple references such as Russian opposition activist Alexei Navalny, freedom protests in Thailand and Belarus, and opposition to the Chinese Communist Party. They also held a mock award based on an event on the game platform Roblox that was called "RB Battles" where YouTubers Tanqr and KreekCraft, the winner and the runner up of the actual game event, were compared to both Taiwan and New Zealand respectively due to the latter's reportedly stellar performance in fighting the COVID-19 pandemic.
2021
January: Microsoft Exchange Server data breach
February: Anonymous announced cyber-attacks on at least five Malaysian websites. As a result, eleven individuals were arrested as suspects.
February: Hackers including those with names of "张卫能 utoyo" and "full_discl0sure" hijacked an events website Aucklife in order to craft a phony bomb threat against the Chinese consulate in Auckland, New Zealand, and also a similar facility in Sydney, Australia. Their motive was a punitive response against China due to COVID-19. As a result, a physical search was conducted at the consulate by New Zealand's Police Specialist Search Group while Aucklife owner Hailey Newton had since regained her access to the website. Wellington-based cybersecurity consultant Adam Boileau remarked that the hack isn't 'highly technical'.
February: The group "Myanmar Hackers" attacked several websites belonging to Myanmar government agencies such as the Central Bank of Myanmar and the military-run Tatmadaw True News Information Team. The group also targeted the Directorate of Investment and Company Administration, Trade Department, Customs Department, Ministry of Commerce, Myawady TV and state-owned broadcaster Myanmar Radio and Television and some private media outlets. A computer technician in Yangon found that the hacks were denial-of-service attacks, while the group's motive is to protest the 2021 Myanmar coup.
April: Over 500 million Facebook users' personal info—including info on 32 million in the United States—was discovered posted on a hackers' website, though Facebook claimed that the information was from a 2019 hack, and that the company had already taken mitigation measures; however, the company declined to say whether it had notified the affected users of the breach.
April: The Ivanti Pulse Connect Secure data breach was reported: unauthorized access to the networks of high-value targets across the U.S. and some E.U. nations since at least June 2020, due to their use of vulnerable, proprietary software.
May: Operation of the U.S. Colonial Pipeline is interrupted by a ransomware cyber operation.
May: On 21 May 2021 Air India was subjected to a cyberattack wherein the personal details of about 4.5 million customers around the world were compromised including passport, credit card details, birth dates, name and ticket information.
July: On 22 July 2021, Saudi Aramco data were leaked by a third-party contractor, and the attackers demanded a $50 million ransom from Saudi Aramco. Saudi Aramco confirmed the incident after a hacker claimed, in a post on the dark web dated June 23, to have stolen 1 terabyte of data about the location of oil refineries and employee data.
August: T-Mobile reported that data files with information from about 40 million former or prospective T-Mobile customers, including first and last names, date of birth, SSN, and driver's license/ID information, were compromised.
September and October: 2021 Epik data breach. Anonymous obtained and released over 400 gigabytes of data from the domain registrar and web hosting company Epik. The data was shared in three releases between September 13 and October 4. The first release included domain purchase and transfer details, account credentials and logins, payment history, employee emails, and unidentified private keys. The hackers claimed they had obtained "a decade's worth of data", including all customer data and records for all domains ever hosted or registered through the company, which included poorly encrypted passwords and other sensitive data stored in plaintext. The second release consisted of bootable disk images and API keys for third-party services used by Epik; the third contained additional disk images and an archive of data belonging to the Republican Party of Texas, which is an Epik customer.
October: On October 6, 2021, an anonymous 4chan user reportedly hacked Twitch and leaked its source code, as well as information on how much the streaming service paid almost 2.4 million streamers since August 2019. Source code from almost 6,000 GitHub repositories was leaked, and the 4chan user said it was "part one" of a much larger release.
November and December: On November 24, Chen Zhaojun of Alibaba's Cloud Security Team reported a zero-day vulnerability (later dubbed Log4Shell) involving the use of arbitrary code execution in the ubiquitous Java logging framework software Log4j. The report was privately disclosed to the project developers of Log4j, a team at The Apache Software Foundation, on November 24. On December 8, Zhaojun contacted the developers again, detailing how the vulnerability was being discussed in public security chat rooms and was already known by some security researchers, and pleaded that the team expedite the solution to the vulnerability in the official release version of Log4j. Early exploitations were noticed on Minecraft servers on December 9; however, forensic analysis indicates that Log4Shell may have been exploited as early as December 1 or 2. Due to the ubiquity of devices running the Log4j software (hundreds of millions) and the simplicity of exploiting the vulnerability, it is considered to be arguably one of the largest and most critical vulnerabilities ever. A portion of the vulnerability was fixed in a patch distributed on December 6, three days before the vulnerability was publicly disclosed on December 9.
2022
February: The German Chaos Computer Club reported more than fifty data leaks. Government institutions and companies from various business sectors were affected. In total, the researchers had access to over 6.4 million personal data records as well as terabytes of log data and source code.
See also
List of cyberattacks
List of data breaches
References
Further reading
Computer Security Hacker History
Computer security
Hacking (computer security) |
422014 | https://en.wikipedia.org/wiki/Wired%20Equivalent%20Privacy | Wired Equivalent Privacy | Wired Equivalent Privacy (WEP) was a security algorithm for 802.11 wireless networks. Introduced as part of the original IEEE 802.11 standard ratified in 1997, its intention was to provide data confidentiality comparable to that of a traditional wired network. WEP, recognizable by its key of 10 or 26 hexadecimal digits (40 or 104 bits), was at one time widely used, and was often the first security choice presented to users by router configuration tools.
In 2003, the Wi-Fi Alliance announced that WEP had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 had been deprecated.
WEP was the only encryption protocol available to 802.11a and 802.11b devices built before the WPA standard, which was available for 802.11g devices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in.
History
WEP was ratified as a Wi-Fi security standard in 1999. The first versions of WEP were not particularly strong, even for the time they were released, due to U.S. restrictions on the export of various cryptographic technology. These restrictions led manufacturers to limit their devices to only 64-bit encryption. When the restrictions were lifted, the encryption was increased to 128-bit. Despite the introduction of 256-bit WEP, 128-bit remains one of the most common implementations.
Encryption details
WEP was included as the privacy component of the original IEEE 802.11 standard ratified in 1997. WEP uses the stream cipher RC4 for confidentiality, and the CRC-32 checksum for integrity. It was deprecated in 2004 and is documented in the current standard.
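The mechanism can be illustrated with a short Python sketch (a simplified illustration only; real WEP also handles 802.11 framing and transmits the IV and key index in the clear). The helper names below are hypothetical and introduced purely for this example:

```python
import struct
import zlib

def rc4(seed: bytes, length: int) -> bytes:
    """Generate `length` bytes of RC4 keystream from `seed` (KSA then PRGA)."""
    s = list(range(256))
    j = 0
    for i in range(256):                                  # key-scheduling algorithm
        j = (j + s[i] + seed[i % len(seed)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                               # pseudo-random generation
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    """Append the CRC-32 integrity check value (ICV), then XOR with RC4(IV || key)."""
    icv = struct.pack("<I", zlib.crc32(plaintext) & 0xFFFFFFFF)
    data = plaintext + icv
    keystream = rc4(iv + key, len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

ciphertext = wep_encrypt(b"hello", bytes(5), b"\x00\x00\x01")   # toy 40-bit key, 24-bit IV
```

Because the integrity value is a linear checksum and the keystream depends only on the IV and the static key, both the confidentiality and integrity properties of this construction turn out to be weak, as discussed below.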
Standard 64-bit WEP uses a 40 bit key (also known as WEP-40), which is concatenated with a 24-bit initialization vector (IV) to form the RC4 key. At the time that the original WEP standard was drafted, the U.S. Government's export restrictions on cryptographic technology limited the key size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).
A 64-bit WEP key is usually entered as a string of 10 hexadecimal (base 16) characters (0–9 and A–F). Each character represents 4 bits; 10 digits of 4 bits each gives 40 bits, and adding the 24-bit IV produces the complete 64-bit WEP key (4 bits × 10 + 24 bits IV = 64 bits of WEP key). Most devices also allow the user to enter the key as 5 ASCII characters (0–9, a–z, A–Z), each of which is turned into 8 bits using the character's byte value in ASCII (8 bits × 5 + 24 bits IV = 64 bits of WEP key); however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.
A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of 4 bits each gives 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key (4 bits × 26 + 24 bits IV = 128 bits of WEP key). Most devices also allow the user to enter it as 13 ASCII characters (8 bits × 13 + 24 bits IV = 128 bits of WEP key).
152-bit and 256-bit WEP systems are available from some vendors. As with the other WEP variants, 24 bits of that is for the IV, leaving 128 or 232 bits for actual protection. These 128 or 232 bits are typically entered as 32 or 58 hexadecimal characters (4 bits × 32 + 24 bits IV = 152 bits of WEP key, 4 bits × 58 + 24 bits IV = 256 bits of WEP key). Most devices also allow the user to enter it as 16 or 29 ASCII characters (8 bits × 16 + 24 bits IV = 152 bits of WEP key, 8 bits × 29 + 24 bits IV = 256 bits of WEP key).
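A minimal sketch of the key-entry arithmetic described above (hypothetical helper names; only the user-entered part of the key is shown, with the 24-bit IV concatenated separately per frame):

```python
def wep_user_key_from_hex(hex_string: str) -> bytes:
    """10 hex digits -> 40-bit (WEP-40) key, 26 hex digits -> 104-bit (WEP-104) key."""
    key = bytes.fromhex(hex_string)
    if len(key) not in (5, 13):
        raise ValueError("expected 10 or 26 hexadecimal digits")
    return key

def wep_user_key_from_ascii(text: str) -> bytes:
    """5 ASCII characters -> 40-bit key, 13 ASCII characters -> 104-bit key."""
    key = text.encode("ascii")
    if len(key) not in (5, 13):
        raise ValueError("expected 5 or 13 ASCII characters")
    return key

user_key = wep_user_key_from_hex("0123456789")   # 10 hex digits = 40 bits
iv = b"\x0a\x0b\x0c"                             # 24-bit initialization vector
print((len(iv) + len(user_key)) * 8)             # 64 -> the advertised "64-bit WEP"
```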
Authentication
Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.
In Open System authentication, the WLAN client does not provide its credentials to the Access Point during authentication. Any client can authenticate with the Access Point and then attempt to associate. In effect, no authentication occurs. Subsequently, WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.
In Shared Key authentication, the WEP key is used for authentication in a four-step challenge-response handshake:
The client sends an authentication request to the Access Point.
The Access Point replies with a clear-text challenge.
The client encrypts the challenge-text using the configured WEP key and sends it back in another authentication request.
The Access Point decrypts the response. If this matches the challenge text, the Access Point sends back a positive reply.
After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.
At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication. Therefore, data can be more easily intercepted and decrypted with Shared Key authentication than with Open System authentication. If privacy is a primary concern, it is more advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication; however, this also means that any WLAN client can connect to the AP. (Both authentication mechanisms are weak; Shared Key WEP is deprecated in favor of WPA/WPA2.)
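The keystream-recovery weakness follows directly from the handshake structure: the challenge travels in the clear and its encrypted copy travels in the next frame, so XOR-ing the two yields the RC4 keystream for that IV. A minimal sketch, assuming the eavesdropper has captured both frames (the helper name is hypothetical):

```python
def recover_keystream(challenge: bytes, icv: bytes, encrypted_response: bytes) -> bytes:
    """XOR the known plaintext (challenge || ICV) with the captured ciphertext.
    No knowledge of the WEP key is needed."""
    known_plaintext = challenge + icv
    return bytes(p ^ c for p, c in zip(known_plaintext, encrypted_response))

# The recovered keystream, reused with the same IV, is enough to answer a later
# Shared Key challenge or to forge/decrypt short frames encrypted under that IV.
```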
Weak security
Because RC4 is a stream cipher, the same traffic key must never be used twice. The purpose of an IV, which is transmitted as plain text, is to prevent any repetition, but a 24-bit IV is not long enough to ensure this on a busy network. The way the IV was used also opened WEP to a related key attack. For a 24-bit IV, there is a 50% probability the same IV will repeat after 5,000 packets.
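The roughly 50% figure is a birthday-problem estimate over the 2^24 possible IV values; a quick numerical check (a sketch assuming IVs are drawn uniformly at random, whereas real drivers often count sequentially or reset to zero, which is even worse):

```python
import math

def iv_collision_probability(n_packets: int, iv_space: int = 2 ** 24) -> float:
    """Probability that at least two of n_packets randomly chosen IVs coincide."""
    log_no_collision = sum(math.log1p(-k / iv_space) for k in range(n_packets))
    return 1.0 - math.exp(log_no_collision)

print(round(iv_collision_probability(5000), 2))    # ~0.53: about even odds of a repeat
print(round(iv_collision_probability(12000), 2))   # ~0.99: repetition is near-certain
```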
In August 2001, Scott Fluhrer, Itsik Mantin, and Adi Shamir published a cryptanalysis of WEP that exploits the way the RC4 ciphers and IV are used in WEP, resulting in a passive attack that can recover the RC4 key after eavesdropping on the network. Depending on the amount of network traffic, and thus the number of packets available for inspection, a successful key recovery could take as little as one minute. If an insufficient number of packets are being sent, there are ways for an attacker to send packets on the network and thereby stimulate reply packets, which can then be inspected to find the key. The attack was soon implemented, and automated tools have since been released. It is possible to perform the attack with a personal computer, off-the-shelf hardware, and freely available software such as aircrack-ng to crack any WEP key in minutes.
Cam-Winget et al. surveyed a variety of shortcomings in WEP. They write "Experiments in the field show that, with proper equipment, it is practical to eavesdrop on WEP-protected networks from distances of a mile or more from the target." They also reported two generic weaknesses:
the use of WEP was optional, resulting in many installations never even activating it, and
by default, WEP relies on a single shared key among users, which leads to practical problems in handling compromises, which often leads to ignoring compromises.
In 2005, a group from the U.S. Federal Bureau of Investigation gave a demonstration where they cracked a WEP-protected network in three minutes using publicly available tools. Andreas Klein presented another analysis of the RC4 stream cipher. Klein showed that there are more correlations between the RC4 keystream and the key than the ones found by Fluhrer, Mantin, and Shamir which can additionally be used to break WEP in WEP-like usage modes.
In 2006, Bittau, Handley, and Lackey showed that the 802.11 protocol itself can be used against WEP to enable earlier attacks that were previously thought impractical. After eavesdropping a single packet, an attacker can rapidly bootstrap to be able to transmit arbitrary data. The eavesdropped packet can then be decrypted one byte at a time (by transmitting about 128 packets per byte to decrypt) to discover the local network IP addresses. Finally, if the 802.11 network is connected to the Internet, the attacker can use 802.11 fragmentation to replay eavesdropped packets while crafting a new IP header onto them. The access point can then be used to decrypt these packets and relay them on to a buddy on the Internet, allowing real-time decryption of WEP traffic within a minute of eavesdropping the first packet.
In 2007, Erik Tews, Andrei Pychkine, and Ralf-Philipp Weinmann were able to extend Klein's 2005 attack and optimize it for usage against WEP. With the new attack it is possible to recover a 104-bit WEP key with a probability of 50% using only 40,000 captured packets. For 60,000 available data packets, the success probability is about 80% and for 85,000 data packets about 95%. Using active techniques like Wi-Fi deauthentication attacks and ARP re-injection, 40,000 packets can be captured in less than one minute under good conditions. The actual computation takes about 3 seconds and 3 MB of main memory on a Pentium-M 1.7 GHz and can additionally be optimized for devices with slower CPUs. The same attack can be used for 40-bit keys with an even higher success probability.
In 2008 the Payment Card Industry (PCI) Security Standards Council updated the Data Security Standard (DSS) to prohibit use of WEP as part of any credit-card processing after 30 June 2010, and prohibit any new system from being installed that uses WEP after 31 March 2009. The use of WEP contributed to the TJ Maxx parent company network invasion.
Caffe Latte attack
The Caffe Latte attack is another way to defeat WEP. It is not necessary for the attacker to be in the area of the network using this exploit. By using a process that targets the Windows wireless stack, it is possible to obtain the WEP key from a remote client. By sending a flood of encrypted ARP requests, the assailant takes advantage of the shared key authentication and the message modification flaws in 802.11 WEP. The attacker uses the ARP responses to obtain the WEP key in less than 6 minutes.
Remedies
Use of encrypted tunneling protocols (e.g., IPSec, Secure Shell) can provide secure data transmission over an insecure network. However, replacements for WEP have been developed with the goal of restoring security to the wireless network itself.
802.11i (WPA and WPA2)
The recommended solution to WEP security problems is to switch to WPA2. WPA was an intermediate solution for hardware that could not support WPA2. Both WPA and WPA2 are much more secure than WEP. To add support for WPA or WPA2, some old Wi-Fi access points might need to be replaced or have their firmware upgraded. WPA was designed as an interim software-implementable solution for WEP that could forestall immediate deployment of new hardware. However, TKIP (the basis of WPA) has reached the end of its designed lifetime, has been partially broken, and has been officially deprecated with the release of the 802.11-2012 standard.
Implemented non-standard fixes
WEP2
This stopgap enhancement to WEP was present in some of the early 802.11i drafts. It was implementable on some (not all) hardware not able to handle WPA or WPA2, and extended both the IV and the key values to 128 bits. It was hoped to eliminate the duplicate IV deficiency as well as stop brute force key attacks.
After it became clear that the overall WEP algorithm was deficient (and not just the IV and key sizes) and would require even more fixes, both the WEP2 name and original algorithm were dropped. The two extended key lengths remained in what eventually became WPA's TKIP.
WEPplus
WEPplus, also known as WEP+, is a proprietary enhancement to WEP by Agere Systems (formerly a subsidiary of Lucent Technologies) that enhances WEP security by avoiding "weak IVs". It is only completely effective when WEPplus is used at both ends of the wireless connection. As this cannot easily be enforced, it remains a serious limitation. It also does not necessarily prevent replay attacks, and is ineffective against later statistical attacks that do not rely on weak IVs.
Dynamic WEP
Dynamic WEP refers to the combination of 802.1x technology and the Extensible Authentication Protocol. Dynamic WEP changes WEP keys dynamically. It is a vendor-specific feature provided by several vendors such as 3Com.
The dynamic change idea made it into 802.11i as part of TKIP, but not for the actual WEP algorithm.
See also
Stream cipher attack
Wireless cracking
Wi-Fi Protected Access
References
External links
The Evolution of 802.11 Wireless Security, by Kevin Benton, April 18th 2010
Broken cryptography algorithms
Cryptographic protocols
Computer network security
IEEE 802.11
Wireless networking |
422017 | https://en.wikipedia.org/wiki/Wi-Fi%20Protected%20Access | Wi-Fi Protected Access | Wi-Fi Protected Access (WPA), Wi-Fi Protected Access II (WPA2), and Wi-Fi Protected Access 3 (WPA3) are the three security and security certification programs developed by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP).
WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004) standard.
In January 2018, Wi-Fi Alliance announced the release of WPA3 with several security improvements over WPA2.
Versions
WPA
The Wi-Fi Alliance intended WPA as an intermediate measure to take the place of WEP pending the availability of the full IEEE 802.11i standard. WPA could be implemented through firmware upgrades on wireless network interface cards designed for WEP that began shipping as far back as 1999. However, since the changes required in the wireless access points (APs) were more extensive than those needed on the network cards, most pre-2003 APs could not be upgraded to support WPA.
The WPA protocol implements the Temporal Key Integrity Protocol (TKIP). WEP used a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromised WEP.
WPA also includes a Message Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces the cyclic redundancy check (CRC) that was used by the WEP standard. CRC's main flaw was that it did not provide a sufficiently strong data integrity guarantee for the packets it handled. Well-tested message authentication codes existed to solve these problems, but they required too much computation to be used on old network cards. WPA uses a message integrity check algorithm called TKIP to verify the integrity of the packets. TKIP is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of the message integrity code hash function, named Michael, to retrieve the keystream from short packets to use for re-injection and spoofing.
WPA2
Ratified in 2004, WPA2 replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it includes mandatory support for CCMP, an AES-based encryption mode. Certification began in September, 2004. From March 13, 2006, to June 30, 2020, WPA2 certification was mandatory for all new devices to bear the Wi-Fi trademark.
WPA3
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2. Certification began in June 2018.
The new standard uses an equivalent 192-bit cryptographic strength in WPA3-Enterprise mode (AES-256 in GCM mode with SHA-384 as HMAC), and still mandates the use of CCMP-128 (AES-128 in CCM mode) as the minimum encryption algorithm in WPA3-Personal mode.
The WPA3 standard also replaces the pre-shared key (PSK) exchange with Simultaneous Authentication of Equals (SAE) exchange, a method originally introduced with IEEE 802.11s, resulting in a more secure initial key exchange in personal mode and forward secrecy. The Wi-Fi Alliance also claims that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.
Protection of management frames as specified in the IEEE 802.11w amendment is also enforced by the WPA3 specifications.
Hardware support
WPA has been designed specifically to work with wireless hardware produced prior to the introduction of WPA protocol, which provides inadequate security through WEP. Some of these devices support WPA only after applying firmware upgrades, which are not available for some legacy devices.
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security protocols. WPA3 certification has been required since July 1, 2020. The newer versions may not work with some older network cards.
WPA terminology
Different WPA versions and protection mechanisms can be distinguished based on the target end-user (according to the method of authentication key distribution), and the encryption protocol used.
Target users (authentication key distribution)
WPA-Personal Also referred to as WPA-PSK (pre-shared key) mode, this is designed for home and small office networks and doesn't require an authentication server. Each wireless network device encrypts the network traffic by deriving its 128-bit encryption key from a 256-bit shared key. This key may be entered either as a string of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable ASCII characters. This pass-phrase-to-PSK mapping is nevertheless not binding, as Annex J is informative in the latest 802.11 standard. If ASCII characters are used, the 256-bit key is calculated by applying the PBKDF2 key derivation function to the passphrase, using the SSID as the salt and 4096 iterations of HMAC-SHA1 (a short code sketch of this derivation follows this list). WPA-Personal mode is available on all three WPA versions.
WPA-Enterprise Also referred to as WPA-802.1X mode, and sometimes just WPA (as opposed to WPA-PSK), this is designed for enterprise networks and requires a RADIUS authentication server. This requires a more complicated setup, but provides additional security (e.g. protection against dictionary attacks on short passwords). Various kinds of the Extensible Authentication Protocol (EAP) are used for authentication. WPA-Enterprise mode is available on all three WPA versions.
Wi-Fi Protected Setup (WPS) This is an alternative authentication key distribution method intended to simplify and strengthen the process, but which, as widely implemented, creates a major security hole via WPS PIN recovery.
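As noted under WPA-Personal above, the passphrase-to-PSK mapping is PBKDF2 with HMAC-SHA1, 4096 iterations, and the SSID as salt; it can be reproduced with the Python standard library (a minimal sketch, with a hypothetical helper name):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA/WPA2 pre-shared key from a passphrase and SSID."""
    return hashlib.pbkdf2_hmac("sha1",
                               passphrase.encode("utf-8"),
                               ssid.encode("utf-8"),
                               4096,
                               dklen=32)   # 32 bytes = 256 bits

print(wpa_psk("correct horse battery staple", "ExampleSSID").hex())
```

Because the SSID acts as the salt, the same passphrase yields a different PSK on every differently named network, which is why precomputed tables are only practical for common SSIDs.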
Encryption protocol
TKIP (Temporal Key Integrity Protocol) The RC4 stream cipher is used with a 128-bit per-packet key, meaning that it dynamically generates a new key for each packet. This is used by WPA.
CCMP (CTR mode with CBC-MAC Protocol) The protocol used by WPA2, based on the Advanced Encryption Standard (AES) cipher along with strong message authenticity and integrity checking, is significantly stronger in protection for both privacy and integrity than the RC4-based TKIP that is used by WPA. Among informal names are "AES" and "AES-CCMP". According to the 802.11n specification, this encryption protocol must be used to achieve fast 802.11n high bitrate schemes, though not all implementations enforce this. Otherwise, the data rate will not exceed 54 Mbit/s.
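For illustration, AES in CCM mode, the construction underlying CCMP, can be exercised with the widely used third-party cryptography package; a minimal sketch using CCMP-like parameters (128-bit key, 13-byte nonce, 8-byte tag), not real 802.11 frame formatting:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # CCMP-128 uses AES with a 128-bit key
aesccm = AESCCM(key, tag_length=8)          # CCMP carries an 8-byte MIC
nonce = os.urandom(13)                      # CCMP nonces are 13 bytes, never reused per key
header = b"header fields (authenticated only)"
payload = b"frame body (authenticated and encrypted)"

ciphertext = aesccm.encrypt(nonce, payload, header)
assert aesccm.decrypt(nonce, ciphertext, header) == payload
```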
EAP extensions under WPA and WPA2 Enterprise
Originally, only EAP-TLS (Extensible Authentication Protocol – Transport Layer Security) was certified by the Wi-Fi Alliance. In April 2010, the Wi-Fi Alliance announced the inclusion of additional EAP types to its WPA- and WPA2-Enterprise certification programs. This was to ensure that WPA-Enterprise certified products can interoperate with one another.
The certification program includes the following EAP types:
EAP-TLS (previously tested)
EAP-TTLS/MSCHAPv2 (April 2005)
PEAPv0/EAP-MSCHAPv2 (April 2005)
PEAPv1/EAP-GTC (April 2005)
PEAP-TLS
EAP-SIM (April 2005)
EAP-AKA (April 2009)
EAP-FAST (April 2009)
802.1X clients and servers developed by specific firms may support other EAP types. This certification is an attempt for popular EAP types to interoperate; their failure to do so is one of the major issues preventing rollout of 802.1X on heterogeneous networks.
Commercial 802.1X servers include Microsoft Internet Authentication Service and Juniper Networks Steelbelted RADIUS as well as Aradial Radius server. FreeRADIUS is an open source 802.1X server.
Security issues
Weak password
Pre-shared key WPA and WPA2 remain vulnerable to password cracking attacks if users rely on a weak password or passphrase. WPA passphrase hashes are seeded from the SSID name and its length; rainbow tables exist for the top 1,000 network SSIDs and a multitude of common passwords, requiring only a quick lookup to speed up cracking WPA-PSK.
Brute forcing of simple passwords can be attempted using the Aircrack Suite starting from the four-way authentication handshake exchanged during association or periodic re-authentication.
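The core of such an offline attack is simply re-running the passphrase-to-PSK derivation for every candidate word and testing the result against material from the captured handshake; a minimal sketch, where `matches_handshake` is a hypothetical stand-in for the PTK derivation and MIC check that tools such as aircrack-ng actually perform:

```python
import hashlib
from typing import Callable, Iterable, Optional

def candidate_psk(passphrase: str, ssid: str) -> bytes:
    """Same PBKDF2 derivation as a legitimate client would perform."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

def dictionary_attack(ssid: str,
                      wordlist: Iterable[str],
                      matches_handshake: Callable[[bytes], bool]) -> Optional[str]:
    """Return the first candidate passphrase whose derived PSK verifies the handshake."""
    for word in wordlist:
        if matches_handshake(candidate_psk(word, ssid)):
            return word
    return None
```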
WPA3 replaces cryptographic protocols susceptible to off-line analysis with protocols that require interaction with the infrastructure for each guessed password, supposedly placing temporal limits on the number of guesses. However, design flaws in WPA3 enable attackers to plausibly launch brute-force attacks (see Dragonblood attack).
Lack of forward secrecy
WPA and WPA2 don't provide forward secrecy, meaning that once an adversary discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK, whether transmitted in the future or in the past, and such traffic can be passively and silently collected. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared with anyone in that place. In other words, WPA only protects from attackers who don't have access to the password. Because of that, it's safer to use Transport Layer Security (TLS) or similar on top of it for the transfer of any sensitive data. However, starting with WPA3, this issue has been addressed.
WPA packet spoofing and decryption
Mathy Vanhoef and Frank Piessens significantly improved upon the WPA-TKIP attacks of Erik Tews and Martin Beck. They demonstrated how to inject an arbitrary number of packets, with each packet containing at most 112 bytes of payload. This was demonstrated by implementing a port scanner, which can be executed against any client using WPA-TKIP. Additionally they showed how to decrypt arbitrary packets sent to a client. They mentioned this can be used to hijack a TCP connection, allowing an attacker to inject malicious JavaScript when the victim visits a website.
In contrast, the Beck-Tews attack could only decrypt short packets with mostly known content, such as ARP messages, and only allowed injection of 3 to 7 packets of at most 28 bytes. The Beck-Tews attack also requires quality of service (as defined in 802.11e) to be enabled, while the Vanhoef-Piessens attack does not. Neither attack leads to recovery of the shared session key between the client and Access Point. The authors say using a short rekeying interval can prevent some attacks but not all, and strongly recommend switching from TKIP to AES-based CCMP.
Halvorsen and others show how to modify the Beck-Tews attack to allow injection of 3 to 7 packets having a size of at most 596 bytes. The downside is that their attack requires substantially more time to execute: approximately 18 minutes and 25 seconds. In other work Vanhoef and Piessens showed that, when WPA is used to encrypt broadcast packets, their original attack can also be executed. This is an important extension, as substantially more networks use WPA to protect broadcast packets, than to protect unicast packets. The execution time of this attack is on average around 7 minutes, compared to the 14 minutes of the original Vanhoef-Piessens and Beck-Tews attack.
The vulnerabilities of TKIP are significant because WPA-TKIP had been held before to be an extremely safe combination; indeed, WPA-TKIP is still a configuration option upon a wide variety of wireless routing devices provided by many hardware vendors. A survey in 2013 showed that 71% still allow usage of TKIP, and 19% exclusively support TKIP.
WPS PIN recovery
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with the Wi-Fi Protected Setup (WPS) feature, regardless of which encryption method they use. Most recent models have this feature and enable it by default. Many consumer Wi-Fi device manufacturers had taken steps to eliminate the potential of weak passphrase choices by promoting alternative methods of automatically generating and distributing strong keys when users add a new wireless adapter or appliance to a network. These methods include pushing buttons on the devices or entering an 8-digit PIN.
The Wi-Fi Alliance standardized these methods as Wi-Fi Protected Setup; however, the PIN feature as widely implemented introduced a major new security flaw. The flaw allows a remote attacker to recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few hours. Users have been urged to turn off the WPS feature, although this may not be possible on some router models. Also, the PIN is written on a label on most Wi-Fi routers with WPS, and cannot be changed if compromised.
WPA3 introduces a new alternative for the configuration of devices that lack sufficient user interface capabilities by allowing nearby devices to serve as an adequate UI for network provisioning purposes, thus mitigating the need for WPS.
MS-CHAPv2 and lack of AAA server CN validation
Several weaknesses have been found in MS-CHAPv2, some of which severely reduce the complexity of brute-force attacks, making them feasible with modern hardware. In 2012, the complexity of breaking MS-CHAPv2 was reduced to that of breaking a single DES key in work by Moxie Marlinspike and Marsh Ray. Marlinspike advised: "Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else."
Tunneled EAP methods using TTLS or PEAP which encrypt the MSCHAPv2 exchange are widely deployed to protect against exploitation of this vulnerability. However, prevalent WPA2 client implementations during the early 2000s were prone to misconfiguration by end users, or in some cases (e.g. Android), lacked any user-accessible way to properly configure validation of AAA server certificate CNs. This extended the relevance of the original weakness in MSCHAPv2 within MiTM attack scenarios. Under stricter WPA2 compliance tests announced alongside WPA3, certified client software will be required to conform to certain behaviors surrounding AAA certificate validation.
Hole196
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared Group Temporal Key (GTK). It can be used to conduct man-in-the-middle and denial-of-service attacks. However, it assumes that the attacker is already authenticated against the Access Point and thus in possession of the GTK.
Predictable Group Temporal Key (GTK)
In 2016 it was shown that the WPA and WPA2 standards contain an insecure expository random number generator (RNG). Researchers showed that, if vendors implement the proposed RNG, an attacker is able to predict the group key (GTK) that is supposed to be randomly generated by the access point (AP). Additionally, they showed that possession of the GTK enables the attacker to inject any traffic into the network, and allowed the attacker to decrypt unicast internet traffic transmitted over the wireless network. They demonstrated their attack against an Asus RT-AC51U router that uses the MediaTek out-of-tree drivers, which generate the GTK themselves, and showed the GTK can be recovered within two minutes or less. Similarly, they demonstrated the keys generated by Broadcom access daemons running on VxWorks 5 and later can be recovered in four minutes or less, which affects, for example, certain versions of Linksys WRT54G and certain Apple AirPort Extreme models. Vendors can defend against this attack by using a secure RNG. By doing so, Hostapd running on Linux kernels is not vulnerable against this attack and thus routers running typical OpenWrt or LEDE installations do not exhibit this issue.
KRACK attack
In October 2017, details of the KRACK (Key Reinstallation Attack) attack on WPA2 were published. The KRACK attack is believed to affect all variants of WPA and WPA2; however, the security implications vary between implementations, depending upon how individual developers interpreted a poorly specified part of the standard. Software patches can resolve the vulnerability but are not available for all devices.
Dragonblood attack
In April 2019, serious design flaws in WPA3 were found which allow attackers to perform downgrade attacks and side-channel attacks, enabling brute-forcing the passphrase, as well as launching denial-of-service attacks on Wi-Fi base stations.
References
External links
Official standards document:
Wi-Fi Alliance's Interoperability Certificate page
Weakness in Passphrase Choice in WPA Interface, by Robert Moskowitz. Retrieved March 2, 2004.
The Evolution of 802.11 Wireless Security, by Kevin Benton, April 18th 2010
Computer network security
Cryptographic protocols
IEEE 802.11 |
422068 | https://en.wikipedia.org/wiki/List%20of%20Israelis | List of Israelis | Israelis ( Yiśraʾelim, al-ʾIsrāʾīliyyin) are the citizens or permanent residents of the State of Israel, a multiethnic state populated by people of different ethnic backgrounds. The largest ethnic groups in Israel are Jews (75%), followed by Arabs (20%) and other minorities (5%).
Academics
Archaeology
Israel Finkelstein
Amihai Mazar
Benjamin Mazar
Eilat Mazar
Yigael Yadin
Amit Romano
Biology and medicine
Aaron Valero – Professor of Medicine, founder of Faculty of Medicine at the Technion, director of government hospital
Aaron Ciechanover and Avram Hershko – ubiquitin system; Lasker Award (2000), Nobel Prize in Chemistry (2004)
Moshe Feldenkrais – invented Feldenkrais Method used in movement therapy
Hossam Haick – inventor of an electric nose for diagnosis of cancer
Israel Hanukoglu – structures of cytoskeletal keratins, NADP binding proteins, steroidogenic enzymes, Epithelial Sodium Channels (ENaC)
Gavriel Iddan – inventor of capsule endoscopy
Benjamin Kahn – marine biologist, defender of the Red Sea reef
Alexander Levitzki – cancer research; Wolf Prize in Medicine (2005)
Yadin Dudai – memory research
Gideon Mer – scientist, malaria control
Saul Merin – ophthalmologist, author of Inherited Eye Diseases
Raphael Mechoulam – chemist, discoverer of tetrahydrocannabinol and anandamide
Leo Sachs – blood cell research; Wolf Prize in Medicine (1980)
Asya Rolls – psychoneuroimmunologist
Michael Sela and Ruth Arnon – developed Copaxone; Wolf Prize in Medicine (1998)
Rahel Straus (1880–1963) – German-Jewish medical doctor and feminist
Joel Sussman – 3D structure of acetylcholinesterase, Elkeles Prize for Research in Medicine (2005)
Meir Wilchek – affinity chromatography; Wolf Prize in Medicine (1987)
Ada Yonath – structure of ribosome, Nobel Prize in Chemistry (2009)
Amotz Zahavi – Handicap Principle
Abraham Zangen – psychobiology
Computing and mathematics
Ron Aharoni – mathematician
Noga Alon – mathematician, computer scientist, winner of the Gödel Prize (2005)
Shimshon Amitsur – mathematician ring theory abstract algebra
Robert Aumann – mathematician game theory; Nobel Memorial Prize in Economic Sciences (2005)
Amir Ban and Shay Bushinsky – programmers of Junior (chess)
Yehoshua Bar-Hillel – machine translation
Joseph Bernstein – mathematician
Eli Biham – differential cryptanalysis
Yair Censor – mathematician
Aryeh Dvoretzky – mathematician, eighth president of the Weizmann Institute of Science
Uriel Feige – computer scientist, winner of the Gödel Prize (2001)
Abraham Fraenkel – ZF set theory
Hillel Furstenberg – mathematician; Wolf Prize in Mathematics (2006/7)
Shafi Goldwasser – computer scientist, winner of the Gödel Prize (1993 and 2001)
David Harel – computer science; Israel Prize (2004)
Gad M. Landau – computer scientist
Abraham Lempel and Jacob Ziv – LZW compression; IEEE Richard W. Hamming Medal (2007 and 1995)
Joram Lindenstrauss – mathematician Johnson–Lindenstrauss lemma
Elon Lindenstrauss – mathematician
Michel Loève – probabilist
Joel Moses – MIT provost and writer of Macsyma
Yoram Moses – computer scientist, winner of the Gödel Prize (1997)
Judea Pearl – artificial intelligence, philosophy of action; Turing Award (2011)
Ilya Piatetski-Shapiro – representation theory; Wolf Prize in Mathematics (1990)
Amir Pnueli – temporal logic; Turing Award (1996)
Michael O. Rabin – nondeterminism, primality testing; Turing Award (1976)
Shmuel Safra – computer scientist, winner of the Gödel Prize (2001)
Nir Shavit – computer scientist, winner of the Gödel Prize (2004)
Adi Shamir – RSA encryption, differential cryptanalysis; Turing Award (2002)
Saharon Shelah – logic; Wolf Prize in Mathematics (2001)
Ehud Shapiro – Concurrent Prolog, DNA computing pioneer
Moshe Y. Vardi – computer scientist, winner of the Gödel Prize (2000)
Avi Wigderson – randomized algorithms; Nevanlinna Prize (1994)
Doron Zeilberger – combinatorics
Engineering
David Faiman – solar engineer and director of the National Solar Energy Center
Liviu Librescu – Professor of Engineering Science and Mechanics at Virginia Tech, killed in the Virginia Tech massacre
Hagit Messer Yaron – professor of electrical engineering
Moshe Zakai – electrical engineering
Jacob Ziv – electrical engineering
Humanities
Aharon Dolgopolsky – linguist: Nostratic
Moshe Goshen-Gottstein – Biblical scholar
Elias Khoury – law
Hans Jakob Polotsky – linguist
Chaim Rabin – Biblical scholar
Alice Shalvi – English literature, educator
Gershon Shaked – Hebrew literature
Shemaryahu Talmon – Biblical scholar
Emanuel Tov – Biblical scholar
Ghil'ad Zuckermann – linguist, revivalist
Philosophy
Martin Buber
Berl Katznelson
Yeshayahu Leibowitz
Avishai Margalit
Joseph Raz
Gershom Scholem
Physics and chemistry
Yakir Aharonov – Aharonov–Bohm effect; Wolf Prize in Physics (1998)
Amiram Barkai – biochemist
Jacob Bekenstein – black hole thermodynamics; Wolf Prize in Physics (2012)
David Deutsch – quantum computing pioneer; Paul Dirac Prize (1998)
Joshua Jortner and Rafi Levine – molecular energy; Wolf Prize in Chemistry (1988)
Josef Imry – physicist
Aaron Katzir – physical chemistry
Ephraim Katzir – immobilized enzymes; Japan Prize (1985); fourth president of Israel
Michael Levitt – Nobel Prize in Chemistry (2013)
Zvi Lipkin – physicist
Dan T. Major – professor of chemistry
Boris Mavashev – seismologist
Mordehai Milgrom – Modified Newtonian Dynamics (MOND)
Yuval Ne'eman – the "Eightfold way"
Asher Peres – quantum theory
Giulio Racah – spectroscopy
Nathan Rosen – EPR paradox
Nathan Seiberg – string theory
Dan Shechtman – quasicrystals; Wolf Prize in Physics (1999), Nobel Prize in Chemistry (2011)
Igal Talmi – nuclear physics
Reshef Tenne – discovered inorganic fullerenes and inorganic nanotubes
Arieh Warshel – Nobel Prize in Chemistry (2013)
Chaim Weizmann – acetone production
Uri Banin – chemist
Social sciences
Yehuda Bauer – historian
Daniel Elazar – political scientist
Esther Farbstein – historian
Haim Ginott – psychologist: child psychology
Eliyahu Goldratt – business consultant: Theory of Constraints
Louis Guttman – sociologist
Yuval Noah Harari – historian and author who wrote best selling book Sapiens: A Brief History of Humankind
Michael Harris – public policy scholar and university administrator
Elhanan Helpman – economist: international trade
Daniel Kahneman – behavioural scientist: prospect theory; Nobel Memorial Prize in Economic Sciences (2002)
Smadar Lavie – anthropologist
Benny Morris – historian
Erich Neumann – analytical psychologist: development, consciousness
Nurit Peled-Elhanan – educator
Renee Rabinowitz – psychologist and lawyer
Sheizaf Rafaeli – management, information, communication
Anat Rafaeli – organisational behaviour
Ariel Rubinstein – economist
Moshe Sharon – historian
Avi Shlaim – historian
Abraham Solomonick – semiotician, linguist
Amos Tversky – behavioral scientist: prospect theory with Daniel Kahneman
Hanan Yoran – historian
Activists
Uri Avnery – peace activist, Gush Shalom
Yael Dayan – writer, politician, activist
Esther Eillam – feminist activist
Uzi Even – gay rights activist
Yehuda Glick – activist for Jewish rights at the Temple Mount
Shula Keshet – Mizrahi feminist, activist and artist
Uri Savir – peace negotiator, Peres Center for Peace
Israel Shahak – political activist
Natan Sharansky – Soviet-era human rights activist
Ronny Edry and Michal Tamir – originators of the Israel-Loves-Iran peace movement and its offshoots
Architects
Michael Arad
Ram Karmi
Richard Kauffmann
David Kroyanker
David Resnick
Moshe Safdie
Arieh Sharon
Athletes
Association Football
Ryan Adeleye – Israeli-American defender (Hapoel Be'er Sheva)
Eyal Ben Ami – midfielder various clubs, national team
Yaniv Ben-Nissan – midfielder
Dudu Aouate – goalkeeper (RCD Mallorca, national team)
Jonathan Assous – defensive midfielder (Hapoel Petah Tikva), of French origin
Gai Assulin – winger/attacking midfielder (Manchester City, national team)
Ronen Badash – midfielder
Pini Balili – striker (Bnei Yehuda Tel Aviv, Israel national team)
Yossi Benayoun – attacking midfielder, national team captain, Hapoel Be'er Sheva, Maccabi Haifa, Racing Santander, West Ham United, Liverpool, Chelsea
Gil Cain – defender, Hapoel Azor
David "Dedi" Ben Dayan – left defender (Hapoel Tel Aviv, national team)
Tal Ben Haim – center back/right back, Maccabi Tel Aviv, Bolton Wanderers, Chelsea, West Ham United
Eyal Berkovic – midfielder (national team), Maccabi Haifa, Southampton, West Ham United, Celtic, Manchester City, Portsmouth
Daniel Brailovski – midfielder (Argentina, Uruguay, and Israel national teams)
Roberto Colautti – Argentine-born striker
Tomer Chencinski – goalkeeper (Vaasan Palloseura)
Avi Cohen – defender, Liverpool and national team
Tamir Cohen – midfielder (Bolton Wanderers and national team)
Rami Gershon – centre back / left back
Tvrtko Kale – Croatia/Israel, goalkeeper (Hapoel Haifa)
Yaniv Katan – forward/winger (Maccabi Haifa, national team)
Eli Ohana – won UEFA Cup Winners' Cup and Bravo Award (most outstanding young player in Europe); national team; manager
Haim Revivo – attacking/side midfielder (Israel national team), Maccabi Haifa, Celta de Vigo, Fenerbahçe, Galatasaray
Ronnie Rosenthal – left winger/striker (Israel national team), Maccabi Haifa, Liverpool, Tottenham, Watford
Ben Sahar – striker/winger (Hapoel Tel Aviv, national team)
Mordechai Spiegler – striker (Israel national team), manager
Idan Tal – midfielder (Beitar Jerusalem FC and Israel national team)
Toto Tammuz – Nigerian-born forward
Nicolás Tauber – goalkeeper (Chacarita Juniors) of Argentine origin
Salim Tuama – player for Hapoel Tel Aviv who previously played for Standard Liège, Maccabi Petah Tikva, Kayserispor, Larissa and the youth club Gadna Tel Aviv Yehuda
Yochanan Vollach – defender (Israel national team); current president of Maccabi Haifa
Pini Zahavi – UK-based super-agent
Itzik Zohar – attacking midfielder (Israel national team), Maccabi Jaffa, Maccabi Tel Aviv, Royal Antwerp, Beitar Jerusalem, Crystal Palace, Maccabi Haifa, Maccabi Herzliya, Maccabi Netanya, F.C. Ashdod, Hapoel Nazareth Illit
Basketball
Miki Berkovich – Maccabi Tel Aviv
David Blu – (formerly "Bluthenthal"), Euroleague 6' 7" forward (Maccabi Tel Aviv)
Tal Brody – Euroleague 6' 2" shooting guard, Maccabi Tel Aviv
Tal Burstein – Maccabi Tel Aviv
Omri Casspi – 6' 9" small forward, drafted in 1st round of 2009 NBA Draft (Golden State Warriors)
Tanhum Cohen-Mintz – 6' 8" center; 2x Euroleague All-Star
Shay Doron – WNBA 5' 9" guard, University of Maryland (New York Liberty)
Lior Eliyahu – 6' 9" power forward, NBA draft 2006 (Orlando Magic; traded to Houston Rockets), but completed mandatory service in the Israel Defense Forces and played in the Euroleague (Maccabi Tel Aviv)
Tamir Goodman – U.S. and Israel, 6' 3" shooting guard
Yotam Halperin – 6' 5" guard, drafted in 2006 NBA draft by Seattle SuperSonics (Olympiacos)
T. J. Leaf – NBA basketball player
Gal Mekel – NBA basketball player
Yehoshua Rozin – basketball coach
Derrick Sharp – American-Israeli basketball player
Amit Tamir – 6' 10" center/forward, University of California, PAOK Thessaloniki (Hapoel Jerusalem)
Bodybuilding
Alana Shipp – American/Israeli IFBB professional bodybuilder
Boxing
Salamo Arouch (The Ballet Dancer) – middleweight champion of Greece, lightweight, welterweight, middleweight. He survived the Holocaust by boxing (over 200 bouts) for the entertainment of Nazi officers in Auschwitz Concentration Camp. His story was portrayed in the 1989 film "Triumph of the Spirit"
Sarah Avraham – kickboxer, 2014 Women's World Thai-Boxing Champion in 57–63 kilos (125–140 pounds)
Hagar Finer – WIBF bantamweight champion
Yuri Foreman – U.S. middleweight and World Boxing Association super welterweight champion
Roman Greenberg – ("The Lion from Zion"), International Boxing Organization's Intercontinental heavyweight champion
Pavlo Ishchenko – 2-time European Amateur Boxing Championships medalist, and European Games medalist
Yulia Sachkov – world champion kickboxer
Fencing
Boaz Ellis (born 1981) – foil, 5-time Israeli champion
Yuval Freilich (born 1995) – épée, 2019 European Epee Champion
Lydia Hatoel-Zuckerman (born 1963) – foil, 6-time Israeli champion
Delila Hatuel (born 1980) – Olympic foil fencer
Noam Mills (born 1986) – epee, junior female world champion, 4-time Israeli champion
Ayelet Ohayon (born 1974) – foil, European champion
Tomer Or (born 1978) – foil, junior world champion
Andre Spitzer (1945–1972) – killed by terrorists
Figure skating
Alexei Beletski – ice dancer, Olympian
Oleksii Bychenko – figure skater, Olympian, European silver medallist 2016
Galit Chait – ice dancer, World Championship bronze 2002
Natalia Gudina – figure skater, Olympian
Tamar Katz – figure skater
Lionel Rumi – ice dancer
Sergei Sakhnovsky – ice dancer, World Championship Bronze medal 2002
Daniel Samohin – figure skater, Olympian, 2016 World Junior Champion
Michael Shmerkin – figure skater
Alexandra Zaretski – ice dancer, Olympian
Roman Zaretski – ice dancer, Olympian
Golf
Laetitia Beck – golfer
Gymnastics
Artem Dolgopyat (born 1997) – artistic gymnast, (World silver)
Alexander Shatilov – World bronze (artistic gymnast; floor exercise)
Veronika Vitenberg – rhythmic gymnast
Judo
Yael Arad – judoka (Olympic silver: 1992, European champion: 1993, world silver: 1993). First Israeli Olympic medalist; light-middleweight
Yarden Gerbi – judoka (Olympic bronze: 2016)
Andrian Kordon – European Championship bronze; heavyweight
Daniela Krukower – Israel/Argentina judoka, World Champion (under 63 kg)
Yoel Razvozov – 2-time European Championship silver; lightweight
Or Sasson – judoka (Olympic bronze: 2016)
Oren Smadja – judoka (Olympic bronze: 1992; lightweight)
Ehud Vaks – judoka (half-lightweight)
Gal Yekutiel – European championship bronze
Ariel Zeevi – judoka (European champion: 2000, 2003, 2004; Olympic bronze: 2004; 100 kg)
Motor racing
Alon Day – racing driver
Chanoch Nissany – Formula One racing test-driver
Roy Nissany – World Series Formula V8 3.5 racing driver
Sailing
Zefania Carmel – yachtsman, world champion (420 class)
Gal Fridman – windsurfer (Olympic gold: 2004 (Israel's first gold medalist), bronze: 1996 (Mistral class); world champion: 2002)
Lee Korzits – windsurfer (two-time Olympian and four-time world champion)
Lydia Lazarov – yachting world champion (420 class)
Nimrod Mashiah – windsurfer; World Championship silver, ranked # 1 in world.
Katy Spychakov – windsurfer; World Championship silver
Shahar Tzuberi – windsurfer, Olympic bronze (RS:X discipline); 2009 & 2010 European Windsurf champion
Swimming
Vadim Alexeev – swimmer, breaststroke
Adi Bichman – 400-m and 800-m freestyle, 400-m medley
Yoav Bruck – 50-m freestyle and 100-m freestyle
Anastasia Gorbenko (born 2003) – backstroke, breaststroke, and freestyle
Eran Groumi – 100 and 200 m backstroke, 100-m butterfly
Michael "Miki" Halika – 200-m butterfly, 200- and 400-m individual medley
Judith Haspel – (born "Judith Deutsch"), of Austrian origin, held every Austrian women's middle and long distance freestyle record in 1935; refused to represent Austria in 1936 Summer Olympics along with Ruth Langer and Lucie Goldner, protesting Hitler, stating, "We do not boycott Olympia, but Berlin".
Marc Hinawi – record holder in the European Games
Amit Ivry – Maccabiah and Israeli records in Women's 100 m butterfly, Israeli record in Women's 200 m Individual Medley, bronze medal in 100 m butterfly at the European Swimming Championships.
Dan Kutler – of U.S. origin; 100-m butterfly, 4×100-m medley relay
Keren Leibovitch – Paralympic swimmer, 4x-gold-medal-winner, 100-m backstroke, 50- and 100-m freestyle, 200-m individual medley
Tal Stricker – 100- and 200-m breaststroke, 4×100-m medley relay
Eithan Urbach – backstroke swimmer, European championship silver and bronze; 100-m backstroke
Table Tennis
Marina Kravchenko – table tennis player, Soviet and Israel national teams
Angelica Rozeanu – (Adelstin), of Romanian origin, 17-time world table tennis champion, ITTFHoF
Tennis
Noam Behr
Ilana Berger
Gilad Bloom
Jonathan Erlich – 6 doubles titles, 6 doubles finals; won 2008 Australian Open Men's Doubles (w/Andy Ram), highest world doubles ranking # 5
Shlomo Glickstein – highest world singles ranking # 22, highest world doubles ranking # 28
Julia Glushko
Amir Hadad
Harel Levy – highest world singles ranking # 30
Evgenia Linetskaya
Amos Mansdorf – highest world singles ranking # 18
Tzipora Obziler
Noam Okun
Yshai Oliel
Shahar Pe'er – (3 WTA career titles), highest world singles ranking # 11, highest world doubles ranking # 14
Shahar Perkiss
Andy Ram – 6 doubles titles, 6 doubles finals, 1 mixed double title (won 2006 Wimbledon Mixed Doubles (w/Vera Zvonareva), 2007 French Open Mixed Doubles (w/Nathalie Dechy), 2008 Australian Open Men's Doubles (w/Jonathan Erlich), highest world doubles ranking # 5
Eyal Ran
Dudi Sela – highest world singles ranking # 29
Denis Shapovalov (born 1999) – Israeli-Canadian tennis player, born in Tel Aviv, highest world singles ranking # 29
Anna Smashnova – (12 WTA career titles), highest world singles ranking # 15
Track and field
Alex Averbukh – pole vaulter (European champion: 2002, 2006)
Ayele Seteng – long-distance runner; the oldest track and field athlete competing at the 2004 and 2008 Olympics
Danielle Frenkel – high jump champion
Hanna Knyazyeva-Minenko – triple jumper and long jumper; participated in 2012 Summer Olympics
Shaul Ladany – world-record-holding racewalker, Bergen-Belsen survivor, Munich Massacre survivor, Professor of Industrial Engineering
Lonah Chemtai Salpeter – Kenyan-Israeli Olympic marathon runner
Esther Roth-Shachamarov – track and field, hurdler and sprinter (5 Asian Game golds)
Other
1972 Olympic team – see Munich Massacre
David Mark Berger – weightlifter originally from US, Maccabiah champion (middleweight); killed in the Munich Massacre
Max Birbraer – ice hockey player drafted by NHL team (New Jersey Devils)
Nili Block (born 1995) – world champion kickboxer and Muay Thai fighter
Noam Dar – Israeli-born Scottish wrestler
Oren Eizenman – ice hockey player, Israel national team (Connecticut Whale)
Eli Elezra – professional poker player
Boris Gelfand, Emil Sutovsky, Ilya Smirin – chess Grandmasters (~2700 peak ELO rating)
Baruch Hagai – wheelchair athlete (multiple paralympic golds)
Michael Kolganov – sprint canoer/kayak paddler, world champion, Olympic bronze 2000 (K-1 500-meter)
Dean Kremer (born 1996) – Israeli-American Major League Baseball pitcher
Ido Pariente – mixed martial artist
Eliezer Sherbatov (born 1991) – Israeli-Canadian ice hockey player
Chagai Zamir – Israel, 4-time Paralympic Games champion
Chefs
Yisrael Aharoni – chef and restaurateur
Jamie Geller – American born-Israeli chef
Erez Komarovsky – first artisanal bread baker in Israel
Yotam Ottolenghi – Israeli-British chef
Entertainment
Artists
Yaacov Agam – kinetic artist
Ron Arad – designer
Mordecai Ardon – painter
David Ascalon – sculptor and synagogue designer
Maurice Ascalon – sculptor and industrial designer
Isidor Ascheim – painter and printmaker
Mordechai Avniel – painter and sculptor
Yigal Azrouel – fashion designer
Ralph Bakshi – animation (director)
Eyal ben-Moshe (Eyal B) – animator and director
Tuvia Beeri – printmaker
Alexander Bogen – painter
Rhea Carmi – painter
Yitzhak Danziger – sculptor
Alber Elbaz – fashion designer
Ohad Elimelech – artist, director, editor, photographer, animator, lecturer, and graphic designer
Osnat Elkabir – dancer, artist and theatre direction
Yitzhak Frenkel – painter
Gideon Gechtman – sculptor
Moshe Gershuni – painter
Dudu Geva – artist and comic-strip illustrator
Pinhas Golan – sculptor
Nachum Gutman – painter
Israel Hershberg – realist painter
Shimshon Holzman – painter
Leo Kahn – painter
Shemuel Katz – illustrator
Uri Katzenstein – visual artist
Dani Karavan – sculptor
Joseph Kossonogi – painter
Elyasaf Kowner – video artist
Sigalit Landau – video, installation, sculpture
Alex Levac – photographer
Batia Lishansky – sculptor
Ranan Lurie – political cartoonist
Lea Nikel – painter
Zvi Malnovitzer – painter
Tamara Musakhanova – sculptor and ceramist
Mushail Mushailov – painter
Ilana Raviv – painter
Leo Roth – painter
Reuven Rubin – painter
Hagit Shahal – painter
David Tartakover – graphic designer
Anna Ticho – painter
Igael Tumarkin – sculptor
Yemima Ergas Vroman – painter, sculptor, installation artist
Sergey Zagraevsky – painter
Moshe Ziffer – sculptor
Film, TV, radio, and stage
Hiam Abbas – actress
Avital Abergel – film and TV actress
Gila Almagor – actress
Aviv Alush – actor
Lior Ashkenazi – actor
Yvan Attal – actor and director
Mili Avital – actress
Aki Avni – actor
Orna Banai – actress
Theodore Bikel – actor
Eddie Carmel, born Oded Ha-Carmeili – actor, singer, and circus sideshow act
Jason Danino-Holt – television presenter
Ronit Elkabetz – actress
David Faitelson – Mexican television sports commentator, born in Israel
Oded Fehr – actor
Eytan Fox – director
Tal Friedman – actor and comedian
Gal Gadot – actress and model
Uri Geller – TV personality, self-proclaimed psychic
Amos Gitai – director
Yasmeen Godder – choreographer and dancer
Arnon Goldfinger – director
Yael Grobglas – actress
Shira Haas – actress
Tzachi Halevy – actor
Moshe Ivgi – actor
Dana Ivgy – actress
Roman Izyaev – actor and director
Michael Karpin – broadcast journalist and author
Daphna Kastner – actress; married to actor Harvey Keitel
Juliano Mer-Khamis – actor
Hila Klein – YouTuber
Amos Kollek – director and writer
Dover Kosashvili – director
Hanna Laslo – actress
Daliah Lavi – actress
Inbar Lavi – actress
Jonah Lotan – actor
Rod Lurie – director and film critic
Gad Lerner – journalist (currently living in Italy)
Arnon Milchan – producer
Samuel Maoz – director
Ohad Naharin – choreographer
Eyal Podell – actor
Orna Porat – actress
Natalie Portman – actress
Lior Raz – actor and screenwriter
Ze'ev Revach – actor and comedian
Agam Rodberg – actress
Avner Strauss – musician
Haim Saban – TV producer
Elia Suleiman – director
Alona Tal – actress
Noa Tishby – actress and producer
Chaim Topol – actor
Yon Tumarkin – actor
Raviv Ullman – actor
Yaron London – TV interviewer
Keren Yedaya – director
Ayelet Zurer – actress
Naor Zion – comedian, actor and director
Musicians
Classical composers
Rami Bar-Niv
Ofer Ben-Amots
Paul Ben-Haim
Avner Dorman
Dror Elimelech
Andre Hajdu
Gilad Hochman
Mark Kopytman
Matti Kovler
Betty Olivero
Shulamit Ran
Leon Schidlowsky
Noam Sheriff
Gil Shohat
Josef Tal
Yitzhak Yedid
Classical musicians
Moshe Atzmon – conductor
Daniel Barenboim – conductor and pianist
Rami Bar-Niv – pianist and composer
Bart Berman – pianist
Gary Bertini – conductor
Natan Brand – pianist
Yefim Bronfman – pianist
Ammiel Bushakevitz – pianist
Giora Feidman – clarinetist
Ivry Gitlis – violinist
Matt Haimovitz – cellist
Alice Herz-Sommer – pianist
Ofra Harnoy – cellist
Eliahu Inbal – conductor
Sharon Kam – clarinetist
Amir Katz – pianist
Evgeny Kissin – pianist
Yoel Levi – conductor
Mischa Maisky – cellist
Shlomo Mintz – violinist
Itzhak Perlman – violinist
Inbal Segev – cellist
Gil Shaham – violinist
Hagai Shaham – violinist
Michael Shani – conductor
Edna Stern – pianist
Yoav Talmi – conductor
Arie Vardi – pianist
Maxim Vengerov – violinist, violist, and conductor
Ilana Vered – pianist
Pinchas Zukerman – violinist
Popular musicians
Chava Alberstein – singer/songwriter
Etti Ankri – singer/songwriter
Yardena Arazi – singer and TV host
Shlomo Artzi – singer/songwriter
Ehud Banai – singer/songwriter
Abatte Barihun – jazz saxophonist and composer
Eef Barzelay – founder of Clem Snide
Netta Barzilai – singer
Miri Ben-Ari – jazz and hip hop violinist
Mosh Ben-Ari – singer/songwriter
Borgore – electronic dance music producer and DJ
Mike Brant – French-language singer
David Broza – singer/songwriter
Matti Caspi – singer, multi-instrumentalist and composer
Avishai Cohen – jazz bassist
David D'Or – singer/songwriter
Arkadi Duchin – singer/songwriter, musical producer
Arik Einstein – singer, actor, writer
Gad Elbaz – singer
Ethnix – pop-rock band
Rita Yahan-Farouz – singer, actress
Uri Frost – rock guitarist, producer and director
Aviv Geffen – singer/songwriter
Eyal Golan – singer
Gidi Gov – singer
Dedi Graucher – Orthodox Jewish singer
Shlomo Gronich – singer and composer
Nadav Guedj – singer
Sarit Hadad – Mizrahi singer
Victoria Hanna – singer/songwriter
Ofra Haza – singer
Dana International – pop singer
Ishtar – vocalist for Alabina
Rami Kleinstein – singer/songwriter, composer
Ehud Manor – songwriter and translator
Amal Murkus – singer
Infected Mushroom – musical duo
Yael Naïm – solo singer/musician
Ahinoam Nini (Noa) – singer
Esther Ofarim – singer
Yehuda Poliker – singer
Ester Rada – singer
Idan Raichel – Ethiopian and Israeli music
Yoni Rechter – composer and arranger
Berry Sakharof – singer
Naomi Shemer – songwriter
Gene Simmons (real name Chaim Weitz) – lead member of KISS
Hillel Slovak – original guitarist for Red Hot Chili Peppers
Pe'er Tasi – singer/songwriter
Ninet Tayeb – pop rock singer and actress
Hagit Yaso – singer
Rika Zaraï – singer
Nir Zidkyahu – drummer, briefly in Genesis
Zino and Tommy – popular duo, songs in U.S. films
News anchors
Yonit Levi
Haim Yavin
Miki Haimovich
Ya'akov Eilon
Yigal Ravid
Ya'akov Ahimeir
Poets
Nathan Alterman
Yehuda Amichai
Sivan Beskin
Erez Biton
Leah Goldberg
Uri Zvi Greenberg
Vaan Nguyen
Dahlia Ravikovich
Naomi Shemer – songwriter and lyricist
Avraham Shlonsky
Avraham Stern
Abraham Sutzkever
Yona Wallach
Nathan Zach
Zelda
Writers
Shmuel Yosef Agnon (Shmuel Yosef Halevi Czaczkes) – author, Nobel Prize in Literature (1966)
Aharon Appelfeld – Prix Médicis étranger (2004)
Yoni Ben-Menachem – journalist
Ron Ben-Yishai – journalist
Nahum Benari – author and playwright
Max Brod – author, composer and friend of Kafka
Orly Castel-Bloom – author
Yehonatan Geffen – author, poet and lyricist
David Grossman – author
Batya Gur – author
Emile Habibi – author
Amira Hass – journalist and author
Sayed Kashua – author and journalist
Shmuel Katz – author and journalist
Etgar Keret – author
Adi Keissar – poet
Ephraim Kishon – satirist
Hanoch Levin – playwright
Julius Margolin – writer
Aharon Megged – author
Sami Michael – author
Samir Naqqash – author
Uri Orlev – author, Hans Christian Andersen Award (1996)
Amos Oz (Amos Klausner) – author and journalist, Goethe Prize (2005)
Ruchoma Shain – author
Meir Shalev – author and journalist
Zeruya Shalev – author
Moshe Shamir – author, poet
Mati Shemoelof – poet, editor and journalist
Chaim Walder – Haredi children's writer
A.B. Yehoshua – author
Benny Ziffer – author, journalist and translator
Entrepreneurs
High-tech
Beny Alagem – founder of Packard Bell
Moshe Bar – founder of XenSource, Qumranet
Safra Catz – president of Oracle
Yossi Gross – recipient of almost 600 patents, founder of 27 medical technology companies in Israel, and chief technology officer of Rainbow Medical
Itzik Kotler – founder and CTO of SafeBreach, information security specialist
Daniel M. Lewin – founder of Akamai Technologies
Shai Reshef – educational entrepreneur, founder and president of University of the People
Bob Rosenschein – founder of GuruNet, Answers.com (Israeli-based)
Gil Schwed – founder of Check Point
Zeev Suraski and Andi Gutmans – founders of Zend Technologies (developers of PHP)
Arik and Yossi Vardi, Yair Goldfinger, Sefi Vigiser and Amnon Amir – founders of Mirabilis (developers of ICQ)
Zohar Zisapel – co-founder of the RAD Group
Iftach Ian Amit – co-founder of BeeFence, prominent hacker and information security practitioner
Other
Avi Arad and Isaac Perlmutter – owners of Marvel Comics
Ted, Micky and Shari Arison – founder/owners of Carnival Corporation
Yossi Dina – pawnbroker
Dan Gertler – diamond tycoon
Alec Gores – Israeli-American businessman and investor.
Eli Hurvitz – head of Teva Pharmaceuticals
Lev Leviev – diamond tycoon
Mordecai Meirowitz – inventor of the Mastermind board game
Aviad Meitar
Dorrit Moussaieff – Israeli-British businesswoman, entrepreneur, philanthropist and the First Lady of Iceland
Sammy Ofer – shipping magnate
Guy Oseary – head of Maverick Records, manager of Madonna
Guy Spier – author and investor
Beny Steinmetz – diamond tycoon
Stef Wertheimer – industrialist
Fashion models
Neta Alchimister (female)
Moran Atias (female)
Sendi Bar (female)
Nina Brosh (female)
Esti Ginzburg (female)
Becky Griffin (female)
Shlomit Malka (female)
Esti Mamo (female)
Yael Markovich (female)
Raz Meirman (male)
Michael Lewis (male)
Bar Paly (female)
Rana Raslan (female)
Bar Refaeli (female)
Pnina Rosenblum (female)
Odeya Rush (female)
Military
Ron Arad – MIA navigator
Gabi Ashkenazi – Chief of the IDF General Staff
Yohai Ben-Nun – sixth commander of the Israeli Navy
Eli Cohen – spy
Moshe Dayan – military leader
Rafael Eitan – Chief of the IDF General Staff
Gadi Eizenkot – Chief of the IDF General Staff
David Elazar – Chief of the IDF General Staff
Giora Epstein – combat pilot, modern-day "ace of aces"
Hoshea Friedman – brigadier general in the IDF
Uziel Gal – designer of the Uzi submachine gun
Benny Gantz – Chief of the IDF General Staff
Dan Halutz – Chief of the IDF General Staff
Wolfgang Lotz – spy
Tzvi Malkhin – Mossad agent, captured Adolf Eichmann
Eli Marom – former commander of the Israeli Navy
Yonatan Netanyahu – Sayeret Matkal commando, leader of Operation Entebbe
Ilan Ramon – astronaut on Columbia flight STS-107
Gilad Shalit – kidnapped soldier held in Gaza, released in 2011
Avraham Stern – underground military leader
Yoel Strick – general
Israel Tal – general, father of Merkava tank
Moshe Ya'alon – Chief of the IDF General Staff
Yigael Yadin – Chief of the IDF General Staff
Amos Yarkoni – Bedouin-Israeli military officer
Politicians
Ehud Barak – prime minister (1999–2001)
Menachem Begin – prime minister (1977–83); Nobel Peace Prize (1978)
Yossi Beilin – leader of the Meretz-Yachad party and peace negotiator
David Ben-Gurion – first Prime Minister of Israel (1948–54, 1955–63)
Yitzhak Ben-Zvi – first elected/second President of Israel (1952–63)
Naftali Bennett – leader of The Jewish Home party, minister of economy and minister of religious services (2013–present)
Geula Cohen – politician, activist and "Israel Prize" recipient
Abba Eban – diplomat and Foreign Affairs Minister of Israel (1966–74)
Yuli-Yoel Edelstein – speaker of the Knesset
Uzi Eilam – ex-director of Israel's Atomic Energy Commission
Effie Eitam – former leader of the National Religious Party, now head of the Renewed Religious National Zionist party
Levi Eshkol – prime minister (1963–69)
Chaim Herzog – former president of Israel, first and only Irish-born Israeli President
Moshe Katsav – president (2000–07), and convicted rapist
Teddy Kollek – former mayor of Jerusalem
Yair Lapid – leader of the Yesh Atid party, minister of finance (2013–March 2015)
Yosef Lapid – former leader of the Shinui party
Raleb Majadele – member of the Knesset for the Labor Party. Majadele became the country's first Muslim minister when appointed Minister without Portfolio on 28 January 2007.
Golda Meir – prime minister (1969–74)
Benjamin Netanyahu – prime minister (1996–99), (2009–); Likud party chairman
Amir Ohana – first openly gay right-wing member of the Knesset
Ehud Olmert – prime minister (2006–09); former mayor of Jerusalem
Shimon Peres – President of Israel (2007–2014); prime minister (1984–86, 1995–96); Nobel Peace Prize (1994)
Yitzhak Rabin – prime minister (1974–77, 1992–95); Nobel Peace Prize (1994) (assassinated November 1995)
Josh Reinstein – director of the Knesset Christian Allies Caucus
Reuven Rivlin – President of Israel
Ayelet Shaked – Knesset member (2013–)
Yitzhak Shamir – prime minister (1983–84, 1986–92)
Yisrael Yeshayahu Sharabi – former speaker of the Knesset
Moshe Sharett – prime minister (1954–55)
Ariel Sharon – prime minister (2001–06)
Chaim Weizmann – first President of Israel (1949–52)
Rabbi Ovadia Yosef – spiritual leader of the Shas party
Rehavam Zeevi – founder of the Moledet party (assassinated October 2001)
Criminals
Yigal Amir – assassin of Yitzhak Rabin
Baruch Goldstein – murderer
Ami Popper – murderer
Benny Sela – rapist
Eden-Nathan Zada – murderer
Religious figures
Priests and Christian religious leaders
Father Gabriel Naddaf – Greek Orthodox priest and judge in religious courts; spokesman for the Greek Orthodox Patriarchate of Jerusalem and one of the founders of the forum for recruiting Christians into the Israel Defense Forces.
Munib Younan – the elected president of the Lutheran World Federation since 2010 and the Evangelical Lutheran Church Bishop of Palestine and Jordan since 1998.
Archbishop Theodosios (Hanna) of Sebastia – Bishop of the Orthodox Patriarchate of Jerusalem.
Elias Chacour – Archbishop of Akko, Haifa, Nazareth and Galilee of the Melkite Eastern Catholic Church.
Riah Hanna Abu El-Assal – former Anglican Bishop in Jerusalem.
Jesus of Nazareth – born in Bethlehem; according to one commonly cited dating, crucified at Golgotha on Friday, 3 April AD 33
Haredi Rabbis
Yaakov Aryeh Alter Gerrer – Rebbe
Shlomo Zalman Auerbach
Yaakov Blau
Yisroel Moshe Dushinsky – Chief Rabbi of Jerusalem (Edah HaChareidis)
Yosef Tzvi Dushinsky – Chief Rabbi of Jerusalem (Edah HaChareidis)
Yosef Sholom Eliashiv
Mordechai Eliyahu – Sephardi Chief Rabbi of Israel 1983–93, (1929–2010)
Chaim Kanievsky
Avraham Yeshayeh Karelitz, Chazon Ish – (1878–1953)
Nissim Karelitz – Head Justice of Rabbinical Court of Bnei Brak
Meir Kessler – Chief Rabbi of Modi'in Illit
Zundel Kroizer – author of Ohr Hachamah
Dov Landau – rosh yeshiva of Slabodka yeshiva of Bnei Brak
Yissachar Dov Rokeach – the fifth Belzer rebbe
Yitzchok Scheiner – rosh yeshiva of Kamenitz yeshiva of Jerusalem
Elazar Menachem Shach, Rav Shach – (1899–2001)
Moshe Shmuel Shapira – rosh yeshiva of Beer Yaakov
Dovid Shmidel – Chairman of Asra Kadisha
Yosef Chaim Sonnenfeld – Chief Rabbi of Jerusalem (Edah HaChareidis)
Yitzchok Tuvia Weiss – Chief Rabbi of Jerusalem (Edah HaChareidis)
Ovadia Yosef
Amram Zaks – rosh yeshiva of the Slabodka yeshiva of Bnei Brak
Reform Rabbis
Gilad Kariv
Religious-Zionist Rabbis
Shlomo Amar – Sephardic Chief Rabbi of Israel
David Hartman
Avraham Yitzchak Kook – pre-state Ashkenazic Chief Rabbi of the Land of Israel, (1865–1935)
Israel Meir Lau – Ashkenazic Chief Rabbi of Israel (1993–2003), Chief Rabbi of Netanya (1978–88), (1937–)
Aharon Lichtenstein
Yona Metzger – Ashkenazic Chief Rabbi of Israel
Shlomo Riskin – Ashkenazic Chief Rabbi of Efrat
See also
List of Israeli Nobel laureates
List of Israel Prize recipients
List of people by nationality
Politics of Israel, List of Knesset members
Culture of Israel, Music of Israel
Science and technology in Israel
List of Hebrew language authors, poets and playwrights
List of Israeli Arab Muslims
List of Dutch Israelis
List of Israeli Druze
List of notable Mizrahi Jews and Sephardi Jews in Israel
List of notable Ashkenazi Jews in Israel
List of notable Ethiopian Jews in Israel
List of people from Jerusalem
List of people from Haifa
References
Related links
Presidents of Israel
Prime Ministers of Israel |
422387 | https://en.wikipedia.org/wiki/Martin%20Hellman | Martin Hellman | Martin Edward Hellman (born October 2, 1945) is an American cryptologist, best known for his involvement with public key cryptography in cooperation with Whitfield Diffie and Ralph Merkle. Hellman is a longtime contributor to the computer privacy debate, and has applied risk analysis to a potential failure of nuclear deterrence.
Hellman was elected a member of the National Academy of Engineering in 2002 for contributions to the theory and practice of cryptography.
In 2016, he wrote a book with his wife, Dorothie Hellman, A New Map for Relationships: Creating True Love at Home and Peace on the Planet, which links creating love at home to bringing peace to the planet.
Early life
Born in New York to a Jewish family, Hellman graduated from the Bronx High School of Science. He received his bachelor's degree in electrical engineering from New York University in 1966, and a master's degree (1967) and a Ph.D. (1969) in the same discipline from Stanford University.
Career
From 1968 to 1969 he worked at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, where he encountered Horst Feistel. From 1969 to 1971, he was an assistant professor of electrical engineering at the Massachusetts Institute of Technology. He joined Stanford University electrical engineering department in 1971 as an assistant professor and served on the full-time faculty for twenty-five years before taking emeritus status as a full professor in 1996.
Public key cryptography
Hellman and Whitfield Diffie's paper New Directions in Cryptography was published in 1976. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution. It has become known as Diffie–Hellman key exchange, although Hellman has argued that it ought to be called Diffie-Hellman-Merkle key exchange because of Merkle's separate contribution. The article stimulated the development of a new class of encryption algorithms, known variously as public key encryption and asymmetric encryption. Hellman and Diffie were awarded the Marconi Fellowship and accompanying prize in 2000 for work on public-key cryptography and for helping make cryptography a legitimate area of academic research, and they were awarded the 2015 Turing Award for the same work.
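The key exchange introduced in that paper can be sketched in a few lines of code. The following is a minimal illustration in Python; the tiny prime, the generator and the variable names are chosen here only for demonstration, and a real deployment would use a large prime from a standardized group.

```python
# Minimal sketch of Diffie-Hellman key exchange (toy parameters, for illustration only;
# real systems use standardized groups with primes of 2048 bits or more).
import secrets

p = 4294967291   # small public prime modulus (2^32 - 5), NOT secure in practice
g = 5            # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice publishes g^a mod p
B = pow(g, b, p)                   # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both obtain g^(ab) mod p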
Computer privacy debate
Hellman has been a longtime contributor to the computer privacy debate. He and Diffie were the most prominent critics of the short key size of the Data Encryption Standard (DES) in 1975. An audio recording survives of their review of DES at Stanford in 1976 with Dennis Branstad of NBS and representatives of the National Security Agency. Their concern was well-founded: subsequent history has shown not only that NSA actively intervened with IBM and NBS to shorten the key size, but also that the short key size enabled exactly the kind of massively parallel key crackers that Hellman and Diffie sketched out. In response to RSA Security's DES Challenges starting in 1997, brute force crackers were built that could break DES, making it clear that DES was insecure and obsolete. As of 2012, a $10,000 commercially available machine could recover a DES key in days.
Hellman also served (1994–96) on the National Research Council's Committee to Study National Cryptographic Policy, whose main recommendations have since been implemented.
International security
Hellman has been active in researching international security since 1985.
Beyond War
Hellman was involved in the original Beyond War movement, serving as the principal editor for the "BEYOND WAR: A New Way of Thinking" booklet.
Breakthrough
In 1987 more than 30 scholars came together to produce Russian and English editions of the book Breakthrough: Emerging New Thinking, Soviet and Western Scholars Issue a Challenge to Build a World Beyond War. Anatoly Gromyko and Martin Hellman served as the chief editors. The authors of the book examine questions such as: How can we overcome the inexorable forces leading toward a clash between the United States and the Soviet Union? How do we build a common vision for the future? How can we restructure our thinking to synchronize with the imperative of our modern world?
Defusing the nuclear threat
Hellman's current project in international security is to defuse the nuclear threat. In particular, he is studying the probabilities and risks associated with nuclear weapons and encouraging further international research in this area. His website NuclearRisk.org has been endorsed by a number of prominent individuals, including a former Director of the National Security Agency, Stanford's President Emeritus, and two Nobel Laureates.
Hellman is a member of the Board of Directors for Daisy Alliance, a non-governmental organization based in Atlanta, Georgia, seeking global security through nuclear nonproliferation and disarmament.
Awards and honors
Hellman was awarded the IEEE Donald G. Fink Prize Paper Award in 1981 (together with Whitfield Diffie), The Franklin Institute's Louis E. Levy Medal in 1997, and, in 2000, the Marconi Prize for the invention of public-key cryptography to protect privacy on the Internet, also together with Diffie. In 1998 he received a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society, and in 2010 the IEEE Richard W. Hamming Medal.
In 2011, he was inducted into the National Inventors Hall of Fame.
Also in 2011, Hellman was made a Fellow of the Computer History Museum for his work, with Whitfield Diffie and Ralph Merkle, on public key cryptography.
Hellman won the Turing Award for 2015 together with Whitfield Diffie. The Turing award is widely considered the most prestigious award in the field of computer science. The citation for the award was: "For fundamental contributions to modern cryptography. Diffie and Hellman's groundbreaking 1976 paper, "New Directions in Cryptography," introduced the ideas of public-key cryptography and digital signatures, which are the foundation for most regularly-used security protocols on the internet today."
References
External links
Oral history interview with Martin Hellman, 2004, Palo Alto, California. Charles Babbage Institute, University of Minnesota, Minneapolis. Hellman describes his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s. He also relates his subsequent work in cryptography with Steve Pohlig (the Pohlig–Hellman algorithm) and others. Hellman addresses the National Security Agency's (NSA) early efforts to contain and discourage academic work in the field, the Department of Commerce's encryption export restrictions (under the International Traffic in Arms Regulations, or ITAR), and key escrow (the so-called Clipper chip). He also touches on the commercialization of cryptography with RSA Data Security and VeriSign.
Martin Hellman's website on the risk of nuclear threat from nuclear war or nuclear terrorism
"Defusing the nuclear threat and making the world safer" Announcement of Hellman presentation at U.C. Santa Cruz; Oct. 2008
Hellman at the 2009 RSA conference, video with Hellman participating on the Cryptographer's Panel, April 21, 2009, Moscone Center, San Francisco
1945 births
Living people
Members of the United States National Academy of Engineering
American cryptographers
Jewish American scientists
IBM employees
MIT School of Engineering faculty
Stanford University School of Engineering faculty
Public-key cryptographers
Modern cryptographers
The Bronx High School of Science alumni
International Association for Cryptologic Research fellows
Polytechnic Institute of New York University alumni
Turing Award laureates
IEEE Centennial Medal laureates
Computer security academics
Mathematicians from New York (state)
Recipients of the Order of the Cross of Terra Mariana, 3rd Class
Stanford University School of Engineering alumni |
422784 | https://en.wikipedia.org/wiki/Client-side | Client-side | Client-side refers to operations that are performed by the client in a client–server relationship in a computer network.
General concepts
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another.
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
Computer security
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.
Examples
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
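As a concrete illustration of this division of labour, the sketch below shows a hypothetical client that requests a work unit from a server, analyzes it locally (the client-side operation) and reports the result back; the server URL, endpoints and JSON fields are invented for the example and do not correspond to any real project.

```python
# Hypothetical client-side worker: fetch data, analyze it locally, report the result.
# The server URL and the JSON fields are illustrative assumptions, not a real API.
import json
import urllib.request

SERVER = "https://example.org/api"  # placeholder server address

def fetch_work_unit() -> dict:
    # Server-side step: the server selects a data set and sends it to the client.
    with urllib.request.urlopen(f"{SERVER}/work") as resp:
        return json.load(resp)

def analyze(samples: list) -> float:
    # Client-side step: the analysis runs on the user's machine,
    # consuming local CPU time rather than server resources.
    return sum(samples) / len(samples)

def report(result: float) -> None:
    # Client-side step: transmit only the (small) result back to the server.
    req = urllib.request.Request(
        f"{SERVER}/result",
        data=json.dumps({"result": result}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    unit = fetch_work_unit()
    report(analyze(unit["samples"]))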
In the context of the World Wide Web, commonly encountered computer languages which are evaluated or run on the client side include:
Cascading Style Sheets (CSS)
HTML
JavaScript
See also
Client-side prediction
Server-side
References
Clients (computing) |
423418 | https://en.wikipedia.org/wiki/Meet-in-the-middle%20attack | Meet-in-the-middle attack | The meet-in-the-middle attack (MITM), a known plaintext attack, is a generic space–time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason why Double DES is not used and why a Triple DES key (168-bit) can be brute-forced by an attacker with 2^56 space and 2^112 operations.
Description
When trying to improve the security of a block cipher, a tempting idea is to encrypt the data several times using multiple keys. One might think this doubles or even n-tuples the security of the multiple-encryption scheme, depending on the number of times the data is encrypted, because an exhaustive search on all possible combinations of keys (simple brute force) would take 2^{n·k} attempts if the data is encrypted with k-bit keys n times.
The MITM is a generic attack which weakens the security benefits of using multiple encryptions by storing intermediate values from the encryptions or decryptions and using those to improve the time required to brute force the decryption keys. This makes a Meet-in-the-Middle attack (MITM) a generic space–time tradeoff cryptographic attack.
The MITM attack attempts to find the keys by using both the range (ciphertext) and domain (plaintext) of the composition of several functions (or block ciphers) such that the forward mapping through the first functions is the same as the backward mapping (inverse image) through the last functions, quite literally meeting in the middle of the composed function. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with 2^57 encryption and decryption operations.
The multidimensional MITM (MD-MITM) uses a combination of several simultaneous MITM attacks as described above, where the meeting happens in multiple positions in the composed function.
History
Diffie and Hellman first proposed the meet-in-the-middle attack on a hypothetical expansion of a block cipher in 1977.
Their attack used a space–time tradeoff to break the double-encryption scheme in only twice the time needed to break the single-encryption scheme.
In 2011, Bo Zhu and Guang Gong investigated the multidimensional meet-in-the-middle attack and presented new attacks on the block ciphers GOST, KTANTAN and Hummingbird-2.
Meet-in-the-middle (1D-MITM)
Assume someone wants to attack an encryption scheme with the following characteristics for a given plaintext P and ciphertext C:
C = ENC_{k2}(ENC_{k1}(P))
where ENC is the encryption function, DEC the decryption function defined as ENC^{−1} (inverse mapping) and k1 and k2 are two keys.
The naive approach to brute-forcing this encryption scheme is to decrypt the ciphertext with every possible k2, and decrypt each of the intermediate outputs with every possible k1, for a total of 2^{|k1|} × 2^{|k2|} (or 2^{|k1|+|k2|}) operations.
The meet-in-the-middle attack uses a more efficient approach. By decrypting C with k2, one obtains the following equivalence:
ENC_{k1}(P) = DEC_{k2}(C)
The attacker can compute ENC_{k1}(P) for all values of k1 and DEC_{k2}(C) for all possible values of k2, for a total of 2^{|k1|} + 2^{|k2|} (or 2^{|k1|+1}, if k1 and k2 have the same size) operations. If the result from any of the ENC_{k1}(P) operations matches a result from the DEC_{k2}(C) operations, the pair of k1 and k2 is possibly the correct key. This potentially-correct key is called a candidate key. The attacker can determine which candidate key is correct by testing it with a second test-set of plaintext and ciphertext.
The MITM attack is one of the reasons why Data Encryption Standard (DES) was replaced with Triple DES and not Double DES. An attacker can use a MITM attack to brute-force Double DES with 2^57 operations and 2^56 space, making it only a small improvement over DES. Triple DES uses a "triple length" (168-bit) key and is also vulnerable to a meet-in-the-middle attack in 2^56 space and 2^112 operations, but is considered secure due to the size of its keyspace.
MITM algorithm
Compute the following:
ENC_{k1}(P) for every possible value of k1, and save each result together with the corresponding k1 in a set A.
DEC_{k2}(C) for every possible value of k2, and compare each new result with the set A.
When a match is found, keep the corresponding (k1, k2) as a candidate key-pair in a table T. Test the pairs in T on a new pair of (P, C) to confirm validity. If a key-pair does not work on this new pair, do MITM again on a new pair of (P, C).
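The steps above can be demonstrated on a deliberately tiny double-encryption scheme. The sketch below uses an invented 16-bit XOR-and-rotate "cipher" purely so that the whole key space fits in memory; it illustrates the table-building idea and is not an attack on any real cipher.

```python
# Toy demonstration of the 1D meet-in-the-middle attack on double encryption.
# The 16-bit "cipher" below is an illustrative stand-in, chosen so the tables stay small.

def toy_encrypt(block: int, key: int) -> int:
    # XOR with the key, then rotate left by 3 bits (all values are 16-bit).
    x = (block ^ key) & 0xFFFF
    return ((x << 3) | (x >> 13)) & 0xFFFF

def toy_decrypt(block: int, key: int) -> int:
    # Inverse: rotate right by 3 bits, then XOR with the key.
    x = ((block >> 3) | (block << 13)) & 0xFFFF
    return (x ^ key) & 0xFFFF

def meet_in_the_middle(p: int, c: int) -> list:
    # Forward table A: middle value ENC_k1(P) -> list of k1 values producing it.
    forward = {}
    for k1 in range(1 << 16):
        forward.setdefault(toy_encrypt(p, k1), []).append(k1)
    # Backward pass: every DEC_k2(C) that appears in A yields a candidate (k1, k2).
    candidates = []
    for k2 in range(1 << 16):
        for k1 in forward.get(toy_decrypt(c, k2), []):
            candidates.append((k1, k2))
    return candidates

if __name__ == "__main__":
    k1, k2, p = 0x1234, 0xBEEF, 0x5A5A
    c = toy_encrypt(toy_encrypt(p, k1), k2)
    cands = meet_in_the_middle(p, c)
    # The true pair is among the candidates; a second (P, C) pair would narrow them down.
    print((k1, k2) in cands, len(cands))
```

The cost is two single-key searches plus the memory for the forward table, rather than a search over all key pairs, which is the saving that the complexity analysis below quantifies.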
MITM complexity
If the keysize is k, this attack uses only 2^{k+1} encryptions (and decryptions) and O(2^k) memory to store the results of the forward computations in a lookup table, in contrast to the naive attack, which needs 2^{2k} encryptions but O(1) space.
Multidimensional MITM (MD-MITM)
While 1D-MITM can be efficient, a more sophisticated attack has been developed: multidimensional meet-in-the-middle attack, also abbreviated MD-MITM.
This is preferred when the data has been encrypted using more than 2 encryptions with different keys.
Instead of meeting in the middle (one place in the sequence), the MD-MITM attack attempts to reach several specific intermediate states using the forward and backward computations at several positions in the cipher.
Assume that the attack has to be mounted on a block cipher, where the encryption and decryption are defined as before:
C = ENC_{k_{n+1}}(ENC_{k_n}(⋯ ENC_{k_1}(P) ⋯))
that is, a plaintext P is encrypted multiple times using a repetition of the same block cipher.
The MD-MITM has been used for cryptanalysis of, among many other ciphers, the GOST block cipher, where it has been shown that a 3D-MITM significantly reduces the time complexity of an attack on it.
MD-MITM algorithm
Compute the following:
ENC_{f1}(P) for every possible value of the first subkey k_{f1}, and save each intermediate value together with the corresponding k_{f1} in a set A_1.
DEC_{b(n+1)}(C) for every possible value of the last subkey k_{b(n+1)}, and save each intermediate value together with the corresponding k_{b(n+1)} in a set B.
For each possible guess on the first intermediate state s_1, compute the following:
DEC_{b1}(s_1) for every possible value of k_{b1}, and for each match between this and the set A_1, save k_{f1} and k_{b1} in a new set T_1.
ENC_{f2}(s_1) for every possible value of k_{f2}, and save each result together with the corresponding k_{f2} in a set A_2.
For each possible guess on an intermediate state s_2, compute the following:
DEC_{b2}(s_2) for every possible value of k_{b2}, and for each match between this and the set A_2, check also whether
it matches with T_1 and then save the combination of sub-keys together in a new set T_2.
For each possible guess on a later intermediate state s_j, repeat the same procedure until the last intermediate state s_n has been matched against the set B; then:
Use the found combination of sub-keys on another pair of plaintext/ciphertext to verify the correctness of the key.
Note the nested element in the algorithm. The guess on every possible value of s_j is done for each guess on the previous s_{j−1}.
This makes up an element of exponential complexity in the overall time complexity of this MD-MITM attack.
MD-MITM complexity
Time complexity of this attack without brute force is of the order of 2^{|k_{f1}|} + 2^{|k_{b(n+1)}|} + 2^{|s_1|}·(2^{|k_{b1}|} + 2^{|k_{f2}|}) + 2^{|s_1|}·2^{|s_2|}·(2^{|k_{b2}|} + 2^{|k_{f3}|}) + ⋯, where each guessed intermediate state multiplies the work of the next MITM phase.
Regarding the memory complexity, it is easy to see that the later tables T_1, T_2, … are much smaller than the first built table of candidate values: as i increases, the candidate values contained in T_i must satisfy more conditions, so fewer candidates pass on to the end destination T_n.
An upper bound of the memory complexity of MD-MITM is then 2^{|k|−|k_{n+1}|}, where |k| denotes the length of the whole key (combined).
The data complexity depends on the probability that a wrong key may pass (obtain a false positive), which is 2^{−|s_1|}, where s_1 is the intermediate state in the first MITM phase. The size of the intermediate state and the block size are often the same.
Considering also how many keys are left for testing after the first MITM phase, it is 2^{|k|}·2^{−|s_1|}.
Therefore, after the first MITM phase, there are 2^{|k|−b} candidate keys, where b is the block size.
Each time the final candidate values of the keys are tested on a new plaintext/ciphertext pair, the number of keys that will pass is multiplied by the probability that a key may pass, which is 2^{−b}.
The brute-force testing part (testing the candidate keys on new (P, C)-pairs) has time complexity
2^{|k|−b} + 2^{|k|−2b} + 2^{|k|−3b} + ⋯; clearly, for increasing multiples of b in the exponent, these terms tend to zero.
The conclusion on data complexity is, by similar reasoning, that it is restricted to around ⌈|k|/b⌉ (P, C)-pairs.
Below is a specific example of how a 2D-MITM is mounted:
A general example of 2D-MITM
This is a general description of how 2D-MITM is mounted on a block cipher encryption.
In two-dimensional MITM (2D-MITM) the method is to reach 2 intermediate states inside the multiple encryption of the plaintext: the composed cipher is split into two halves around a guessed middle state s, and a separate MITM meeting is performed inside each half.
2D-MITM algorithm
Compute the following:
ENC_{f1}(P) for every possible value of the subkey k_{f1}, and save each result together with the corresponding k_{f1} in a set A.
DEC_{b2}(C) for every possible value of the subkey k_{b2}, and save each result together with the corresponding k_{b2} in a set B.
For each possible guess on an intermediate state s between the first and the second sub-cipher, compute the following:
DEC_{b1}(s) for every possible value of k_{b1}, and for each match between this and the set A, save k_{f1} and k_{b1} in a new set T.
ENC_{f2}(s) for every possible value of k_{f2}, and for each match between this and the set B, check also whether it matches with T for the same guess of s;
if this is the case then:
Use the found combination of sub-keys on another pair of plaintext/ciphertext to verify the correctness of the key.
2D-MITM complexity
Time complexity of this attack without brute force is 2^{|k_{f1}|} + 2^{|k_{b2}|} + 2^{|s|}·(2^{|k_{b1}|} + 2^{|k_{f2}|}),
where |⋅| denotes the length and s is the guessed intermediate state.
Main memory consumption is restricted by the construction of the sets A and B, where T is much smaller than the others.
For data complexity see subsection on complexity for MD-MITM.
See also
Birthday attack
Wireless security
Cryptography
3-subset meet-in-the-middle attack
Partial-matching meet-in-the-middle attack
References
Cryptographic attacks |
424589 | https://en.wikipedia.org/wiki/Skype | Skype | Skype () is a proprietary telecommunications application operated by Skype Technologies, a division of Microsoft, best known for VoIP-based videotelephony, videoconferencing and voice calls. It also has instant messaging, file transfer, debit-based calls to landline and mobile telephones (over traditional telephone networks), and other features. Skype is available on various desktop, mobile, and video game console platforms.
Skype was created by Niklas Zennström, Janus Friis, and four Estonian developers and first released in August 2003. In September 2005, eBay acquired Skype for $2.6 billion. In September 2009, Silver Lake, Andreessen Horowitz, and the Canada Pension Plan Investment Board bought 65% of Skype for $1.9 billion from eBay, valuing the business at $2.92 billion. In May 2011, Microsoft bought Skype for $8.5 billion and used it to replace their Windows Live Messenger. As of 2011, most of the development team and 44% of all the division's employees were in Tallinn and Tartu, Estonia.
Skype originally featured a hybrid peer-to-peer and client–server system. It became entirely powered by Microsoft-operated supernodes in May 2012; in 2017, it changed from a peer-to-peer service to a centralized Azure-based service.
As of March 2020, Skype was used by 100 million people at least once a month and by 40 million people each day. During the COVID-19 pandemic, Skype lost a large part of its market share to Zoom.
Etymology
The name for the software is derived from "Sky peer-to-peer", which was then abbreviated to "Skyper". However, some of the domain names associated with "Skyper" were already taken. Dropping the final "r" left the current title "Skype", for which domain names were available.
History
Skype was founded in 2003 by Niklas Zennström, from Sweden, and Janus Friis, from Denmark. The Skype software was created by Estonians Ahti Heinla, Priit Kasesalu, Jaan Tallinn, and Toivo Annus. Friis and Annus are credited with the idea of reducing the cost of voice calls by using a P2P protocol like that of Kazaa. An early alpha version was created and tested in spring 2003, and the first public beta version was released on 29 August 2003.
In June 2005, Skype entered an agreement with the Polish web portal Onet.pl for an integrated offering on the Polish market. On 12 September 2005, eBay Inc. agreed to acquire Luxembourg-based Skype Technologies SA for approximately US$2.5 billion in up-front cash and eBay stock, plus potential performance-based consideration. On 1 September 2009, eBay announced it was selling 65% of Skype to Silver Lake, Andreessen Horowitz, and the Canada Pension Plan Investment Board for US$1.9 billion, valuing Skype at US$2.75 billion. On 14 July 2011, Skype partnered with Comcast to bring its video chat service to Comcast subscribers via HDTV sets.
On 17 June 2013, Skype released a free video messaging service, which can be operated on Windows, Mac OS, iOS, iPadOS, Android, and BlackBerry.
Between 2017 and 2020, Skype collaborated with PayPal to provide a money-send feature. It allowed users to transfer funds via the Skype mobile app in the middle of a conversation.
In 2019, Skype was announced to be the sixth most downloaded mobile app of the decade, from 2010 to 2019.
Microsoft acquisition
On 10 May 2011, Microsoft Corporation acquired Skype Communications, S.à r.l for US$8.5 billion. The company was incorporated as a division of Microsoft, which acquired all its technologies with the purchase. The acquisition was completed on 13 October 2011. Shortly after the acquisition, Microsoft began integrating the Skype service with its own products. Along with taking over the development of existing Skype desktop and mobile apps, the company developed a dedicated client app for its then-newly released, touch-focused Windows 8 and Windows RT operating systems. They were made available from Windows Store when the then-new OS launched on 26 October 2012. The following year, it became the default messaging app for Windows 8.1, replacing the Windows 8 Messaging app at the time, and became pre-installed software on every device that came with or upgraded to 8.1.
In a month-long transition from 8 to 30 April 2013, Microsoft discontinued two of its own products in favor of Skype, including its Windows Live Messenger instant messaging service, although Messenger continued to be available in mainland China.
On 11 November 2014, Microsoft announced that in 2015, its Lync product would be replaced by Skype for Business. This combined features of Lync and the consumer Skype software. There are two user interfaces; organizations could switch their users between the default Skype for Business interface and the Lync interface.
Post-acquisition
On 12 August 2013, Skype released the 4.10 update to the app for Apple iPhone and iPad that allows HD quality video for iPhone 5 and fourth-generation iPads.
On 20 November 2014, Microsoft Office's team announced that a new chat powered by Skype would be implemented in their software, giving tools to be able to chat with co-workers in the same document.
On 15 September 2015, Skype announced the release of Mojis, "a brand new way to express yourself on Skype". Mojis are short clips/gifs featuring characters from films and TV shows to be entered into conversations with the same ease as emoticons. They worked with Universal Studios, Disney Muppets, BBC and other studios to add to the available collection of Mojis. Later that year, Gurdeep Singh Pall, Corporate Vice President of Skype, announced that Microsoft had acquired the technology from Talko.
In July 2016, Skype introduced an early Alpha version of a new Skype for Linux client, built with WebRTC technology, after several petitions had asked Microsoft to continue development for Linux. In September of that year, Skype updated their iOS app with new features, including an option to call contacts on Skype through Siri voice commands. In October of that year, Microsoft launched Skype for Business for Mac.
In February 2017, Microsoft announced plans to discontinue its Skype Wi-Fi service globally. The application was delisted, and the service itself became non-functional from 31 March 2017. On 5 June 2017, Microsoft announced its plans to revamp Skype with similar features to Snapchat, allowing users to share temporary copies of their photos and video files. In late June 2017, Microsoft rolled out their latest update for iOS, incorporating a revamped design and new third-party integrations, with platforms including Gfycat, YouTube, and UpWorthy. It was not well-received, with numerous negative reviews and complaints that the new client broke existing functionality. Skype later removed this "makeover". In December 2017, Microsoft added "Skype Interviews", a shared code editing system for those wishing to hold job interviews for programming roles.
Microsoft eventually moved the service from a peer-to-peer to a central server based system, and with it adjusted the user interfaces of apps to make text-based messaging more prominent than voice calling. Skype for Windows, iOS, Android, Mac and Linux all received significant visual overhauls at this time.
Features
Registered users of Skype are identified by a unique Skype ID and may be listed in the Skype directory under a Skype username. Skype allows these registered users to communicate through both instant messaging and voice chat. Voice chat allows telephone calls between pairs of users as well as conference calling, and uses a proprietary audio codec. Skype's text chat client allows group chats, emoticons, storing chat history, and editing of previous messages. Offline messages were implemented in a beta build of version 5 but removed after a few weeks without notification. The usual features familiar to instant messaging users—user profiles, online status indicators, and so on—are also included.
The Online Number, a.k.a. SkypeIn, service allows Skype users to receive calls on their computers dialed by conventional phone subscribers to a local Skype phone number; local numbers are available for Australia, Belgium, Brazil, Chile, Colombia, Denmark, the Dominican Republic, Estonia, Finland, France, Germany, Hong Kong, Hungary, India, Ireland, Japan, Mexico, Nepal, New Zealand, Poland, Romania, South Africa, South Korea, Sweden, Switzerland, Turkey, the Netherlands, the United Kingdom, and the United States. A Skype user can have local numbers in any of these countries, with calls to the number charged at the same rate as calls to fixed lines in the country.
Skype supports conference calls, video chats, and screen sharing between 25 people at a time for free, which then increased to 50 on 5 April 2019.
Skype does not provide the ability to call emergency numbers, such as 112 in Europe, 911 in North America, 999 in the UK or 100 in India and Nepal. However, as of December 2012, there is limited support for emergency calls in the United Kingdom, Australia, Denmark, and Finland. The U.S. Federal Communications Commission (FCC) has ruled that, for the purposes of section 255 of the Telecommunications Act, Skype is not an "interconnected VoIP provider". As a result, the U.S. National Emergency Number Association recommends that all VoIP users have an analog line available as a backup.
In 2019, Skype added an option to blur the background in a video chat interface using AI algorithms implemented purely in software, despite a depth-sensing camera not being present in most webcams.
Usage and traffic
At the end of 2010, there were over 660 million worldwide users, with over 300 million estimated active each month as of August 2015. At one point in February 2012, there were 34 million users concurrently online on Skype.
In January 2011, after the release of video calling on the Skype client for iPhone, Skype reached a record 27 million simultaneous online users. This record was broken with 29 million simultaneous online users on 21 February 2011 and again on 28 March 2011 with 30 million online users. On 25 February 2012, Skype announced that it has over 32 million users for the first time ever. By 5 March 2012, it had 36 million simultaneous online users, and less than a year later, on 21 January 2013, Skype had more than 50 million concurrent users online.
In June 2012, Skype had surpassed 70 million downloads on Android.
On 19 July 2012, Microsoft announced that Skype users had logged 115 billion minutes of calls in the quarter, up 50% from the previous quarter.
On 15 January 2014, TeleGeography estimated that Skype-to-Skype international traffic had grown 36% in 2013, to 214 billion minutes.
At the end of March 2020, there was a 70% increase in the number of daily users compared with the previous month, due to the COVID-19 pandemic.
System and software
Client applications and devices
Windows client
Multiple different versions of Skype have been released for Windows since its conception. The original line of Skype applications, versions 1.0 through 4.0, was a desktop-only program, offered from 2003. Later, a mobile version was created for Windows Phones.
In 2012, Skype introduced a new version for Windows 8 similar to the Windows Phone version. On 7 July 2015 Skype modified the application to direct Windows users to download the desktop version, but it was set to continue working on Windows RT until October 2016. In November 2015, Skype introduced three new applications, called Messaging, Skype Video, and Phone, intended to provide an integrated Skype experience on Windows 10. On 24 March 2016, Skype announced the integrated applications did not satisfy most users' needs and announced that they and the desktop client would eventually be replaced with a new UWP application, which was released as a preview version for the Windows 10 Anniversary Update and dubbed as the stable version with the release of the Windows 10 Creators Update.
The latest version of Skype for Windows is Skype 11, which is based on the Universal Windows Platform and runs on various Windows 10-related systems, including Xbox One, Xbox Series X/S, Windows phones, and Microsoft Hololens. Microsoft still offers the older Skype 8, which is Win32-based and runs on all systems from Windows XP (which is otherwise unsupported by Microsoft) to the most recent release of Windows 10.
In late 2017, this version was upgraded to Skype 12.9 in which several features were both removed and added.
Other desktop clients
macOS (10.9 or newer)
Linux (Debian, Debian-based (Ubuntu, etc.), Fedora, openSUSE)
Mobile clients
iOS
Android
Skype was previously available on Nokia X, Symbian, BlackBerry OS and BlackBerry 10 devices.
In May 2009, version 3.0 was available on Windows Mobile 5 to 6.1, and in September 2015, version 2.29 was available on Windows Phone 8.1; in 2016 Microsoft announced that this would stop working in early 2017 once Skype's transition from peer-to-peer to client–server was complete.
Other platforms
The Nokia N800, N810, and N900 Internet tablets, which run Maemo
The Nokia N9, which runs MeeGo, comes with Skype voice calling and text messaging integrated; however, it lacks video-calling.
Both the Sony mylo COM-1 and COM-2 models
The PlayStation Portable Slim and Lite series, though the user needs to purchase a specially designed microphone peripheral. The PSP-3000 has a built-in microphone, which allows communication without the Skype peripheral. The PSP Go has the ability to use Bluetooth connections with the Skype application, in addition to its built-in microphone. Skype for PlayStation Vita may be downloaded via the PlayStation Network in the U.S. It includes the capability to receive incoming calls with the application running in the background.
The Samsung Smart TV had a Skype app, which could be downloaded for free. It used the built-in camera and microphone for the newer models. Alternatively, a separate mountable Skype camera with built-in speakers and microphones is available to purchase for older models. This functionality has now been disabled along with any other "TV Based" Skype clients.
Some devices were made to work with Skype by talking to a desktop Skype client or by embedding Skype software into the device. These were usually either tethered to a PC or have a built-in Wi-Fi client to allow calling from Wi-Fi hotspots, like the Netgear SPH101 Skype Wi-Fi Phone, the SMC WSKP100 Skype Wi-Fi Phone, the Belkin F1PP000GN-SK Wi-Fi Skype Phone, the Panasonic KX-WP1050 Wi-Fi Phone for Skype Executive Travel Set, the IPEVO So-20 Wi-Fi Phone for Skype and the Linksys CIT200 Wi-Fi Phone.
Third-party licensing
Third-party developers, such as Truphone, Nimbuzz, and Fring, previously allowed Skype to run in parallel with several other competing VoIP/IM networks (Truphone and Nimbuzz provide TruphoneOut and NimbuzzOut as a competing paid service) in any Symbian or Java environment. Nimbuzz made Skype available to BlackBerry users and Fring provided mobile video calling over Skype as well as support for the Android platform. Skype disabled access to Skype by Fring users in July 2010. Nimbuzz discontinued support of Skype on request in October 2010.
Before and during the Microsoft acquisition, Skype withdrew licensing from several third parties producing software and hardware compatible with Skype. The Skype for Asterisk product from Digium was withdrawn as "no longer available for sale". The Senao SN358+ long-range (10–15 km) cordless phone was discontinued due to loss of licenses to participate in the Skype network as peers. In combination, these two products made it possible to create roaming cordless mesh networks with a robust handoff.
Technology
Protocol
Skype uses a proprietary Internet telephony (VoIP) network called the Skype protocol. The protocol has not been made publicly available by Skype, and official applications using the protocol are also proprietary. Part of the Skype technology relies on the Global Index P2P protocol belonging to the Joltid Ltd. corporation. The main difference between Skype and standard VoIP clients is that Skype operates on a peer-to-peer model (originally based on the Kazaa software), rather than the more usual client–server model (note that the very popular Session Initiation Protocol (SIP) model of VoIP is also peer-to-peer, but implementation generally requires registration with a server, as does Skype).
On 20 June 2014, Microsoft announced the deprecation of the old Skype protocol. Within several months of this date, in order to continue using Skype services, Skype users would have to update to Skype applications released in 2014. The new Skype protocol, Microsoft Notification Protocol 24, was released. The deprecation became effective in the second week of August 2014. Transferred files are now saved on central servers.
As far as networking stack support is concerned, Skype only supports the IPv4 protocol. It lacks support for the next-generation Internet protocol, IPv6. Skype for Business, however, includes support for IPv6 addresses, along with continued support of IPv4.
Protocol detection and control
Many networking and security companies have claimed to detect and control Skype's protocol for enterprise and carrier applications. While the specific detection methods used by these companies are often private, Pearson's chi-squared test and naive Bayes classification are two approaches that were published in 2008. Combining statistical measurements of payload properties (such as byte frequencies and initial byte sequences) as well as flow properties (like packet sizes and packet directions) has also shown to be an effective method for identifying Skype's TCP- and UDP-based protocols.
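To give a rough sense of how such statistical classifiers work, the sketch below trains a Gaussian naive Bayes model on two simple flow features (mean packet size and the fraction of outbound packets). The feature choice and the synthetic training flows are assumptions made for illustration; the published detectors combine many more payload and flow statistics.

```python
# Illustrative flow classifier using naive Bayes (synthetic data, two toy features).
# Real detectors use richer statistics such as byte frequencies, packet directions and timings.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row describes one flow: [mean packet size in bytes, fraction of outbound packets].
X_train = np.array([
    [120.0, 0.55], [135.0, 0.60], [110.0, 0.50],    # hypothetical VoIP-like flows
    [900.0, 0.10], [1200.0, 0.05], [1000.0, 0.08],  # hypothetical bulk-transfer flows
])
y_train = np.array(["voip", "voip", "voip", "bulk", "bulk", "bulk"])

clf = GaussianNB().fit(X_train, y_train)

# Classify an unseen flow from its summary statistics.
print(clf.predict(np.array([[128.0, 0.52]])))  # expected output: ['voip']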
Audio codecs
Skype 2.x used G.729, Skype 3.2 introduced SVOPC, and Skype 4.0 added a Skype-created codec called SILK, intended to be "lightweight and embeddable". Additionally, Skype has released Opus as a free codec, which integrates the SILK codec principles for voice transmission with the CELT codec principles for higher-quality audio transmissions, such as live music performances. Opus was submitted to the Internet Engineering Task Force (IETF) in September 2010. Since then, it has been standardized as RFC 6716.
Video codecs
VP7 is used for versions prior to Skype 5.5.
As of version 7.0, H.264 is used for both group and one-on-one video chat, at standard definition, 720p and 1080p high-definition.
Skype Qik
Skype acquired the video service Qik in 2011. After shutting down Qik in April 2014, Skype relaunched the service as Skype Qik on 14 October 2014. Although Qik offered video conferencing and Internet streaming, the new service focuses on mobile video messaging between individuals and groups.
Hyperlink format
Skype uses URIs of the form skype:USER?call to start a call.
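A minimal sketch of how an application could hand such a URI to the operating system is shown below. It assumes a Skype client is installed and registered as the handler for the skype: scheme; "echo123" (Skype's call-testing account) is used only as a placeholder callee.

```python
# Minimal sketch: ask the operating system to open a skype: call URI.
# Assumes an installed Skype client is registered for the skype: URI scheme.
import webbrowser

def start_call(username: str) -> None:
    # On most platforms this hands the URI to the registered protocol handler.
    webbrowser.open(f"skype:{username}?call")

if __name__ == "__main__":
    start_call("echo123")  # "echo123" is used here as a placeholder test account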
Security and privacy
Skype was claimed initially to be a secure communication, with one of its early web pages stating "highly secure with end-to-end encryption". Security services were invisible to the user, and encryption cannot be disabled. Skype claims to use publicly documented, widely trusted encryption techniques for Skype-to-Skype communication: RSA for key negotiation and the Advanced Encryption Standard to encrypt conversations. However, it is impossible to verify that these algorithms are used correctly, completely, and at all times, as there is no public review possible without a protocol specification and/or the program's source code. Skype provides an uncontrolled registration system for users with no proof of identity. Instead, users may choose a screen name which does not have to relate to their real-life identity in any way; a name chosen could also be an impersonation attempt, where the user claims to be someone else for fraudulent purposes. A third-party paper analyzing the security and methodology of Skype was presented at Black Hat Europe 2006. It analyzed Skype and found a number of security issues with the then-current security model.
Skype incorporates some features that tend to hide its traffic, but it is not specifically designed to thwart traffic analysis and therefore does not provide anonymous communication. Some researchers have been able to watermark the traffic so that it is identifiable even after passing through an anonymizing network.
In an interview, Kurt Sauer, the Chief Security Officer of Skype, said, "We provide a safe communication option. I will not tell you whether we can listen or not." This does not deny the fact that the U.S. National Security Agency (NSA) monitors Skype conversations. Skype's client uses an undocumented and proprietary protocol. The Free Software Foundation (FSF) is concerned about user privacy issues arising from using proprietary software and protocols and has made a replacement for Skype one of their high-priority projects. Security researchers Biondi and Desclaux have speculated that Skype may have a back door, since Skype sends traffic even when it is turned off and because Skype has taken extreme measures to obfuscate the program's traffic and functioning. Several media sources reported that at a meeting about the "Lawful interception of IP based services" held on 25 June 2008, high-ranking unnamed officials at the Austrian interior ministry said that they could listen in on Skype conversations without problems. Austrian public broadcasting service ORF, citing minutes from the meeting, reported that "the Austrian police are able to listen in on Skype connections". Skype declined to comment on the reports. One easily demonstrated method of monitoring is to set up two computers with the same Skype user ID and password. When a message is typed or a call is received on one computer, the second computer duplicates the audio and text. This requires knowledge of the user ID and password.
The United States Federal Communications Commission (FCC) has interpreted the Communications Assistance for Law Enforcement Act (CALEA) as requiring digital phone networks to allow wiretapping if authorized by an FBI warrant, in the same way as other phone services. In February 2009, Skype said that, not being a telephone company owning phone lines, it is exempt from CALEA and similar laws, which regulate US phone companies, and it is not clear whether Skype could support wiretapping even if it wanted to. According to the ACLU, the Act is inconsistent with the original intent of the Fourth Amendment to the U.S. Constitution; more recently, the ACLU has expressed the concern that the FCC interpretation of the Act is incorrect. It has been suggested that Microsoft made changes to Skype's infrastructure to ease various wiretapping requirements; however, Skype denies the claims.
Sometime before Skype was sold in 2009, the company had started Project Chess, a program to explore legal and technical ways to easily share calls with intelligence agencies and law enforcement.
On 20 February 2009, the European Union's Eurojust agency announced that the Italian Desk at Eurojust would "play a key role in the coordination and cooperation of the investigations on the use of internet telephony systems (VoIP), such as 'Skype'. [...] The purpose of Eurojust's coordination role is to overcome the technical and judicial obstacles to the interception of internet telephony systems, taking into account the various data protection rules and civil rights."
In November 2010, a flaw was disclosed to Skype that showed how computer crackers could secretly track any user's IP address. Due to Skype's peer-to-peer nature, this was a difficult issue to address, but this bug was eventually remedied in a 2016 update.
In 2012, Skype introduced automatic updates to better protect users from security risks but received some challenge from users of the Mac product, as the updates cannot be disabled from version 5.6 on, both on Mac OS and Windows versions, although in the latter, and only from version 5.9 on, automatic updating can be turned off in certain cases.
According to a 2012 Washington Post article, Skype "has expanded its cooperation with law enforcement authorities to make online chats and other user information available to police"; the article additionally mentions Skype made changes to allow authorities access to addresses and credit card numbers.
In November 2012, Skype was reported to have handed over user data of a pro-WikiLeaks activist to Dallas, Texas-based private security company iSIGHT Partners without a warrant or court order. The alleged handover would be a breach of Skype's privacy policy. Skype responded with a statement that it launched an internal investigation to probe the breach of user data privacy.
On 13 November 2012, a Russian user published a flaw in Skype's security which allowed any person to take over a Skype account knowing only the victim's email, by following 7 steps. The vulnerability was claimed to have existed for months, and remained open for more than 12 hours after it was widely publicized.
On 14 May 2013, it was documented that a URL sent via a Skype instant messaging session was usurped by the Skype service and subsequently used in a HTTP HEAD query originating from an IP address registered to Microsoft in Redmond (the IP address used was 65.52.100.214). The Microsoft query used the full URL supplied in the IM conversation and was generated by a previously undocumented security service. Security experts speculate the action was triggered by a technology similar to Microsoft's SmartScreen Filter used in its browsers.
The 2013 mass surveillance disclosures revealed that agencies such as the NSA and the FBI have the ability to eavesdrop on Skype, including the monitoring and storage of text and video calls and file transfers. The PRISM surveillance program, which requires FISA court authorization, reportedly has allowed the NSA unfettered access to its data center supernodes. According to the leaked documents, integration work began in November 2010, but it was not until February 2011 that the company was served with a directive to comply signed by the attorney general, with NSA documents showing that collection began on 31 March 2011.
On 10 November 2014, Skype scored 1 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. Skype received a point for encryption during transit but lost points because communications are not encrypted with a key the provider does not have access to (i.e. the communications are not end-to-end encrypted), users cannot verify contacts' identities, past messages are not secure if the encryption keys are stolen (i.e. the service does not provide forward secrecy), the code is not open to independent review (i.e. not available to merely view, nor under a free-software license), the security design is not properly documented, and there has not been a recent independent security audit. AIM, BlackBerry Messenger, Ebuddy XMS, Hushmail, Kik Messenger, Viber and Yahoo Messenger also scored 1 out of 7 points.
As of August 2018, Skype supports end-to-end encryption across all platforms.
Cybercrime on application
Cybersex trafficking has occurred on Skype and other videoconferencing applications. According to the Australian Federal Police, overseas pedophiles are directing child sex abuse using its live streaming services.
Service in the People's Republic of China
Since September 2007, users in China trying to download the Skype software client have been redirected to the site of TOM Online, a joint venture between a Chinese wireless operator and Skype, from which a modified Chinese version can be downloaded. The TOM client participates in China's system of Internet censorship, monitoring text messages between Skype users in China as well as messages exchanged with users outside the country. Niklas Zennström, then chief executive of Skype, told reporters that TOM "had implemented a text filter, which is what everyone else in that market is doing. Those are the regulations." He also stated, "One thing that's certain is that those things are in no way jeopardising the privacy or the security of any of the users."
In October 2008, it was reported that TOM had been saving the full message contents of some Skype text conversations on its servers, apparently focusing on conversations containing political issues such as Tibet, Falun Gong, Taiwan independence, and the Chinese Communist Party. The saved messages contain personally identifiable information about the message senders and recipients, including IP addresses, usernames, landline phone numbers, and the entire content of the text messages, including the time and date of each message. Information about Skype users outside China who were communicating with a TOM-Skype user was also saved. A server misconfiguration made these log files accessible to the public for a time.
Research on the TOM-Skype venture has revealed information about blacklisted keyword checks, which enable censorship and surveillance of its users. The partnership has drawn considerable criticism for this surveillance. Microsoft has not commented on the issue.
According to reports from the advocacy group Great Fire, Microsoft has modified censorship restrictions and ensured encryption of all user information. Furthermore, Microsoft is now partnered with Guangming Founder (GMF) in China.
All attempts to visit the official Skype web page from mainland China redirect the user to skype.gmw.cn. The Linux version of Skype is unavailable.
Localization
Skype comes bundled with the following locales and languages: Arabic, Bulgarian, Catalan, Chinese (Traditional and Simplified), Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hebrew, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Nepali, Norwegian, Polish, Portuguese (Brazilian and European), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, and Vietnamese.
Because the Windows desktop program offers users the option of creating new language files, at least 80 other full or partial localizations are also available.
Customer service
In January 2010, Skype rescinded its policy of seizing funds in Skype accounts that had been inactive (no paid call) for 180 days, in settlement of a class-action lawsuit. Skype also paid up to US$4 to persons who opted into the action.
As of February 2012, Skype provides support through their web support portal, support community, @skypesupport on Twitter, and Skype Facebook page. Direct contact via email and live chat is available through their web support portal. Chat Support is a premium feature available to Skype Premium and some other paid users.
Skype's refund policy states that it will provide refunds in full if customers have used less than 1 euro of their Skype Credit. "Upon a duly submitted request, Skype will refund you on a pro-rata basis for the unused period of a Product".
Skype has come under some criticism from users for the inability to completely close accounts. Users not wanting to continue using Skype can make their account inactive by deleting all personal information, except for the username.
Due to an outage on 21 September 2015 that affected a number of users in New Zealand, Australia, and other countries, Skype decided to compensate its customers with 20 minutes of free calls to over 60 landline and 8 mobile phone destinations.
Educational use
Although Skype is a commercial product, its non-paid version is used with increasing frequency among teachers, schools, and charities interested in global education projects. A popular use case is to facilitate language learning through conversations that alternate between each participant's native language.
The video conferencing aspect of the software has been praised for its ability to connect students who speak different languages, facilitate virtual field trips, and engage directly with experts.
Skype in the Classroom is another free-of-charge tool that Skype has set up on its website, designed to encourage teachers to make their classrooms more interactive and to collaborate with other teachers around the world. There are various Skype lessons in which students can participate. Teachers can also use a search tool to find experts in a particular field. The educational program Skype a Scientist, set up by biologist Sarah McAnulty in 2017, had in two years connected 14,312 classrooms with over 7,000 volunteer scientists.
However, Skype is not adopted universally, with many educational institutions in the United States and Europe blocking the application from their networks.
See also
Caller ID spoofing
Censorship of Skype
Comparison of instant messaging clients
Comparison of instant messaging protocols
Comparison of VoIP software
List of video telecommunication services and product brands
Mobile VoIP
Presence information
Unified communications
References
External links
2003 software
2011 mergers and acquisitions
Android (operating system) software
Android Auto software
Companies in the PRISM network
Cross-platform software
Estonian brands
Estonian inventions
Freeware
Instant messaging clients
IOS software
MacOS instant messaging clients
Microsoft acquisitions
Pascal (programming language) software
Peer-to-peer software
Pocket PC software
Portable software
Proprietary freeware for Linux
Proprietary software that uses Qt
Silver Lake (investment firm) companies
Software that uses Qt
Symbian software
Universal Windows Platform apps
Videoconferencing software for Linux
Videotelephony
Voice over IP clients for Linux
VoIP companies of the United States
VoIP services
VoIP software
Windows instant messaging clients
Windows Mobile Standard software
Windows Phone software
Onion routing

Onion routing is a technique for anonymous communication over a computer network. In an onion network, messages are encapsulated in layers of encryption, analogous to layers of an onion. The encrypted data is transmitted through a series of network nodes called onion routers, each of which "peels" away a single layer, uncovering the data's next destination. When the final layer is decrypted, the message arrives at its destination. The sender remains anonymous because each intermediary knows only the location of the immediately preceding and following nodes. While onion routing provides a high level of security and anonymity, there are methods to break the anonymity of this technique, such as timing analysis.
Development and implementation
Onion routing was developed in the mid-1990s at the U.S. Naval Research Laboratory by employees Paul Syverson, Michael G. Reed, and David Goldschlag to protect U.S. intelligence communications online. It was further developed by the Defense Advanced Research Projects Agency (DARPA) and patented by the Navy in 1998.
The method was publicly disclosed by the same employees in an article published in the IEEE Journal on Selected Areas in Communications the same year. The article described how the method protects users from the network itself and from outside observers who eavesdrop and conduct traffic analysis attacks, and it discussed configurations and applications of onion routing for existing e-services such as virtual private networks, web browsing, email, remote login, and electronic cash.
Based on the existing onion routing technology, computer scientists Roger Dingledine and Nick Mathewson joined Paul Syverson in 2002 to develop what has become the largest and best-known implementation of onion routing, then called The Onion Routing project (Tor project).
After the Naval Research Laboratory released the code for Tor under a free license, Dingledine, Mathewson and five others founded The Tor Project as a non-profit organization in 2006, with the financial support of the Electronic Frontier Foundation and several other organizations.
Data structure
Metaphorically, an onion is the data structure formed by "wrapping" a message with successive layers of encryption to be decrypted ("peeled" or "unwrapped") by as many intermediary computers as there are layers before arriving at its destination. The original message remains hidden as it is transferred from one node to the next, and no intermediary knows both the origin and final destination of the data, allowing the sender to remain anonymous.
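As a rough illustration of this data structure, the Python sketch below wraps a message in successive layers of symmetric encryption and then peels them off one hop at a time. The Fernet keys from the cryptography package stand in for per-hop session keys and are an assumption of the sketch; real implementations negotiate those keys with asymmetric cryptography, as described in the next section.

    # Minimal sketch of an "onion": one encryption layer per intermediary node.
    # Peeling a layer exposes only the next layer of ciphertext; the plaintext
    # appears only after the final layer is removed.
    from cryptography.fernet import Fernet

    def wrap(message, keys):
        # Encrypt with the last hop's key first so the first hop's layer is outermost.
        onion = message
        for key in reversed(keys):
            onion = Fernet(key).encrypt(onion)
        return onion

    def peel(onion, key):
        # Each node removes exactly one layer with its own key.
        return Fernet(key).decrypt(onion)

    keys = [Fernet.generate_key() for _ in range(3)]  # one key per hop
    onion = wrap(b"hello, destination", keys)
    for key in keys:  # hops peel layers in path order
        onion = peel(onion, key)
    assert onion == b"hello, destination"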
Onion creation and transmission
To create and transmit an onion, the originator selects a set of nodes from a list provided by a "directory node". The chosen nodes are arranged into a path, called a "chain" or "circuit", through which the message will be transmitted. To preserve the anonymity of the sender, no node in the circuit is able to tell whether the node before it is the originator or another intermediary like itself. Likewise, no node in the circuit is able to tell how many other nodes are in the circuit and only the final node, the "exit node", is able to determine its own location in the chain.
Using asymmetric key cryptography, the originator obtains a public key from the directory node to send an encrypted message to the first ("entry") node, establishing a connection and a shared secret ("session key"). Using the established encrypted link to the entry node, the originator can then relay a message through the first node to a second node in the chain using encryption that only the second node, and not the first, can decrypt. When the second node receives the message, it establishes a connection with the first node. While this extends the encrypted link from the originator, the second node cannot determine whether the first node is the originator or just another node in the circuit. The originator can then send a message through the first and second nodes to a third node, encrypted such that only the third node is able to decrypt it. The third, as with the second, becomes linked to the originator but connects only with the second. This process can be repeated to build larger and larger chains, but is typically limited to preserve performance.
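The hop-by-hop extension can be pictured as the originator handing the entry node an instruction that only the next node can read, wrapped in a layer that only the entry node can remove. The sketch below is a simplification under the same Fernet-key assumption; the message format and node names are invented, and real protocols such as Tor establish the inner session key with a Diffie–Hellman exchange rather than having the originator hold it in advance.

    # Minimal sketch of extending a circuit: node 1 peels its layer and forwards
    # an opaque payload; only node 2 can read the extension instruction inside.
    from cryptography.fernet import Fernet

    key_node1 = Fernet.generate_key()  # session key already shared with node 1
    key_node2 = Fernet.generate_key()  # session key for node 2 (simplification: held by the originator)

    # Originator: instruction for node 2, wrapped in node 1's layer.
    payload_for_node2 = Fernet(key_node2).encrypt(b"EXTEND circuit to node3.example:9001")
    cell_for_node1 = Fernet(key_node1).encrypt(payload_for_node2)

    # Node 1: removes its layer but sees only ciphertext addressed to node 2.
    forwarded = Fernet(key_node1).decrypt(cell_for_node1)
    assert forwarded == payload_for_node2

    # Node 2: removes the inner layer and learns where to extend the circuit.
    print(Fernet(key_node2).decrypt(forwarded))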
When the chain is complete, the originator can send data over the Internet anonymously. When the final recipient of the data sends data back, the intermediary nodes maintain the same link back to the originator, with data again layered, but in reverse such that the final node this time adds the first layer of encryption and the first node adds the last layer of encryption before sending the data, for example a web page, to the originator, who is able to decrypt all layers.
Weaknesses
Timing analysis
One of the reasons why typical Internet connections are not considered anonymous is the ability of Internet service providers to trace and log connections between computers. For example, when a person accesses a particular website, the data itself may be secured through a connection like HTTPS such that the user's password, emails, or other content is not visible to an outside party, but there is a record of the connection itself, what time it occurred, and the amount of data transferred. Onion routing creates and obscures a path between two computers such that there is no discernible connection directly from a person to a website, but records of connections between computers still exist. Traffic analysis searches those records of connections made by a potential originator and tries to match timing and data transfers to connections made to a potential recipient. If an attacker has compromised both ends of a route, a sender may be seen to have transferred an amount of data to an unknown computer a certain number of seconds before a different unknown computer transferred data of exactly the same size to a particular destination. Factors that may facilitate traffic analysis include nodes failing or leaving the network and a compromised node keeping track of a session as it occurs when chains are periodically rebuilt.
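A crude version of such an attack can be sketched as matching connection records by transfer size and near-coincident timestamps. The record fields and the two-second window below are illustrative assumptions; real traffic-confirmation attacks are statistical and considerably more sophisticated.

    # Minimal sketch of traffic correlation: pair entry-side and exit-side
    # connection records whose sizes match and whose timestamps are close.
    from dataclasses import dataclass

    @dataclass
    class Record:
        timestamp: float  # seconds since some shared epoch
        size: int         # bytes transferred
        endpoint: str     # observed peer of the connection

    def correlate(entry_logs, exit_logs, window=2.0):
        matches = []
        for e in entry_logs:
            for x in exit_logs:
                if e.size == x.size and 0 <= x.timestamp - e.timestamp <= window:
                    matches.append((e.endpoint, x.endpoint))
        return matches

    entry_logs = [Record(100.0, 4096, "alice"), Record(105.0, 512, "bob")]
    exit_logs = [Record(101.2, 4096, "example.org"), Record(140.0, 512, "news.example")]
    print(correlate(entry_logs, exit_logs))  # [('alice', 'example.org')]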
Garlic routing is a variant of onion routing associated with the I2P network that encrypts multiple messages together, which both increases the speed of data transfer and makes it more difficult for attackers to perform traffic analysis.
Exit node vulnerability
Although the message being sent is transmitted inside several layers of encryption, the job of the exit node, as the final node in the chain, is to decrypt the final layer and deliver the message to the recipient. A compromised exit node is thus able to acquire the raw data being transmitted, potentially including passwords, private messages, bank account numbers, and other forms of personal information. Dan Egerstad, a Swedish researcher, used such an attack to collect the passwords of over 100 email accounts related to foreign embassies.
Exit node vulnerabilities are similar to those on unsecured wireless networks, where the data being transmitted by a user on the network may be intercepted by another user or by the router operator. Both issues are solved by using a secure end-to-end connection like SSL/TLS or secure HTTP (S-HTTP). If there is end-to-end encryption between the sender and the recipient, and the sender isn't lured into trusting a false SSL certificate offered by the exit node, then not even the last intermediary can view the original message.
See also
Anonymous remailer
Bitblinder
Chaum mixes
Cryptography
Degree of anonymity
Diffie–Hellman key exchange
Java Anon Proxy
Key-based routing
Matryoshka doll
Mix network
Mixmaster anonymous remailer
Public-key cryptography
Proxy server
Tox – implements onion routing
Tribler – implements onion routing
References
External links
Onion-Router.net – site formerly hosted at the Center for High Assurance Computer Systems of the U.S. Naval Research Laboratory
Syverson, P. F.; Goldschlag, D. M.; Reed, M. G. (1997). "Anonymous Connections and Onion Routing". IEEE Symposium on Security and Privacy. – The original paper from the Naval Research Laboratory
Anonymity networks
Routing
Computer-related introductions in 1998
Network architecture
Cryptography
Cryptographic protocols
Onion routing
Key-based routing
Mix networks