https://en.wikipedia.org/wiki/McAfee
McAfee
McAfee Corp. (formerly known as McAfee Associates, Inc. from 1987 to 1997 and 2004 to 2014, Network Associates Inc. from 1997 to 2004, and Intel Security Group from 2014 to 2017) is an American global computer security software company headquartered in San Jose, California. The company was purchased by Intel in February 2011, and became part of the Intel Security division. In 2017, Intel struck a strategic deal with TPG Capital and converted Intel Security into a joint venture between the two companies called McAfee. Thoma Bravo took a minority stake in the new company, and Intel retained a 49% stake. The owners took McAfee public on the Nasdaq in 2020, and in 2021 a different investor group announced that it would acquire the company and take it private again.

History

1987–1999

The company was founded in 1987 as McAfee Associates, named for its founder John McAfee, who resigned from the company in 1994. McAfee was incorporated in the state of Delaware in 1992. In 1993, McAfee stepped down as head of the company, taking the position of chief technology officer before his eventual resignation. Bill Larson was appointed CEO in his place. Network Associates was formed in 1997 as a merger of McAfee Associates, Network General, PGP Corporation and Helix Software. In 1996, McAfee had acquired Calgary, Alberta, Canada-based FSA Corporation, which helped the company diversify its security offerings away from just client-based antivirus software by bringing on board its own network and desktop encryption technologies. The FSA team also oversaw the creation of a number of other technologies that were leading edge at the time, including firewall, file encryption, and public key infrastructure product lines. While those product lines had their own individual successes, including PowerBroker (written by Dean Huxley and Dan Freedman and now sold by BeyondTrust), the growth of antivirus software always outpaced the growth of the other security product lines, and McAfee remains best known for its anti-virus and anti-spam products. Among other companies bought and sold by McAfee is Trusted Information Systems, which developed the Firewall Toolkit, the free software foundation for the commercial Gauntlet Firewall, which was later sold to Secure Computing Corporation. McAfee acquired Trusted Information Systems under the banner of Network Associates in 1998. Through its brief ownership of TIS Labs (later NAI Labs, Network Associates Laboratories, and McAfee Research), McAfee was highly influential in the world of open-source software, as that organization produced portions of the Linux, FreeBSD, and Darwin operating systems, and developed portions of the BIND name server software and SNMP version 3.

2000–2009

In 2000, McAfee/Network Associates was the leading authority in educating and protecting people against the Love Bug or ILOVEYOU virus, one of the most destructive computer viruses in history. At the end of 2000, CEO Bill Larson, President Peter Watkins, and CFO Prabhat Goyal all resigned after the company sustained losses. Company president Gene Hodges served as interim CEO before George Samenuk was appointed CEO in 2001. The company returned to its original name in July 2004. It restructured, beginning with the sale of its Magic Solutions business to Remedy, a subsidiary of BMC Software, early in the year.
In mid-2004, the company sold the Sniffer Technologies business to a venture capital backed firm named Network General (the same name as the original owner of Sniffer Technologies), and changed its name back to McAfee to reflect its focus on security-related technologies. In 2006, Dale Fuller became interim CEO when Samenuk resigned and President Kevin Weiss was fired after the company was accused of questionable stock options practices. David DeWalt took over as CEO on April 2, 2007. In 2007, McAfee launched the Security Innovation Alliance (SIA), a program focused on cultivating partnerships with other tech companies and integrating third-party technology with McAfee's security and compliance risk management technology. On March 11, 2008, McAfee announced a license agreement with the US Department of Defense. This agreement allowed the DoD to integrate McAfee's VirusScan Enterprise and Anti-Spyware Enterprise into the Defense Information Systems Agency's cyber-security solutions.

2010–present

On August 19, 2010, Intel announced that it would purchase McAfee for $48 a share in a deal valued at $7.68 billion. In September 2016, Intel announced a strategic partnership with TPG to turn McAfee into an independent cyber-security company as a joint venture. That deal closed on April 3, 2017. CEO David DeWalt resigned in 2011, and McAfee appointed Michael DeCesare and Todd Gebhart as co-presidents. In 2011, McAfee also partnered with SAIC to develop anti-cyber espionage products for use by government and intelligence agencies, along with telecommunications companies. On January 6, 2014, Intel CEO Brian Krzanich announced during the Consumer Electronics Show the name change from McAfee Security to Intel Security. The company's red shield logo would remain, with the firm continuing to operate as a wholly owned Intel subsidiary. John McAfee, who no longer had any involvement in the company, expressed his pleasure at his name no longer being associated with the software. However, as of 2016 the products still bore the McAfee name. On September 7, 2016, Intel sold its majority stake to TPG and entered into an agreement with TPG to turn Intel Security into a jointly owned, independent cyber-security company with the McAfee name. After the deal between the two companies closed, the company was spun back out of Intel on April 4, 2017. Chris Young assumed the CEO position as the company became an independent entity. In 2018, the company unsuccessfully entered talks to sell majority control of McAfee to minority stakeholder Thoma Bravo. In 2018, McAfee also expanded its Security Innovation Alliance partnerships to include companies such as Atos, CyberX, Fidelis Cybersecurity, Aujas, and Silver Peak. In July 2019, McAfee began meeting with bankers to discuss returning to the market as an IPO. As an IPO, the company was estimated to be valued at $8 billion or higher; however, no deal or decision to join the public market was confirmed. Near the end of 2019, McAfee partnered with Google Cloud to integrate McAfee's Mvision Cloud and endpoint security technology with Google's cloud infrastructure. In October 2020, McAfee and its shareholders raised $740 million in the initial public offering, valuing the company at about $8.6 billion based on the outstanding shares listed in its prospectus.
McAfee shares began trading on the Nasdaq stock exchange under the ticker symbol MCFE, marking the company's return to the public market after nine years. In 2020, former McAfee CEO Chris Young left his position and was replaced by Peter Leav.

Products

McAfee primarily develops digital-security tools for personal computers and server devices, and more recently, for mobile devices. McAfee brands, products and sub-products include:

Current products
McAfee Total Protection
McAfee LiveSafe
McAfee Safe Connect VPN
McAfee Mobile Security for Android
McAfee Mobile Security for iOS
McAfee Virus Removal Service
McAfee Identity Theft Protection
McAfee Gamer Security
McAfee Safe Family
McAfee DAT Reputation Technology
McAfee Small Business Security

Renamed products
McAfee VirusScan Enterprise (changed from McAfee VirusScan)
McAfee Network Security Platform (changed from IntruShield)
McAfee Application and Change Control (changed from McAfee Change Control)
McAfee WebAdvisor (changed from SiteAdvisor)

Former products
McAfee E-Business Server
McAfee Entercept

Acquisitions

Dr Solomon's Group plc
On June 9, 1998, Network Associates agreed to acquire Dr Solomon's Group plc, the leading European manufacturer of antivirus software, for $642 million in stock.

IntruVert Networks
On April 2, 2003, McAfee acquired IntruVert Networks for $100 million. According to Network World, "IntruVert's technology focus is on intrusion-prevention, which entails not just detecting attacks, but blocking them. The IntruVert product line can be used as a passive intrusion-detection system, just watching and reporting, or it can be used in the intrusion-prevention mode of blocking a perceived attack."

Foundstone
In August 2004, McAfee agreed to acquire Foundstone, a vendor of security consulting, training, and vulnerability management software, for $86 million.

SiteAdvisor
On April 5, 2006, in competition with Symantec, McAfee bought SiteAdvisor for a reputed $70 million. SiteAdvisor is a service that warns users if downloading software or filling out forms on a site may expose them to malware or spam.

Preventsys
On June 6, 2006, McAfee announced that it would acquire Preventsys, a California-based company offering security risk management products. The acquisition cost McAfee under $10 million.

Onigma Ltd
On October 16, 2006, McAfee announced that it would acquire Israel-based Onigma Ltd for $20 million. Onigma provided host-based data leakage protection software that prevents intentional and unintentional leakage of sensitive data by internal users.

SafeBoot Holding B.V.
On October 8, 2007, McAfee announced it would acquire SafeBoot Holding B.V. for $350 million. SafeBoot provided mobile data security solutions that protected data, devices, and networks against the risk associated with loss, theft, and unauthorized access. Through the acquisition, McAfee became the only vendor to deliver endpoint, network, web, email and data security, as well as risk and compliance solutions. Gerhard Watzinger, CEO of SafeBoot, joined McAfee to lead the Data Protection product business unit. The deal closed on November 19, 2007.

ScanAlert
On October 30, 2007, McAfee announced plans to acquire ScanAlert for $51 million. The acquisition integrated ScanAlert's Hacker Safe service and McAfee's SiteAdvisor rating system to attack website security from both sides. It was the industry's first service to help consumers stay safe as they searched, surfed and shopped. The deal closed on February 7, 2008.
Reconnex
On July 31, 2008, McAfee announced it would acquire Reconnex, a maker of data protection appliances and software. Reconnex sold data loss prevention software, designed to prevent sensitive documents and data from leaving corporate networks. The acquisition added content awareness to McAfee's data security portfolio. The $46 million deal closed on August 12, 2008.

Secure Computing
On September 22, 2008, McAfee announced an agreement to acquire Secure Computing, a company specializing in network security hardware, services, and software products. The acquisition expanded McAfee's business in securing networks and cloud computing services to offer a more comprehensive brand of products. The deal closed on November 19, 2008 at a price of $497 million.

Endeavor
In January 2009, McAfee announced plans to acquire Endeavor Security, a privately held maker of IPS/IDS technology. The deal closed in February 2009 for a total purchase price of $3.2 million.

Solidcore Systems
On May 15, 2009, McAfee announced its intention to acquire Solidcore Systems, a privately held security company, for $33 million. Solidcore was a maker of software that helped companies protect ATMs and other specialized computers. The acquisition integrated Solidcore's whitelisting and compliance enforcement mechanisms into the McAfee product line. The deal closed on June 1, 2009.

MX Logic
On July 30, 2009, McAfee announced plans to acquire managed email and web security vendor MX Logic. The acquisition provided an enhanced range of SaaS-based security services such as cloud-based intelligence, web security, email security, endpoint security and vulnerability assessment. The deal closed on September 1, 2009 at a price of $140 million. MX Logic staff were integrated into McAfee's SaaS business unit.

Trust Digital
On May 25, 2010, McAfee announced a definitive agreement to acquire Trust Digital, a privately held online security company that specialized in security for mobile devices. The acquisition allowed McAfee to extend its services beyond traditional endpoint security and move into the mobile security market. The acquisition closed on June 3, 2010. The price for Trust Digital was not disclosed.

tenCube
On July 29, 2010, McAfee announced a definitive agreement to acquire tenCube, a privately held online security company that specialized in anti-theft and data security for mobile devices. The acquisition allowed McAfee to complete its diversification into the mobile security space and announce its plans to build a next-generation mobile platform. The acquisition closed on August 25, 2010.

Sentrigo
On March 23, 2011, McAfee announced its intention to acquire privately owned Sentrigo, a provider of database security tools, including vulnerability management, database activity monitoring, database audit, and virtual patching, intended to protect databases without impacting performance or availability. The acquisition enabled McAfee to extend its database security portfolio. The acquisition closed on April 6, 2011.

NitroSecurity
On October 4, 2011, McAfee announced its intention to acquire privately owned NitroSecurity, a developer of high-performance security information and event management (SIEM) solutions. NitroSecurity's products aimed to reduce risk exposure and increase network and information availability by removing the scalability and performance limitations of security information management. The acquisition closed on November 30, 2011.
ValidEdge
On February 26, 2013, McAfee announced it had acquired the ValidEdge sandboxing technology.

Stonesoft
On July 8, 2013, McAfee completed a tender offer for the Finnish network firewall design company Stonesoft Oyj in a deal worth $389 million in cash, or about $6.09 a share. The Next Generation Firewall business acquired from Stonesoft was divested to Forcepoint in January 2016.

PasswordBox
On December 1, 2014, Intel Security announced the acquisition of PasswordBox, a Montreal-based provider of digital identity management solutions. Financial terms were not disclosed.

Skyhigh Networks
In November 2017, McAfee acquired Skyhigh Networks, a CASB security company. The acquisition closed January 3, 2018.

TunnelBear
In March 2018, McAfee acquired TunnelBear, a Canadian VPN service.

Uplevel Security
In July 2019, Uplevel Security, a data analytics company using graph theory and machine learning, announced it had been acquired by McAfee.

NanoSec
In August 2019, McAfee acquired NanoSec, a container security company.

Lightpoint Security
On March 31, 2020, McAfee acquired Lightpoint Security, whose capabilities were to be incorporated into multiple McAfee products. The amount of the acquisition was not disclosed.

Controversies

Channel stuffing lawsuit: On January 4, 2006, the Securities and Exchange Commission filed suit against McAfee for overstating its 1998–2000 net revenue. Without admitting any wrongdoing, McAfee simultaneously settled the complaint and agreed to pay a $50 million penalty and rework its accounting practices. The fine was for a form of accounting fraud known as channel stuffing, which served to inflate the revenue the company reported to its investors.

SEC investigation into share options: In October 2006, McAfee fired its president Kevin Weiss, and its CEO George Samenuk resigned under the cloud of an SEC investigation which had also caused the departure of Kent Roberts, the General Counsel, earlier in the year. In late December 2006, both Weiss and Samenuk had share option grant prices revised upwards by McAfee's board. Weiss and Roberts were both exonerated of McAfee's claims of wrongdoing in 2009.

DAT 5958 update: On April 21, 2010, beginning at approximately 14:00 UTC, millions of computers worldwide running Windows XP Service Pack 3 were affected by an erroneous virus definition file update by McAfee, resulting in the removal of a Windows system file (svchost.exe) on those machines, causing machines to lose network access and, in some cases, enter a reboot loop. McAfee rectified this by removing and replacing the faulty DAT file, version 5958, with an emergency DAT file, version 5959, and posted a fix for the affected machines in its consumer knowledge base. The University of Michigan's medical school reported that 8,000 of its 25,000 computers crashed. Police in Lexington, Kentucky, resorted to hand-writing reports and turned off their patrol car terminals as a precaution. Some jails canceled visitation, and Rhode Island hospitals turned away non-trauma patients at emergency rooms and postponed some elective surgeries. The Australian supermarket chain Coles reported that 10% (1,100) of its point-of-sale terminals were affected and was forced to shut down stores in the western and southern parts of the country. As a result of the outage, McAfee implemented additional quality assurance protocols for any releases that directly impacted critical system files. The company also rolled out additional capabilities in Artemis that provided another level of protection against false positives by leveraging a whitelist of hands-off system files.
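To make the whitelist idea concrete, the sketch below illustrates such a guard in C. This is a hypothetical illustration, not McAfee's actual code: the file names are invented examples, and a production guard would verify cryptographic hashes of known-good files rather than matching names.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical false-positive guard: never quarantine files on a
   hands-off list of critical system files, regardless of what a
   signature (DAT) update says about them. */
static const char *hands_off[] = { "svchost.exe", "winlogon.exe", "lsass.exe" };

static int is_hands_off(const char *name)
{
    for (size_t i = 0; i < sizeof hands_off / sizeof hands_off[0]; i++)
        if (strcmp(name, hands_off[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *flagged = "svchost.exe";  /* file flagged by a faulty DAT */
    if (is_hands_off(flagged))
        printf("guard: refusing to quarantine %s\n", flagged);
    else
        printf("quarantining %s\n", flagged);
    return 0;
}
```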
DAT 6807-6808 update: In August 2012, an issue with an update to McAfee antivirus for home and enterprise computers turned off the antivirus protection and, in many cases, prevented connection to the Internet. McAfee was criticized for being slow to address the problem, forcing network operations teams to spend time diagnosing the issue.

See also
Internet security
Comparison of antivirus software
Comparison of firewalls

References
https://en.wikipedia.org/wiki/Daniel%20J.%20Bernstein
Daniel J. Bernstein
Daniel Julius Bernstein (sometimes known as djb; born October 29, 1971) is an American-German mathematician, cryptologist, and computer scientist. He is a visiting professor at CASA at Ruhr University Bochum, as well as a Research Professor of Computer Science at the University of Illinois at Chicago. Before this, he was a professor ("persoonlijk hoogleraar") in the department of mathematics and computer science at the Eindhoven University of Technology.

Early life

Bernstein attended Bellport High School, a public high school on Long Island, graduating in 1987 at the age of 15. The same year, he ranked fifth in the Westinghouse Science Talent Search. In 1987, at the age of 16, he achieved a Top 10 ranking in the William Lowell Putnam Mathematical Competition. Bernstein earned a B.A. in Mathematics from New York University (1991) and a Ph.D. in Mathematics from the University of California, Berkeley (1995), where he studied under Hendrik Lenstra.

Bernstein v. United States

The export of cryptography from the United States was controlled as a munition starting from the Cold War until recategorization in 1996, with further relaxation in the late 1990s. In 1995, Bernstein brought the court case Bernstein v. United States. The ruling in the case declared that software was protected speech under the First Amendment, which contributed to regulatory changes reducing controls on encryption. Bernstein was originally represented by the Electronic Frontier Foundation. He later represented himself.

Cryptography

Bernstein designed the Salsa20 stream cipher in 2005 and submitted it to eSTREAM for review and possible standardization. He later published the ChaCha20 variant of Salsa20 in 2008. Also in 2005, he proposed the elliptic curve Curve25519 as a basis for public-key schemes, and he worked as the lead researcher on the Ed25519 version of EdDSA. The algorithms have made their way into popular software. For example, since 2014, when OpenSSH is compiled without OpenSSL, they power most of its operations, and OpenBSD package signing is based on Ed25519. Nearly a decade later, Edward Snowden's disclosure of mass surveillance by the National Security Agency and the discovery of a backdoor in the NSA's Dual_EC_DRBG raised suspicions about the elliptic curve parameters proposed by the NSA and standardized by NIST. Many researchers feared that the NSA had chosen curves that gave them a cryptanalytic advantage. Google selected ChaCha20 along with Bernstein's Poly1305 message authentication code for use in TLS, which is widely used for Internet security. Many protocols based on his work have been adopted by various standards organizations and are used in a variety of applications, such as Apple iOS, the Linux kernel, OpenSSH, and Tor. In spring 2005, Bernstein taught a course on "high speed cryptography." He introduced new attacks against implementations of AES (cache attacks) in the same time period. In April 2008, Bernstein's stream cipher Salsa20 was selected as a member of the final portfolio of the eSTREAM project, part of a European Union research directive. In 2011, Bernstein published RFSB, a variant of the Fast Syndrome Based Hash function. He is one of the editors of the 2009 book Post-Quantum Cryptography.
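The Salsa20/ChaCha family described above is built almost entirely from 32-bit additions, XORs, and rotations (an "ARX" design). As a flavor of the construction, here is a minimal sketch of the ChaCha quarter-round as specified in RFC 8439; the full cipher applies this operation repeatedly to the columns and diagonals of a 16-word state. This is an illustrative fragment, not a complete cipher implementation.

```c
#include <stdint.h>

/* Rotate a 32-bit word left by n bits (0 < n < 32). */
static uint32_t rotl32(uint32_t x, int n)
{
    return (x << n) | (x >> (32 - n));
}

/* The ChaCha quarter-round (RFC 8439): mixes four words of the
   16-word state using only add, XOR, and rotate. */
static void quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
    *a += *b; *d ^= *a; *d = rotl32(*d, 16);
    *c += *d; *b ^= *c; *b = rotl32(*b, 12);
    *a += *b; *d ^= *a; *d = rotl32(*d, 8);
    *c += *d; *b ^= *c; *b = rotl32(*b, 7);
}
```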
Software

Starting in the mid-1990s, Bernstein has written a number of security-aware programs, including qmail, ezmlm, djbdns, ucspi-tcp, daemontools, and publicfile. Bernstein criticized the leading DNS package at the time, BIND, and wrote djbdns as a DNS package with security as a primary goal. Bernstein offers "security guarantees" for qmail and djbdns in the form of monetary rewards for the identification of flaws. A purported exploit targeting qmail running on 64-bit platforms was published in 2005, but Bernstein believes that the exploit does not fall within the parameters of his qmail security guarantee. In March 2009, Bernstein awarded $1000 to Matthew Dempsky for finding a security flaw in djbdns.

In August 2008, Bernstein announced DNSCurve, a proposal to secure the Domain Name System. DNSCurve applies techniques from elliptic curve cryptography to provide a vast increase in performance over the RSA public-key algorithm used by DNSSEC. It uses the existing DNS hierarchy to propagate trust by embedding public keys into specially formatted, backward-compatible DNS records. Bernstein also proposed Internet Mail 2000, an alternative system for electronic mail, intended to replace the Simple Mail Transfer Protocol (SMTP), the Post Office Protocol (POP3) and the Internet Message Access Protocol (IMAP). Bernstein is also known for his string hashing function djb2 (sketched below, after the Teaching section) and the cdb database library.

Mathematics

Bernstein has published a number of papers on mathematics and computation. Many of his papers deal with algorithms or implementations. In 2001, Bernstein circulated "Circuits for integer factorization: a proposal," which suggested that, if physical hardware implementations could be brought close to their theoretical efficiency, the then-popular estimates of adequate security parameters might be off by a factor of three. Since 512-bit RSA was breakable at the time, 1536-bit RSA might be as well. Bernstein was careful not to make any actual predictions, and emphasized the importance of correctly interpreting asymptotic expressions. Several prominent researchers (among them Arjen Lenstra, Adi Shamir, Jim Tomlinson, and Eran Tromer) disagreed strongly with Bernstein's conclusions. Bernstein has received funding to investigate whether this potential can be realized.

Bernstein is also the author of the mathematical libraries DJBFFT, a fast portable FFT library, and primegen, an asymptotically fast small prime sieve with a low memory footprint based on the sieve of Atkin (rather than the more usual sieve of Eratosthenes). Both have been used effectively in the search for large prime numbers.

In 2007, Bernstein proposed the use of a (twisted) Edwards curve as a basis for elliptic curve cryptography; it is employed in the Ed25519 implementation of EdDSA. In February 2015, Bernstein and others published a paper on stateless post-quantum hash-based signatures, called SPHINCS. In April 2017, Bernstein and others published a paper on Post-Quantum RSA that includes an integer factorization algorithm claimed to be "often much faster than Shor's".

Teaching

In 2004, Bernstein taught a course on computer software security where he assigned each student to find ten vulnerabilities in published software. The 25 students discovered 44 vulnerabilities, and the class published security advisories about the issues.
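The djb2 string hash mentioned in the Software section is short enough to give in full. The sketch below is the widely circulated form (initial value 5381, then h = h * 33 + c for each byte); it is presented as commonly attributed to Bernstein, not as taken from any particular release of his software.

```c
#include <stdint.h>

/* djb2 string hash: start at 5381 and fold in each byte with
   h = h * 33 + c, the multiply written as a shift plus an add. */
uint32_t djb2(const char *s)
{
    uint32_t h = 5381;
    unsigned char c;
    while ((c = (unsigned char)*s++) != 0)
        h = ((h << 5) + h) + c;  /* h * 33 + c */
    return h;
}
```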
See also
CubeHash, Bernstein's submission to the NIST hash function competition
SipHash
NaCl (software), a networking and cryptography library
Quick Mail Queuing Protocol (QMQP)
Quick Mail Transport Protocol (QMTP)

References

External links
DJBFFT
Daniel Bernstein on the faculty page at UIC
Faculty page at Eindhoven University of Technology
https://en.wikipedia.org/wiki/Yahoo%21%20Messenger
Yahoo! Messenger
Yahoo! Messenger (sometimes abbreviated Y!M) was an advertisement-supported instant messaging client and associated protocol provided by Yahoo!. Yahoo! Messenger was provided free of charge and could be downloaded and used with a generic "Yahoo ID" which also allowed access to other Yahoo! services, such as Yahoo! Mail. The service also offered VoIP, file transfers, webcam hosting, a text messaging service, and chat rooms in various categories. Yahoo! Messenger dates back to Yahoo! Chat, which was a public chat room service. The actual client, originally called Yahoo! Pager, launched on March 9, 1998, and was renamed Yahoo! Messenger in 1999. The chat room service shut down in 2012. In addition to instant messaging features similar to those offered by ICQ, it also offered (on Microsoft Windows) features such as IMVironments (customizing the look of instant message windows, some of which included authorized themes of various cartoons such as Garfield or Dilbert), address book integration, and custom status messages. It was also the first major IM client to feature BUZZing and music status. A new Yahoo! Messenger was released in 2015, replacing the older one. Yahoo! Messenger was shut down entirely on July 17, 2018, and replaced by a new service titled Yahoo! Together, only for that to be shut down as well in 2019.

Features

File sharing

Yahoo! Messenger offered file-sending capabilities to its users. Files could be up to 2 GB each. After the software's relaunch, only certain media files could be shared: photos, animated GIFs and videos. It also allowed album sharing, with multiple media files in one IM. The animated GIF feature integrated with Tumblr, owned by Yahoo!.

Likes

The new Yahoo! Messenger added a like button to messages and media. It was basic in functionality, adding a heart when clicked and listing contacts who added a like.

Unsend

The new Yahoo! Messenger allowed messages to be unsent, deleting them from both the sender's and the receiver's messaging page.

Group conversations (formerly Yahoo! Chat)

The new Yahoo! Messenger allowed private group conversations. Yahoo! Chat was a free online chat room service provided exclusively for Yahoo! users. It was first launched on January 7, 1997, as a separate vertical on Yahoo!. In its original form, Yahoo! Chat was a user-to-user text chat service used by millions worldwide. Soon after launch, Yahoo! Chat partnered with NBC and News Corp to produce moderated chat events. Yahoo! Chat events eventually developed broadcast partnerships with more than 100 entities and hosted more than 350 events a month. Yahoo's live chat with the music group Hanson on July 21, 1998, was the Internet's largest live event to date. Later major events included chats with three Beatles (Paul, George, Ringo), live coverage from Columbine during the tragedy (in partnership with Time Online), and live chats from outer space with John Glenn, among many others. Yahoo! Chat events were discontinued in 2001, shortly before the rise of social media. On March 9, 1998, the first public version of Yahoo! Pager was released, with Yahoo! Chat among its features. It allowed users to create public chat rooms, send private messages, and use emoticons. In June 2005, with no advance warning, Yahoo disabled users' ability to create their own chat rooms. The move came after KPRC-TV in Houston, Texas reported that many of the user-created rooms were geared toward pedophilia.
The story prompted several advertisers, including Pepsi and Georgia-Pacific, to pull their ads from Yahoo. On November 30, 2012, Yahoo announced that, among other changes, the public chat rooms would be discontinued as of December 14, 2012, stating: "This will enable us to refocus our efforts on modernizing our core Yahoo products experiences and of course, create new ones." Until the chat rooms became unavailable on December 14, 2012, all versions of Yahoo! Messenger could access Yahoo chat rooms. Yahoo has since closed down the chat.yahoo.com site (first having it redirect visitors to a section of the Yahoo! Messenger page, and as of June 2019 no longer resolving that host name at all) because the great majority of chat users accessed it through Messenger. The company worked for a while on a way to allow users to create their own rooms while providing safeguards against abuse. A greyed-out option to "create a room" was available until the release of version 11.

Voice and video

As of January 2014, the iOS version supported voice calls, with video calling on some devices. The Android version supported "voice & video calls (beta)". From September 2016, Yahoo! Messenger no longer offered webcam service on its computer application. Yahoo's software previously allowed users with newer versions (8 through 10) to use webcams. This option enabled users from all over the world to view others who had installed a webcam on their end. The service was free, with speeds averaging between 1 and 2 frames per second. The resolution of the images started at 320×240 pixels or 160×120.

Protocol

The Yahoo! Messenger Protocol (YMSG) was the client's underlying network protocol. It provided a language and series of conventions for software communicating with Yahoo!'s instant messaging service. In essence, YMSG performed the same role for Yahoo!'s IM as HTTP does for the World Wide Web. Unlike HTTP, however, YMSG was a proprietary protocol, a closed standard aligned only with the Yahoo! messaging service. Rival messaging services have their own protocols, some based on open standards, others proprietary, each effectively fulfilling the same role with different mechanics. One of the fundamental tenets of instant messaging is the notion that users can see when someone is connected to the network, known in the industry as "presence". The YMSG protocol used the mechanics of a standard internet connection to achieve presence, the same connection it used to send and receive data. In order for each user to remain "visible" to other users on the service, and thereby signal their availability, their Yahoo! IM client software maintained a functional, open network connection linking the client to Yahoo!'s IM servers.
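To make the presence mechanism concrete, the fragment below sketches the idea in C: the client connects once, logs in, and then simply keeps the socket open, with the live connection itself signaling availability. This is a toy illustration of the mechanism described above, not the real YMSG wire format; the host name and port are placeholders.

```c
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;

    /* Placeholder endpoint; the real client spoke YMSG to Yahoo!'s servers. */
    if (getaddrinfo("im.example.com", "5050", &hints, &res) != 0)
        return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;
    freeaddrinfo(res);

    /* A real client would now send a YMSG login packet. After that,
       merely holding the connection open is what makes the user
       appear online to others ("presence"). */
    printf("connected; holding the socket open\n");
    for (;;)
        sleep(60);  /* idle; a real client would answer keep-alive pings */
}
```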
URI scheme

Yahoo! Messenger's installation process automatically installed an extra uniform resource identifier (URI) scheme handler for the Yahoo! Messenger Protocol into some web browsers, so that URIs beginning with ymsgr could open a new Yahoo! Messenger window with specified parameters. This is similar in function to the mailto URI scheme, which creates a new e-mail message using the system's default mail program. For instance, a web page might include a link like the following in its HTML source to open a window for sending a message to the YIM user exampleuser:

<a href="ymsgr:sendim?exampleuser">Send Message</a>

To specify a message body, the m parameter was used, so that the link location might look like this:

ymsgr:sendim?exampleuser&m=This+is+my+message

Other commands were:
ymsgr:sendim?yahooid
ymsgr:addfriend?yahooid
ymsgr:sendfile?yahooid
ymsgr:call?yahooid
ymsgr:callPhone?phonenumber
ymsgr:im – opened the "Send an IM" window
ymsgr:customstatus?A+custom+status – changed the status message
ymsgr:getimv?imvname – loaded an IMVironment (example: ymsgr:getimv?doodle, ymsgr:getimv?yfighter)

Interoperability

On October 13, 2005, Yahoo and Microsoft announced plans to introduce interoperability between their two messengers, creating the second-largest real-time communications service userbase worldwide: 40 percent of all users. The announcement came after years of third-party interoperability success (most notably, Trillian and Pidgin) and criticism that the major real-time communications services were locking down their networks. Microsoft also held talks with AOL in an attempt to introduce further interoperability, but AOL was unwilling to participate. Interoperability between Yahoo and Windows Live Messenger was launched on July 12, 2006. This allowed Yahoo and Windows Live Messenger users to chat with each other without the need to create an account on the other service, provided both contacts used the latest versions of the clients. It was not possible to talk using the voice service between the two different messengers. As of December 14, 2012, the interoperability between Yahoo! Messenger and Windows Live Messenger ceased to exist; Live Messenger contacts appeared as greyed out and it was not possible to send instant messages to them.

Games

Various games and applications could be accessed via the conversation window by clicking the games icon and challenging a contact. They required Java to function. As of April 18, 2014, games were removed from Yahoo! Messenger.

Plug-ins

In version 8.0, Yahoo! Messenger introduced the ability for users to create plug-ins, which were then hosted and showcased on the Yahoo chat room. Yahoo no longer provides a plug-in development SDK. Yahoo! Messenger users could listen to free and paid Internet radio services, using the now-defunct Yahoo! Music Radio plug-in from within the messenger window. The plug-in also offered player functionality, such as play, pause, skip and rate this song.

Adoption

As of August 2000, according to Media Metrix, Yahoo! Messenger had about 10.6 million users in the U.S., about the same as MSN Messenger but trailing AOL Instant Messenger. However, another analyst doubted the figures for Yahoo! and MSN. As of September 2001, over five billion instant messages had been sent on the network, up 115% from the year before, according to Media Metrix. Another study in August 2002 showed that it had a 16.7 percent share of IM work and home subscribers in the U.S., compared to 24.1 percent for MSN and 28.3 percent for AIM. In April 2002, 19.1 million people in the U.S. used Yahoo! Messenger, according to Media Metrix. Another study from Nielsen Net Ratings showed that as of 2002, Yahoo! Messenger had some 12 million users worldwide. This increased to 22 million by March 2006. Yahoo! Messenger was the dominant instant messaging platform among commodity traders until the legacy platform was discontinued in August 2016.
At the time of Yahoo! Messenger's closure in 2018, it remained popular in Vietnam.

Software

As of March 27, 2016, the only supported clients were the Android, iOS and web browser clients. The previous Windows, Mac, Unix and Solaris clients were no longer supported, and their servers began shutting down on August 5, 2016, with the clients no longer working by August 31, 2016. The servers for the legacy clients were finally shut down between mid-morning and early afternoon Eastern Time on September 1, 2016, after which the legacy desktop clients could no longer access their buddy/contact lists. As of 2018 (with the last version), Yahoo! Messenger was available for computers as a web service, including both a messenger-only site and Yahoo! Mail integration. Apps were also available on Android and iOS. Pidgin could connect to Yahoo! Messenger by using the FunYahoo++ plugin. Mobile versions of Yahoo! Messenger were launched originally for Palm OS and Windows CE devices. In a deal signed in March 2000, Yahoo! Messenger would come bundled on Palm handheld computers. It was also available to Verizon Wireless customers, through a deal with Yahoo! announced in March 2001, and through Sprint's MiniBrowser. A version for the T-Mobile Sidekick II was released in 2004. This was followed by versions for Symbian (via Yahoo! Go), BlackBerry, and then for iPhone in April 2009. A version called Yahoo! Messenger for SMS also existed, which allowed IM via SMS.

History

Yahoo! Pager launched on March 9, 1998, as an instant messaging (IM) client integrated with Yahoo! services including Yahoo! Chat. It included basic messaging support, a buddy list with status message support, the ability to block other users, alerts when a buddy came online, and notifications when a new Yahoo! Mail message arrived. In 1999, the name changed to Yahoo! Messenger. Version 5.0, released in November 2001, introduced IMVironments, an initiative that allowed users to play music and Flash Video clips inside the IM window. Yahoo! partnered with the rock band Garbage, which made its single "Androgyny" available for users to share. Other partnerships produced IMVironments for the Monsters, Inc. movie, the Super Smash Bros. Melee video game, and the Hello Kitty character, among others. In August 2002, with the release of version 5.5, the resolution for video calling was increased to a possible 320×240 and 20 frames per second (up from 160×120 and 1 frame per second). From October 2002, Yahoo! offered corporate subscribers a more secure, SSL-encrypted IM client, called Yahoo! Messenger Enterprise Edition. It was released with a $30 yearly subscription package in 2003. Yahoo! Messenger version 6.0 was released in May 2004. It added games, music, photos, and Yahoo! Search, alongside a "stealth" mode. It also debuted Yahoo! Avatars. With the release of version 7.0 in August 2005, the client was renamed Yahoo! Messenger with Voice. It had several new features such as VoIP, voicemail, drag-and-drop file and photo sharing, Yahoo! 360° and LAUNCHcast integration, and others. It was seen as a challenger to Skype. On October 12, 2005, Yahoo! and Microsoft formed an alliance in which Yahoo! Messenger and MSN Messenger (later known as Windows Live Messenger) would be interconnected, allowing users of both communities to communicate and share emoticons and buddy lists with each other. The service was enabled on Yahoo! Messenger with Voice 8.0 in July 2006.
As of version 8.1, the name switched back to just Yahoo! Messenger. Beginning in 2006, Yahoo made known its intention to provide a web interface for Yahoo! Messenger, culminating in the Gmail-like web archival and indexing of chat conversations through Yahoo! Mail. However, while Yahoo! Mail integrated many of the rudimentary features of Messenger beginning in 2007, Yahoo did not initially succeed in integrating archival of chat conversations into Mail. Instead, a separate Adobe Flex-based web messenger was released in 2007 with archival of conversations that took place inside the web messenger itself. At the Consumer Electronics Show in January 2007, Yahoo! Messenger for Vista was introduced, a version designed and optimized for Windows Vista. It exploited the new design elements of Vista's Windows Presentation Foundation (WPF) and introduced a new user interface and features. The application was in a preview beta until finally released for download on December 6, 2007. As of October 24, 2008, Yahoo! Messenger for Vista is no longer available. In May 2007, Yahoo! Messenger for the Web was launched, a browser-based client of the IM service. Yahoo! Messenger version 9 was released in September 2008. It allowed the viewing of YouTube videos within the chat window and integrated with other Yahoo! services such as Flickr. This version also saw the release of Pingbox, which embedded on a blog or website and allowed visitors to send IM texts anonymously, without needing Yahoo! Messenger software or signing in. Version 10, released in November 2009, incorporated many bug fixes and featured high-quality video calling. The last major Windows client release, version 11 in 2011, featured integration with Facebook, Twitter and Zynga, allowing chat with Facebook friends and playing Zynga games within the client. It also archived past messages on an online server accessible through the client. Version 11.5 (released November 2011) added tabbed IMs. In December 2015, an all-new, rewritten Yahoo! Messenger was launched, only on mobile and through a browser. A desktop version of the "new" Messenger was later released, shortly before the "legacy" Messenger shut down on August 5, 2016.

Yahoo! Together

Yahoo! Together was a freeware and cross-platform messaging service, developed by Yahoo! for the Android and iOS mobile platforms. The software was introduced in beta on May 8, 2018, as Yahoo! Squirrel, to replace Yahoo! Messenger and Verizon Media's AOL Instant Messenger. In October 2018, it was renamed Yahoo! Together. The service was targeted at families and the consumer market rather than enterprise, and the app was compared to Slack. Less than a year after its public beta release, Yahoo! Together went offline on April 4, 2019.

Third-party clients

Third-party clients could also be used to access the original service. These included:
Adium
BitlBee
Centericq
Empathy
Fire
imeem
IMVU
Kopete
meebo
Meetro
Miranda IM
Paltalk
Pidgin
Trillian
Trillian Astra
Trillian Pro
Windows Live Messenger

SPIM

Yahoo! Messenger users were subjected to unsolicited messages (SPIM). Yahoo's primary solution to the issue involved deleting such messages and placing the senders on an ignore list. At one point, it was estimated that at least 75% of all users of Yahoo chat rooms were bots. Yahoo introduced a CAPTCHA system to help filter out bots joining chat rooms, but such systems generally do little to prevent abuse by spammers.

Security

On November 4, 2014, the Electronic Frontier Foundation listed Yahoo!
Messenger on its "Secure Messaging Scorecard". Yahoo! Messenger received 1 out of 7 points on the scorecard. It received a point for encryption in transit, but missed points because communications were not encrypted with a key the provider did not have access to (i.e., the communications were not end-to-end encrypted), users could not verify contacts' identities, past messages were not secure if the encryption keys were stolen (i.e., the service did not provide forward secrecy), the code was not open to independent review (i.e., the source code was not open-source), the security design was not properly documented, and there had not been a recent independent security audit. The British intelligence agency Government Communications Headquarters (GCHQ), through its secret mass surveillance program Optic Nerve, together with the National Security Agency (NSA), was reported to have indiscriminately collected still images from the Yahoo webcam streams of millions of mostly innocent Yahoo webcam users from 2008 to 2010, among other things creating a database for facial recognition for future use. Optic Nerve took a still image from the webcam stream every 5 minutes. In September 2016, The New York Times reported that Yahoo's security team, led by Alex Stamos, had pressed for Yahoo to adopt end-to-end encryption sometime between 2014 and 2015, but this had been resisted by Jeff Bonforte, Yahoo's senior vice president, "because it would have hurt Yahoo's ability to index and search message data".

See also
Comparison of instant messaging clients
Comparison of instant messaging protocols
Comparison of IRC clients
Instant messaging
Yahoo Together

References
https://en.wikipedia.org/wiki/Internet%20Society
Internet Society
The Internet Society (ISOC) is an American nonprofit advocacy organization founded in 1992 with local chapters around the world. Its mission is "to promote the open development, evolution, and use of the Internet for the benefit of all people throughout the world." It has offices in Reston, Virginia, U.S., and Geneva, Switzerland.

Organization

The Internet Society has regional bureaus worldwide, composed of chapters, organizational members, and, as of July 2020, more than 70,000 individual members. The Internet Society has a staff of more than 100 and is governed by a board of trustees, whose members are appointed or elected by the society's chapters, organization members, and the Internet Engineering Task Force (IETF). The IETF has formed the Internet Society's volunteer base. Its leadership includes the Chairman of the Board of Trustees, Ted Hardie, and the President and CEO, Andrew Sullivan. The Internet Society created the Public Interest Registry (PIR), launched the Internet Hall of Fame, and served as the organizational home of the IETF. The Internet Society Foundation was created in 2017 as its independent philanthropic arm, which awards grants to organizations.

History

In 1991, the NSF contract with the Corporation for National Research Initiatives (CNRI) to operate the Internet Engineering Task Force (IETF) expired. The then Internet Activities Board (IAB) sought to create a non-profit institution which could take over the role. In 1992, Vint Cerf, Bob Kahn and Lyman Chapin announced the formation of the Internet Society as "a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure," which would incorporate the IAB, the IETF, and the Internet Research Task Force (IRTF), plus the organization of the annual INET meetings. This arrangement was formalized in RFC 1602 in 1993. In 1999, after Jon Postel's death, ISOC established the Jonathan B. Postel Service Award. The award has been presented every year since 1999 by the Internet Society to "honor a person who has made outstanding contributions in service to the data communications community." By mid-2000, the Internet Society's finances became precarious, and several individuals and organizations stepped forward to fill the gap. Until 2001, there were also trustees elected by the individual members of the Internet Society. Those elections were "suspended" in 2001, ostensibly as a fiscal measure, given the perception that the elections were too expensive for the precarious financial state of the organization. In later bylaw revisions, the concept of individual member-selected trustees went from "suspended" to being deleted altogether. In late 2001, leaders from Afilias (a domain name registry) approached the Internet Society CEO, Lynn St. Amour, to propose a novel partnership to jointly bid for the .org registry. In this model, the Internet Society would become the new home of .org, and all technical and service functions would be managed by Afilias. Afilias would pay all bid expenses and would contribute towards the Internet Society payroll while the bid was under consideration by ICANN. The Internet Society board approved this proposal at its board meeting in 2001. In 2002, ISOC successfully bid for the .org registry and formed the Public Interest Registry (PIR) to manage and operate it. In 2010, ISOC launched its first community network initiative to deploy five wireless mesh based networks in rural locations across India.
On June 8, 2011, ISOC mounted World IPv6 Day to test IPv6 deployment. In 2012, on ISOC's 20th anniversary, it established the Internet Hall of Fame, an award to "publicly recognize a distinguished and select group of visionaries, leaders, and luminaries who have made significant contributions to the development and advancement of the global Internet". Also in 2012, ISOC launched Deploy360, a portal and training program to promote IPv6 and DNSSEC, and on June 6, 2012, ISOC organized the World IPv6 Launch, this time with the intention of leaving IPv6 permanently enabled on all participating sites. In 2016, Deploy360 extended its campaigns to include Mutually Agreed Norms for Routing Security (MANRS) and DNS-based Authentication of Named Entities (DANE). In 2017, ISOC's North America region launched an annual Indigenous Connectivity Summit with an event in Santa Fe, New Mexico; in subsequent years the event has been held in Inuvik, Northwest Territories, and Hilo, Hawaii. In December 2017, ISOC absorbed the standards body Online Trust Alliance (OTA), which produced an annual Online Trust Audit, a Cyber Incident Response Guide, and an Internet of Things (IoT) Trust Framework. In August 2018, the Internet Society organized the IETF more formally as the IETF Administration LLC (IETF LLC) underneath ISOC. The IETF LLC continues to be closely associated with ISOC and is significantly funded by ISOC.

Support to United Nations Internet governance initiative

The ubiquity of the Internet in modern-day society prompted António Guterres, the United Nations Secretary-General, to convene a panel of experts to discuss the future of the Internet and its role in globalized digital cooperation. Three models were proposed after several rounds of discussion: a Digital Commons Architecture (DCA), a Distributed Co-Governance Architecture (CoGov), and a reformed Internet Governance Forum (IGF+). As of October 2020, ISOC was leading and facilitating multi-round Stakeholders' Dialogue meetings to collect, compile, and submit the input of professionals and experts worldwide on the future governance of the Internet.

Activities

The Internet Society presents the Jonathan B. Postel Service Award every year to honor a person who has made outstanding contributions in service to the data communications community. Its activities have included MANRS (Mutually Agreed Norms for Routing Security), launched in 2014 to provide crucial fixes that reduce the most common threats to the Internet's routing infrastructure. The society organized the Africa Peering and Interconnection Forum (AfPIF) to help grow the Internet infrastructure in Africa, and hosts Internet development conferences in developing markets. It offered Deploy360, an information hub, portal and training program to promote IPv6 and DNSSEC, and, as noted above, runs the annual Indigenous Connectivity Summit to connect tribal communities. The society also publishes reports on global Internet issues, has created tools, surveys, codes, and policy recommendations to improve Internet use, supports projects to build community networks, infrastructure, and secure routing protocols, and advocates for end-to-end encryption.
Controversies

Sale of the Public Interest Registry

In 2019, the Internet Society agreed to the sale of the Public Interest Registry to Ethos Capital for $1.135 billion, a transaction initially expected to be completed in early 2020. The Internet Society said it planned to use the proceeds to fund an endowment. The Public Interest Registry is a non-profit subsidiary of the Internet Society which operates three top-level domain names (.ORG, .NGO, and .ONG), all of which have traditionally focused on serving the non-profit and non-governmental organization communities. The sale met significant opposition because it involved the transfer of what is viewed as a public asset to a private equity investment firm. In late January 2020, ICANN halted its final approval of the sale after the Attorney General of California requested detailed documentation from all parties, citing concerns that both ICANN and the Internet Society had potentially violated their public interest missions as registered charities subject to the laws of California. In February, the Internet Society's Chapter Advisory Council (which represents its membership) began the process of adopting a motion rejecting the sale unless certain conditions were met. On April 30, 2020, ICANN rejected the proposal to sell the PIR to Ethos Capital.

Denial of participation of Iranians in activities

In September 2016, the Internet Society indicated that it would not seek to obtain a license from the Office of Foreign Assets Control (OFAC) of the US Department of the Treasury that would allow it to fund the activities of Iranian nationals. This caused considerable distress to ISOC members in Iran, who were thus unable to launch an Internet Society chapter in Iran, and led to the revocation of a fellowship the Internet Society had awarded to fund an Iranian student's travel to the Internet Governance Forum in Mexico.

References

External links
An Oral History of the Internet Society's Founding (2013)
https://en.wikipedia.org/wiki/Niels%20Ferguson
Niels Ferguson
Niels T. Ferguson (born 10 December 1965, Eindhoven) is a Dutch cryptographer and consultant who currently works for Microsoft. He has worked with others, including Bruce Schneier, designing cryptographic algorithms, testing algorithms and protocols, and writing papers and books. Among the designs Ferguson has contributed to are the AES finalist block cipher Twofish, the stream cipher Helix, and the Skein hash function.

In 1999, Ferguson, together with Bruce Schneier and John Kelsey, developed the Yarrow random number generator. Ferguson and Schneier later developed Yarrow further into the Fortuna random number generator. In 2001, he claimed to have broken the HDCP system incorporated into HD DVD and Blu-ray Disc players, similar to DVD's Content Scramble System, but he has not published his research, citing the Digital Millennium Copyright Act of 1998, which would make such publication illegal. In 2006, he published a paper covering some of his work on BitLocker full disk encryption at Microsoft.

At the CRYPTO 2007 conference rump session, Dan Shumow and Niels Ferguson presented an informal paper describing a potential kleptographic backdoor in the NIST-specified Dual_EC_DRBG cryptographically secure pseudorandom number generator. The backdoor was confirmed to be real in 2013 as part of the Edward Snowden leaks.

References

External links
Short bio at the ORD-GROUP site
Ferguson chooses not to publish his results because he fears being prosecuted under the Digital Millennium Copyright Act
On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng
https://en.wikipedia.org/wiki/Mac%20OS%209
Mac OS 9
Mac OS 9 is the ninth major release of Apple's classic Mac OS operating system, which was succeeded by OS X. Introduced on October 23, 1999, it was promoted by Apple as "The Best Internet Operating System Ever", highlighting Sherlock 2's Internet search capabilities, integration with Apple's free online services known as iTools, and improved Open Transport networking. While Mac OS 9 lacks protected memory and full pre-emptive multitasking, lasting improvements include the introduction of an automated Software Update engine and support for multiple users. Apple discontinued development of Mac OS 9 in late 2001, transitioning all future development to Mac OS X. The final updates to Mac OS 9 addressed compatibility issues with Mac OS X while running in the Classic Environment and compatibility with Carbon applications. At the 2002 Worldwide Developers Conference, Steve Jobs began his keynote address by staging a mock funeral for OS 9.

Features

Apple billed Mac OS 9 as including "50 new features" and heavily marketed its Sherlock 2 software, which introduced a "channels" feature for searching different online resources and introduced a QuickTime-like metallic appearance. Mac OS 9 also featured integrated support for Apple's suite of Internet services known as iTools (later re-branded as .Mac, then MobileMe, which was replaced by iCloud) and included improved TCP/IP functionality with Open Transport 2.5.

Other features new to Mac OS 9 include:
Integrated support for multiple user accounts without using At Ease.
Support for voice login through VoicePrint passwords.
Keychain, a feature allowing users to save passwords and textual data encrypted in protected keychains.
A Software Update control panel for automatic download and installation of Apple system software updates.
A redesigned Sound control panel and support for USB audio.
Speakable Items 2.0, also known as PlainTalk, featuring improved speech synthesis and recognition along with AppleScript integration.
Improved font management through FontSync.
Remote Access Personal Server 3.5, including support for TCP/IP clients over Point-to-Point Protocol (PPP).
An updated version of AppleScript with support for TCP/IP.
Personal File Sharing over TCP/IP.
USB Printer Sharing, a control panel allowing certain USB printers to be shared across a TCP/IP network.
128-bit file encryption in the Finder.
Support for files larger than 2 GB.
Unix volume support.
CD burning in the Finder (introduced in Mac OS 9.1).
Addition of a "Window" menu to the Finder (introduced in Mac OS 9.1).

Mac OS 9 and the Classic Environment

PowerPC versions of Mac OS X prior to 10.5 include a compatibility layer (a shell) called Classic, enabling users to run applications and hardware requiring Mac OS 9 from within OS X. This is achieved by running Mac OS 9, without access to its Finder, inside OS X. It requires Mac OS 9 to be installed on the computer, even though computers that can run the Classic Environment are not necessarily able to boot into Mac OS 9. Some Mac OS 9 applications do not run well in Classic: they demonstrate screen redraw problems and lagging performance. In addition, some drivers and other software which directly interact with the hardware fail to work properly. In May 2002, at Apple's Worldwide Developers Conference in San Jose, California, Steve Jobs, accompanied by a coffin, held a mock funeral to announce that Apple had stopped development of Mac OS 9. Mac OS 9.2.2, released in December 2001, was the final version of Mac OS 9 and the "classic" Mac OS.
In June 2005, Jobs announced that the Macintosh platform would be transitioning to Intel x86 microprocessors. Developer documentation of the Rosetta PowerPC emulation layer revealed that applications written for Mac OS 8 or 9 would not run on x86-based Macs. The Classic Environment remains in the PowerPC version of 10.4; however, x86 versions of OS X do not support the Classic environment. Mac OS 9 can be emulated by using SheepShaver, a PowerPC emulator available on multiple operating systems, including Intel-based Macs. However, SheepShaver cannot run Mac OS versions newer than 9.0.4, as there is no support for a memory management unit. The PearPC PowerPC emulator does not support Mac OS 9. QEMU has experimental support for running Mac OS 9 using PowerPC G4 emulation. The 1 GHz Titanium PowerBook G4 "Antimony" (model A1025), released in 2002, can boot both Mac OS 9 and Mac OS X and was often set up in a "dual boot" configuration; it was the final notebook that could boot Mac OS 9. All other G4 Macs with 1 GHz or faster processors and all G5 Macs cannot boot Mac OS 9 natively, as the "Mac OS ROM" was never updated to allow those Macs, which were developed during the OS X era, to boot it directly. In recent years, unofficial patches for Mac OS 9 and the Mac OS ROM have been made to allow unsupported G4 Macs to boot into Mac OS 9 (G5 Macs still cannot run Mac OS 9 at all, since Mac OS 9 does not recognize the G5 processor), though this is not officially supported by Apple. Other uses Aside from Apple-branded hardware that is still maintained and operated, Mac OS 9 can also be run in other environments such as Windows and Unix. For example, the aforementioned SheepShaver software was originally not designed for use on x86 platforms and required an actual PowerPC processor in the machine it was running on, operating similarly to a hypervisor. Although it provides PowerPC processor support, it can only run up to Mac OS 9.0.4 because it does not emulate a memory management unit. Version history Updates to Mac OS 9 include 9.0.4, 9.1, 9.2.1, and 9.2.2. Mac OS 9.0.4 was a collection of bug fixes primarily relating to USB and FireWire support. Mac OS 9.1 included integrated CD burning support in the Macintosh Finder and added a new Window menu in the Finder for switching between open windows. Mac OS 9.2 increased performance noticeably and improved Classic Environment support. See also List of Apple operating systems
205812
https://en.wikipedia.org/wiki/Sylpheed
Sylpheed
Sylpheed is an open-source e-mail client and news client licensed under GNU GPL-2.0-or-later, with the library part LibSylph under GNU LGPL-2.1-or-later. It provides easy configuration and an abundance of features. It stores mail in the MH Message Handling System format. Sylpheed runs on Unix-like systems such as Linux or BSD, and it is also usable on Windows. It uses GTK+. In 2005, Sylpheed was forked to create Sylpheed-Claws, now known as Claws Mail. As of 2020, both projects continue to be developed independently. Sylpheed is the default mail client in Lubuntu, Damn Small Linux and some flavours of Puppy Linux. Features Spam filtering Sylpheed provides support for spam filtering using either bogofilter or bsfilter, at the user's choice. Bsfilter is shipped with the Windows version of Sylpheed. Plug-ins Sylpheed supports the development of plug-ins. As of February 2015, Sylpheed's website notes an attachment-tool plug-in, an automatic mail forwarding plug-in, and a plug-in for determining whether or not attachments are password-protected. Limitations Sylpheed is unable to send HTML mail. This is intentional, since the developers consider HTML mail to be harmful. It is still possible to receive HTML mail using Sylpheed. Password The password is stored in plaintext in the Sylpheed configuration file, which by default is readable only by the file's owner, not by its group or others. A feature called "master password" prevents Sylpheed from holding plaintext passwords, but does not protect stored messages from other local users with administrator privileges. Encryption Sylpheed natively includes PGP Sign and PGP Encrypt options in the compose window, though these require a PGP-based encryption tool to be installed on the computer. The function is simple to use but not intuitive to set up. See also Claws Mail List of Usenet newsreaders Comparison of Usenet newsreaders Comparison of e-mail clients External links Sylpheed documentation project SourceForge project page
206753
https://en.wikipedia.org/wiki/Trapdoor%20function
Trapdoor function
A trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor". Trapdoor functions are widely used in cryptography. In mathematical terms, if f is a trapdoor function, then there exists some secret information t, such that given f(x) and t, it is easy to compute x. Consider a padlock and its key. It is trivial to change the padlock from open to closed without using the key, by pushing the shackle into the lock mechanism. Opening the padlock easily, however, requires the key to be used. Here the key is the trapdoor and the padlock is the trapdoor function. An example of a simple mathematical trapdoor is "6895601 is the product of two prime numbers. What are those numbers?" A typical "brute-force" solution would be to try dividing 6895601 by several prime numbers until finding the answer. However, if one is told that 1931 is one of the numbers, one can find the answer by entering "6895601 ÷ 1931" into any calculator. This example is not a sturdy trapdoor function – modern computers can guess all of the possible answers within a second – but this sample problem could be improved by using the product of two much larger primes. Trapdoor functions came to prominence in cryptography in the mid-1970s with the publication of asymmetric (or public-key) encryption techniques by Diffie, Hellman, and Merkle. Indeed, Diffie and Hellman coined the term. Several function classes had been proposed, and it soon became obvious that trapdoor functions are harder to find than was initially thought. For example, an early suggestion was to use schemes based on the subset sum problem. This turned out – rather quickly – to be unsuitable. The best-known trapdoor function (family) candidates are the RSA and Rabin families of functions. Both are written as exponentiation modulo a composite number, and both are related to the problem of prime factorization. Functions related to the hardness of the discrete logarithm problem (either modulo a prime or in a group defined over an elliptic curve) are not known to be trapdoor functions, because there is no known "trapdoor" information about the group that enables the efficient computation of discrete logarithms. A trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). A backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion. Definition A trapdoor function is a collection of one-way functions { f_k : D_k → R_k } (k ∈ K), in which all of K, D_k, R_k are subsets of binary strings {0, 1}^*, satisfying the following conditions: There exists a probabilistic polynomial time (PPT) sampling algorithm Gen such that Gen(1^n) = (k, t_k), with k ∈ K ∩ {0, 1}^n and t_k ∈ {0, 1}^* satisfying |t_k| < p(n), where p is some polynomial. Each t_k is called the trapdoor corresponding to k. Each trapdoor can be efficiently sampled. Given input k, there also exists a PPT algorithm that outputs x ∈ D_k. That is, each D_k can be efficiently sampled. For any k ∈ K, there exists a PPT algorithm that correctly computes f_k. For any k ∈ K, there exists a PPT algorithm A such that
for any x ∈ D_k, if y = A(k, f_k(x), t_k), then f_k(y) = f_k(x). That is, given the trapdoor, it is easy to invert. For any k ∈ K, without the trapdoor t_k, the probability that any PPT algorithm correctly inverts f_k (i.e., given f_k(x), finds a pre-image x' such that f_k(x') = f_k(x)) is negligible. If each function in the collection above is a one-way permutation, then the collection is also called a trapdoor permutation. Examples In the following two examples, we always assume it is difficult to factorize a large composite number (see Integer factorization). RSA Assumption In this example, the inverse d of e modulo φ(n), Euler's totient function of n, is the trapdoor: if the factorization of n is known, φ(n) can be computed, so the inverse d = e^(−1) mod φ(n) can be computed, and then, given y = f(x) = x^e mod n, we can find x = y^d mod n = x^(ed) mod n = x mod n. Its hardness follows from the RSA assumption. Rabin's Quadratic Residue Assumption Let n be a large composite number such that n = pq, where p and q are large primes with p ≡ 3 mod 4 and q ≡ 3 mod 4, both kept secret from the adversary. The problem is to compute z given a such that a ≡ z^2 mod n. The trapdoor is the factorization of n. With the trapdoor, the four solutions for z are cx + dy, cx − dy, −cx + dy and −cx − dy (all mod n), where a ≡ x^2 mod p, a ≡ y^2 mod q, c ≡ 1 mod p, c ≡ 0 mod q, d ≡ 0 mod p, d ≡ 1 mod q. See Chinese remainder theorem for more details. Note that, given the primes p and q, we can find x ≡ a^((p+1)/4) mod p and y ≡ a^((q+1)/4) mod q. Here the conditions p ≡ 3 mod 4 and q ≡ 3 mod 4 guarantee that the solutions x and y are well defined. See also One-way function
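To make the RSA example concrete, here is a minimal Python sketch using the toy modulus from the lead section (6895601 = 1931 × 3571). The exponent, variable names, and sizes are chosen purely for illustration; real deployments use primes of roughly a thousand bits or more.

# Toy illustration of the RSA trapdoor function f(x) = x^e mod n.
# Anyone can evaluate f; inverting it is easy only with the trapdoor,
# the secret exponent d derived from the factorization of n.

p, q = 1931, 3571          # secret primes (toy-sized for the example)
n = p * q                  # public modulus (6895601, as in the lead)
e = 65537                  # public exponent
phi = (p - 1) * (q - 1)    # φ(n), computable only from the factorization
d = pow(e, -1, phi)        # trapdoor: d = e^(-1) mod φ(n) (Python 3.8+)

x = 42                     # a message in the domain
y = pow(x, e, n)           # easy direction: y = f(x)
assert pow(y, d, n) == x   # with the trapdoor, inversion is easy: x = y^d mod n

Without d, an attacker who sees only (n, e, y) must in effect factor n, which is exactly the hard direction the definition requires.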
208250
https://en.wikipedia.org/wiki/John%20the%20Ripper
John the Ripper
John the Ripper is a free password cracking software tool. Originally developed for the Unix operating system, it can run on fifteen different platforms (eleven of which are architecture-specific versions of Unix, DOS, Win32, BeOS, and OpenVMS). It is among the most frequently used password testing and breaking programs as it combines a number of password crackers into one package, autodetects password hash types, and includes a customizable cracker. It can be run against various encrypted password formats including several crypt password hash types most commonly found on various Unix versions (based on DES, MD5, or Blowfish), Kerberos AFS, and Windows NT/2000/XP/2003 LM hash. Additional modules have extended its ability to include MD4-based password hashes and passwords stored in LDAP, MySQL, and others. Sample output Here is a sample output in a Debian environment.
$ cat pass.txt
user:AZl.zWwxIh15Q
$ john -w:password.lst pass.txt
Loaded 1 password hash (Traditional DES [24/32 4K])
example         (user)
guesses: 1  time: 0:00:00:00 100%  c/s: 752  trying: 12345 - pookie
The first line is a command to display the contents of the file "pass.txt". The next line shows the contents of the file: the user name ("user") followed by the password hash associated with that user ("AZl.zWwxIh15Q"). The third line is the command for running John the Ripper utilizing the "-w" flag; "password.lst" is the name of a text file full of words the program will use against the hash, and pass.txt appears again as the file we want John to work on. Then we see output from John working: it loaded one password hash — the one we saw with the "cat" command — along with the type of hash John thinks it is (Traditional DES), and it recovered the password ("example") for the user "user" with one reported guess in under a second. Attack types One of the modes John can use is the dictionary attack. It takes text string samples (usually from a file, called a wordlist, containing words found in a dictionary or real passwords cracked before), hashes each one in the same format as the password being examined (including the same hash algorithm and salt), and compares the output to the encrypted string. It can also perform a variety of alterations to the dictionary words and try these. Many of these alterations are also used in John's single attack mode, which modifies an associated plaintext (such as a username with an encrypted password) and checks the variations against the hashes. John also offers a brute force mode. In this type of attack, the program goes through all the possible plaintexts, hashing each one and then comparing it to the input hash. John uses character frequency tables to try plaintexts containing more frequently used characters first. This method is useful for cracking passwords that do not appear in dictionary wordlists, but it takes a long time to run. See also Brute-force search Brute-force attack Crack (password software) Computer hacking Hacking tool Openwall Project Password cracking
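The core loop of the dictionary attack described above can be sketched in a few lines of Python. This is a simplified illustration, not John's actual implementation: an unsalted MD5 hash and a tiny in-memory wordlist stand in for a real hash format and a wordlist file.

import hashlib

# Hash we are trying to crack (the well-known MD5 of "password").
target = "5f4dcc3b5aa765d61d8327deb882cf99"

wordlist = ["12345", "letmein", "password", "pookie"]  # stand-in for password.lst

def dictionary_attack(target_hash, words):
    # Hash each candidate the same way the target was hashed and compare.
    for word in words:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(target, wordlist))  # -> "password"

A real cracker adds salts, many hash formats, word mangling rules, and heavily optimized hashing, but the compare-candidate-hashes structure is the same.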
211010
https://en.wikipedia.org/wiki/AE
AE
AE, Ae, ae, Æ or æ may refer to:
Arts and entertainment
A.E. (video game), 1982
Autechre, an electronic music group
L'Année épigraphique, a French publication on epigraphy
Encyclopedia Dramatica, often abbreviated æ
Language
Characters
Æ or æ, a ligature or letter
List of words that may be spelled with a ligature, including "AE" being rendered as "Æ"
Ä or ä, a letter sometimes represented as "ae"
Ae (Cyrillic), a Cyrillic-script letter
Ae (digraph), a Latin-script digraph
Languages and dialects
American English, the set of varieties of the English language native to the United States
Avestan, a language, ISO 639-1 language code ae
People
A. E. or Æ, a penname of George William Russell (1867–1935), Irish writer
Koichi Ae (born 1976), Japanese football player
Alexander Emelianenko (born 1981), Russian mixed martial artist, with AE Team
Places
Ae, Dumfries and Galloway, Scotland
Water of Ae, a river
United Arab Emirates, ISO 3166-1 and FIPS 10-4 country code AE
.ae, the top level domain for United Arab Emirates
United States postal abbreviation for US armed forces in Europe
Science and technology
Acoustic emission, the phenomenon of radiation of acoustic waves in solids
Adobe After Effects, graphics software
Adverse event, any untoward medical occurrence in a patient or clinical investigation
Aeon in astronomy, 10^9 years
Aggregate expenditure, a measure of national income
Almost everywhere, in mathematical analysis
ASCII Express, computer software
Authenticated encryption, a form of encryption
Automatic exposure, a mode available on some cameras
Canon AE-1, a camera
Other uses
Æ or AE, a numismatic abbreviation for "bronze"
Air Efficiency Award, a British medal 1942–1999
Ammunition ship
Applied Engineering, a computer hardware retailer
Mandarin Airlines, a Taiwanese airline, IATA designator AE
Toyota Corolla, the fifth generation of which is referred to as "AE"
See also
A&E (disambiguation)
AES (disambiguation)
214793
https://en.wikipedia.org/wiki/Digital%20Media%20Consumers%27%20Rights%20Act
Digital Media Consumers' Rights Act
The Digital Media Consumers' Rights Act (DMCRA) was a proposed law in the United States that directly challenged portions of the Digital Millennium Copyright Act, and would have intensified Federal Trade Commission efforts to mandate proper labeling for copy-protected CDs to ensure consumer protection from deceptive labeling practices. It would also have allowed manufacturers to innovate in hardware designs and allowed consumers to treat CDs as they have historically been able to treat them. The DMCRA bill was introduced in the United States House of Representatives on January 7, 2003 by Rick Boucher. The bill was co-sponsored by John Doolittle, Spencer Bachus and Patrick J. Kennedy. The bill was reintroduced in Congress on March 9, 2005 as the 'Digital Media Consumers Rights Act of 2005'. The 2005 bill's original co-sponsors were John Doolittle, and Joe Barton. Some provisions of the bill were incorporated into the FAIR USE Act of 2007. Official summary of the bill The authors of the bill have summarized it as follows: The Digital Media Consumers’ Rights Act (DMCRA) restores the historical balance in copyright law and ensures the proper labeling of "copy-protected compact discs". 1) Restores the Historic Balance in U.S. Copyright Law Reaffirms Fair Use. The DMCRA provides that it is not a violation of Section 1201 of Title 17 (the Digital Millennium Copyright Act, or DMCA) to circumvent a technological measure in connection with gaining access to or using a work if the circumvention does not result in an infringement of the copyright in the work. For example, under the bill a user may circumvent an access control on an electronic book he purchased for the purpose of reading it on a different electronic reader. However, if he were to upload the book onto the Internet for distribution to others, he would be liable for both a Section 1201 circumvention violation and for copyright infringement. Reestablishes Betamax Standard. The DMCRA also would specify that it is not a violation of Section 1201 of the DMCA to manufacture, distribute, or make non-infringing use of a hardware or software product capable of enabling significant non-infringing use of a copyrighted work. By re-establishing the principle set forth in Sony v. Universal City Studios, 464 U.S. 417 (1984), this provision is intended to ensure that consumers will have access to hardware and software products by which to engage in the activities authorized by the legislation. For example, a blind person could develop a means to listen in audio form to an electronic book which had been purchased in text form. Restores Valid Scientific Research. The bill amends the DMCA to permit researchers to produce the software tools necessary to carry out "scientific research into technological protection measures." Current law allows circumvention for encryption research under specified circumstances. The bill will enable circumvention for research on technological measures other than encryption. The bill also permits a researcher to develop the tools necessary for such circumvention. 2) Ensures Proper Labeling of "Copy-Protected Compact Discs" Major record companies have begun adding technology to CDs that would block people from making copies. In many cases the technology has also prevented playback on computers, DVD players, or even some standard CD players. It has become apparent that even the limited introduction of these discs into the United States market has caused consumer confusion and increased burdens on retailers and manufacturers.
Consumers are accustomed to the functionality of industry standard Compact Discs and should be aware of any reduced playability or recording functionality of non-standard "copy-protected compact discs" before they make the decision to purchase such items. For that reason, the bill directs the Federal Trade Commission to ensure that adequate labeling occurs for the benefit of consumers. See also BALANCE Act Digital Millennium Copyright Act FAIR USE Act Intellectual property legislation pending in the United States Congress External links Digital Media Consumers' Rights Act of 2005 bill (PDF) Digital Media Consumers' Rights Act of 2003 bill - hosted on Library of Congress website Digital Media Consumers’ Rights Act Section-by-Section Description - hosted on Rep. Rick Boucher's website Digital Media Consumers' Rights Act of 2003 Hearing (PDF) - transcript of the May 12, 2004 hearing before the House Subcommittee on Commerce, Trade, and Consumer Protection Re-striking the balance from DMCA to DMCRA: A short analysis of the May 2004 Hearing on the Digital Media Consumers’ Rights Act - written by Rik Lambers for www.indicare.org
216381
https://en.wikipedia.org/wiki/Boot%20sector
Boot sector
A boot sector is the sector of a persistent data storage device (e.g., hard disk, floppy disk, optical disc, etc.) which contains machine code to be loaded into random-access memory (RAM) and then executed by a computer system's built-in firmware (e.g., the BIOS). Usually, the very first sector of the hard disk is the boot sector, regardless of sector size (512 or 4096 bytes) and partitioning flavor (MBR or GPT). Defining one particular sector as the boot sector provides interoperability between different firmwares and operating systems, while chainloading, first the firmware (e.g., the BIOS), then the code contained in the boot sector, and then, for example, an operating system, provides maximal flexibility. The IBM PC and compatible computers On an IBM PC compatible machine, the BIOS selects a boot device, then copies the first sector from the device (which may be an MBR, VBR or any executable code), into physical memory at memory address 0x7C00. On other systems, the process may be quite different. Unified Extensible Firmware Interface (UEFI) UEFI (as opposed to legacy booting via the CSM) does not rely on boot sectors; a UEFI system loads the boot loader (an EFI application file, on a USB disk or in the EFI system partition) directly. Additionally, the UEFI specification defines "secure boot", which requires the boot code to be digitally signed. Damage to the boot sector If the boot sector receives physical damage, the hard disk will no longer be bootable, unless it is used with a custom BIOS that defines a non-damaged sector as the boot sector. However, since the very first sector additionally contains data describing the partitioning of the hard disk, damage to it renders the hard disk entirely unusable except in conjunction with custom software. Partition tables A disk can be partitioned into multiple partitions and, on conventional systems, it is expected to be. There are two definitions on how to store the information regarding the partitioning: A master boot record (MBR) is the first sector of a data storage device that has been partitioned. The MBR sector may contain code to locate the active partition and invoke its Volume Boot Record. A volume boot record (VBR) is the first sector of a data storage device that has not been partitioned, or the first sector of an individual partition on a data storage device that has been partitioned. It may contain code to load an operating system (or other standalone program) installed on that device or within that partition. The presence of an IBM PC compatible boot loader for x86-CPUs in the boot sector is by convention indicated by a two-byte hexadecimal sequence 0x55 0xAA (called the boot sector signature) at the end of the boot sector (offsets 0x1FE and 0x1FF). This signature indicates the presence of at least a dummy boot loader which is safe to execute, even if it may not be able actually to load an operating system. It does not indicate a particular (or even the presence of) file system or operating system, although some old versions of DOS 3 relied on it in their process to detect FAT-formatted media (newer versions do not). Boot code for other platforms or CPUs should not use this signature, since this may lead to a crash when the BIOS passes execution to the boot sector assuming that it contains valid executable code. Nevertheless, some media for other platforms erroneously contain the signature anyway, rendering this check not 100% reliable in practice.
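As a concrete illustration of the layout described above, the following Python sketch reads a 512-byte sector image, checks for the 0x55 0xAA signature at offsets 0x1FE and 0x1FF, and, if the sector is an MBR, lists the four primary partition entries from the table at offset 0x1BE. It is a simplified reading of the classic MBR layout; the image file name is hypothetical.

import struct

def inspect_boot_sector(path="disk.img"):  # hypothetical sector image
    with open(path, "rb") as f:
        sector = f.read(512)  # the very first sector of the device
    # Boot sector signature: 0x55 at offset 0x1FE, 0xAA at offset 0x1FF.
    if sector[0x1FE:0x200] != b"\x55\xAA":
        print("no boot sector signature")
        return
    print("valid boot sector signature found")
    # If this is an MBR, the partition table holds four 16-byte entries at 0x1BE.
    for i in range(4):
        entry = sector[0x1BE + 16 * i : 0x1BE + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]       # 0x80 in status marks the active partition
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        if ptype != 0:                            # type 0 means "unused entry"
            active = "active" if status == 0x80 else "inactive"
            print(f"partition {i}: type 0x{ptype:02X}, {active}, "
                  f"start LBA {lba_start}, {num_sectors} sectors")

inspect_boot_sector()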
The signature is checked for by most system BIOSes since (at least) the IBM PC/AT (but not by the original IBM PC and some other machines). Moreover, it is also checked by most MBR boot loaders before passing control to the boot sector. Some BIOSes (like the IBM PC/AT) perform the check only for fixed disk/removable drives, while for floppies and superfloppies it is enough that the sector starts with a byte greater than or equal to 06h and that the first nine words do not all contain the same value for the boot sector to be accepted as valid, thereby avoiding the explicit test for 0x55, 0xAA on floppies. Since old boot sectors (e.g., very old CP/M-86 and DOS media) sometimes do not feature this signature despite the fact that they can be booted successfully, the check can be disabled in some environments. If the BIOS or MBR code does not detect a valid boot sector and therefore cannot pass execution to the boot sector code, it will try the next boot device in the list. If all of them fail, it will typically display an error message and invoke INT 18h. This will either start up optional resident software in ROM (ROM BASIC), reboot the system via INT 19h after user confirmation, or halt the bootstrapping process until the next power-up. Systems not following the design described above include: CD-ROMs, which usually have their own structure of boot sectors; for IBM PC compatible systems this is subject to El Torito specifications. C128 and C64 software on Commodore DOS disks, where data on track 1, sector 0 begins with a magic number corresponding to the string "CBM". IBM mainframe computers, which place a small amount of boot code in the first and second track of the first cylinder of the disk; the root directory, called the Volume Table of Contents, is located at the fixed location of the third track of the first cylinder. Other (non-IBM-compatible) PC systems, which may have different boot sector formats on their disk devices. Operation On IBM PC compatible machines, the BIOS is ignorant of the distinction between VBRs and MBRs, and of partitioning. The firmware simply loads and runs the first sector of the storage device. If the device is a floppy or USB flash drive, that will be a VBR. If the device is a hard disk, that will be an MBR. It is the code in the MBR which generally understands disk partitioning and, in turn, is responsible for loading and running the VBR of whichever primary partition is set to boot (the active partition). The VBR then loads a second-stage bootloader from another location on the disk. Furthermore, whatever is stored in the first sector of a floppy diskette, USB device, hard disk or any other bootable storage device is not required to load any OS bootstrap code immediately, if ever. The BIOS merely passes control to whatever exists there, as long as the sector meets the very simple qualification of having the boot record signature of 0x55, 0xAA in its last two bytes. This is why it is easy to replace the usual bootstrap code found in an MBR with more complex loaders, even large multi-functional boot managers (programs stored elsewhere on the device which can run without an operating system), allowing users a number of choices in what occurs next. With this kind of freedom, abuse often occurs in the form of boot sector viruses. Boot sector viruses Since code in the boot sector is executed automatically, boot sectors have historically been a common attack vector for computer viruses.
To combat this behavior, the system BIOS often includes an option to prevent software from writing to the first sector of any attached hard drives; it can thereby protect the master boot record containing the partition table from being overwritten accidentally, but not the volume boot records in the bootable partitions. Depending on the BIOS, attempts to write to the protected sector may be blocked with or without user interaction. Most BIOSes, however, will display a popup message giving the user a chance to override the setting. The BIOS option is typically disabled by default because the message may not be displayed correctly in graphics mode, and because blocking access to the MBR may cause problems with operating system setup programs or with disk access, encryption, or partitioning tools like FDISK that were not written to be aware of that possibility, causing them to abort ungracefully and possibly leave the disk partitioning in an inconsistent state. As an example, the malware NotPetya attempts to gain administrative privileges on the operating system and then overwrite the computer's boot sector. The CIA has also developed malware that attempts to modify the boot sector in order to load additional drivers to be used by other malware. See also Master boot record (MBR) Volume boot record (VBR)
216563
https://en.wikipedia.org/wiki/ECB
ECB
ECB may refer to:
Organizations
European Central Bank, the central bank for the Eurozone of the European Union
European Chemicals Bureau, the Toxicology and Chemical Substances Unit of the Joint Research Centre of the European Commission
ECB Project (Emergency Capacity Building Project), a humanitarian capacity building project
England and Wales Cricket Board, the governing body of cricket in England and Wales
Emirates Cricket Board, the official governing body of cricket in the United Arab Emirates
East Coast Bays AFC, a football team from East Coast Bays, New Zealand
Environmental Control Board, New York City, US
Equatorial Commercial Bank, former name of the Spire Bank, Kenya
Education
Environmental Campus Birkenfeld, a branch of the Trier University of Applied Sciences in Rhineland-Palatinate, Germany
Wisconsin Educational Communications Board, a Wisconsin state agency, US
Government Engineering College Bikaner, a technical education institute in Bikaner, Rajasthan, India
Technology
Electronic codebook, a type of data encryption using block ciphers
Electronically controlled brake, Toyota's brake-by-wire system
Electronically controlled pneumatic brakes, for railways
Europe Card Bus, an 8-bit computer bus, used by older Kontron computers and the N8VEM home brew computer project
Ethylene copolymer bitumen, a type of roofing membrane
Entity-control-boundary, an architectural pattern in the domain of software engineering
Other uses
External commercial borrowing, a type of commercial borrowing in India used for the public sector
Reading electric multiple units, a class of trains
216721
https://en.wikipedia.org/wiki/Wardriving
Wardriving
Wardriving is the act of searching for Wi-Fi wireless networks, usually from a moving vehicle, using a laptop or smartphone. Software for wardriving is freely available on the internet. Warbiking, warcycling, warwalking and similar use the same approach but with other modes of transportation. Etymology War driving originated from wardialing, a method popularized by a character played by Matthew Broderick in the film WarGames, and named after that film. War dialing consists of dialing every phone number in a specific sequence in search of modems. Variants Warbiking or warcycling is similar to wardriving, but is done from a moving bicycle or motorcycle. This practice is sometimes facilitated by mounting a Wi-Fi-enabled device on the vehicle. Warwalking, or warjogging, is similar to wardriving, but is done on foot rather than from a moving vehicle. The disadvantages of this method are a slower speed of travel (which, however, leads to the discovery of more rarely found networks) and the absence of a convenient computing environment. Consequently, handheld devices such as pocket computers, which can perform such tasks while users are walking or standing, have dominated this practice. Technology advances and developments in the early 2000s expanded the extent of this practice. Advances included computers with integrated Wi-Fi rather than CompactFlash (CF) or PC Card (PCMCIA) add-in cards, in devices such as the Dell Axim, Compaq iPAQ and Toshiba pocket computers, starting in 2002. Later, the active Nintendo DS and Sony PSP enthusiast communities gained Wi-Fi abilities on these devices. Further, nearly all modern smartphones integrate Wi-Fi and Global Positioning System (GPS). Warrailing, or wartraining, is similar to wardriving, but is done from a train or tram rather than from a slower, more controllable vehicle. The disadvantages of this method are a higher speed of travel (resulting in fewer infrequently discovered networks being found) and coverage limited to the areas along the rail routes. Warkitting is a combination of wardriving and rootkitting. In a warkitting attack, a hacker replaces the firmware of an attacked router. This allows them to control all traffic for the victim, and could even permit them to disable TLS by replacing HTML content as it is being downloaded. Warkitting was identified by Tsow, Jakobsson, Yang, and Wetzel. Mapping Wardrivers use a Wi-Fi-equipped device together with a GPS device to record the location of wireless networks. The results can then be uploaded to websites like WiGLE or Geomena, where the data is processed to form maps of the network neighborhood. There are also clients available for smartphones running Android that can upload data directly. For better range and sensitivity, antennas are built or bought, and vary from omnidirectional to highly directional. The maps of known network IDs can then be used as a geolocation system—an alternative to GPS—by triangulating the current position from the signal strengths of known network IDs. Examples include Place Lab by Intel, Skyhook, Navizon by Cyril Houri, SeekerLocate from Seeker Wireless, and Geomena. Navizon combines information from Wi-Fi and cell-phone-tower maps contributed by users with Wi-Fi-equipped cell phones. In addition to location finding, this provides navigation information and allows for tracking the positions of friends and for geotagging. In December 2004, a class of 100 undergraduates worked to map the city of Seattle, Washington over several weeks.
They found 5,225 access points; 44% were secured with WEP encryption, 52% were open, and 3% were pay-for-access. They noticed trends in the frequency and security of the networks depending on location. Many of the open networks were clearly intended to be used by the general public, with network names like "Open to share, no porn please" or "Free access, be nice." The information was collected into high-resolution maps, which were published online. Previous efforts had mapped cities such as Dublin. Legal and ethical considerations Some portray wardriving as a questionable practice (typically from its association with piggybacking), though, from a technical viewpoint, everything is working as designed: many access points broadcast identifying data accessible to anyone with a suitable receiver. It could be compared to making a map of a neighborhood's house numbers and mail box labels. While some may claim that wardriving is illegal, there are no laws that specifically prohibit or allow wardriving, though many localities have laws forbidding unauthorized access of computer networks and protecting personal privacy. Google created a privacy storm in some countries after it eventually admitted systematically but surreptitiously gathering Wi-Fi data while capturing video footage and mapping data for its Street View service. It has since been using Android-based mobile devices to gather this data. Passive, listen-only wardriving (with programs like Kismet or KisMAC) does not communicate at all with the networks, merely logging broadcast addresses. This can be likened to listening to a radio station that happens to be broadcasting in the area or with other forms of DXing. With other types of software, such as NetStumbler, the wardriver actively sends probe messages, and the access point responds per design. The legality of active wardriving is less certain, since the wardriver temporarily becomes "associated" with the network, even though no data is transferred. Most access points, when using default "out of the box" security settings, are intended to provide wireless access to all who request it. The war driver's liability may be reduced by setting the computer to a static IP, instead of using DHCP, preventing the network from granting the computer an IP address or logging the connection. In the United States, the case that is usually referenced in determining whether a network has been "accessed" is State v. Allen. In this case, Allen had been wardialing in an attempt to get free long-distance calling through Southwestern Bell's computer systems. When presented with a password protection screen, however, he did not attempt to bypass it. The court ruled that although he had "contacted" or "approached" the computer system, this did not constitute "access" of the company's network. Software iStumbler InSSIDer Kismet KisMAC NetSpot NetStumbler WiFi-Where WiGLE for Android There are also homebrew wardriving applications for handheld game consoles that support Wi-Fi, such as sniff for the Nintendo DS/Android, Road Dog for the Sony PSP, WiFi-Where for the iPhone, G-MoN, Wardrive, Wigle Wifi for Android, and WlanPollution for Symbian NokiaS60 devices. There also exists a mode within Metal Gear Solid: Portable Ops for the Sony PSP (wherein the player is able to find new comrades by searching for wireless access points) which can be used to wardrive. Treasure World for the DS is a commercial game in which gameplay wholly revolves around wardriving. 
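The geolocation idea described in the Mapping section above can be sketched in a few lines of Python: given surveyed access point locations and the signal strengths currently observed, estimate a position as a signal-weighted average. This is a simplified stand-in for the triangulation such services actually perform, and the BSSIDs and coordinates are made up for the example.

# Known access point locations from a wardriving survey: BSSID -> (lat, lon).
known_aps = {
    "aa:bb:cc:00:00:01": (47.6097, -122.3331),
    "aa:bb:cc:00:00:02": (47.6105, -122.3320),
    "aa:bb:cc:00:00:03": (47.6090, -122.3342),
}

# Currently observed signal strengths in dBm (stronger means closer).
observed = {
    "aa:bb:cc:00:00:01": -45,
    "aa:bb:cc:00:00:02": -70,
    "aa:bb:cc:00:00:03": -60,
}

def estimate_position(aps, rssi):
    # Convert dBm to a rough linear weight so stronger signals dominate.
    weights = {b: 10 ** (s / 10) for b, s in rssi.items() if b in aps}
    total = sum(weights.values())
    lat = sum(aps[b][0] * w for b, w in weights.items()) / total
    lon = sum(aps[b][1] * w for b, w in weights.items()) / total
    return lat, lon

print(estimate_position(known_aps, observed))

Real systems refine this with signal propagation models, many more observations, and filtering over time, but the weighted-average core is the same.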
See also Honeypot (computing) Hotspot Warchalking Warshipping
216844
https://en.wikipedia.org/wiki/Network%20Information%20Service
Network Information Service
The Network Information Service, or NIS (originally called Yellow Pages or YP), is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed NIS; the technology is licensed to virtually all other Unix vendors. Because British Telecom PLC owned the name "Yellow Pages" as a registered trademark in the United Kingdom for its paper-based, commercial telephone directory, Sun changed the name of its system to NIS, though all the commands and functions still start with "yp". A NIS/YP system maintains and distributes a central directory of user and group information, hostnames, e-mail aliases and other text-based tables of information in a computer network. For example, in a common UNIX environment, the list of users for identification is placed in /etc/passwd and secret authentication hashes in /etc/shadow. NIS adds another "global" user list which is used for identifying users on any client of the NIS domain. Administrators have the ability to configure NIS to serve password data to outside processes to authenticate users using various versions of the Unix crypt(3) hash algorithms. However, in such cases, any NIS client can retrieve the entire password database for offline inspection. Successor technologies The original NIS design was seen to have inherent limitations, especially in the areas of scalability and security, so other technologies have come to replace it. Sun introduced NIS+ as part of Solaris 2 in 1992, with the intention for it to eventually supersede NIS. NIS+ features much stronger security and authentication features, as well as a hierarchical design intended to provide greater scalability and flexibility. However, it was also more cumbersome to set up and administer, and was more difficult to integrate into an existing NIS environment than many existing users wished. NIS+ has been removed from Solaris 11. As a result, many users choose to stick with NIS, and over time other modern and secure distributed directory systems, most notably Lightweight Directory Access Protocol (LDAP), came to replace it. For example, slapd (the standalone LDAP daemon) generally runs as a non-root user, and SASL-based encryption of LDAP traffic is natively supported. On large LANs, DNS servers may provide better nameserver functionality than NIS or LDAP can provide, leaving just site-wide identification information for NIS master and slave systems to serve. However, some functions, such as the distribution of netmask information to clients and the maintenance of e-mail aliases, may still be performed by NIS or LDAP. NIS maintains an NFS database information file as well as so-called maps. See also Dynamic Host Configuration Protocol (DHCP) Hesiod (name service) Name Service Switch (NSS) Network information system, for a broader use of NIS to manage other systems and networks External links RHEL 9 will remove support for NIS, a slide show by Alexander Bokovoy, Sr. Principal Software Engineer
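To illustrate the "global user list" idea described above: on a host bound to a NIS domain, the passwd map can be listed with the standard ypcat tool, and its entries follow the familiar colon-separated passwd format. A small Python sketch, assuming a configured NIS client with ypcat on the PATH:

import subprocess

# Ask the NIS server for the "passwd" map; each line looks like a
# classic passwd entry: name:hash:uid:gid:gecos:home:shell
output = subprocess.run(["ypcat", "passwd"], capture_output=True,
                        text=True, check=True).stdout

for line in output.splitlines():
    name, _hash, uid, gid, *_rest = line.split(":")
    print(f"NIS user {name} (uid={uid}, gid={gid})")

The fact that an ordinary client can dump the whole map this way is exactly the weakness noted above: the entire password database, hashes included, is available for offline inspection.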
217038
https://en.wikipedia.org/wiki/Cryptome
Cryptome
Cryptome is a 501(c)(3) private foundation created in 1996 by John Young and Deborah Natsios and sponsored by Natsios-Young Architects. The site collects information about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Cryptome is known for publishing the alleged identity of the CIA analyst who located Osama Bin Laden, and lists of people allegedly associated with the Stasi and the PSIA. Cryptome is also known for publishing the alleged identity of British intelligence agent and anti-Irish Republican Army assassin Stakeknife and the disputed internal emails of the WikiLeaks organization. Cryptome republished the already public surveillance disclosures of Edward Snowden and announced in June 2014 that they would publish all unreleased Snowden documents later that month. Cryptome has received praise from notable organizations such as the EFF, but has also been the subject of criticism and controversy. Cryptome was accused by WikiLeaks of forging emails, and some of Cryptome's posted documents have been called an "invitation to terrorists." The website has also been criticized for posting maps and pictures of "dangerous Achilles' heel[s] in the domestic infrastructure," which The New York Times called a "tip off [to] terrorists." ABC News also criticized Cryptome for posting information that terrorists could use to plan attacks. Cryptome continues to post controversial materials including guides on "how to attack critical infrastructure" in addition to other instructions for illegal hacking "for those without the patience to wait for whistleblowers". Cryptome has also received criticism for its handling of private and embarrassing information. People John Young John Young was born in 1935. He grew up in West Texas where his father worked on a decommissioned Texas POW camp, and Young later served in the United States Army Corps of Engineers in Germany (1953–56) and earned degrees in philosophy and architecture from Rice University (1957–63). He went on to receive his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council, and later evolved into Cryptome. His work earned him a position on the nominating committee for the Chrysler Award for Innovation in Design in 1998. He has received citations from the American Society of Mechanical Engineers, the American Society of Civil Engineers and the Legal Aid Society. In 1993, he was awarded the Certificate of Special Congressional Recognition. He has stated he doesn't "acknowledge the power of the law." Deborah Natsios Deborah Natsios grew up in CIA safe houses across Europe, Asia and South America reserved for covert CIA station chiefs. She later received her graduate degree in architecture from Princeton University. She has taught architecture and urban design at Columbia University and Parsons The New School for Design, and held seminars at the Pratt Institute and the University of Texas. She is the principal of Natsios Young Architects.
In addition to being co-editor for Cryptome, she is responsible for the associated project Cartome, which was founded in 2000 and posts her original critical art and graphical images and other public resources to document sensitive areas. She additionally holds a degree in mathematics from Smith College. She has given talks at the USENIX Annual Technical Conference and Architectures of Fear: Terrorism and the Future of Urbanism in the West, and written on topics ranging from architectural theory to defenses of Jim Bell and assassination politics. She is a notable critic of Edward Snowden. Family Natsios is the daughter of Nicholas Natsios, who served as CIA station chief in Greece from 1948–1956, in Vietnam from 1956–1960, in France from 1960–1962, in South Korea from 1962–1965, in Argentina from 1965–1969, in the Netherlands from 1969–1972, and in Iran from 1972–1974. While stationed in Vietnam, his deputy was William Colby, the future Director of Central Intelligence. His name was included in the 1996 membership directory of the Association of Former Intelligence Officers, which Cryptome helped to publish. Cryptome acknowledged its link to Nicholas Natsios in 2000. Activities Digital library Cryptome's digital library includes series on: Cartome: An archive of news and spatial / geographic documents on privacy, cryptography, dual-use technologies, national security and intelligence—communicated by imagery systems: cartography, photography, photogrammetry, steganography, camouflage, maps, images, drawings, charts, diagrams, IMINT and their reverse-panopticon and counter-deception potential. Cryptome CN: Information, documents and opinions banned by the People's Republic of China. Nuclear Power Plants and WMD Series. Protest Photos Series. NYC Most Dangerous Buildings Series. Editorial policy Young has said of Cryptome, "We do expect to get false documents but it's not our job to sort that out." In another interview, Young promoted skepticism about all sources of information, saying: "Facts are not a trustworthy source of knowledge. Cryptome is not an authoritative source." When asked about providing context for material, Young said, "We do not believe in 'context.' That is authoritarian nonsense. For the same reason, we do not believe in verification, authentication, background." The front page of the Cryptome website states that "documents are removed from this site only by order served directly by a US court having jurisdiction. No court order has ever been served; any order served will be published here – or elsewhere if gagged by order." However, documents have been removed at the request of both law enforcement and individuals. Privacy policy In 2015, it was discovered that Cryptome's USB archives contained web server logs, containing clues to the identities of Cryptome visitors including their IP addresses and what files they had accessed on Cryptome. Cryptome initially stated that they had been faked as part of a disinformation campaign. Several days later, Cryptome confirmed the logs were real and shared their findings. The logs had been mailed out to users who ordered the site's archive since it changed web hosts in 2007; Cryptome blamed the logging on its ISP, Network Solutions. Cryptome has warned users that it does not have technical measures to protect the anonymity of its sources, saying "don't send us stuff and think that we'll protect you." History 1968: Urban Deadline was created as an extension of the Columbia strike and the Avery Hall occupation.
Three decades later, Cryptome evolved out of Urban Deadline. 1993: Young and Natsios met and their collaboration began "some time late in 1993". Young received the Certificate of Special Congressional Recognition. 1994: What became Cryptome began with Young and Natsios's participation in the Cypherpunks electronic mailing list and Urban Deadline. Natsios called this time "seminal" and "transformative" for the internet. 1996: Cryptome was officially created out of their architectural practice. 1999: In October journalist Declan McCullagh wrote about Young's perusal of the site's access logs. 2000: Cartome was founded. In July, two FBI agents spoke with Cryptome on the phone after Cryptome published a Public Security Intelligence Agency personnel file. The file listed 400 names, birthdates, and titles, notably including Director General Hidenao Toyoshima. The FBI expressed concerns over the file, but admitted it was legal to publish in the United States, though not in Japan. After speculation that the documents may have come from someone called "Shigeo Kifuji", Cryptome identified the source as Hironari Noda. 2004: New York City removed warning signs around gas mains after Cryptome posted pictures of them, citing security concerns. 2006: Cryptome became one of the early organizers of WikiLeaks. Young revealed that he was approached by Julian Assange and asked to be the public face of Wikileaks; Young agreed and his name was listed on the website's original domain registration form. 2007: In the early part of the year, Young and Natsios left Wikileaks due to concerns about the organization's finances and fundraising, accusing it of being a "money-making operation" and "business intelligence" scheme, and expressing concern that the amount of money it sought "could not be needed so soon except for suspect purposes." Cryptome published an archive of the secret, internal electronic mailing list of the Wikileaks organizers, from its inception through Young's departure from the group. On April 20 the website received notice from its hosting company, Verio, that it would be evicted on May 4 for unspecified breaches of their acceptable use policy. Cryptome alleged that the shutdown was a censorship attempt in response to posts about the Coast Guard's Deepwater program. 2010: Cryptome's Earthlink account was compromised, leading to its website being hacked and Cryptome's data copied. The hackers posted screenshots of the compromised email account. Cryptome confirmed the accuracy of the information taken, but contested specific assertions, claiming it had only about seven gigabytes of data, not the seven terabytes the attackers claimed to have copied. In February, Cryptome was briefly shut down by Network Solutions for alleged DMCA violations after it posted a "Microsoft legal spy manual". Microsoft withdrew the complaint three days later and the website was restored. In March, PayPal stopped processing donations to Cryptome and froze the account due to "suspicious activities". The account was restored after an "investigation" by PayPal. Cryptome ended on bad terms with Wikileaks, with Young directly accusing them of selling classified material and calling them "a criminal organization". In a separate interview, he called Assange a narcissist and compared him to Henry Kissinger. Young also accused George Soros and the Koch brothers of "backing Wikileaks generously". 2011: In July, Cryptome named the alleged CIA analyst who found Osama Bin Laden.
In September, Cryptome published a list of Intelligence and National Security Alliance members, alleging that they were spies. On September 1 Cryptome published the unredacted United States diplomatic cables leak a day before Wikileaks. 2012: In February, the Cryptome website was hacked to infect visitors with malware. 2013: In February, Cryptome's website, email and Twitter account were compromised, exposing whistleblowers and sources who had corresponded with Cryptome via email. Cryptome blamed hackers Ruxpin and Sabu, who was an FBI informant at the time. In June, two US Secret Service agents visited Cryptome to request removal of a Bush family email allegedly hacked by Guccifer. In August, a complaint about Cryptome's identification of alleged Japanese terrorists led Network Solutions to briefly shut down the site. In October, Cryptome informed its users that Network Solutions had generated logs of the site's visitors, and that requests to delete the logs were not being honored. (According to Network Solutions's website, logs are deleted after thirty days and Cryptome could choose to prevent the logging.) 2014: Cryptome attempted to raise $100,000 to fund the website and its other disclosure initiatives. In June, Cryptome was pulled offline again when malware was found infecting visitors to the site. In July, Cryptome said it would publish the remaining NSA documents taken by Edward Snowden in the "coming weeks". 2015: In September, Cryptome announced that its encryption keys were compromised. A few days later, Cryptome filed for incorporation in New York. Later that month, a GCHQ document leaked by Edward Snowden revealed that the agency was monitoring visits to Cryptome. In October, a sold edition (USB stick) of the Cryptome archive was observed to contain web server logs, containing clues to the identities of Cryptome visitors. The logs had been mailed out to users who ordered the site's archive at least since 2007. Cryptome denied the logs were real, and accused the discoverer of forging the data and other forms of corruption. Cryptome later confirmed they were real. Cryptome posted pictures of logs dating back to the site's creation, claiming that Cryptome was for sale. Cryptome later claimed that the sale was a parody and that "Cryptome has no logs, never has", noting that its "various ISPs have copious logs of many kinds" along with metadata and that Cryptome tracks these "to see what happens to our files". 2016: In April, Cryptome published thousands of credit-card numbers, passwords and personal information allegedly belonging to Qatar National Bank's clients. In July, Cryptome alleged LinkNYC was "tracking Cryptome's movements through the city" after the company responded to Cryptome's social media posts by attempting to prevent them from photographing the company's installations. Reception A 2004 New York Times article assessed Cryptome with the headline "Advise the Public, Tip Off the Terrorists" in its coverage of the site's gas pipeline maps. Reader's Digest made an even more alarming assessment of the site in 2005, calling it an "invitation to terrorists" and alleging that Young "may well have put lives at risk". A 2007 Wired article criticized Cryptome for going "overboard". The Village Voice featured Cryptome in its 2008 Best of NYC feature, citing its hosting of "photos, facts, and figures" of the Iraq War.
WikiLeaks accused Cryptome of executing a "smear campaign" in 2010 after Cryptome posted what it alleged were email exchanges with WikiLeaks insiders, which WikiLeaks disputed. Cryptome was awarded the Defensor Libertatis (defender of liberty) award at the 2010 Big Brother Awards, for a "life in the fight against surveillance and censorship" and for providing "suppressed or otherwise censored documents to the global public". The awards committee noted that Cryptome had engaged with "every protagonist of the military-electronic monitoring complex". In 2012, Steven Aftergood, the director of the Federation of American Scientists Project on Government Secrecy, described Young and Cryptome as "fearless and contemptuous of any pretensions to authority" and "oblivious to the security concerns that are the preconditions of a working democracy. And he seems indifferent to the human costs of involuntary disclosure of personal information." Aftergood specifically criticized Cryptome's handling of the McGurk emails, saying "it's fine to oppose McGurk or anyone else. It wasn't necessary to humiliate them". In 2013, Cindy Cohn, then the legal director of the Electronic Frontier Foundation, praised Cryptome as "a really important safety valve for the rest of us, as to what our government is up to." In 2014, Glenn Greenwald praised and criticized Cryptome, saying "There is an obvious irony to complaining that we're profiting from our work while [Cryptome] tries to raise $100,000 by featuring our work. Even though [Cryptome] occasionally does some repellent and demented things—such as posting the home addresses of Laura Poitras, Bart Gellman, and myself along with maps pointing to our homes—[they also do] things that are quite productive and valuable. On the whole, I'm glad there is a Cryptome and hope they succeed in raising the money they want." Giganews criticized Cryptome for posting unverified allegations which Giganews described as completely false and without evidence. Giganews went on to question Cryptome's credibility and motives, saying "Cryptome's failure to contact us to validate the allegations or respond to our concerns has lessened their credibility. It does not seem that Cryptome is in search for the truth, which leaves us to question what are their true motives." Peter Earnest, a 36-year veteran of the CIA turned executive director of the International Spy Museum and chairman of the board of directors of the Association of Former Intelligence Officers, criticized Cryptome for publishing the names of spies, saying it does considerable damage and aids people who would do them harm. See also Cryptome quotes Cypherpunks Distributed Denial of Secrets Espionage Open government Secrecy WikiLeaks References External links Official: Cryptome Index Natsios Young Research YouTube channel Mirrors: WikiLeaks Search of Cryptome Archive and Tweets WikiLeaks mirror Cryptome dataset 1996-2016 - 102,000 files on the Internet Archive The Cryptome Archives hosted by The Cthulhu Siteprotect.net mirror Other: Why All the Snowden Docs Should Be Public: An Interview with Cryptome (Cox, Joseph, Vice/Motherboard, July 16, 2014) Older, quieter than WikiLeaks, Cryptome perseveres (Associated Press, Mar. 9, 2013) Wikileaks' estranged co-founder becomes a critic (Q&A) Young and Natsios radio interview with Emmanuel Goldstein on Off The Hook.
(February 2012, WBAI) Open Source Design 01: The architects of information (Domus June 2011) An Excerpt From 'This Machine Kills Secrets': Meet The 'Spiritual Godfather Of Online Leaking' (Greenberg, Andy, September 17, 2012) Technology websites Cypherpunks Internet properties established in 1996 1996 establishments in the United States Whistleblowing in the United States Online archives of the United States News leaks Information sensitivity Classified documents
217833
https://en.wikipedia.org/wiki/DVD-Audio
DVD-Audio
DVD-Audio (commonly abbreviated as DVD-A) is a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio uses most of the storage on the disc for high-quality audio and is not intended to be a video delivery format. The first discs entered the marketplace in 2000. DVD-Audio was in a format war with Super Audio CD (SACD), and along with consumers' tastes tending towards downloadable music, these factors meant that neither high-quality disc format achieved significant market penetration; DVD-Audio had been described as "extinct" by 2007. As of June 2019, Amazon lists only 155 albums available on "Audio DVD". DVD-Audio remains a niche market, but some independent online labels offer a wider choice of titles. Audio specifications DVD-Audio offers many possible configurations of audio channels, ranging from single-channel mono to 5.1-channel surround sound, at various sampling frequencies and bit depths. (The ".1" denotes a low-frequency effects channel (LFE) for bass and/or special audio effects.) Compared to the Compact Disc, the much higher capacity of the DVD format enables the inclusion of either: considerably more music (with respect to total running time and quantity of songs), or encoding at higher linear sampling rates with more bits per sample, and/or additional channels for spatial sound reproduction. Audio on a DVD-Audio disc can be stored in many different bit depth/sampling rate/channel combinations, and different combinations can be used on a single disc. For instance, a DVD-Audio disc may contain a 96 kHz/24-bit 5.1-channel audio track as well as a 192 kHz/24-bit stereo audio track. Also, the channels of a track can be split into two groups stored at different resolutions. For example, the front speakers could be 96/24, while the surrounds are 48/20. Audio is stored on the disc in Linear PCM format, which is either uncompressed or losslessly compressed with MLP (Meridian Lossless Packing). The maximum permissible total bit rate is 9.6 megabits per second. Channel/resolution combinations that would exceed this need to be compressed. In uncompressed modes, it is possible to get up to 96/16 or 48/24 in 5.1, and 192/24 in stereo. To store 5.1 tracks in 88.2/20, 88.2/24, 96/20 or 96/24, MLP encoding is mandatory. If no native stereo audio exists on the disc, the DVD-Audio player may be able to downmix the 5.1-channel audio to two-channel stereo audio if the listener does not have a surround sound setup (provided that the coefficients were set in the stream at authoring). Downmixing can only be done to two-channel stereo, not to other configurations such as 4.0 quad. DVD-Audio may also feature menus, text subtitles, still images and video; in high-end authoring systems it is also possible to link directly into a Video_TS directory that might contain video tracks, as well as PCM stereo and other "bonus" features. Player compatibility With the introduction of the DVD-Audio format, some kind of backward compatibility with existing DVD-Video players was desired, although not required. To address this, most DVD-Audio discs also contain DVD-Video compatible data to play the standard DVD-Video Dolby Digital 5.1-channel audio track on the disc (which can be downmixed to two channels for listeners with no surround sound setup). Many DVD-Video players also offer the option to create a Dolby MP matrix-encoded soundtrack for older surround sound systems lacking Dolby Digital or DTS decoding. 
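The 9.6 Mbit/s ceiling described in the audio specifications above determines which configurations must use MLP. A small self-contained C sketch of the arithmetic (sampling rate x bits per sample x channels), using only the combinations quoted in the text; the helper name is illustrative, not from any DVD-Audio toolkit:

#include <stdio.h>

/* Uncompressed LPCM bit rate = sampling rate x bits per sample x channels,
   compared against DVD-Audio's 9.6 Mbit/s ceiling. */
static void check(const char *name, long rate_hz, int bits, int channels)
{
    double mbps = (double)rate_hz * bits * channels / 1e6;
    printf("%-12s %7.3f Mbit/s  %s\n", name, mbps,
           mbps <= 9.6 ? "fits uncompressed" : "requires MLP");
}

int main(void)
{
    check("96/16 5.1",  96000,  16, 6);  /*  9.216 - fits           */
    check("48/24 5.1",  48000,  24, 6);  /*  6.912 - fits           */
    check("192/24 2.0", 192000, 24, 2);  /*  9.216 - fits           */
    check("96/24 5.1",  96000,  24, 6);  /* 13.824 - MLP mandatory  */
    return 0;
}

Running this reproduces the text's claim that 96/16 and 48/24 in 5.1 (and 192/24 in stereo) fit uncompressed, while 96/24 in 5.1 exceeds the ceiling.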
Some discs also include a native Dolby Digital 2.0 stereo track, and even a DTS 96/24 5.1-channel track. Since the DVD-Audio format is a member of the DVD family, a single disc can have multiple layers, and even two sides that contain audio and video material. A common configuration is a single-sided DVD with content in both the DVD-Video (VIDEO_TS) and DVD-Audio (AUDIO_TS) directories. The high-resolution, packed PCM audio encoded using MLP is only playable by DVD players containing DVD-Audio decoding capability. DVD-Video content, which can include LPCM, Dolby or DTS material, and even video, makes the disc compatible with all DVD players. Other disc configurations may consist of double-layer DVDs (DVD-9) or two-sided discs (DVD-10, DVD-14 or DVD-18). Some labels have released two-sided DVD titles that contain DVD-Audio content on one side and DVD-Video content on the other, the Classic Records HDAD being one such example. Unofficial playback of DVD-Audio on a PC is possible through the freeware audio player foobar2000 for Windows, using an open source plug-in extension called DVDADecoder. VLC media player also has DVD-Audio support. CyberLink's PowerDVD version 8 provided an official method of playing DVD-Audio discs, but this feature was dropped from version 9 onwards. Creative also provides a dedicated DVD-Audio player with some of its Sound Blaster Audigy and X-Fi cards. Preamplifier/surround-processor interface In order to play DVD-Audio, a preamplifier or surround controller with six analog inputs was originally required. Whereas DVD-Video audio formats such as Dolby Digital and DTS can be sent via the player's digital output to a receiver for conversion to analog form and distribution to speakers, DVD-Audio is not allowed to be delivered via an unencrypted digital audio link at sample rates higher than 48 kHz (i.e., ordinary DVD-Video quality) due to concerns about digital copying. However, encrypted digital formats have now been approved by the DVD Forum, the first of which was Meridian Audio's MHR (Meridian High Resolution). The High Definition Multimedia Interface (HDMI 1.1) also allows encrypted digital audio to be carried up to DVD-Audio specification (6 × 24-bit/96 kHz channels or 2 × 24-bit/192 kHz channels). The six channels of audio information can thus be sent to the amplifier by several different methods: The six audio channels can be decrypted and extracted in the player and sent to the amplifier along six standard analog cables. The six audio channels can be decrypted and then re-encrypted into an HDMI or IEEE 1394 (FireWire) signal and sent to the amplifier, which will then decrypt the digital signal and extract the six channels of audio. HDMI and IEEE 1394 encryption are different from the DVD-A encryption and were designed as a general standard for a high-quality digital interface. The amplifier has to be equipped with a valid decryption key or it will not play the disc. The third option is via the S/PDIF (or TOSLINK) digital interface. However, because of concerns over unauthorized copying, DVD-A players are required to handle this digital interface in one of the following ways: Turn such an interface off completely. This option is preferred by music publishers. Downconvert the audio to a 2-channel 16-bit/48 kHz PCM signal. The music publishers are not enthusiastic about this because it permits the production of a CD-quality copy, something they still expect to sell besides DVD-A. 
Downconvert the audio to 2 channels, but keep the original sample size and bit rate if the producer sets a flag on the DVD-A disc telling the player to do so. A final option is to modify the player, capturing the high-resolution digital signals before they are fed to the internal D/A converters and converting them to S/PDIF, giving full-range digital (but only stereo) sound. Do-it-yourself solutions already exist for some players. There is also the option of equipping a DVD-A player with multiple S/PDIF outputs, for full-resolution multichannel digital output. 
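The two-channel downconversion described above is, at heart, a weighted sum of channels. A minimal C sketch follows; the -3 dB (0.7071) centre/surround weights are common defaults used purely for illustration, since real players take per-disc coefficients set at authoring, and the structure and function names are invented for this example:

#include <stdio.h>

/* 5.1 -> 2.0 downmix sketch. The LFE channel is simply dropped here,
   as is typical; actual weighting comes from the disc at authoring. */
typedef struct { float fl, fr, c, lfe, sl, sr; } Frame51;

static void downmix_stereo(const Frame51 *in, float *left, float *right)
{
    const float k = 0.7071f;                 /* -3 dB weight */
    *left  = in->fl + k * in->c + k * in->sl;
    *right = in->fr + k * in->c + k * in->sr;
}

int main(void)
{
    Frame51 f = { 0.5f, 0.5f, 0.2f, 0.1f, 0.1f, 0.1f };
    float l, r;
    downmix_stereo(&f, &l, &r);
    printf("L=%f R=%f\n", l, r);
    return 0;
}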
Sound quality Researchers in 2004 found no detectable difference in audio quality between DVD-A and SACD (and subsequent research found no detectable difference in audio quality between SACD and CD). Format variants Four major music labels (Universal Music, EMI, Warner Bros. Records and Naxos Records) and several smaller audiophile labels (such as AIX Records, Claudio Records, DTS Entertainment, Silverline Records, OgreOgress productions, Tacet and Teldec) have released or are continuing to release albums on DVD-Audio, but the number is minimal compared to standard CDs. New high-definition titles have been released in standard DVD-Video format (which can contain two-channel Linear PCM audio data ranging from 48 kHz/16-bit to 96 kHz/24-bit), "HDAD", which includes a DVD-Video format recording on one side and DVD-Audio on the other, CD/DVD packages, which can include the album on both CD and DVD-Audio, or DualDisc, which can contain DVD-Audio content on the DVD side. In addition, some titles that had been initially released as a standalone DVD-Audio disc were re-released as a CD/DVD package or as a DualDisc. Copy protection DVD-Audio discs may optionally employ a copy protection mechanism called Content Protection for Prerecorded Media (CPPM). CPPM, managed by the 4C Entity, was designed to prevent users from extracting audio to computers and portable media players. Because DVD-Video's content-scrambling system (CSS) was quickly broken, DVD-Audio's developers sought a better method of blocking unauthorized duplication. They developed CPPM, which uses a media key block (MKB) to authenticate DVD-Audio players. In order to decrypt the audio, players must obtain a media key from the MKB, which is itself encrypted. The player must use its own unique key to decrypt the MKB. If a DVD-Audio player's decryption key is compromised, that key can be rendered useless for decrypting future DVD-Audio discs. DVD-Audio discs can also utilize digital watermarking technology developed by the Verance Corporation, typically embedded into the audio once every thirty seconds. If a DVD-Audio player encounters a watermark on a disc without a valid MKB, it will halt playback. The 4C Entity also developed a similar specification, Content Protection for Recordable Media (CPRM), which is used on Secure Digital cards. DVD-Audio's copy protection was overcome in 2005 by tools which allow data to be decrypted or converted to 6-channel WAV files without going through lossy digital-to-analog conversion. Previously that conversion had required expensive equipment to retain all six channels of audio rather than having them downmixed to stereo. In the digital method, the decryption is done by a commercial software player which has been patched to allow access to the unprotected audio. In 2007 the encryption scheme was overcome with a tool called dvdcpxm. On 12 February 2008 a program called DVD-Audio Explorer was released, containing the aforementioned libdvdcpxm coupled with an open source MLP decoder. Like DVD-Video decryption, such tools may be illegal to use in the United States under the Digital Millennium Copyright Act. While the Recording Industry Association of America has been successful in keeping these tools off websites, they are still distributed on P2P file sharing networks and newsgroups. Additionally, in 2007 the widely used commercial software DVDFab Platinum added DVD-Audio decryption, allowing users to back up a full DVD-A image to an ISO image file. Authoring software OS X Sonic Solutions DVD Creator AV – The first DVD-Audio authoring solution available. A spin-off of the popular high-end DVD Video authoring package. It allows DVD-Audio authoring at the command line level only. No longer sold or supported by Sonic Solutions. Sonic Studio SonicStudio HD – Macintosh-based tool used for High Density audio mastering and to prepare audio for DVD-A authoring in One Click DVD. Sonic Studio OneClick DVD – Converts prepared Sonic Studio EDLs into binary MLP files to be used in the authoring tool. Also generates scriptFile information to be added to DVD Creator AV projects. This product is no longer available. DVD audio Tools: see Windows section below. Apple Logic Pro 8 and later – When bouncing, choose "Burn to CD/DVD" under destination, and then choose DVD-A for the destination format. Minnetonka discWelder – This product is no longer available. Steinberg WaveLab Pro – This product does not support MLP encoding or MLP-encoded files. It supports slideshows and DVD menus. Burn – General purpose CD and DVD burning utility that can write AUDIO_TS data. Select the "Audio" tab then "DVD-Audio" from the drop-down menu. Windows Minnetonka discWelder – This product is no longer available. Cirlinca HD-AUDIO Solo Ultra – This product does not support MLP encoding or MLP-encoded files. It has not been available online as of early 2019. DVD Audio Extractor – This product and others can extract audio from DVD-Audio discs, but it does not author new DVD-Audio discs. Steinberg WaveLab Pro – This product does not support MLP encoding or MLP-encoded files. It supports slideshows and DVD menus. Gear Pro Mastering Edition – This product can burn DVD-Audio images, but does not author new DVD-Audio discs. DVD audio tools package: A project called DVD audio Tools provides a free/open source console application and user interface. Supports menu editing and audio extraction from discs. Experimental support for MLP decoding. DVD-Audio/Video discs (aka Hybrid or Universal DVDs) can be authored. MAGIX Samplitude – Restricted DVD-Audio editing (no MLP, no menus, no slideshows) MAGIX Sequoia Linux DVD audio Tools provides free/open source DVD-Audio authoring tools for Linux and other *nix platforms. (See above for Windows.) For non-expert users debian or rpm binary packages are provided. References External links Robert Normandeau Interview Interview with Robert Normandeau On Outsight Radio Hours about using the format for the release of Puzzles (empreintes DIGITALes, IMED 0575, 2005) DVD-Audio at the Museum of Obsolete Media Audio storage Audio Audiovisual introductions in 2000
218382
https://en.wikipedia.org/wiki/TI%20MSP430
TI MSP430
The MSP430 is a mixed-signal microcontroller family from Texas Instruments, first introduced on 14 February 1992. Built around a 16-bit CPU, the MSP430 is designed for low-cost and, specifically, low-power-consumption embedded applications. Applications The MSP430 can be used for low-powered embedded devices. The current drawn in idle mode can be less than 1 µA. The top CPU speed is 25 MHz. It can be throttled back for lower power consumption. The MSP430 also offers six different low-power modes, which can disable unneeded clocks and the CPU. Further, the MSP430 can wake up in under 1 microsecond, allowing the controller to stay in sleep mode longer, minimizing average current use. The device comes in a variety of configurations featuring the usual peripherals: internal oscillator, timers including pulse-width modulation (PWM), watchdog timer, USART, Serial Peripheral Interface (SPI) bus, Inter-Integrated Circuit (I²C), 10/12/14/16/24-bit analog-to-digital converters (ADC), and brownout reset circuitry. Some less usual peripheral options include comparators (that can be used with the timers to do simple ADC), on-chip operational amplifiers (op-amps) for signal conditioning, a 12-bit digital-to-analog converter (DAC), a liquid crystal display (LCD) driver, a hardware multiplier, USB, and direct memory access (DMA) for ADC results. Apart from some older erasable programmable read-only memory (EPROM, such as MSP430E3xx) and high-volume mask ROM (MSP430Cxxx) versions, all of the devices are in-system programmable via Joint Test Action Group (JTAG) interface (full four-wire or Spy-Bi-Wire), via a built-in bootstrapping loader (BSL) using a UART such as RS-232, or via USB on devices with USB support. No BSL is included in F20xx, G2xx0, G2xx1, G2xx2, or I20xx family devices. There are, however, limits that preclude its use in more complex embedded systems. The MSP430 does not have an external memory bus, so it is limited to on-chip memory, up to 512 KB flash memory and 66 KB random-access memory (RAM), which may be too small for applications needing large buffers or data tables. Also, although it has a DMA controller, it is very difficult to use it to move data off the chip due to a lack of a DMA output strobe. MSP430 generations Six general generations of MSP430 processors exist. In order of development, they are: '3xx generation, '1xx generation, '4xx generation, '2xx generation, '5xx generation, and '6xx generation. The digit after the generation identifies the model (generally higher model numbers are larger and more capable), the third digit identifies the amount of memory included, and the fourth, if present, identifies a minor model variant. The most common variation is a different on-chip analog-to-digital converter. The '3xx and '1xx generations are limited to a 16-bit address space. In the later generations this was expanded to include '430X' instructions that allow a 20-bit address space. As happened with other processor architectures (e.g. the processor of the PDP-11), extending the addressing range beyond the 16-bit word size introduced some peculiarities and inefficiencies for programs larger than 64 KB. In the following list, it helps to think of the typical 200 mA·h capacity of a CR2032 lithium coin cell as 200,000 µA·h, or 22.8 µA·year. Thus, considering only the CPU draw, such a battery could supply a 0.7 µA current draw for 32 years. (In reality, battery self-discharge would reduce this number.) 
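That coin-cell arithmetic is easy to reproduce. A small self-contained C sketch, using only the figures quoted in the paragraph above (the variable names are, of course, just for illustration):

#include <stdio.h>

/* Coin-cell lifetime arithmetic from the paragraph above. */
int main(void)
{
    const double capacity_uAh   = 200000.0;           /* 200 mA·h CR2032  */
    const double hours_per_year = 24.0 * 365.25;      /* about 8766 h     */
    const double draw_uA        = 0.7;                /* assumed CPU draw */

    double uA_years = capacity_uAh / hours_per_year;  /* ~22.8 µA·year    */
    double years    = uA_years / draw_uA;             /* ~32.6 years      */

    printf("%.1f uA-years -> %.1f years at %.1f uA\n",
           uA_years, years, draw_uA);
    return 0;
}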
The significance of the RAM retention mode vs the real-time clock mode is that in real-time clock mode the CPU can go to sleep with a clock running which will wake it up at a specific future time. In RAM retention mode, some external signal is required to wake it, e.g., an input/output (I/O) pin signal or an SPI slave receive interrupt. MSP430x1xx series The MSP430x1xx Series is the basic generation without an embedded LCD controller. They are generally smaller than the '3xx generation. These flash- or ROM-based ultra-low-power MCUs offer 8 MIPS, 1.8–3.6 V operation, up to 60 KB flash, and a wide range of analog and digital peripherals. Power specification overview, as low as: 0.1 μA RAM retention 0.7 μA real-time clock mode 200 μA / MIPS active Features fast wake-up from standby mode in less than 6 µs. Device parameters Flash options: 1–60 KB ROM options: 1–16 KB RAM: 128 B–10 KB GPIO options: 14, 22, 48 pins ADC options: Slope, 10 & 12-bit SAR Other integrated peripherals: 12-bit DAC, up to 2 16-bit timers, watchdog timer, brown-out reset, SVS, USART module (UART, SPI), DMA, 16×16 multiplier, Comparator_A, temperature sensor MSP430F2xx series The MSP430F2xx Series are similar to the '1xx generation, but operate at even lower power, support up to 16 MHz operation, and have a more accurate (±2%) on-chip clock that makes it easier to operate without an external crystal. These flash-based ultra-low-power devices offer 1.8–3.6 V operation. Includes the very-low-power oscillator (VLO), internal pull-up/pull-down resistors, and low pin count options. Power specification overview, as low as: 0.1 μA RAM retention 0.3 μA standby mode (VLO) 0.7 μA real-time clock mode 220 μA / MIPS active Features ultra-fast wake-up from standby mode in less than 1 μs. Device parameters Flash options: 1–120 KB RAM options: 128 B – 8 KB GPIO options: 10, 11, 16, 24, 32, and 48 pins ADC options: Slope, 10 & 12-bit SAR, 16 & 24-bit Sigma Delta Other integrated peripherals: operational amplifiers, 12-bit DAC, up to 2 16-bit timers, watchdog timer, brown-out reset, SVS, USI module (I²C, SPI), USCI module, DMA, 16×16 multiplier, Comparator_A+, temperature sensor MSP430G2xx series The MSP430G2xx Value Series features flash-based ultra-low-power MCUs up to 16 MIPS with 1.8–3.6 V operation. Includes the very-low-power oscillator (VLO), internal pull-up/pull-down resistors, and low pin count options, at lower prices than the MSP430F2xx series. Ultra-low power, as low as (@ 2.2 V): 0.1 μA RAM retention 0.4 μA standby mode (VLO) 0.7 μA real-time clock mode 220 μA / MIPS active Ultra-fast wake-up from standby mode in less than 1 μs Device parameters Flash options: 0.5–56 KB RAM options: 128 B–4 KB GPIO options: 10, 16, 24, 32 pins ADC options: Slope, 10-bit SAR Other integrated peripherals: Capacitive Touch I/O, up to 3 16-bit timers, watchdog timer, brown-out reset, USI module (I²C, SPI), USCI module, Comparator_A+, temperature sensor MSP430x3xx series The MSP430x3xx Series is the oldest generation, designed for portable instrumentation with an embedded LCD controller. This also includes a frequency-locked loop oscillator that can automatically synchronize to a low-speed (32 kHz) crystal. This generation does not support EEPROM memory, only mask ROM and UV-erasable or one-time-programmable EPROM. Later generations provide only flash memory and mask ROM options. These devices offer 2.5–5.5 V operation, up to 32 KB ROM. 
Power specification overview, as low as: 0.1 μA RAM retention 0.9 μA real-time clock mode 160 μA / MIPS active Features fast wake-up from standby mode in less than 6 µs. Device parameters: ROM options: 2–32 KB RAM options: 512 B–1 KB GPIO options: 14, 40 pins ADC options: Slope, 14-bit SAR Other integrated peripherals: LCD controller, multiplier MSP430x4xx series The MSP430x4xx Series are similar to the '3xx generation, but include an integrated LCD controller, and are larger and more capable. These flash- or ROM-based devices offer 8–16 MIPS at 1.8–3.6 V operation, with FLL and SVS. Ideal for low-power metering and medical applications. Power specification overview, as low as: 0.1 μA RAM retention 0.7 μA real-time clock mode 200 μA / MIPS active Features fast wake-up from standby mode in less than 6 µs. Device parameters: Flash/ROM options: 4–120 KB RAM options: 256 B – 8 KB GPIO options: 14, 32, 48, 56, 68, 72, 80 pins ADC options: Slope, 10 & 12-bit SAR, 16-bit Sigma Delta Other integrated peripherals: SCAN_IF, ESP430, 12-bit DAC, op-amps, RTC, up to 2 16-bit timers, watchdog timer, basic timer, brown-out reset, SVS, USART module (UART, SPI), USCI module, LCD controller, DMA, 16×16 & 32×32 multiplier, Comparator_A, temperature sensor, 8 MIPS CPU speed MSP430x5xx series The MSP430x5xx Series can run at up to 25 MHz and have up to 512 KB flash memory and up to 66 KB RAM. This flash-based family features low active power consumption with up to 25 MIPS at 1.8–3.6 V operation (165 µA/MIPS). Includes an innovative power management module for optimal power consumption and integrated USB. Power specification overview, as low as: 0.1 μA RAM retention 2.5 μA real-time clock mode 165 μA / MIPS active Features fast wake-up from standby mode in less than 5 µs. Device parameters: Flash options: up to 512 KB RAM options: up to 66 KB ADC options: 10 & 12-bit SAR GPIO options: 29, 31, 47, 48, 63, 67, 74, 87 pins Other optional integrated peripherals: 12-bit DAC, high-resolution PWM, 5 V I/Os, USB, backup battery switch, up to 4 16-bit timers, watchdog timer, real-time clock, brown-out reset, SVS, USCI module, DMA, 32×32 multiplier, Comp B, temperature sensor MSP430x6xx series The MSP430x6xx Series can run at up to 25 MHz and have up to 512 KB flash memory and up to 66 KB RAM. This flash-based family features low active power consumption with up to 25 MIPS at 1.8–3.6 V operation (165 µA/MIPS). Includes an innovative power management module for optimal power consumption and integrated USB. Power specification overview, as low as: 0.1 μA RAM retention 2.5 μA real-time clock mode 165 μA / MIPS active Features fast wake-up from standby mode in less than 5 µs. Device parameters: Flash options: up to 512 KB RAM options: up to 66 KB ADC options: 12-bit SAR GPIO options: 74 pins Other integrated peripherals: USB, LCD, DAC, Comparator_B, DMA, 32×32 multiplier, power management module (BOR, SVS, SVM, LDO), watchdog timer, RTC, temperature sensor RF SoC (CC430) series The RF SoC (CC430) Series provides tight integration between the microcontroller core, peripherals, software, and RF transceiver. Features a <1 GHz RF transceiver, with 1.8–3.6 V operation. Programming using the Arduino integrated development environment (IDE) is possible via the panStamp API. 
Power specification overview, as low as: 1 μA RAM retention 1.7 μA real-time clock mode 180 μA / MIPS active Device parameters: Speed options: up to 20 MHz Flash options: up to 32 KB RAM options: up to 4 KB ADC options: 12-bit SAR GPIO options: 30 & 44 pins Other integrated peripherals: LCD controller, up to 2 16-bit timers, watchdog timer, RTC, power management module (BOR, SVS, SVM, LDO), USCI module, DMA, 32×32 multiplier, Comp B, temperature sensor FRAM series The FRAM Series from Texas Instruments provides unified memory with dynamic partitioning and memory access speeds 100 times faster than flash. FRAM is also capable of zero-power state retention in all power modes, which means that writes are guaranteed even in the event of a power loss. Because FRAM has a write endurance of over 100 trillion cycles, a separate EEPROM is no longer required. Active power consumption is less than 100 µA/MHz. Power specification overview, as low as: 320 nA RAM retention 0.35 μA real-time clock mode 82 μA / MIPS active Device parameters: Speed options: 8 to 24 MHz FRAM options: 4 to 256 KB RAM options: 0.5 to 8 KB ADC options: 10 or 12-bit SAR GPIO options: 17 to 83 GPIO pins Other possible integrated peripherals: MPU, up to 6 16-bit timers, watchdog timer, RTC, power management module (BOR, SVS, SVM, LDO), USCI module, DMA, multiplier, Comp B, temperature sensor, LCD driver, I2C and UART BSL, Extended Scan Interface, 32-bit multiplier, AES, CRC, signal processing acceleration, capacitive touch, IR modulation Low voltage series The Low Voltage Series includes the MSP430C09x and MSP430L092 parts. These two series of low-voltage 16-bit microcontrollers have configurations with two 16-bit timers, an 8-bit analog-to-digital (A/D) converter, an 8-bit digital-to-analog (D/A) converter, and up to 11 I/O pins. Power specification overview, as low as: 1 μA RAM retention 1.7 μA real-time clock mode 180 μA / MIPS active Device parameters: Speed options: 4 MHz ROM options: 1–2 KB SRAM options: 2 KB ADC options: 8-bit SAR GPIO options: 11 pins Other integrated peripherals: up to 2 16-bit timers, watchdog timer, brown-out reset, SVS, comparator, temperature sensor Other MSP430 families More families within MSP430 include Fixed Function, Automotive, and Extended Temp parts. Fixed Function: The MSP430BQ1010 16-bit microcontroller is an advanced fixed-function device that forms the control and communications unit on the receiver side for wireless power transfer in portable applications. The MSP430BQ1010 complies with the Wireless Power Consortium (WPC) specification. For more information, see Contactless Power. Automotive: Automotive MSP430 microcontrollers (MCUs) from Texas Instruments (TI) are 16-bit, RISC-based, mixed-signal processors that are AEC-Q100 qualified and suitable for automotive applications in environments up to 105 °C ambient temperature. LIN-compliant drivers for the MSP430 MCU are provided by IHR GmbH. Extended Temp: MSP430 devices are very popular in harsh environments such as industrial sensing because of their low power consumption and innovative analog integration. Some harsh-environment applications include transportation/automotive, renewable energy, military/space/avionics, mineral exploration, industrial, and safety & security. Device definitions: HT: -55 °C to 150 °C EP: Enhanced products -55 °C to 125 °C Q1: Automotive Q100 qualified -40 °C to 105 °C T: Extended temperature -40 °C to 105 °C applications Note that when the flash size is over 64K words (128 KB), instruction addresses can no longer be encoded in just two bytes. 
This change in pointer size causes some incompatibilities with previous parts. Peripherals The MSP430 peripherals are generally easy to use, with (mostly) consistent addresses between models, and no write-only registers (except for the hardware multiplier). General purpose I/O ports 0–10 If a pin's special peripheral function is not needed, the pin may be used for general-purpose I/O. The pins are divided into 8-bit groups called "ports", each of which is controlled by a number of 8-bit registers. In some cases, the ports are arranged in pairs which can be accessed as 16-bit registers. The MSP430 family defines 11 I/O ports, P0 through P10, although no chip implements more than 10 of them. P0 is only implemented on the '3xx family. P7 through P10 are only implemented on the largest members (and highest pin count versions) of the '4xx and '2xx families. The newest '5xx and '6xx families have P1 through P11, and the control registers are reassigned to provide more port pairs. Each port is controlled by the following registers. Ports which do not implement particular features (such as interrupt on state change) do not implement the corresponding registers. PxIN Port x input. This is a read-only register, and reflects the current state of the port's pins. PxOUT Port x output. The values written to this read/write register are driven out the corresponding pins when they are configured to output. PxDIR Port x data direction. Bits written as 1 configure the corresponding pin for output. Bits written as 0 configure the pin for input. PxSEL Port x function select. Bits written as 1 configure the corresponding pin for use by the specialized peripheral. Bits written as 0 configure the pin for general purpose I/O. Port 0 ('3xx parts only) is not multiplexed with other peripherals and does not have a P0SEL register. PxREN Port x resistor enable ('2xx & '5xx only). Bits set in this register enable weak pull-up or pull-down resistors on the corresponding I/O pins even when they are configured as inputs. The direction of the pull is set by the bit written to the PxOUT register. PxDS Port x drive strength ('5xx only). Bits set in this register enable high-current outputs. This increases output power, but may cause electromagnetic interference (EMI). Ports 0–2 can produce interrupts when inputs change. Further registers configure this ability: PxIES Port x interrupt edge select. Selects the edge which will cause the PxIFG bit to be set. When the input bit changes from matching the PxIES state to not matching it (i.e. whenever a bit in PxIES XOR PxIN changes from clear to set), the corresponding PxIFG bit is set. PxIE Port x interrupt enable. When this bit and the corresponding PxIFG bit are both set, an interrupt is generated. PxIFG Port x interrupt flag. Set whenever the corresponding pin makes the state change requested by PxIES. Can be cleared only by software. (Can also be set by software.) PxIV Port x interrupt vector ('5xx only). This 16-bit register is a priority encoder which can be used to handle pin-change interrupts. If n is the lowest-numbered interrupt bit which is pending in PxIFG and enabled in PxIE, this register reads as 2n+2. If there is no such bit, it reads as 0. The scale factor of 2 allows direct use as an offset into a branch table. Reading this register also clears the reported PxIFG flag. 
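Tying the registers above together, here is a minimal C sketch of a pin-change interrupt. It assumes LaunchPad-style wiring (an LED on P1.0 and a button to ground on P1.3) and the register, bit, and intrinsic names from TI's msp430.h device headers; it is an illustration, not production code:

#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;            /* stop the watchdog timer         */

    P1DIR = BIT0;                        /* P1.0 output; all others inputs  */
    P1OUT = BIT3;                        /* LED off; select pull-up on P1.3 */
    P1REN = BIT3;                        /* enable P1.3's pull resistor     */

    P1IES = BIT3;                        /* interrupt on high-to-low edge   */
    P1IFG = 0;                           /* clear any pending flags         */
    P1IE  = BIT3;                        /* enable P1.3 pin-change interrupt */

    __bis_SR_register(LPM4_bits | GIE);  /* sleep; wake only on interrupt   */
    return 0;
}

#pragma vector = PORT1_VECTOR            /* TI compiler syntax; mspgcc uses */
__interrupt void port1_isr(void)         /* an __attribute__((interrupt))   */
{                                        /* form instead                    */
    P1OUT ^= BIT0;                       /* toggle the LED                  */
    P1IFG &= ~BIT3;                      /* PxIFG must be cleared by software */
}

Note how the pull direction comes from P1OUT, as described for PxREN, and how the handler clears P1IFG itself, since the flag can only be cleared in software.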
Some pins have special purposes either as inputs or outputs. (For example, timer pins can be configured as capture inputs or PWM outputs.) In this case, the PxDIR bit controls which of the two functions the pin performs when the PxSEL bit is set. If there is only one special function, then PxDIR is generally ignored. The PxIN register is still readable if the PxSEL bit is set, but interrupt generation is disabled. If PxSEL is clear, the special function's input is frozen and disconnected from the external pin. Also, configuring a pin for general-purpose output does not disable interrupt generation. Integrated peripherals Analog Analog-to-digital converter The MSP430 line offers two types of analog-to-digital converter (ADC): 10- and 12-bit successive-approximation converters, as well as a 16-bit sigma-delta converter. Data transfer controllers and a 16-word conversion-and-control buffer allow the MSP430 to convert and store samples without CPU intervention, minimizing power consumption. Analog pool The Analog Pool (A-POOL) module can be configured as an ADC, DAC, comparator, SVS or temperature sensor. It allows flexibility for the user to program a series of analog functions with only one setup. Comparator A, A+ The MSP430's comparator module provides precision slope analog-to-digital conversions. It monitors external analog signals, provides voltage and resistor value measurement, and is capable of selectable power modes. DAC12 The DAC12 module is a 12-bit, voltage-output DAC featuring internal/external reference selection and programmable settling time for optimal power consumption. It can be configured in 8- or 12-bit mode. When multiple DAC12 modules are present, they may be grouped together for synchronous update operation. Op amps These feature single-supply, low-current operation with rail-to-rail outputs and programmable settling times. Software-selectable configuration options: unity gain mode, comparator mode, inverting PGA, non-inverting PGA, differential and instrumentation amplifier. Sigma Delta (SD) The SD16/SD16_A/SD24_A modules each feature 16-/24-bit sigma-delta A/D converters with an internal 1.2 V reference. Each converter has up to eight fully differential multiplexed inputs, including a built-in temperature sensor. The converters are second-order oversampling sigma-delta modulators with selectable oversampling ratios of up to 1024 (SD16_A/SD24_A) or 256 (SD16). Timers Basic timer (BT) The BT has two independent 8-bit timers that can be cascaded to form a 16-bit timer/counter. Both timers can be read and written by software. The BT is extended to provide an integrated RTC. An internal calendar compensates for months with fewer than 31 days and includes leap-year correction. Real-time clock RTC_A/B are 32-bit hardware counter modules that provide clock counters with a calendar, a flexible programmable alarm, and calibration. The RTC_B includes a switchable battery backup system that allows the RTC to operate when the primary supply fails. 16-bit timers Timer_A, Timer_B and Timer_D are asynchronous 16-bit timers/counters with up to seven capture/compare registers and various operating modes. The timers support multiple capture/compares, PWM outputs, and interval timing. They also have extensive interrupt capabilities. Timer_B introduces added features such as programmable timer lengths (8-, 10-, 12-, or 16-bit) and double-buffered compare register updates, while Timer_D introduces a high-resolution (4 ns) mode. Watchdog (WDT+) The WDT+ performs a controlled system restart after a software problem occurs. If the selected time interval expires, a system reset is generated. 
If the watchdog function is not needed in an application, the module can be configured as an interval timer and can generate interrupts at selected time intervals. System Advanced Encryption Standard (AES) The AES accelerator module performs encryption and decryption of 128-bit data with 128-bit keys according to the Advanced Encryption Standard in hardware, and can be configured with user software. Brown-out reset (BOR) The BOR circuit detects low supply voltages and resets the device by triggering a power-on reset (POR) signal when power is applied or removed. The MSP430 MCU's zero-power BOR circuit is continuously turned on, including in all low-power modes. Direct memory access (DMA) controller The DMA controller transfers data from one address to another across the entire address range without CPU intervention. The DMA increases the throughput of peripheral modules and reduces system power consumption. The module features up to three independent transfer channels. Although the MSP430's DMA subsystem is very capable, it has several flaws, the most significant of which is the lack of an external transfer strobe. Although a DMA transfer can be triggered externally, there is no external indication of completion of a transfer. Consequently, DMA to and from external sources is limited to externally triggered per-byte transfers, rather than full blocks transferred automatically via DMA. This can lead to significant complexity (as in requiring extensive hand-tweaking of code) when implementing processor-to-processor or processor-to-USB communications. The reference cited uses an obscure timer mode to generate high-speed strobes for DMA transfers. The timers are not flexible enough to easily make up for the lack of an external DMA transfer strobe. DMA operations that involve word transfers to byte locations cause truncation to 8 bits rather than conversion into two byte transfers. This makes DMA with A/D or D/A 16-bit values less useful than it could be (although it is possible to DMA these values through port A or B on some versions of the MSP430 using an externally visible trigger per transfer, such as a timer output). Enhanced Emulation Module (EEM) The EEM provides different levels of debug features such as 2–8 hardware breakpoints, complex breakpoints, break when a read/write occurs at a specified address, and more. It is embedded into all flash-based MSP430 devices. Hardware multiplier Some MSP430 models include a memory-mapped hardware multiplier peripheral which performs various 16×16+32→33-bit multiply-accumulate operations. Unusually for the MSP430, this peripheral does include an implicit 2-bit write-only register, which makes it effectively impossible to context switch. This peripheral does not interfere with CPU activities and can be accessed by the DMA. The MPY on all MSP430F5xx and some MSP430F4xx devices features up to 32-bit × 32-bit operation. The 8 registers used are:

Address  Name    Function
0x130    MPY     Operand1 for unsigned multiply
0x132    MPYS    Operand1 for signed multiply
0x134    MAC     Operand1 for unsigned multiply-accumulate
0x136    MACS    Operand1 for signed multiply-accumulate
0x138    OP2     Second operand for multiply operation
0x13A    ResLo   Low word of multiply result
0x13C    ResHi   High word of multiply result
0x13E    SumExt  Carry out of multiply-accumulate

The first operand is written to one of four 16-bit registers. The address written determines the operation performed. While the value written can be read back from any of the registers, the register number written to cannot be recovered. If a multiply-accumulate operation is desired, the ResLo and ResHi registers must also be initialized. Then, each time a write is performed to the OP2 register, a multiply is performed and the result is stored or added to the result registers. The SumExt register is a read-only register that contains the carry out of the addition (0 or 1) in the case of an unsigned multiply, or the sign extension of the 32-bit sum (0 or −1) in the case of a signed multiply. In the case of a signed multiply-accumulate, the SumExt value must be combined with the most significant bit of the prior ResHi contents to determine the true carry-out result (−1, 0, or +1). The result is available after three clock cycles of delay, which is the time required to fetch a following instruction and a following index word. Thus, the delay is typically invisible. An explicit delay is only required if using an indirect addressing mode to fetch the result. 
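In C, these memory-mapped registers are normally reached through the device header rather than raw addresses. A minimal sketch of an unsigned 16×16 multiply, assuming a part that has the multiplier and the MPY, OP2, RESLO and RESHI names defined by TI's msp430.h headers:

#include <msp430.h>
#include <stdint.h>

/* Unsigned 16x16 -> 32-bit multiply on the memory-mapped multiplier.
   Not interrupt-safe: the peripheral's implicit mode register cannot
   be context-switched, as noted above. */
static uint32_t hw_umul16(uint16_t a, uint16_t b)
{
    MPY = a;     /* writing address 0x130 selects an unsigned multiply */
    OP2 = b;     /* writing the second operand starts the operation    */
    /* The result is ready by the time the next instruction's operands
       are fetched, so no explicit wait is needed here. */
    return ((uint32_t)RESHI << 16) | RESLO;
}

Writing to MPYS, MAC or MACS instead of MPY would select the signed or multiply-accumulate variants, exactly as the address table above describes.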
Memory Protection Unit (MPU) The FRAM MPU protects against accidental writes to designated read-only memory segments or the execution of code from constant memory. The MPU can set any partitioning of memory with bit-level addressing, making the complete memory accessible for read, write and execute operations in FRAM devices. Power management module (PMM) The PMM generates a supply voltage for the core logic, and provides several mechanisms for the supervision and monitoring of both the voltage applied to the device and the voltage generated for the core. It is integrated with a low-dropout voltage regulator (LDO), brown-out reset (BOR), and a supply voltage supervisor and monitor. Supply-voltage supervisor (SVS) The SVS is a configurable module used to monitor the AVCC supply voltage or an external voltage. The SVS can be configured to set a flag or generate a power-on reset (POR) when the supply voltage or external voltage drops below a user-selected threshold. Communication and interface Capacitive touch sense I/Os The integrated capacitive touch sense I/O module offers several benefits to touch button and touch slider applications. The system does not require external components to create the self-oscillation (reducing bill of materials), and the capacitor that defines the frequency of the self-oscillation can be connected directly. In addition, there is no need for external multiplexers to allow multiple pads, and each I/O pad can directly serve as a capacitive sense input. A hysteresis of ~0.7 V ensures robust operation. Control and sequencing is done completely in software. General purpose I/Os MSP430 devices have up to 12 digital I/O ports implemented. Each port has eight I/O pins. Every I/O pin can be configured as either input or output, and can be individually read or written to. Ports P1 and P2 have interrupt capability. MSP430F2xx, F5xx and some F4xx devices feature built-in, individually configurable pull-up or pull-down resistors. Sub-GHz RF front end The flexible CC1101 sub-1 GHz transceiver delivers the sensitivity and blocking performance required to achieve successful communication links in any RF environment. It also features low current consumption and supports flexible data rates and modulation formats. 
USART (UART, SPI, I²C) The universal synchronous/asynchronous receiver/transmitter (USART) peripheral interface supports asynchronous RS-232 and synchronous SPI communication with one hardware module. The MSP430F15x/16x USART modules also support I²C, programmable baud rates, and independent interrupt capability for receive and transmit. USB The USB module is fully compliant with the USB 2.0 specification and supports control, interrupt and bulk transfers at a data rate of 12 Mbps (full speed). The module supports USB suspend, resume and remote wake-up operations and can be configured for up to eight input and eight output endpoints. The module includes an integrated physical interface (PHY); a phase-locked loop (PLL) for USB clock generation; and a flexible power-supply system enabling bus-powered and self-powered devices. USCI (UART, SPI, I²C, LIN, IrDA) The universal serial communication interface (USCI) module features two independent channels that can be used simultaneously. The asynchronous channel (USCI_A) supports UART mode; SPI mode; pulse shaping for IrDA; and automatic baud-rate detection for LIN communications. The synchronous channel (USCI_B) supports I²C and SPI modes. USI (SPI, I²C) The universal serial interface (USI) module is a synchronous serial communication interface with a data length of up to 16 bits and can support SPI and I²C communication with minimal software. Infrared modulation Available on the MSP430FR4xxx and MSP430FR2xxx series chips, this feature is configured via the SYSCFG register set. This peripheral ties into other peripherals (timers, eUSCI_A) to generate an IR-modulated signal on an output pin. Metering ESP430 (integrated in FE42xx devices) The ESP430CE module performs metering calculations independent of the CPU. The module has a separate SD16, a hardware multiplier, and the ESP430 embedded processor engine for single-phase energy-metering applications. Scan interface (SIF) The SIF module, a programmable state machine with an analog front end, is used to automatically measure linear or rotational motion with the lowest possible power consumption. The module features support for different types of LC and resistive sensors and for quadrature encoding. Display LCD/LCD_A/LCD_B The LCD/LCD_A controller directly drives LCDs with up to 196 segments. It supports static, 2-mux, 3-mux, and 4-mux LCDs. The LCD_A module has an integrated charge pump for contrast control. LCD_B enables blinking of individual segments with separate blinking memory. LCD_E The LCD_E controller comes with the newer MSP430FR4xxx series microcontrollers and directly drives LCDs with up to 448 segments. It supports static, 2-mux, 3-mux, 4-mux, 5-mux, 6-mux, 7-mux, and 8-mux (1/3 bias) LCDs. Segment and common pins may be reprogrammed to available LCD drive pins. This peripheral may be driven in LPM3.5 (a low-power mode in which the RTC keeps running while the main CPU core is shut down). Software development environment Texas Instruments provides various hardware experimenter boards that support large (approximately two centimeters square) and small (approximately one millimeter square) MSP430 chips. TI also provides software development tools, both directly and in conjunction with partners (see the full list of compilers, assemblers, and IDEs). One such toolchain is the IAR C/C++ compiler and integrated development environment (IDE). A Kickstart edition can be downloaded for free from TI or IAR; it is limited to 8 KB of C/C++ code in the compiler and debugger (assembly language programs of any size can be developed and debugged with this free toolchain). TI also combines a version of its own compiler and tools with its Eclipse-based Code Composer Studio IDE (CCS). 
It sells full-featured versions, and offers a free version for download which has a code size limit of 16 KB. CCS supports in-circuit emulators, and includes a simulator and other tools; it can also work with other processors sold by TI. For those who are more comfortable with the Arduino environment, there is also Energia, an open-source electronics prototyping platform with the goal of bringing the Wiring and Arduino framework to the Texas Instruments MSP430-based LaunchPad, where Arduino code can be exported for programming MSP430 chips. The latest release of Energia supports the MSP-EXP430G2xxx, MSP-EXP430FR5739, MSP-EXP430FR5969, MSP-EXP430FR5994, MSP-EXP430F5529LP, Stellaris EK-LM4F120XL, Tiva-C EK-TM4C123GXL, Tiva-C EK-TM4C1294XL, and CC3200 WiFi LaunchPad. The open source community produces a freely available software development toolset based on the GNU toolset. The GNU compiler is currently available in three versions: (MSPGCC) (MSPGCC Uniarch) TI consulted with Red Hat to provide official support for the MSP430 architecture to the GNU Compiler Collection C/C++ compiler. This msp430-elf-gcc compiler is supported by TI's Code Composer Studio version 6.0 and higher. There is a very early llvm-msp430 project, which may eventually provide better support for MSP430 in LLVM. Other commercial development tool sets, which include editor, compiler, linker, assembler, debugger and in some cases code wizards, are available. VisSim, a block diagram language for model-based development, generates efficient fixed-point C code directly from the diagram. VisSim-generated code for a closed-loop ADC+PWM based PID control on the F2013 compiles to less than 1 KB of flash and 100 bytes of RAM. VisSim has on-chip peripheral blocks for the entire MSP430 family: I²C, ADC, SD16, PWM. Low cost development platforms The MSP430F2013 and its siblings are set apart by the fact that, except for the MSP430G2 Value Line, they are the only MSP430 parts available in a dual in-line package (DIP). Other variants in this family are only available in various surface-mount packages. TI has gone to some trouble to support the eZ430 development platform by making the raw chips easy for hobbyists to use in prototypes. eZ430-F2013 TI has tackled the low-budget problem by offering a very small experimenter board, the eZ430-F2013, on a USB stick. This makes it easy for designers to choose the MSP430 chip for inexpensive development platforms that can be used with a computer. The eZ430-F2013 contains an MSP430F2013 microcontroller on a detachable prototyping board, and an accompanying CD with development software. It is helpful for schools, hobbyists and garage inventors. It is also welcomed by engineers in large companies prototyping projects with capital budget problems. MSP430 LaunchPad Texas Instruments released the MSP430 LaunchPad in July 2010. The MSP430 LaunchPad has an onboard flash emulator, USB, 2 programmable LEDs, and 1 programmable push button. As an addition for experimentation with the LaunchPad, a shield board is available. 
TI has since provided several new LaunchPads based on the MSP430 platform: MSP-EXP430F5529LP features the MSP430F5529 USB device-capable MCU with 128KB flash and 8KB SRAM MSP-EXP430FR5969 features the MSP430FR5969 FRAM MCU with 64KB FRAM and 2KB SRAM MSP-EXP430FR4133 features the MSP430FR4133 FRAM MCU with 16KB FRAM, 2KB SRAM and on-board LCD MSP-EXP430FR6989 features the MSP430FR6989 FRAM MCU with 128KB FRAM, 2KB SRAM, on-board LCD and Extended Scan Interface peripheral MSP-EXP430FR2311 features the MSP430FR2311 FRAM MCU with 4KB FRAM, 1KB SRAM, OpAmp and Transimpedance Amplifier peripheral MSP-EXP430FR2433 features the MSP430FR2433 FRAM MCU with 15.5KB FRAM, 4KB SRAM MSP-EXP430FR2355 features the MSP430FR2355 FRAM MCU with 32KB FRAM, 4KB SRAM, 12-bit ADC, 12-bit DAC, OpAmp/PGA, ICC for nested interrupts MSP-EXP430FR5994 features the MSP430FR5994 FRAM MCU with 256KB FRAM, 8KB SRAM, 12-bit ADC and LEA DSP peripheral These LaunchPads include an eZ-FET JTAG debugger with backchannel UART capable of 1 Mbit/s speeds. The FRAM LaunchPads (e.g. MSP-EXP430FR5969, MSP-EXP430FR4133) include EnergyTrace, a feature supported by TI's Code Composer Studio IDE for monitoring and analyzing power consumption. Debugging interface In common with other microcontroller vendors, TI has developed a two-wire debugging interface, found on some of its MSP430 parts, that can replace the larger JTAG interface. The eZ430 Development Tool contains a full USB-connected flash emulation tool (FET) for this two-wire protocol, named Spy-Bi-Wire by TI. Spy-Bi-Wire was initially introduced only on the smallest devices in the 'F2xx family with a limited number of I/O pins, such as the MSP430F20xx, MSP430F21x2, and MSP430F22x2. Support for Spy-Bi-Wire has been expanded with the introduction of the latest '5xx family, where all devices support the Spy-Bi-Wire interface in addition to JTAG. The advantage of the Spy-Bi-Wire protocol is that it uses only two communication lines, one of which is the dedicated _RESET line. The JTAG interface on the lower pin count MSP430 parts is multiplexed with general purpose I/O lines. This makes it relatively difficult to debug circuits built around the small, low-I/O-budget chips, since the full 4-pin JTAG hardware will conflict with anything else connected to those I/O lines. This problem is alleviated with the Spy-Bi-Wire-capable chips, which are still compatible with the normal JTAG interface for backwards compatibility with the old development tools. JTAG debugging and flash programming tools based on OpenOCD and widely used in the ARM architecture community are not available for the MSP430. Programming tools specially designed for the MSP430 are marginally less expensive than JTAG interfaces that use OpenOCD. However, should it be discovered mid-project that more MIPS, more memory, or more I/O peripherals are needed, those tools will not transfer to a processor from another vendor. MSP430 CPU The MSP430 CPU uses a von Neumann architecture, with a single address space for instructions and data. Memory is byte-addressed, and pairs of bytes are combined little-endian to make 16-bit words. The processor contains 16 16-bit registers, of which four are dedicated to special purposes: R0 is the program counter, R1 is the stack pointer, R2 is the status register, and R3 is a "constant generator" which reads as zero and ignores writes. 
Additional address-mode encodings using R3 and R2 allow a total of six commonly used constant values (0, 1, 2, 4, 8 and −1) without needing an immediate operand word. R4 through R15 are available for general use. The instruction set is very simple: 27 instructions exist in three families. Most instructions occur in .B (8-bit byte) and .W (16-bit word) suffixed versions, depending on the value of a B/W bit: the bit is set to 1 for 8-bit and 0 for 16-bit. A missing suffix is equivalent to .W. Byte operations to memory affect only the addressed byte, while byte operations to registers clear the most significant byte. Instructions are 16 bits, followed by up to two 16-bit extension words. Addressing modes are specified by the 2-bit As field and the 1-bit Ad field. Some special versions can be constructed using R0, and modes other than register direct using R2 (the status register) and R3 (the constant generator) are interpreted specially. Ad can use only a subset of the addressing modes for As. Indexed addressing modes add a 16-bit extension word to the instruction. If both source and destination are indexed, the source extension word comes first. In descriptions of the addressing modes, x refers to the next extension word in the instruction stream. Instructions generally take 1 cycle per word fetched or stored, so instruction times range from 1 cycle for a simple register-register instruction to 6 cycles for an instruction with both source and destination indexed. The MSP430X extension with 20-bit addressing adds additional instructions that can require up to 10 clock cycles. Setting or clearing a peripheral bit takes two clocks. A jump, taken or not, takes two clocks. With the '2xx series, 2 MCLKs is 125 ns at 16 MHz. Moves to the program counter are allowed and perform jumps. Return from subroutine, for example, is implemented as MOV @SP+,PC. When R0 (PC) or R1 (SP) are used with the autoincrement addressing mode, they are always incremented by two. Other registers (R4 through R15) are incremented by the operand size, either 1 or 2 bytes. The status register contains 4 arithmetic status bits, a global interrupt enable, and 4 bits that disable various clocks to enter low-power mode. When handling an interrupt, the processor saves the status register on the stack and clears the low-power bits. If the interrupt handler does not modify the saved status register, returning from the interrupt will then resume the original low-power mode. Pseudo-operations Many additional instructions are implemented as aliases for forms of the above. For example, there is no specific "return from subroutine" instruction, but it is implemented as "MOV @SP+,PC". Note that the immediate constants −1 (0xffff), 0, 1, 2, 4 and 8 can be specified in a single-word instruction without needing a separate immediate operand. MSP430X 20-bit extension The basic MSP430 cannot support more memory (ROM + RAM + peripherals) than its 64K address space. In order to support more, an extended form of the MSP430 uses 20-bit registers and a 20-bit address space, allowing up to 1 MB of memory. This uses the same instruction set as the basic form, but with two extensions: a limited number of 20-bit instructions for common operations, and a general prefix-word mechanism that can extend any instruction to 20 bits. The extended instructions include some added abilities, notably multibit shifts and multiregister load/store operations. 20-bit operations use the length suffix "A" (for address) instead of .B or .W; .W is still the default. 
In general, shorter operations clear the high-order bits of the destination register. All other instructions can have a prefix word added which extends them to 20 bits. The prefix word contains an additional operand-size bit, which is combined with the existing B/W bit to specify the operand size. One unused size combination exists; indications suggest that it may be used in future for a 32-bit operand size. The prefix word comes in two formats, and the choice between them depends on the instruction which follows. If the instruction has any non-register operands, then the simple form is used, which provides 2 4-bit fields to extend any offset or immediate constant in the instruction stream to 20 bits. If the instruction is register-to-register, a different extension word is used. This includes a "ZC" flag which suppresses carry-in (useful for instructions like DADD which always use the carry bit), and a repeat count. A 4-bit field in the extension word encodes either a repeat count (0–15 repetitions in addition to the initial execution), or a register number which contains a 4-bit repeat count. MSP430 address space The general layout of the MSP430 address space is: 0x0000–0x0007 Processor special function registers (interrupt control registers) 0x0008–0x00FF 8-bit peripherals. These must be accessed using 8-bit loads and stores. 0x0100–0x01FF 16-bit peripherals. These must be accessed using 16-bit loads and stores. 0x0200–0x09FF Up to 2048 bytes of RAM. 0x0C00–0x0FFF 1024 bytes of bootstrap loader ROM (flash parts only). 0x1000–0x10FF 256 bytes of data flash ROM (flash parts only). 0x1800–0x19FF 512 bytes of data FRAM (most FRAM MCUs; user-writable, containing no calibration data) 0x1100–0x38FF Extended RAM on models with more than 2048 bytes of RAM. (0x1100–0x18FF is a copy of 0x0200–0x09FF.) 0x1100–0xFFFF Up to 60 kilobytes of program ROM. Smaller ROMs start at higher addresses. The last 16 or 32 bytes are interrupt vectors. A few models include more than 2048 bytes of RAM; in that case RAM begins at 0x1100. The first 2048 bytes (0x1100–0x18FF) are mirrored at 0x0200–0x09FF for compatibility. Also, some recent models bend the 8-bit and 16-bit peripheral rules, allowing 16-bit access to peripherals in the 8-bit peripheral address range. There is a newer extended version of the architecture (named MSP430X) which allows a 20-bit address space. It allows additional program ROM beginning at 0x10000. The '5xx series has a greatly redesigned address space, with the first 4K devoted to peripherals and up to 16K of RAM. References External links Community and information sites TI MSP430 Homepage MSP430 TITAN Development Board TI E2E MSP430 Community forum MSP430 Community sponsored by Texas Instruments MSP430 Yahoo! group MSP430.info MSP430 English-Japanese forum 43oh.com – MSP430 News, Projects and Forums TinyOS-MSP430 TinyOS port MSP430 Egel project pages – About 50 examples with sources and schematics, well documented. Visual programming C code generators VisSim MSP430 Model-Based Embedded Development System Compilers, assemblers and IDEs Free compilers and IDEs Arduino IDE Code Composer Studio Eclipse-based IDE Code Composer Studio Cloud IAR Embedded Workbench Kickstart IDE (size limited to 4/8/16 KB, depending on device used) GCC toolchain for the MSP430 Microcontrollers MSP430 Development System naken_asm Open-source MSP430 assembler, disassembler, simulator. 
Pre-built MSP430 GCC 4.x binaries for Windows
MSP430 16-bit noForth compiler with assembler, disassembler and sources
FastForth with 5 MBd terminal, assembler, SD_Card driver...
Most popular unrestricted IDEs and compilers
IAR Embedded Workbench for TI MSP430
Rowley CrossWorks for MSP430 (only a 30-day evaluation period)
GCC toolchain for the MSP430 Microcontrollers (free C compiler)
MSP430 Development System
A plugin for Visual Studio that supports MSP430 via MSP430-GCC (30-day evaluation)
Miscellaneous IDEs
AQ430 Development Tools for MSP430 Microcontrollers
ImageCraft C Tools
ForthInc Forth-Compiler
MPE Forth IDE & Cross-Compiler for MSP430 (currently in beta)
HI-TECH C for MSP430 (dropped MSP430 support in 2009)
List of debugging tools (not complete)
Other tools
WSim – a software-driven emulator for full platform estimations and debugging
MSPSim – a Java-based MSP430 emulator/simulator
MSP430Static – a reverse-engineering tool in Perl
GoodFET – an open MSP430 JTAG debugger in C and Python
mspdebug – an open-source MSP430 JTAG debugger
Trace32 MSP430 SIM – download area with an MSP430 instruction set simulator, free for evaluation
ERIKA Enterprise – a free-of-charge, open-source RTOS implementation of the ISO 17356 API (derived from the OSEK/VDX API)
Energia – based on Wiring and Arduino, using the Processing IDE; the hardware platform is based upon the TI MSP430 LaunchPad
Microcontrollers MSP430
218447
https://en.wikipedia.org/wiki/Polymorphic%20code
Polymorphic code
In computing, polymorphic code is code that uses a polymorphic engine to mutate while keeping the original algorithm intact; that is, the code changes itself every time it runs, but the function of the code (its semantics) does not change at all. For example, the simple math expressions 3+1 and 6-2 both achieve the same result, yet run with different machine code in a CPU. This technique is sometimes used by computer viruses, shellcodes and computer worms to hide their presence. Encryption is the most common method of hiding code. With encryption, the main body of the code (also called its payload) is encrypted and will appear meaningless. For the code to function as before, a decryption function is added to the code. When the code is executed, this function reads the payload and decrypts it before executing it in turn. Encryption alone is not polymorphism. To gain polymorphic behavior, the encryptor/decryptor pair is mutated with each copy of the code. This yields many different versions of code which all function the same.

Malicious code
Most anti-virus software and intrusion detection systems (IDS) attempt to locate malicious code by searching through computer files and data packets sent over a computer network. If the security software finds patterns that correspond to known computer viruses or worms, it takes appropriate steps to neutralize the threat. Polymorphic algorithms make it difficult for such software to recognize the offending code because it constantly mutates. Malicious programmers have sought to protect their encrypted code from this virus-scanning strategy by rewriting the unencrypted decryption engine (and the resulting encrypted payload) each time the virus or worm is propagated. Anti-virus software uses sophisticated pattern analysis to find underlying patterns within the different mutations of the decryption engine, in hopes of reliably detecting such malware. Emulation may be used to defeat polymorphic obfuscation by letting the malware demangle itself in a virtual environment before utilizing other methods, such as traditional signature scanning. Such a virtual environment is sometimes called a sandbox. Polymorphism does not protect the virus against such emulation if the decrypted payload remains the same regardless of variation in the decryption algorithm. Metamorphic code techniques may be used to complicate detection further, as the virus may execute without ever having identifiable code blocks in memory that remain constant from infection to infection. The first known polymorphic virus was written by Mark Washburn. The virus, called 1260, was written in 1990. A better-known polymorphic virus was created in 1992 by the hacker Dark Avenger as a means of avoiding pattern recognition from antivirus software. A common and very virulent polymorphic virus is the file infector Virut.
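To make the encryptor/decryptor pairing concrete, here is a minimal C sketch (a toy illustration, not taken from any real virus): the payload is XOR-encrypted with a per-copy random key, so two copies carry differently keyed payloads along with a matching decryption routine. A real polymorphic engine would additionally rewrite the decryption code itself; here only the key varies.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy encryptor/decryptor pair: XOR is its own inverse, so the same
 * routine both encrypts and decrypts the payload. */
static void xor_buffer(unsigned char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    unsigned char payload[] = "example payload";
    size_t len = sizeof payload - 1;

    srand((unsigned)time(NULL));
    /* nonzero key, different for each "copy" of the code */
    unsigned char key = (unsigned char)(rand() % 255 + 1);

    xor_buffer(payload, len, key); /* encrypt: bytes now look meaningless */
    xor_buffer(payload, len, key); /* decrypt: original bytes restored */
    printf("restored: %s (key was 0x%02X)\n", payload, key);
    return 0;
}

A scanner keyed to the payload's raw byte pattern will not match two copies encrypted under different keys; mutating the decryption engine then extends the same property to the only part that remains in plain sight.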
Example
This example is not really polymorphic code, but will serve as an introduction to the world of encryption via the XOR operator. For example, in an algorithm using the variables A and B but not the variable C, there could be a large amount of code that changes C, and it would have no effect on the algorithm itself, allowing it to be changed endlessly and without heed as to what the final product will be:

Start:
 GOTO Decryption_Code
Encrypted:
 ...lots of encrypted code...
Decryption_Code:
 C = C + 1
 A = Encrypted
Loop:
 B = *A
 C = 3214 * A
 B = B XOR CryptoKey
 *A = B
 C = 1
 C = A + B
 A = A + 1
 GOTO Loop IF NOT A = Decryption_Code
 C = C^2
 GOTO Encrypted
CryptoKey:
 some_random_number

The encrypted code is the payload. To make different versions of the code, in each copy the garbage lines which manipulate C will change. The code inside "Encrypted" ("lots of encrypted code") can search the code between Decryption_Code and CryptoKey and, for each algorithm, generate new code that does the same thing. Usually, the coder uses a zero key (for example, A XOR 0 = A) for the first generation of the virus, making it easier for the coder because with this key the code is not encrypted. The coder then implements an incremental key algorithm or a random one.

Polymorphic encryption
Polymorphic code can also be used to generate an encryption algorithm. The following code was generated by the online service StringEncrypt. It takes a string or a file's contents and encrypts it with random encryption commands, then generates polymorphic decryption code in one of the many supported programming languages:

// encrypted with https://www.stringencrypt.com (v1.1.0) [C/C++]
// szLabel = "Wikipedia"
wchar_t szLabel[10] = { 0xB1A8, 0xB12E, 0xB0B4, 0xB03C, 0x33B9,
                        0xB30C, 0x3295, 0xB260, 0xB5E5, 0x35A2 };

for (unsigned tUTuj = 0, KRspk = 0; tUTuj < 10; tUTuj++) {
  KRspk = szLabel[tUTuj];
  KRspk ^= 0x2622;
  KRspk = ~KRspk;
  KRspk--;
  KRspk += tUTuj;
  KRspk = (((KRspk & 0xFFFF) >> 3) | (KRspk << 13)) & 0xFFFF;
  KRspk += tUTuj;
  KRspk--;
  KRspk = ((KRspk << 8) | ((KRspk & 0xFFFF) >> 8)) & 0xFFFF;
  KRspk ^= 0xE702;
  KRspk = ((KRspk << 4) | ((KRspk & 0xFFFF) >> 12)) & 0xFFFF;
  KRspk ^= tUTuj;
  KRspk++;
  KRspk = (((KRspk & 0xFFFF) >> 8) | (KRspk << 8)) & 0xFFFF;
  KRspk = ~KRspk;
  szLabel[tUTuj] = KRspk;
}

wprintf(szLabel);

As this C/C++ example shows, the string is stored with each character in encrypted form, using the Unicode wide-character format. The encryption commands used include bitwise XOR, NOT, addition, subtraction and bit rotations. Everything is randomized: the encryption keys, the bit-rotation counts and the order of the encryption commands. Output code can be generated in C/C++, C#, Java, JavaScript, Python, Ruby, Haskell, MASM, FASM and AutoIt. Thanks to the randomization, the generated algorithm is different every time. It is not possible to write generic decryption tools, and compiled code protected with polymorphic encryption must be analyzed each time it is re-encrypted.

See also
Timeline of notable computer viruses and worms
Metamorphic code
Self-modifying code
Alphanumeric shellcode
Shellcode
Software cracking
Security cracking
Obfuscated code
Oligomorphic code

References

Types of malware
219396
https://en.wikipedia.org/wiki/Legal%20instrument
Legal instrument
Legal instrument is a legal term of art that is used for any formally executed written document that can be formally attributed to its author, records and formally expresses a legally enforceable act, process, or contractual duty, obligation, or right, and therefore evidences that act, process, or agreement. Examples include a certificate, deed, bond, contract, will, legislative act, notarial act, court writ or process, or any law passed by a competent legislative body in municipal (domestic) or international law. Many legal instruments were written under seal by affixing a wax or paper seal to the document in evidence of its legal execution and authenticity (which often removed the need for consideration in contract law). However, today many jurisdictions have done away with the requirement that documents be under seal in order to give them legal effect.

Electronic legal documents
With the onset of the Internet and electronic equipment such as personal computers and cell phones, legal instruments or formal legal documents have undergone a progressive dematerialisation. In this electronic age, document authentication can be performed digitally using various software tools. All documents needing authentication can be processed as digital documents, with all the necessary information, such as a date and time stamp, embedded. To prevent tampering or unauthorized changes to the original document, encryption is used. In modern times, authentication is no longer limited to the type of paper used, a specialized seal, stamps, and so on, as document authentication software helps secure the original context. The use of electronic legal documents is most prominent in the United States' courts. Most American courts prefer the filing of electronic legal documents over paper. However, there is not yet a public law to unify the different standards of document authentication; therefore, one must know the court's requirements before filing court papers. To address part of this concern, the United States Congress enacted the Electronic Signatures in Global and National Commerce Act in 2000 (P.L. 106-229 of 2000, 15 USCS sec. 7001), specifying that no court could thereafter fail to recognize a contract simply because it was digitally signed. The law is very permissive, making essentially any electronic character in a contract sufficient. It is also quite restrictive in that it does not force the recognition of some document types in electronic form, no matter what the electronic character might be. No restriction is made requiring signatures to be adequately cryptographically tied to both the document text (see message digest) and to a particular key whose use should be restricted to certain persons (e.g., the alleged sender). There is thus a gap between what cryptographic engineering can provide and what the law assumes is both possible and meaningful. Several states had already enacted laws on the subject of electronic legal documents and signatures before the U.S. Congress acted, including Utah, Washington, and California, to name only a few of the earliest. They vary considerably in intent, coverage, cryptographic understanding, and effect. Several other nations and international bodies have also enacted statutes and regulations regarding the validity and binding nature of digital signatures.
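To illustrate the message-digest idea mentioned above, here is a minimal C sketch (an illustration only, not language from any statute) that computes a SHA-256 digest of a document's bytes with the widely used OpenSSL library; a digital signature scheme then signs this fixed-length digest with a private key, rather than signing the document directly.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* Compute a SHA-256 message digest of a document's bytes. A signature
 * made over this digest is tied to the exact document text: changing
 * even one character yields a completely different digest. */
int main(void)
{
    const char *doc = "This agreement is made between ...";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    if (!EVP_Digest(doc, strlen(doc), digest, &len, EVP_sha256(), NULL))
        return 1;

    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}

Binding the signature to the digest, and the signing key to a person, is precisely the "adequate cryptographic tie" that the statutes discussed here do not require.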
To date, the variety (and inadequacy) of the definitions used for digital signatures (or electronic signatures) has produced a legal and contractual minefield for those who may be considering relying on the legality and enforceability of digitally signed contracts in any of many jurisdictions. Legislation adequately informed by cryptographic engineering technology remains an elusive goal. The claim that it has been fully, or adequately, achieved in any jurisdiction must be treated with considerable caution.

See also
Legal coding
Legal document assistant

References

External links
A framework and infrastructure for assuring legal strength in digital interactions

Legal documents
220232
https://en.wikipedia.org/wiki/British%20Satellite%20Broadcasting
British Satellite Broadcasting
British Satellite Broadcasting (BSB) was a television company, headquartered in London, that provided direct broadcast satellite television services to the United Kingdom. It started broadcasting on 25 March 1990. The company was merged with Sky Television plc on 2 November 1990 to form British Sky Broadcasting.

History
Development
In January 1977, the World Administrative Radio Conference assigned each country five high-powered direct-broadcast-by-satellite channels for domestic use. Between 25 February and 5 March 1982, the BBC, having been awarded two of the channels, proposed its own satellite service, but the government imposed two conditions on it, including that the service use a satellite built by United Satellite, a consortium of British Aerospace and Matra Marconi Space (the former Marconi Space merged with Matra Espace, the latter's space divisions later becoming part of Astrium and then Airbus Defence and Space), with costs estimated at £24 million per year. A supplementary charter was agreed in May 1983 which allowed the BBC to borrow up to £225 million to cover the cost of the project, as it was not allowed to call on public funds, nor to use existing sources of revenue, to fund the project. During autumn 1983, the cost of Unisat was found to have been greatly underestimated, and the new Home Secretary announced that the three remaining channels would be given to the Independent Broadcasting Authority to allow the private sector to compete against the BBC in satellite broadcasting. Within a few months, the BBC started talking with the IBA about a joint project to help cover the cost. Subsequently, the government allowed the IBA to bring in private companies to help cover the costs (the grouping was dubbed the "Club of 21"):
BBC – 50%
ITV franchises – 30%
Virgin/Thorn EMI/Granada TV Rental/Pearson Longman and Consolidated Satellite Broadcasting – 20%
Within a year, the consortium made it clear that the original launch date of 1986 would be delayed to 1989, while also asking the government to allow it to tender out the building of the new satellite system to help reduce cost. On 15 June 1985, the project failed when the consortium concluded that the cost of set-up was not justifiable; the BBC stated the costs were prohibitive, because the government insisted that the "Club of 21" should pay for the costs of constructing and launching a dedicated satellite.

IBA satellite franchise
On 2 April 1986, the IBA convinced the Home Secretary to revive the DBS project, under different conditions broadly based on a report drawn up by John Jackson, and to invite private-sector companies to apply for a new satellite television franchise providing a commercial service on the IBA's three DBS channels (of the five in total allocated to the United Kingdom). One of the conditions imposed on applicants by the IBA was that they use a new, untried transmission standard, D-MAC. This was part of the European Communities' support for the HD-MAC high-definition television standard, which was being developed by Philips and other European companies. The technology was still at the laboratory stage and was incompatible with previous standards: HD-MAC transmissions could not be received by existing television sets, which used the PAL or SECAM standards. The condition to use a high-power (230 watt) satellite was dropped, and no winner was precluded from buying a foreign satellite system.
The IBA received five major contenders with serious bids for the direct broadcast satellite franchises; it also received submissions from The Children's Channel and ITN seeking to ensure their programmes were used by any successful bidder:

Winning bid
British Satellite Broadcasting won the 15-year franchise on 11 December 1986 to operate the DBS system, with a licence to operate three channels. BSB forecast that 400,000 homes would be equipped during its first year, though some doubts were cast on whether this was possible. The Cable Authority welcomed the service, believing it would encourage more users, especially with its dedicated movie service. The original four satellite channels were:

Preparations for launch
Around the time of the licence award, Amstrad withdrew its backing, as it believed it was not possible to sell a satellite dish and D-MAC standard receiver for £250. Australian businessman Alan Bond joined the consortium along with Reed Elsevier, Chargeurs, Next and London Merchant Securities, amongst others. BSB earmarked the bulk of the first round of financing for buying and launching two satellites (for redundancy and provision of further channels later), and planned a second round close to the commencement of broadcasting operations. It commissioned the Hughes Aircraft Company to provide two high-powered satellites, using launch vehicles from McDonnell Douglas (later United Launch Alliance). Both companies were American and had established reputations for reliability. Hughes was the main contractor and, on 6 August 1987, offered the commercial space industry's first "in-orbit delivery" deal, whereby BSB's risk was reduced because payments became due only after the satellites were launched and operational. On 8 June 1988, rival tycoon Rupert Murdoch, having failed to gain regulatory approval for his own satellite service and to become part of the BSB consortium, announced that his pan-European television station Sky Channel would be relaunched as a four-channel, United Kingdom-based service called Sky Television, using the Astra system and broadcast in PAL with analogue sound. BSB had been aware of the impending launch of Astra when it submitted its proposal to the IBA in 1986, but had discounted it, partly on advice from the IBA that it would not have been possible for Sky to securely scramble an analogue PAL signal, and partly on a prediction that satisfactory reception from a medium-powered satellite such as Astra would not be possible with a dish under 1.2 metres, which would require individual planning permission for each customer. Lazard Brothers, the Pearson subsidiary responsible for BSB's first fundraising memorandum, reportedly regarded Astra as technology-led rather than programming-led and, therefore, an unlikely threat. The stage was set for a dramatic confrontation: BSB, expecting to be the United Kingdom's only satellite service, was faced with an aggressive drive by Murdoch's Sky to be the first service to launch. As Britain's official satellite television provider, BSB had high hopes, as the company planned to provide a mixture of highbrow programming and popular entertainment, from arts and opera to blockbuster movies and music videos. The service would also be technically superior, broadcasting in the D-MAC (Multiplexed Analogue Components type D) system dictated by European regulations, with potentially superior picture sharpness, digital stereo sound and the potential to show widescreen programming, rather than using the existing PAL system.
BSB claimed that Sky's PAL pictures would be too degraded by satellite transmission, and that in any case BSB would broadcast superior programming. SES (at that time the operator of the Astra TV satellites, and later of the O3b data satellites and others with names including AMC, Ciel, NSS, QuetzSat, YahSat and SES) had no regulatory permission to broadcast, had plans (initially) for only one satellite with no backup, and the European satellite launch vehicle Ariane had suffered repeated failures. However, SES used the resulting delay time to re-engineer the satellite to reduce the dish size needed, which would otherwise have been larger than 60 cm (24"). To distance itself from Sky and its dish antennas, BSB announced a new type of flat-plate satellite antenna called a "squarial" (i.e., "square aerial"). The illustrative model shown to the press was a dummy, and BSB commissioned a working version which was under 45 cm (18") wide. A conventional dish of the same diameter was also available. The company had serious technical problems with the development of ITT's D-MAC silicon chips needed for its MAC receivers. BSB was still hoping to launch in September 1989, but eventually had to admit that the launch would be delayed. By 22 July 1988, in a bid to gain more viewers, BSB and the BBC had prepared a four-year deal for the rights to broadcast top league football, outbidding ITV's £44 million offer. BSB had also committed about £400 million to tying up the film libraries of Paramount, Universal, Columbia and MGM/United Artists, with total up-front payments of about £85 million. By 1 February 1989, BSB's costs had started to climb, reaching £354 million, though chief executive Anthony Simonds-Gooding denied that BSB had gone over budget or that it would require more than the planned £625 million needed to operate up to 1993. Virgin pulled out of the BSB consortium in December 1988, ostensibly because it was going private again and had become increasingly concerned about BSB's mounting costs. The film-rights battle proved to be the final straw for Virgin, since it would necessitate a "supplementary first round" of financing of £131 million in January, in addition to the initial £222.5 million. After unsuccessfully offering its stake in BSB to the remaining founders, Virgin sold it to the Bond Corporation, already BSB's largest shareholder, for a nominal profit. Despite the delayed launch, BSB continued to invest heavily in marketing during 1989 to minimize the effects of Sky's timing advantage. BSB also received a needed boost in June 1989 when it won the franchises for the two remaining British high-powered DBS channels, beating six other bidders after the BBC dropped all plans to use its allocated channels. BSB revised its line-up to include separate channels for films, sports, pop music, general entertainment and current affairs. Unfortunately, this increased the size of the dishes which the public had to purchase from 25 to 35–40 centimetres; subsidies from BSB helped maintain retail prices at £250.

Launch of five-channel service
There were five satellite channels for the general public, with a sixth part-time subscription service for business users run by BSB Datavision, a subsidiary of the company which offered encrypted television and data reception through domestic receivers.
The channel line-up, launched one channel at a time over five consecutive days, was:
The Movie Channel (25 March 1990)
Galaxy (26 March 1990)
The Sports Channel (27 March 1990)
Now (28 March 1990)
The Power Station (29 March 1990)
The Computer Channel (28 June 1990; defunct 29 November 1990)
BSB launched its service on 25 March 1990, with the slogan It's Smart to be Square. The launch, six months late, came 13 months after Sky's launch. BSB had been due to start broadcasting in September 1989 but was delayed by problems with the supply of receiving equipment, and because BSB wanted to avoid Sky's experience of launching when most shops had no equipment to sell. BSB claimed to have around 750,000 subscriptions, while Sky had extended its reach into more than 1.5 million homes. It was believed both companies could break even if subscriptions reached three million households, with most analysts expecting this to be reached in 1992.

Competition and merger
Sky's head start over BSB proved that the PAL system would give adequate picture quality, and that many viewers would be happy to watch Sky's more populist output rather than wait for the promised quality programming pledged by BSB. Sky had launched its multichannel service from studios on an industrial estate in Isleworth, with a ten-year lease on SES transponders for an estimated £50 million, without backup. BSB, on the other hand, would operate from more expensive headquarters at Marco Polo House in Battersea, with construction and launch of its own two satellites costing an estimated £200 million, the second of which was a backup. When BSB finally went on air in March 1990 (13 months after Sky), the company's technical problems had been resolved and its programming was critically acclaimed. However, its D-MAC receivers were more expensive than Sky's PAL equivalents and incompatible with them. Many potential customers compared the competition between the rival satellite companies to the format war between VHS and Betamax recorders, and chose to wait and see which company would win outright, in order to avoid buying potentially obsolete equipment. Both BSB and Sky had begun to struggle with the burden of huge losses, rapidly increasing debts and ongoing startup costs. On 2 November 1990, a 50:50 merger was announced to form a single company called British Sky Broadcasting (marketed as "Sky"). Following the merger, BSkyB moved quickly to rationalise the combined channels it now owned.

Outcomes
BSB's shareholders and Murdoch's News International made huge profits on their investments, the 50:50 merged venture having an effective quasi-monopoly on British satellite pay-television. From a United Kingdom perspective, British Satellite Broadcasting's existence prevented 100% of these profits being made by News International, reducing Murdoch's ability to influence government policy. At one stage of the saga, News International had faced dismemberment at the hands of its bankers. Following the takeover of Sky by Comcast in October 2018, Murdoch was no longer involved in British television, but retained his newspaper assets through News Corp.

Regulatory context
A new television transmission system, Multiplexed Analogue Components, was originally developed for high-definition television, but European manufacturers developed patented variants and successfully lobbied regulators such that it was adopted by the Commission of the European Communities as the standard for all direct broadcast satellites.
This had the effect that low-cost non-European manufacturers would not only have to pay royalties to the patent-holding manufacturers, but would also not have direct access to the technology, and hence would always be behind with new developments. In the United Kingdom, the Independent Broadcasting Authority developed a variant, D-MAC, which had marginal audio channel improvements, and insisted on its use by the satellite service it was to license. In the rest of Europe, satellite television manufacturers standardised on another variant, D2-MAC, which used less bandwidth and was compatible with the extensive existing European cable systems. With the launch of BSB, the IBA became a member of the secret "MAC Club" of European organisations which owned patents on MAC variants and had a royalty-sharing agreement for all televisions and set-top boxes sold. The IBA was not directed to be an "economic regulator", so the free market in lower-powered satellites (such as SES Astra) leveraged the benefits of the existing lower-cost PAL transmissions with pre-existing set-top box technology. The IBA was rendered helpless, and Murdoch made a voluntary agreement to adhere to those Broadcasting Standards Commission rules relating to non-economic matters, such as the technology used. Ironically, the encryption system in the D-MAC silicon chip technology, which was behind schedule, was one primary reason for BSB having to merge with Sky, and hence the Far Eastern television manufacturers had largely unfettered access to the market when MAC was wound down in favour of PAL.

Location and satellites
The Marco Polo House headquarters were vacated, leading to redundancy for most BSB staff, with only a few moving to work at Sky's headquarters in Isleworth. The building was retained by the new company and from 1 October 1993 became the home of shopping channel QVC when its British version launched. Broadcasting platform ITV Digital later moved into part of the building as part of the settlement that saw Sky forced out of the original company. The building was demolished on 8 March 2014; it has been replaced by several blocks of luxury apartments at Chelsea Bridge. As the company focused on the Astra system, which was not subject to IBA regulation, the Marcopolo satellites were eventually withdrawn and later sold (Marcopolo 1 on 21 December 1993 to NSAB of Sweden, and Marcopolo 2 on 1 July 1992 to Telenor of Norway), after which the satellites were renamed by their new owners. NSAB operated Marcopolo 1 (as Sirius 1) until 2003, when, at the normal end of its operational life with its fuel running out, it was successfully sent to a safe disposal orbit. Marcopolo 2 was operated (as Thor 1) until January 2002 and was disposed of successfully. After the merger, BSB D-MAC receivers were sold off cheaply and some enthusiasts modified them to allow reception of D2-MAC services available on other satellites. BSB receivers, Ferguson ones in particular, could be modified by replacing a microprocessor. Upgrade kits from companies such as Trac Satellite allowed re-tuning, while other kits allowed fully working menu systems and decoding of 'soft' encrypted channels, although this required the receiver to have one of the later MAC chipsets. Some kits even included smart card readers and full D2-MAC decoding capability.

Sources
Further reading
New York Times, 20 December 1990; Murdoch's Time of Reckoning
Peter Chippindale, Suzanne Franks and Roma Felstein, Dished!: Rise and Fall of British Satellite Broadcasting (London: Simon & Schuster Ltd, 1991).
Broadcasting and New Media Policies in Western Europe, Kenneth H. F. Dyson and Peter Humphreys, 9780415005098

References

External links

Sky Group
Direct broadcast satellite services
Defunct mass media companies of the United Kingdom
Mass media companies established in 1986
Mass media companies disestablished in 1990
1990 mergers and acquisitions
British companies disestablished in 1990
British companies established in 1986
220633
https://en.wikipedia.org/wiki/Point%20of%20sale
Point of sale
The point of sale (POS) or point of purchase (POP) is the time and place where a retail transaction is completed. At the point of sale, the merchant calculates the amount owed by the customer, indicates that amount, may prepare an invoice for the customer (which may be a cash register printout), and indicates the options for the customer to make payment. It is also the point at which a customer makes a payment to the merchant in exchange for goods or after provision of a service. After receiving payment, the merchant may issue a receipt for the transaction, which is usually printed but can also be dispensed with or sent electronically. To calculate the amount owed by a customer, the merchant may use various devices such as weighing scales, barcode scanners, and cash registers (or the more advanced "POS cash registers", which are sometimes also called "POS systems"). To make a payment, payment terminals, touch screens, and other hardware and software options are available. The point of sale is often referred to as the point of service because it is not just a point of sale but also a point of return or customer order. POS terminal software may also include features for additional functionality, such as inventory management, CRM, financials, or warehousing. Businesses are increasingly adopting POS systems, and one of the most obvious and compelling reasons is that a POS system does away with the need for price tags. Selling prices are linked to the product code of an item when adding stock, so the cashier merely needs to scan this code to process a sale. If there is a price change, this can also be easily done through the inventory window. Other advantages include the ability to implement various types of discounts, a loyalty scheme for customers, and more efficient stock control, and these features are typical of almost all modern ePOS systems.

Terminology
Retailers and marketers will often refer to the area around the checkout instead as the point of purchase (POP) when they are discussing it from the retailer's perspective. This is particularly the case when planning and designing the area as well as when considering a marketing strategy and offers. Some point of sale vendors refer to their POS system as a "retail management system", which is actually a more appropriate term, given that this software is no longer just about processing sales but comes with many other capabilities, such as inventory management, a membership system, supplier records, bookkeeping, issuing of purchase orders, quotations and stock transfers, barcode label creation, sale reporting and, in some cases, remote outlet networking or linkage, to name some major ones. Nevertheless, it is the term POS system rather than retail management system that is in vogue among both end-users and vendors. The basic, fundamental definition of a POS system is a system which allows the processing and recording of transactions between a company and its consumers at the time at which goods and/or services are purchased.

History
Software before the 1990s
Early electronic cash registers (ECR) were controlled with proprietary software and were limited in function and communication capability. In August 1973, IBM released the IBM 3650 and 3660 store systems that were, in essence, a mainframe computer used as a store controller that could control up to 128 IBM 3653/3663 point of sale registers.
This system was the first commercial use of client-server technology, peer-to-peer communications, local area network (LAN) simultaneous backup, and remote initialization. By mid-1974, it was installed in Pathmark stores in New Jersey and Dillard's department stores. One of the first microprocessor-controlled cash register systems was built by William Brobeck and Associates in 1974, for McDonald's Restaurants. It used the Intel 8008, a very early microprocessor (and forerunner of the Intel 8088 processor used in the original IBM Personal Computer). Each station in the restaurant had its own device which displayed the entire order for a customer — for example, [2] Vanilla Shake, [1] Large Fries, [3] BigMac — using numeric keys and a button for every menu item. By pressing the [Grill] button, a second or third order could be worked on while the first transaction was in progress. When the customer was ready to pay, the [Total] button would calculate the bill, including sales tax for almost any jurisdiction in the United States. This made it accurate for McDonald's and very convenient for the servers, and provided the restaurant owner with a check on the amount that should be in the cash drawers. Up to eight devices were connected to one of two interconnected computers so that printed reports, prices, and taxes could be handled from any desired device by putting it into Manager Mode. In addition to the error-correcting memory, accuracy was enhanced by keeping three copies of all important data, with many numbers stored only as multiples of 3. Should one computer fail, the other could handle the entire store. In 1986, Eugene "Gene" Mosher introduced the first graphical point of sale software featuring a touchscreen interface under the ViewTouch trademark on the 16-bit Atari 520ST color computer. It featured a color touchscreen widget-driven interface that allowed configuration of widgets representing menu items without low-level programming. The ViewTouch point of sale software was first demonstrated in public at Fall Comdex, 1986, in Las Vegas, Nevada, to large crowds visiting the Atari Computer booth. This was the first commercially available POS system with a widget-driven color graphic touch screen interface and was installed in several restaurants in the US and Canada. In 1986, IBM introduced its 468x series of POS equipment based on Digital Research's Concurrent DOS 286 and FlexOS 1.xx, a modular real-time multi-tasking multi-user operating system.

Modern software (post-1990s)
A wide range of POS applications have been developed on platforms such as Windows and Unix. The availability of local processing power, local data storage, networking, and graphical user interfaces made it possible to develop flexible and highly functional POS systems. The cost of such systems has also declined, as all the components can now be purchased off the shelf. In 1993, IBM adopted FlexOS 2.32 as the basis of their IBM 4690 OS in their 469x series of POS terminals. This was developed up to 2014, when it was sold to Toshiba, who continued to support it up to at least 2017. As far as computers are concerned, off-the-shelf versions are usually newer and hence more powerful than proprietary POS terminals. Custom modifications are added as needed. Other products, like touchscreen tablets and laptops, are readily available in the market, and they are more portable than traditional POS terminals.
The only advantage of the latter is that they are typically built to withstand rough handling and spillages, a benefit for food and beverage businesses. The key requirements that must be met by modern POS systems include high and consistent operating speed, reliability, ease of use, remote supportability, low cost, and rich functionality. Retailers can reasonably expect to acquire such systems (including hardware) for about US$4000 (as of 2009) per checkout lane. Reliability depends not wholly on the developer but at times on the compatibility between a database and an OS version. For example, the widely used Microsoft Access database system had a compatibility issue when Windows XP machines were updated to a newer version of Windows. Microsoft offered no immediate solution. Some businesses were severely disrupted in the process, and many downgraded back to Windows XP for a quick resolution. Other companies relied on community support, as a registry-tweak solution was found for this. POS systems are among the most complex software systems available because of the features that are required by different end-users. Many POS systems are software suites that include sale, inventory, stock counting, vendor ordering, customer loyalty and reporting modules. Sometimes purchase ordering, stock transferring, quotation issuing, barcode creation, bookkeeping or even accounting capabilities are included. Furthermore, each of these modules is interlinked if they are to serve their practical purpose and maximize their usability. For instance, the sale window is immediately updated on a new member entry through the membership window because of this interlinking. Similarly, when a sale transaction is made, any purchase by a member is on record for the membership window to report, providing information like payment type, goods purchased, date of purchase and points accumulated. Comprehensive analysis performed by a POS machine may need to process several qualities of a single product, like selling price, balance, average cost, quantity sold, description and department. Highly complex programming (and possibly considerable computer resources) is involved in generating such extensive analyses. POS systems are designed not only to serve the retail, wholesale and hospitality industries, as historically was the case. Nowadays POS systems are also used in goods and property leasing businesses, equipment repair shops, healthcare management, ticketing offices such as cinemas and sports facilities, and many other operations where capabilities such as the following are required: processing monetary transactions, allocation and scheduling of facilities, keeping records of and scheduling services rendered to customers, tracking of goods and processes (repair or manufacture), invoicing, and tracking of debts and outstanding payments. Different customers have different expectations within each trade. The reporting functionality alone is subject to many demands, especially from those in the retail/wholesale industry. To cite special requirements, some businesses' goods may include perishables, and hence the inventory system must be capable of prompting the admin and cashier about expiring or expired products. Some retail businesses require the system to store credit for their customers, credit which can be used subsequently to pay for goods.
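As a sketch of the store-credit requirement just mentioned (a hypothetical design, not any particular vendor's implementation), a POS database can keep a per-customer credit balance that is drawn down at payment time, with amounts held in integer cents to avoid rounding errors.

#include <stdio.h>

/* Hypothetical store-credit account; amounts in cents. */
typedef struct {
    int  customer_id;
    long credit_cents; /* store credit available to this customer */
} CustomerAccount;

/* Apply store credit to a sale total; returns the amount still owed. */
static long pay_with_credit(CustomerAccount *acct, long total_cents)
{
    long used = acct->credit_cents < total_cents ? acct->credit_cents
                                                 : total_cents;
    acct->credit_cents -= used;
    return total_cents - used;
}

int main(void)
{
    CustomerAccount acct = { 1001, 1500 };    /* $15.00 of credit */
    long owed = pay_with_credit(&acct, 2499); /* $24.99 sale */
    printf("still owed: %ld cents, credit left: %ld cents\n",
           owed, acct.credit_cents);
    return 0;
}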
A few companies even expect the POS system to behave like a full-fledged inventory management system, including the ability to provide FIFO (First In, First Out) and LIFO (Last In, First Out) reports of their goods for accounting and tax purposes. In the hospitality industry, POS system capabilities can also diverge significantly. For instance, while a restaurant is typically concerned with how the sale window functions (whether it has functionality for creating item buttons, for various discounts, for adding a service charge, for holding of receipts, for queuing, for table service as well as for takeaways, and for merging and splitting of a receipt), these capabilities may still be insufficient for a spa or slimming center, which would require in addition a scheduling window with historical records of customers' attendance and their special requirements. A POS system can be made to serve different purposes for different end-users depending on their business processes. Quite often an off-the-shelf POS system is inadequate for customers; some customization is required, and this is why a POS system can become very complex. The complexity of a mature POS system even extends to remote networking or interlinking between remote outlets and the HQ, such that updating both ways is possible. Some POS systems offer the linking of web-based orders to their sale window. Even when only local networking is required (as in the case of a high-traffic supermarket), there is the ever-present challenge for the developer to keep most, if not all, of their POS stations running. This puts high demand not just on software coding but also on the design of the whole system, covering how individual stations and the network work together, with special consideration for the performance capability and usage of databases. Due to such complexity, bugs and errors encountered in POS systems are frequent. With regard to databases, POS systems are very demanding on performance because of the numerous submissions and retrievals of data (required for correct sequencing of the receipt number, checking up on various discounts, membership, calculating subtotals, and so forth) just to process a single sale transaction. The immediacy required of the system on the sale window, such as may be observed at a checkout counter in a supermarket, also cannot be compromised. This places much stress on individual enterprise databases once there are even just several tens of thousands of sale records in the database. The enterprise database Microsoft SQL Server, for example, has been known to freeze up (including the OS) entirely for many minutes under such conditions, showing a "Timeout Expired" error message. Even a lighter database like Microsoft Access will slow to a crawl over time if the problem of database bloating is not foreseen and managed by the system automatically. Therefore, the need for extensive testing, debugging and improvisation of solutions to preempt failure of a database before commercialization further complicates the development. POS system accuracy is demanding, given that monetary transactions are involved continuously, not only via the sale window but also at the back end through the receiving and inputting of goods into the inventory. Calculations required are not always straightforward. There may be many discounts and deals that are unique to specific products, and the POS machine must quickly process the differences and their effect on pricing.
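To make the pricing point concrete, the following minimal C sketch applies two hypothetical rules (a per-line quantity discount and a mix-and-match pairing, invented here for illustration) to a basket before totalling; real engines differ, but the integer-cents arithmetic and explicit rule ordering are typical.

#include <stdio.h>

/* Hypothetical checkout rules, amounts in cents:
 * rule 1: 3 or more of an item earns 10% off that line;
 * rule 2: every pair of items from a promo group gets 50 cents off. */
typedef struct {
    const char *name;
    long unit_price;
    int  qty;
    int  promo_group; /* 1 if the item is in the mix-and-match group */
} Line;

static long total_basket(const Line *lines, int n)
{
    long total = 0;
    int promo_units = 0;
    for (int i = 0; i < n; i++) {
        long line_total = lines[i].unit_price * lines[i].qty;
        if (lines[i].qty >= 3)
            line_total -= line_total / 10; /* 10% quantity discount */
        if (lines[i].promo_group)
            promo_units += lines[i].qty;
        total += line_total;
    }
    total -= (promo_units / 2) * 50; /* 50c off per mix-and-match pair */
    return total;
}

int main(void)
{
    Line basket[] = {
        { "soap",  199, 3, 0 }, /* qualifies for the quantity discount */
        { "chips", 250, 1, 1 },
        { "salsa", 300, 1, 1 }, /* chips + salsa form one promo pair */
    };
    printf("total: %ld cents\n", total_basket(basket, 3));
    return 0;
}

Because every rule interacts with rounding, real systems compute in integer minor units and define the order in which discounts apply explicitly.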
There is much complexity in the programming of such operations, especially when no error in calculation can be allowed. Other requirements include that the system must have functionality for membership discounts and points accumulation/usage, quantity and promotional discounts, mix-and-match offers, cash rounding up, and invoice/delivery-order issuance with outstanding amounts. It should enable a user to adjust the inventory of each product based on a physical count, track the expiry of perishable goods, change pricing, provide an audit trail when modification of inventory records is performed, be capable of multiple-outlet functionality, allow control of stock from HQ, and double as an invoicing system, just to name some. It is clear that POS system is a term that implies a wide range of capabilities depending on the end-user's requirements. POS system review websites cannot be expected to cover most, let alone all, of the features; in fact, unless one is a developer oneself, it is unrealistic to expect a reviewer to know all the nuts and bolts of a POS system. For instance, a POS system might work smoothly on a test database during a review but not when the database grows significantly in size over months of usage. And this is only one among many hidden critical functionality issues of a POS system.

Hardware interface standardization (post-1980s)
Vendors and retailers are working to standardize development of computerized POS systems and simplify interconnecting POS devices. Two such initiatives were OPOS and JavaPOS, both of which conform to the UnifiedPOS standard led by the National Retail Federation. OPOS (OLE for POS) was the first commonly adopted standard and was created by Microsoft, NCR Corporation, Epson and Fujitsu-ICL. OPOS is a COM-based interface compatible with all COM-enabled programming languages for Microsoft Windows. OPOS was first released in 1996. JavaPOS was developed by Sun Microsystems, IBM, and NCR Corporation in 1997 and first released in 1999. JavaPOS is for Java what OPOS is for Windows, and thus largely platform-independent. There are several communication protocols POS systems use to control peripherals, such as:
Logic Controls \ BemaTech
Epson Esc/POS
UTC Standard
UTC Enhanced
AEDEX
ICD 2002
Ultimate
CD 5220
DSP-800
ADM 787/788
HP
There are also nearly as many proprietary protocols as there are companies making POS peripherals. Most POS peripherals, such as displays and printers, support several of these command protocols in order to work with many different brands of POS terminals and computers.

User interface design
The design of the sale window is the most important one for the user. This user interface is highly critical when compared to those in other software packages, such as word editors or spreadsheet programs, where the speed of navigation is not so crucial for business performance. For businesses at prime locations where real estate comes at a premium, it can be common to see a queue of customers. The faster a sale is completed, the shorter the queue time, which improves customer satisfaction, and the less space it takes, which benefits shoppers and staff. High-traffic operations such as grocery outlets and cafes need to process sales quickly at the sales counter, so the UI flow is often designed with as few popups or other interruptions as possible, to ensure the operator isn't distracted and the transaction can be processed as quickly as possible.
Although improving the ergonomics is possible, a clean, fast-paced look may come at the expense of sacrificing functions that are often wanted by end-users, such as discounts, access to commission-earned screens, and membership and loyalty schemes; these can involve looking at a different function of the POS so that the point-of-sale screen contains only what a cashier needs at their disposal to serve customers.

Cloud-based (post-2000s)
The advent of cloud computing has made it possible for electronic point of sale (EPOS) systems to be deployed as software as a service, which can be accessed directly from the Internet using any web browser. Building on the earlier advances in communication protocols for POS control of hardware, cloud-based POS systems are independent of platform and operating system limitations. EPOS systems based in the cloud (most small-business POS today) are generally subscription-based, which includes ongoing customer support. Compared to regular cash registers (which tend to be significantly cheaper but only process sales and print receipts), POS systems include automatic updating of inventory stock levels when products are sold, real-time reports accessible from a remote computer, staff timesheets and a customer library with loyalty features. Cloud-based POS systems are also designed to be compatible with a wide range of POS hardware and sometimes tablets such as Apple's iPad. Thus the cloud has also helped expand POS systems to mobile devices, such as tablet computers or smartphones. These devices can also act as barcode readers using a built-in camera, and as payment terminals using built-in NFC technology or an external payment card reader. A number of POS companies built their software specifically to be cloud-based. Other businesses which launched pre-2000s have since adapted their software to the evolving technology. Cloud-based POS systems are different from traditional POS largely because user data, including sales and inventory, is not stored locally but on a remote server. The POS system is also not run locally, so there is no installation required. Depending on the POS vendor and the terms of the contract, compared to a traditional on-premises POS installation, the software is more likely to be continually updated by the developer with more useful features and better performance, in terms of computer resources at the remote server and in terms of fewer bugs and errors. Other advantages of a cloud-based POS are instant centralization of data (important especially to chain stores), the ability to access data from anywhere there is an internet connection, and lower start-up costs. Cloud-based POS requires an internet connection; for this reason it is important to use a device with 3G connectivity in case the device's primary internet goes down. In addition to being significantly less expensive than traditional legacy point of sale systems, the real strength of a cloud-based point of sale system is that there are developers all over the world creating software applications for cloud-based POS. Cloud-based POS systems are often described as future-proof, as new applications are constantly being conceived and built. A number of noted emerging cloud-based POS systems came on the scene less than a decade or even half a decade ago. These systems are usually designed for restaurants and for small and medium-sized retail operations with fairly simple sale processes, as can be culled from POS system review sites.
It appears from such software reviews that enterprise-level cloud-based POS systems are currently lacking in the market. "Enterprise-level" here means that the inventory should be capable of handling a large number of records, such as required by grocery stores and supermarkets. It can also mean that the system (software and cloud server) must be capable of generating reports, such as analytics of sales against inventory, for both single and multiple outlets that are interlinked for administration by the headquarters of the business operation. POS vendors of such cloud-based systems should also have a strong contingency plan for the breakdown of their remote server, such as fail-over server support. However, sometimes even a major data center can fail completely, such as in a fire. On-premises installations are therefore sometimes seen alongside cloud-based implementations, to preempt such incidents, especially for businesses with very high traffic. However, the on-premises installations may not have the most up-to-date inventory and membership information. For such a contingency, a more innovative, though highly complex, approach for the developer is to have a trimmed-down version of the POS system installed on the cashier computer at the outlet, a store-and-forward pattern sketched at the end of this section. On a daily basis the latest inventory and membership information from the remote server is automatically updated into the local database. Thus, should the remote server fail, the cashier can switch over to the local sale window without disrupting sales. When the remote server is restored and the cashier switches over to the cloud system, the locally processed sale records are then automatically submitted to the remote system, thus maintaining the integrity of the remote database. Although cloud-based POS systems save the end-user start-up costs and the technical challenges of maintaining an otherwise on-premises installation, there is a risk that, should the cloud-based vendor close down, it may result in more immediate termination of services for the end-user, compared to the case of a traditional full on-premises POS system, which can still run without the vendor. Another consideration is that a cloud-based POS system actually exposes business data to the service providers: the hosting service company and the POS vendor have access to both the application and the database. The importance of securing critical business information such as supplier names, top-selling items and customer-relationship processes cannot be underestimated, given that sometimes the few key success factors or trade secrets of a business are actually accessible through the POS system. This security and privacy concern is an ongoing issue in cloud computing.
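A minimal sketch of the store-and-forward fallback referred to above (a hypothetical design, not any vendor's protocol): sales are appended to a local queue whenever the remote server is unreachable and flushed to it, in receipt order, once connectivity returns.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical store-and-forward fallback for a cloud POS terminal. */
#define MAX_PENDING 256

typedef struct { int receipt_no; long total_cents; } Sale;

static Sale pending[MAX_PENDING];
static int n_pending = 0;

/* Stand-in for the real network call; returns false while offline. */
static bool send_to_server(const Sale *s)
{
    /* ... an HTTP POST to the cloud endpoint would go here ... */
    (void)s;
    return false; /* pretend the server is unreachable */
}

static void record_sale(Sale s)
{
    if (!send_to_server(&s) && n_pending < MAX_PENDING)
        pending[n_pending++] = s; /* queue locally, keep selling */
}

static void flush_pending(void)
{
    int sent = 0;
    while (sent < n_pending && send_to_server(&pending[sent]))
        sent++; /* submit in receipt order to keep the remote DB consistent */
    for (int i = sent; i < n_pending; i++)
        pending[i - sent] = pending[i]; /* shift unsent sales to the front */
    n_pending -= sent;
}

int main(void)
{
    record_sale((Sale){ 1, 1038 });
    record_sale((Sale){ 2, 2499 });
    flush_pending(); /* called periodically, or when connectivity returns */
    printf("sales still pending upload: %d\n", n_pending);
    return 0;
}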
Retail industry
The retail industry is one of the predominant users of POS terminals. A retail point of sale system typically includes a cash register (which in recent times comprises a computer, monitor, cash drawer, receipt printer, customer display and a barcode scanner), and the majority of retail POS systems also include a debit/credit card reader. It can also include a conveyor belt, checkout divider, weighing scale, integrated credit card processing system, a signature capture device and a customer PIN pad device. While the system may include a keyboard and mouse, more and more POS monitors use touch-screen technology for ease of use, and a computer is built into the monitor chassis for what is referred to as an all-in-one unit. All-in-one POS units liberate counter space for the retailer. The POS system software can typically handle a myriad of customer-based functions such as sales, returns, exchanges, layaways, gift cards, gift registries, customer loyalty programs, promotions, discounts and much more. POS software can also allow for functions such as pre-planned promotional sales, manufacturer coupon validation, foreign currency handling and multiple payment types. The POS unit handles the sales to the consumer, but it is only one part of the entire POS system used in a retail business. "Back-office" computers typically handle other functions of the POS system such as inventory control, purchasing, and receiving and transferring of products to and from other locations. Other typical functions of a POS system are storing sales information for enabling customer returns, reporting purposes, sales trends and cost/price/profit analysis. Customer information may be stored for receivables management, marketing purposes and specific buying analysis. Many retail POS systems include an accounting interface that "feeds" sales and cost-of-goods information to independent accounting applications. A multiple point of sale system used by big retailers like supermarkets and department stores has a far more demanding database and software architecture than that of a single station seen in small retail outlets. A supermarket with high traffic cannot afford a systemic failure; hence each point of sale station should not only be very robust in terms of software, database and hardware specifications, but should also be designed in such a way as to prevent causing a systemic failure, such as might happen through the use of a single central database for operations. At the same time, updating between multiple stations and the back-end administrative computer should be capable of being performed efficiently, so that on the one hand, either at the start of the day or at any time, each station will have the latest inventory to process all items for sale, while on the other hand, at the end of the day, the back-end administrative computer can be updated in terms of all sale records. This gets even more complicated when there is a membership system requiring real-time two-way updating of membership points between sale stations and the back-end administrative computer. Retail operations such as hardware stores (lumber yards), electronics stores and so-called multifaceted superstores need specialized additional features compared to other stores. POS software in these cases handles special orders, purchase orders, repair orders, service and rental programs as well as typical point of sale functions. Rugged hardware is required for point of sale systems used in outdoor environments. Wireless devices, battery-powered devices, all-in-one units, and Internet-ready machines are typical in this industry. Recently, new applications have been introduced, enabling POS transactions to be conducted using mobile phones and tablets. According to a recent study, mobile POS (mPOS) terminals are expected to replace contemporary payment techniques because of various features, including mobility, low upfront investment and a better user experience. In the mid-2000s, the blind community in the United States engaged in structured negotiations to ensure that retail point of sale devices had tactile keypads. Without keys that can be felt, a blind person cannot independently enter his or her PIN. In the mid-2000s, retailers began using "flat screen" or "signature capture" devices that eliminated tactile keypads.
Blind people were forced to share their confidential PIN with store clerks in order to use their debit and other PIN-based cards. The blind community reached agreements with Walmart, Target, CVS and eight other retailers requiring real keys so that blind people could use the devices.

Physical configuration

Early stores typically kept merchandise behind a counter. Staff would fetch items for customers to prevent the opportunity for theft, and sales were made at the same counter. Self-service grocery stores such as Piggly Wiggly, beginning in 1916, allowed customers to fetch their own items and pass the point of sale on the way to the exit. Many stores have a number of checkout stations. Some stations may have an automated cashier (self-checkout). Express lanes might limit the type of payment, or the number or type of goods, to expedite service. If each checkout station has a separate queue, customers have to guess which line will move the fastest in order to minimize their wait times; they are often frustrated to be wrong or to be stuck behind another customer who encounters a problem or takes a long time to check out. Some stores use a single, much longer but faster-moving line served by multiple registers, which produces the same average wait time but reduces the frustration and the variance in wait time from person to person. Regardless of the configuration, checkout lines usually pass by impulse-buy items to grab the attention of otherwise idle customers.

Hospitality industry

Hospitality point of sale systems are computerized systems incorporating registers, computers and peripheral equipment, usually on a computer network, to be used in restaurants, hair salons or hotels. Like other point of sale systems, these systems keep track of sales, labor and payroll, and can generate records used in accounting and bookkeeping. They may be accessed remotely by restaurant corporate offices, troubleshooters and other authorized parties. Point of sale systems have revolutionized the restaurant industry, particularly in the fast food sector. In the most recent technologies, registers are computers, sometimes with touch screens. The registers connect to a server, often referred to as a "store controller" or a "central control unit". Printers and monitors are also found on the network. Additionally, remote servers can connect to store networks and monitor sales and other store data. Typical restaurant POS software is able to create and print guest checks, print orders to kitchens and bars for preparation, process credit cards and other payment cards, and run reports. In addition, some systems implement wireless pagers and electronic signature-capture devices. In the fast food industry, displays may be at the front counter, or configured for drive-through or walk-through cashiering and order taking. Front counter registers allow taking and serving orders at the same terminal, while drive-through registers allow orders to be taken at one or more drive-through windows and to be cashiered and served at another. In addition to registers, drive-through and kitchen displays are used to view orders. Once orders appear, they may be deleted or recalled by the touch interface or by bump bars. Drive-through systems are often enhanced by the use of drive-through wireless (or headset) intercoms. Such systems have decreased service times and increased the accuracy of orders. Another innovation in technology for the restaurant industry is wireless POS.
Many restaurants with high volume use wireless handheld POS to collect orders, which are sent to a server. The server sends the required information to the kitchen in real time. Wireless systems consist of drive-through microphones and speakers (often one speaker serves both purposes), which are wired to a "base station" or "center module". This, in turn, broadcasts to headsets. Headsets may be all-in-one units or connected to a belt pack. In hotels, POS software allows for the transfer of meal charges from the dining room to the guest room with a button or two. It may also need to be integrated with property management software. Newer, more sophisticated systems are getting away from the central-database "file server" type of system and moving to what is called a "cluster database". This removes the single point of failure, avoiding the crashes and system downtime that can be associated with the back-office file server: all of the information can be both stored on and pulled from the local terminal, eliminating the need to rely on a separate server for the system to operate. Tablet POS systems popular for retail solutions are now available for the restaurant industry. Initially these systems were not sophisticated, and many of the early systems did not support a remote printer in the kitchen. Tablet systems today are used in all types of restaurants, including table service operations. Most tablet systems upload all information to the Internet so managers and owners can view reports from anywhere with a password and an Internet connection. Smartphone Internet access has made alerts and reports from the POS very accessible. Tablets have helped create the mobile POS system, and mobile POS applications also include payments, loyalty, online ordering, table-side ordering by staff and table-top ordering by customers. Regarding payments, mobile POS can accept many kinds of payment methods, from contactless cards and EMV chip-enabled cards to NFC-enabled mobile payments. Mobile POS (also known as mPOS) is growing quickly, with new developers entering the market almost daily. With the proliferation of low-priced touchscreen tablet computers, more and more restaurants have implemented self-ordering through tablet POS placed permanently on every table. Customers can browse through the menu on the tablet and place their orders, which are then sent to the kitchen. Most restaurants that have iPad self-order menus include photos of the dishes so guests can easily choose what they want to order. This apparently improves service and saves manpower on the part of the restaurant. However, this depends on how intelligently the system has been programmed. As a case in point, some self-ordering systems not requiring staff assistance may not properly recognize a subsequent order from the same customer at a table. As a result, the customer is left waiting and wondering why the second order of food and drink is not being served. Another example of how intelligent the system can be is whether an order that has been placed but not yet processed by the kitchen can be modified by the customer through the tablet POS. For such an unprocessed order the customer should be given the option to easily retrieve it and modify it on the tablet POS; but once the order is being processed, this function should be automatically disabled (a minimal sketch of this state check follows below). Self-ordering systems are not always completely free from staff intervention, and for some good reasons, as the examples following the sketch illustrate.
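A minimal sketch of that modify-lockout, assuming a simple two-state order model (all class and method names here are hypothetical, not from any particular product):

```python
from enum import Enum, auto

class OrderState(Enum):
    PLACED = auto()      # sent from the table, not yet picked up by the kitchen
    IN_KITCHEN = auto()  # being prepared: customer edits must be disabled
    SERVED = auto()

class TableOrder:
    def __init__(self, items):
        self.items = list(items)
        self.state = OrderState.PLACED

    def customer_can_modify(self):
        # The tablet UI should only show an "edit order" button in this state.
        return self.state is OrderState.PLACED

    def modify(self, new_items):
        if not self.customer_can_modify():
            raise PermissionError("Order already in the kitchen; ask the staff.")
        self.items = list(new_items)
```

When the kitchen display picks the order up, the system would flip the state to IN_KITCHEN and the tablet's edit control disappears.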
For example, some restaurants require that items selected by the customers be attended to and can only be placed by the waiter, who has the password required to do so. This prevents fake orders, such as might be entered by playful children, and subsequent disputes over the items ordered. If alcoholic drinks are ordered, it also becomes necessary for the waiter to first verify the age of the customer before sending the order. The technical specifications for implementing such a self-ordering system are more demanding than those of a single cashier-controlled POS station. On the software and hardware side, each tablet on a customer table has to be networked to the cashier POS station and the kitchen computer so that both are continually updated on orders placed. The common database that serves this network must also be capable of serving many concurrent users: cashier, customers, kitchen and perhaps even a drink bar. Developers should note that some popular databases, such as Microsoft Access, may be specified as capable of supporting multiple concurrent users, yet under the stress of a POS system they can fail miserably, resulting in constant errors and corruption of data. POS systems are often designed for a variety of clients, and can be programmed by the end users to suit their needs. Some large clients write their own specifications for vendors to implement. In some cases, POS systems are sold and supported by third-party distributors, while in other cases they are sold and supported directly by the vendor. The selection of a restaurant POS system is critical to the restaurant's daily operation and is a major investment that the restaurant's management and staff must live with for many years. The restaurant POS system interfaces with all phases of the restaurant operation and with everyone involved with the restaurant, including guests, suppliers, employees, managers and owners. The selection of a restaurant POS system is a complex process that should be undertaken by the restaurant owner and not delegated to an employee. The purchase process can be summarized into three steps: design, compare and negotiate. The design step requires research to determine which restaurant POS features are needed for the restaurant operation. With this information the restaurant owner or manager can compare various restaurant POS solutions to determine which systems meet their requirements. The final step is to negotiate the price, payment terms, included training, initial warranty and ongoing support costs.

Accounting forensics

POS systems record sales for business and tax purposes. Illegal software dubbed "zappers" can be used on POS devices to falsify these records with a view to evading the payment of taxes. In some countries, legislation is being introduced to make cash register systems more secure. For example, the French treasury is estimated to be failing to collect approximately €14 billion of VAT revenue each year. The Finance Bill of 2016 is intended to address some of this loss by making it compulsory for taxpayers to operate on "secure systems". Therefore, from 1 January 2018, all retail businesses in France are required to record customer payments using certified secure accounting software or cash register systems. A certified cash register system must provide for the (i) inalterability, (ii) security, (iii) storage and (iv) archiving of data.
All businesses required to comply must obtain a certificate from the cash register system provider certifying that the system meets these requirements, because VAT taxpayers may need to provide this certificate to the tax authorities to show that their cash management system fulfills the new requirements. If the business cannot provide this certificate to the tax authorities, it may be fined. And if the tax authorities can demonstrate fraudulent use of the system, both the business and the software provider can face tax penalties, fines and criminal sanctions. Certification can be obtained either from a body accredited by the French Accreditation Committee (Comité français d'accréditation, or COFRAC) or from the software provider of the cash register system.

Security

Despite the more advanced technology of a POS system as compared to a simple cash register, the POS system is still vulnerable to employee theft through the sale window. A dishonest cashier at a retail outlet can collude with a friend who pretends to be just another customer. During checkout, the cashier can bypass scanning certain items or enter a lower quantity for some items, thus profiting from the "free" goods. The ability of a POS system to void a closed sale receipt for refund purposes without needing a password from an authorized superior also represents a security loophole. Even a function to issue a receipt with a negative amount, which can be useful under certain circumstances, can be exploited by a cashier to easily lift money from the cash drawer. In order to prevent such employee theft, it is crucial for a POS system to provide an admin window for the boss or administrator to generate and inspect a daily list of sale receipts, especially pertaining to the frequency of receipts cancelled before completion, refunded receipts and negative receipts. This is one effective way to alert the company to any suspicious activity, such as a high number of cancelled sales by a certain cashier, and to take monitoring action. To further deter employee theft, the sale counter should also be equipped with a closed-circuit television camera pointed at the POS system to monitor and record all activity. At the back end, price and other changes, like discounts to inventory items through the administration module, should also be secured with passwords provided only to trusted administrators. Any changes made should be logged and capable of being subsequently retrieved for inspection. The sale records and inventory are highly important to the business because they provide very useful information to the company in terms of customer preferences, customer membership particulars, the top-selling products, the vendors and the margins the company is getting from them, and the company's monthly total revenue and cost, to name a few. It is therefore important that reports on these matters generated at the administrative back end be restricted to trusted personnel. The database from which these reports are generated should also be secured via passwords or via encryption of the data stored in the database, so as to prevent it from being copied or tampered with. Despite all such precautions and more, the POS system can never be made entirely watertight against internal misuse if a clever but dishonest employee knows how to exploit many of its otherwise useful capabilities.
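The daily exception report described above can be sketched as a single query. This is a minimal sketch: the receipts table, its columns and the review threshold are assumptions for illustration, not from any particular product.

```python
import sqlite3

SUSPICIOUS_RECEIPTS = """
    SELECT cashier_id,
           SUM(status = 'cancelled') AS cancelled,
           SUM(status = 'refunded')  AS refunded,
           SUM(total < 0)            AS negative
    FROM receipts
    WHERE date(created_at) = date('now')
    GROUP BY cashier_id
    HAVING SUM(status = 'cancelled') + SUM(status = 'refunded')
           + SUM(total < 0) > 5      -- review threshold; tune per store
    ORDER BY 2 DESC
"""

def daily_exception_report(db_path):
    """Flag cashiers with unusually many cancelled, refunded or
    negative receipts today, for the administrator's daily inspection."""
    with sqlite3.connect(db_path) as db:
        return db.execute(SUSPICIOUS_RECEIPTS).fetchall()
```

A fixed threshold is only a starting point; a real deployment would more likely compare each cashier against the store's historical baseline.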
News reports on POS system hacking show that hackers are more interested in stealing credit card information than anything else. The ease and advantage offered by the ability of a POS system to integrate credit card processing thus have a downside. In 2011, hackers were able to steal credit card data from 80,000 customers because Subway's security and POS configuration standards for PCI compliance (which governs credit card and debit card payment systems security) were "directly and blatantly disregarded" by Subway franchisees. In June 2016, several hundred of Wendy's fast food restaurants had their POS systems hacked via illegally installed malware. The report goes on to say that "the number of franchise restaurants impacted by these cyber security attacks is now expected to be considerably higher than the 300 restaurants already implicated" and that the "hackers made hundreds of thousands of fraudulent purchases on credit and debit cards issued by various financial institutions after breaching Wendy's computer systems late last year". These exploits were only made possible because payment cards were processed through the POS system, allowing the malware to either intercept card data during processing or steal and transmit unencrypted card data stored in the system database. In April 2017, security researchers identified critical vulnerabilities in point of sale systems developed by SAP and Oracle and commented, "POS systems are plagued by vulnerabilities, and incidents occurred because their security drawbacks came under the spotlight." If successfully exploited, these vulnerabilities provide a perpetrator with access to every legitimate function of the system, such as changing prices, and remotely starting and stopping terminals. To illustrate the attack vector, the researchers used the example of hacking a POS system to change the price of a MacBook to $1. The security issues were reported to the vendor, and a patch was released soon after the notification. Oracle later confirmed that the security bug affected over 300,000 Oracle POS systems. In some countries, credit and debit cards are only processed via payment terminals. Thus one may see quite a number of such terminals for different cards cluttering up a sale counter. This inconvenience is, however, offset by the fact that credit and debit card data is far less vulnerable to hackers, unlike when payment cards are processed through the POS system, where security is contingent upon the actions taken by end-users and developers. With the launch of mobile payment, particularly Android Pay and Apple Pay in 2015, it was expected that, because of its greater convenience coupled with good security features, mobile payment would eventually eclipse other types of payment services, including the use of payment terminals. However, for mobile payment to go fully mainstream, mobile devices like smartphones that are NFC-enabled must first become universal. This would be a matter of several years from the time of this writing (2017), as more and more models of new smartphones were expected to become NFC-enabled for such a purpose. For instance, the iPhone 6 is fully NFC-enabled for mobile payment, while the iPhone 5 and older models are not. The aforesaid security risks of processing payment cards through a POS system would then be greatly diminished.
See also

EFTPOS
ISO 8583
JavaPOS
Point of sale companies (category)
Comparison of shopping cart software (may or may not work together with EPOS software)
Point of sale display
Point-of-sale malware
Payment terminal
POSXML
Self checkout
Standard Interchange Language
UnifiedPOS
Back-office software
Windows Embedded Industry (formerly Windows Embedded POSReady), an operating system largely used on POS machines
221537
https://en.wikipedia.org/wiki/Exterior%20algebra
Exterior algebra
In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors $u$ and $v$, denoted by $u \wedge v$, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of $u \wedge v$ can be interpreted as the area of the parallelogram with sides $u$ and $v$, which in three dimensions can also be computed using the cross product of the two vectors. More generally, all parallel plane surfaces with the same orientation and area have the same bivector as a measure of their oriented area. Like the cross product, the exterior product is anticommutative, meaning that $u \wedge v = -(v \wedge u)$ for all vectors $u$ and $v$, but, unlike the cross product, the exterior product is associative. When regarded in this manner, the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number $k$ of vectors can be defined and is sometimes called a $k$-blade. It lives in a space known as the $k$-th exterior power. The magnitude of the resulting $k$-blade is the oriented hypervolume of the $k$-dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors. The exterior algebra, or Grassmann algebra after Hermann Grassmann, is the algebraic system whose product is the exterior product. The exterior algebra provides an algebraic setting in which to answer geometric questions. For instance, blades have a concrete geometric interpretation, and objects in the exterior algebra can be manipulated according to a set of unambiguous rules. The exterior algebra contains objects that are not only $k$-blades, but sums of $k$-blades; such a sum is called a $k$-vector. The $k$-blades, because they are simple products of vectors, are called the simple elements of the algebra. The rank of any $k$-vector is defined to be the smallest number of simple elements of which it is a sum. The exterior product extends to the full exterior algebra, so that it makes sense to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative algebra, which means that $(\alpha \wedge \beta) \wedge \gamma = \alpha \wedge (\beta \wedge \gamma)$ for any elements $\alpha$, $\beta$, $\gamma$. The $k$-vectors have degree $k$, meaning that they are sums of products of $k$ vectors. When elements of different degrees are multiplied, the degrees add like multiplication of polynomials. This means that the exterior algebra is a graded algebra. The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other vector-like objects such as vector fields or functions. In full generality, the exterior algebra can be defined for modules over a commutative ring, and for other structures of interest in abstract algebra. It is one of these more general constructions where the exterior algebra finds one of its most important applications, where it appears as the algebra of differential forms that is fundamental in areas that use differential geometry. The exterior algebra also has many algebraic properties that make it a convenient tool in algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which means that it is compatible in a certain way with linear transformations of vector spaces.
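In symbols, the rules just described read as follows (a compact restatement, using $u, v, w$ for vectors and $\alpha \in \Lambda^k(V)$, $\beta \in \Lambda^p(V)$ for general homogeneous elements):

```latex
u \wedge v = -(v \wedge u), \qquad v \wedge v = 0,
\qquad (u \wedge v) \wedge w = u \wedge (v \wedge w),
\qquad \alpha \wedge \beta \in \Lambda^{k+p}(V).
```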
The exterior algebra is one example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with the exterior product. This dual algebra is precisely the algebra of alternating multilinear forms, and the pairing between the exterior algebra and its dual is given by the interior product.

Motivating examples

Areas in the plane

The Cartesian plane $\mathbf{R}^2$ is a real vector space equipped with a basis consisting of a pair of unit vectors $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$. Suppose that $\mathbf{v} = a\,\mathbf{e}_1 + b\,\mathbf{e}_2$ and $\mathbf{w} = c\,\mathbf{e}_1 + d\,\mathbf{e}_2$ are a pair of given vectors in $\mathbf{R}^2$, written in components. There is a unique parallelogram having $\mathbf{v}$ and $\mathbf{w}$ as two of its sides. The area of this parallelogram is given by the standard determinant formula: $\text{Area} = \left| \det \begin{pmatrix} a & c \\ b & d \end{pmatrix} \right| = |ad - bc|$. Consider now the exterior product of $\mathbf{v}$ and $\mathbf{w}$:

$\mathbf{v} \wedge \mathbf{w} = (a\,\mathbf{e}_1 + b\,\mathbf{e}_2) \wedge (c\,\mathbf{e}_1 + d\,\mathbf{e}_2) = ac\,\mathbf{e}_1 \wedge \mathbf{e}_1 + ad\,\mathbf{e}_1 \wedge \mathbf{e}_2 + bc\,\mathbf{e}_2 \wedge \mathbf{e}_1 + bd\,\mathbf{e}_2 \wedge \mathbf{e}_2 = (ad - bc)\,\mathbf{e}_1 \wedge \mathbf{e}_2,$

where the first step uses the distributive law for the exterior product, and the last uses the fact that the exterior product is alternating, and in particular $\mathbf{e}_1 \wedge \mathbf{e}_1 = \mathbf{e}_2 \wedge \mathbf{e}_2 = 0$. (The fact that the exterior product is alternating also forces $\mathbf{e}_2 \wedge \mathbf{e}_1 = -(\mathbf{e}_1 \wedge \mathbf{e}_2)$.) Note that the coefficient in this last expression is precisely the determinant of the matrix with columns $\mathbf{v}$ and $\mathbf{w}$. The fact that this may be positive or negative has the intuitive meaning that $\mathbf{v}$ and $\mathbf{w}$ may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the sign determines its orientation. The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if $A(\mathbf{v}, \mathbf{w})$ denotes the signed area of the parallelogram of which the pair of vectors $\mathbf{v}$ and $\mathbf{w}$ form two adjacent sides, then $A$ must satisfy the following properties:

$A(r\mathbf{v}, s\mathbf{w}) = rs\,A(\mathbf{v}, \mathbf{w})$ for any real numbers $r$ and $s$, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
$A(\mathbf{v}, \mathbf{v}) = 0$, since the area of the degenerate parallelogram determined by $\mathbf{v}$ (i.e., a line segment) is zero.
$A(\mathbf{w}, \mathbf{v}) = -A(\mathbf{v}, \mathbf{w})$, since interchanging the roles of $\mathbf{v}$ and $\mathbf{w}$ reverses the orientation of the parallelogram.
$A(\mathbf{v} + r\mathbf{w}, \mathbf{w}) = A(\mathbf{v}, \mathbf{w})$ for any real number $r$, since adding a multiple of $\mathbf{w}$ to $\mathbf{v}$ affects neither the base nor the height of the parallelogram and consequently preserves its area.
$A(\mathbf{e}_1, \mathbf{e}_2) = 1$, since the area of the unit square is one.

With the exception of the last property, the exterior product of two vectors satisfies the same properties as the area. In a certain sense, the exterior product generalizes the final property by allowing the area of a parallelogram to be compared to that of any chosen parallelogram in a parallel plane (here, the one with sides $\mathbf{e}_1$ and $\mathbf{e}_2$). In other words, the exterior product provides a basis-independent formulation of area.

Cross and triple products

For vectors in a 3-dimensional oriented vector space with a bilinear scalar product, the exterior algebra is closely related to the cross product and triple product. Using the standard basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$, the exterior product of a pair of vectors $\mathbf{u} = u_1\,\mathbf{e}_1 + u_2\,\mathbf{e}_2 + u_3\,\mathbf{e}_3$ and $\mathbf{v} = v_1\,\mathbf{e}_1 + v_2\,\mathbf{e}_2 + v_3\,\mathbf{e}_3$ is

$\mathbf{u} \wedge \mathbf{v} = (u_1 v_2 - u_2 v_1)\,\mathbf{e}_1 \wedge \mathbf{e}_2 + (u_3 v_1 - u_1 v_3)\,\mathbf{e}_3 \wedge \mathbf{e}_1 + (u_2 v_3 - u_3 v_2)\,\mathbf{e}_2 \wedge \mathbf{e}_3,$

where $(\mathbf{e}_1 \wedge \mathbf{e}_2, \mathbf{e}_3 \wedge \mathbf{e}_1, \mathbf{e}_2 \wedge \mathbf{e}_3)$ is a basis for the three-dimensional space $\Lambda^2(\mathbf{R}^3)$. The coefficients above are the same as those in the usual definition of the cross product of vectors in three dimensions with a given orientation, the only differences being that the exterior product is not an ordinary vector, but instead is a 2-vector, and that the exterior product does not depend on the choice of orientation.
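As a quick check that these coefficients reproduce the cross product (a worked instance, not an additional result), take $\mathbf{u} = \mathbf{e}_1 + 2\,\mathbf{e}_2$ and $\mathbf{v} = \mathbf{e}_3$:

```latex
\mathbf{u} \wedge \mathbf{v}
  = 2\,\mathbf{e}_2 \wedge \mathbf{e}_3 - \mathbf{e}_3 \wedge \mathbf{e}_1,
\qquad
\mathbf{u} \times \mathbf{v} = (2,\, -1,\, 0),
```

so the components of $\mathbf{u} \times \mathbf{v}$ match the coefficients of $\mathbf{e}_2 \wedge \mathbf{e}_3$, $\mathbf{e}_3 \wedge \mathbf{e}_1$ and $\mathbf{e}_1 \wedge \mathbf{e}_2$ in turn.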
Bringing in a third vector $\mathbf{w} = w_1\,\mathbf{e}_1 + w_2\,\mathbf{e}_2 + w_3\,\mathbf{e}_3$, the exterior product of three vectors is

$\mathbf{u} \wedge \mathbf{v} \wedge \mathbf{w} = (u_1 v_2 w_3 + u_2 v_3 w_1 + u_3 v_1 w_2 - u_1 v_3 w_2 - u_2 v_1 w_3 - u_3 v_2 w_1)\,\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3,$

where $\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3$ is the basis vector for the one-dimensional space $\Lambda^3(\mathbf{R}^3)$. The scalar coefficient is the triple product of the three vectors. The cross product and triple product in a three-dimensional Euclidean vector space each admit both geometric and algebraic interpretations. The cross product $\mathbf{u} \times \mathbf{v}$ can be interpreted as a vector which is perpendicular to both $\mathbf{u}$ and $\mathbf{v}$ and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns $\mathbf{u}$ and $\mathbf{v}$. The triple product of $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ is a signed scalar representing a geometric oriented volume. Algebraically, it is the determinant of the matrix with columns $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$. The exterior product in three dimensions allows for similar interpretations: it, too, can be identified with oriented lines, areas, volumes, etc., that are spanned by one, two or more vectors. The exterior product generalizes these geometric notions to all vector spaces and to any number of dimensions, even in the absence of a scalar product.

Formal definitions and algebraic properties

The exterior algebra $\Lambda(V)$ of a vector space $V$ over a field $K$ is defined as the quotient algebra of the tensor algebra $T(V)$ by the two-sided ideal $I$ generated by all elements of the form $x \otimes x$ for $x \in V$ (i.e. all tensors that can be expressed as the tensor product of a vector in $V$ by itself). The ideal $I$ contains the ideal $J$ generated by elements of the form $x \otimes y + y \otimes x$, and these ideals coincide if $\operatorname{char}(K) \neq 2$ (in characteristic 2 these ideals are different except for the zero vector space). So, $\Lambda(V) = T(V)/I$ is an associative algebra. Its multiplication is called the exterior product, and denoted $\wedge$. This means that the product on $\Lambda(V)$ is induced by the tensor product on $T(V)$. As $T^0(V) = K$, $T^1(V) = V$, and $(K \oplus V) \cap I = \{0\}$, the inclusions of $K$ and $V$ in $T(V)$ induce injections of $K$ and $V$ into $\Lambda(V)$. These injections are commonly considered as inclusions, and called natural embeddings, natural injections or natural inclusions. The word canonical is also commonly used in place of natural.

Alternating product

The exterior product is by construction alternating on elements of $V$, which means that $x \wedge x = 0$ for all $x \in V$, by the above construction. It follows that the product is also anticommutative on elements of $V$, for supposing that $x, y \in V$,

$0 = (x + y) \wedge (x + y) = x \wedge x + x \wedge y + y \wedge x + y \wedge y = x \wedge y + y \wedge x,$

hence $x \wedge y = -(y \wedge x)$. More generally, if $\sigma$ is a permutation of the integers $1, \dots, k$, and $x_1, x_2, \dots, x_k$ are elements of $V$, it follows that

$x_{\sigma(1)} \wedge x_{\sigma(2)} \wedge \cdots \wedge x_{\sigma(k)} = \operatorname{sgn}(\sigma)\, x_1 \wedge x_2 \wedge \cdots \wedge x_k,$

where $\operatorname{sgn}(\sigma)$ is the signature of the permutation $\sigma$. In particular, if $x_i = x_j$ for some $i \neq j$, then the following generalization of the alternating property also holds: $x_1 \wedge x_2 \wedge \cdots \wedge x_k = 0$. Together with the distributive property of the exterior product, one further generalization is that $x_1 \wedge x_2 \wedge \cdots \wedge x_k = 0$ if and only if $\{x_1, x_2, \dots, x_k\}$ is a linearly dependent set of vectors.

Exterior power

The $k$th exterior power of $V$, denoted $\Lambda^k(V)$, is the vector subspace of $\Lambda(V)$ spanned by elements of the form $x_1 \wedge x_2 \wedge \cdots \wedge x_k$ with $x_i \in V$. If $\alpha \in \Lambda^k(V)$, then $\alpha$ is said to be a $k$-vector. If, furthermore, $\alpha$ can be expressed as an exterior product of $k$ elements of $V$, then $\alpha$ is said to be decomposable. Although decomposable $k$-vectors span $\Lambda^k(V)$, not every element of $\Lambda^k(V)$ is decomposable. For example, in $\mathbf{R}^4$, the following 2-vector is not decomposable: $\alpha = e_1 \wedge e_2 + e_3 \wedge e_4$. (This is a symplectic form, since $\alpha \wedge \alpha = 2\, e_1 \wedge e_2 \wedge e_3 \wedge e_4 \neq 0$; by contrast, any decomposable 2-vector $u \wedge v$ satisfies $(u \wedge v) \wedge (u \wedge v) = 0$.)

Basis and dimension

If the dimension of $V$ is $n$ and $\{e_1, \dots, e_n\}$ is a basis for $V$, then the set $\{\, e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k} \mid 1 \le i_1 < i_2 < \cdots < i_k \le n \,\}$ is a basis for $\Lambda^k(V)$. The reason is the following: given any exterior product of the form $v_1 \wedge \cdots \wedge v_k$, every vector $v_j$ can be written as a linear combination of the basis vectors $e_i$; using the bilinearity of the exterior product, this can be expanded to a linear combination of exterior products of those basis vectors.
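For instance, a small worked expansion of the kind just described:

```latex
(e_1 + 2 e_3) \wedge e_2
  = e_1 \wedge e_2 + 2\, e_3 \wedge e_2
  = e_1 \wedge e_2 - 2\, e_2 \wedge e_3 ,
```

a linear combination of the basis 2-vectors $e_1 \wedge e_2$ and $e_2 \wedge e_3$.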
Any exterior product in which the same basis vector appears more than once is zero; any exterior product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis -vectors can be computed as the minors of the matrix that describes the vectors in terms of the basis . By counting the basis elements, the dimension of is equal to a binomial coefficient: where is the dimension of the vectors, and is the number of vectors in the product. The binomial coefficient produces the correct result, even for exceptional cases; in particular, for . Any element of the exterior algebra can be written as a sum of -vectors. Hence, as a vector space the exterior algebra is a direct sum (where by convention , the field underlying , and   ), and therefore its dimension is equal to the sum of the binomial coefficients, which is 2. Rank of a k-vector If , then it is possible to express α as a linear combination of decomposable k-vectors: where each α(i) is decomposable, say The rank of the k-vector α is the minimal number of decomposable k-vectors in such an expansion of α. This is similar to the notion of tensor rank. Rank is particularly important in the study of 2-vectors . The rank of a 2-vector α can be identified with half the rank of the matrix of coefficients of α in a basis. Thus if ei is a basis for V, then α can be expressed uniquely as where (the matrix of coefficients is skew-symmetric). The rank of the matrix aij is therefore even, and is twice the rank of the form α. In characteristic 0, the 2-vector α has rank p if and only if and Graded structure The exterior product of a k-vector with a p-vector is a -vector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section gives the exterior algebra the additional structure of a graded algebra, that is Moreover, if is the base field, we have and The exterior product is graded anticommutative, meaning that if and , then In addition to studying the graded structure on the exterior algebra, studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation). Universal property Let be a vector space over the field . Informally, multiplication in is performed by manipulating symbols and imposing a distributive law, an associative law, and using the identity for . Formally, is the "most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative -algebra containing with alternating multiplication on must contain a homomorphic image of . In other words, the exterior algebra has the following universal property: Given any unital associative -algebra and any -linear map such that for every in , then there exists precisely one unital algebra homomorphism such that for all in (here is the natural inclusion of in , see above). To construct the most general algebra that contains and whose multiplication is alternating on , it is natural to start with the most general associative algebra that contains , the tensor algebra , and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal in generated by all elements of the form for in , and define as the quotient (and use as the symbol for multiplication in . It is then straightforward to show that contains and satisfies the above universal property. 
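Two of the statements above, elided in the text, can be written out in standard notation: the graded anticommutativity rule for $\alpha \in \Lambda^k(V)$ and $\beta \in \Lambda^p(V)$, and the universal property for a $K$-linear map $j : V \to A$ into a unital associative $K$-algebra whose values square to zero:

```latex
\alpha \wedge \beta = (-1)^{kp}\, \beta \wedge \alpha ,
\qquad\qquad
j(v)^2 = 0 \ \ \forall v \in V
\;\Longrightarrow\;
\exists!\ \bar{\jmath} : \Lambda(V) \to A \ \text{ with } \ \bar{\jmath} \circ i = j ,
```

where $i : V \to \Lambda(V)$ is the natural inclusion. In particular, two 1-vectors anticommute, while a 2-vector commutes with every homogeneous element, since $(-1)^{2p} = 1$.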
As a consequence of this construction, the operation of assigning to a vector space its exterior algebra is a functor from the category of vector spaces to the category of algebras. Rather than defining first and then identifying the exterior powers as certain subspaces, one may alternatively define the spaces first and then combine them to form the algebra . This approach is often used in differential geometry and is described in the next section. Generalizations Given a commutative ring R and an R-module M, we can define the exterior algebra Λ(M) just as above, as a suitable quotient of the tensor algebra T(M). It will satisfy the analogous universal property. Many of the properties of Λ(M) also require that M be a projective module. Where finite dimensionality is used, the properties further require that M be finitely generated and projective. Generalizations to the most common situations can be found in . Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules. Alternating tensor algebra If K is a field of characteristic 0, then the exterior algebra of a vector space V over K can be canonically identified with the vector subspace of T(V) consisting of antisymmetric tensors. Recall that the exterior algebra is the quotient of T(V) by the ideal I generated by elements of the form . Let Tr(V) be the space of homogeneous tensors of degree r. This is spanned by decomposable tensors The antisymmetrization (or sometimes the skew-symmetrization) of a decomposable tensor is defined by where the sum is taken over the symmetric group of permutations on the symbols This extends by linearity and homogeneity to an operation, also denoted by Alt, on the full tensor algebra T(V). The image Alt(T(V)) is the alternating tensor algebra, denoted A(V). This is a vector subspace of T(V), and it inherits the structure of a graded vector space from that on T(V). It carries an associative graded product defined by Although this product differs from the tensor product, the kernel of Alt is precisely the ideal I (again, assuming that K has characteristic 0), and there is a canonical isomorphism Index notation Suppose that V has finite dimension n, and that a basis of V is given. then any alternating tensor can be written in index notation as where ti1⋅⋅⋅ir is completely antisymmetric in its indices. The exterior product of two alternating tensors t and s of ranks r and p is given by The components of this tensor are precisely the skew part of the components of the tensor product , denoted by square brackets on the indices: The interior product may also be described in index notation as follows. Let be an antisymmetric tensor of rank r. Then, for , iαt is an alternating tensor of rank , given by where n is the dimension of V. Duality Alternating operators Given two vector spaces V and X and a natural number k, an alternating operator from Vk to X is a multilinear map such that whenever v1, ..., vk are linearly dependent vectors in V, then The map which associates to vectors from their exterior product, i.e. their corresponding -vector, is also alternating. 
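For reference, the antisymmetrization operator described above acts, in the usual characteristic-0 convention, by:

```latex
\operatorname{Alt}(v_1 \otimes v_2 \otimes \cdots \otimes v_r)
  = \frac{1}{r!} \sum_{\sigma \in S_r} \operatorname{sgn}(\sigma)\,
    v_{\sigma(1)} \otimes v_{\sigma(2)} \otimes \cdots \otimes v_{\sigma(r)} ,
```

so that, for example, $\operatorname{Alt}(v \otimes w) = \tfrac{1}{2}(v \otimes w - w \otimes v)$. The wedge map $(v_1, \dots, v_k) \mapsto v_1 \wedge \cdots \wedge v_k$ mentioned at the end of the preceding paragraph is the basic example of an alternating operator.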
In fact, this map is the "most general" alternating operator defined on ; given any other alternating operator , there exists a unique linear map with . This universal property characterizes the space and can serve as its definition. Alternating multilinear forms The above discussion specializes to the case when , the base field. In this case an alternating multilinear function is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the exterior power, the space of alternating forms of degree k on V is naturally isomorphic with the dual vector space (ΛkV)∗. If V is finite-dimensional, then the latter is to Λk(V∗). In particular, if V is n-dimensional, the dimension of the space of alternating maps from Vk to K is the binomial coefficient Under such identification, the exterior product takes a concrete form: it produces a new anti-symmetric map from two given ones. Suppose and are two anti-symmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their exterior product is the sum of the numbers of their variables. Depending on the choice of identification of elements of exterior power with multilinear forms, the exterior product is defined as or as where, if the characteristic of the base field K is 0, the alternation Alt of a multilinear map is defined to be the average of the sign-adjusted values over all the permutations of its variables: When the field K has finite characteristic, an equivalent version of the second expression without any factorials or any constants is well-defined: where here is the subset of (k,m) shuffles: permutations σ of the set such that , and . Interior product Suppose that V is finite-dimensional. If V∗ denotes the dual space to the vector space V, then for each , it is possible to define an antiderivation on the algebra Λ(V), This derivation is called the interior product with α, or sometimes the insertion operator, or contraction by α. Suppose that . Then w is a multilinear mapping of V∗ to K, so it is defined by its values on the k-fold Cartesian product . If u1, u2, ..., uk−1 are elements of V∗, then define Additionally, let whenever f is a pure scalar (i.e., belonging to Λ0V). Axiomatic characterization and properties The interior product satisfies the following properties: For each k and each ,(By convention, ) If v is an element of V (= Λ1V), then is the dual pairing between elements of V and elements of V∗. For each , iα is a graded derivation of degree −1: These three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case. Further properties of the interior product include: Hodge duality Suppose that V has finite dimension n. Then the interior product induces a canonical isomorphism of vector spaces by the recursive definition In the geometrical setting, a non-zero element of the top exterior power Λn(V) (which is a one-dimensional vector space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to ambiguity). The name orientation form comes from the fact that a choice of preferred top element determines an orientation of the whole exterior algebra, since it is tantamount to fixing an ordered basis of the vector space. 
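For example (a standard instance of the preceding remark, with the usual conventions on $\mathbf{R}^n$): with an ordered basis $(e_1, \dots, e_n)$, the standard volume form is

```latex
\sigma = e_1 \wedge e_2 \wedge \cdots \wedge e_n ,
```

and any other non-zero element of $\Lambda^n(V)$ is a scalar multiple $c\,\sigma$; it determines the same orientation exactly when $c > 0$.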
Relative to the preferred volume form σ, the isomorphism is given explicitly by If, in addition to a volume form, the vector space V is equipped with an inner product identifying V with V∗, then the resulting isomorphism is called the Hodge star operator, which maps an element to its Hodge dual: The composition of with itself maps and is always a scalar multiple of the identity map. In most applications, the volume form is compatible with the inner product in the sense that it is an exterior product of an orthonormal basis of V. In this case, where id is the identity mapping, and the inner product has metric signature — p pluses and q minuses. Inner product For V a finite-dimensional space, an inner product (or a pseudo-Euclidean inner product) on V defines an isomorphism of V with V∗, and so also an isomorphism of ΛkV with (ΛkV)∗. The pairing between these two spaces also takes the form of an inner product. On decomposable k-vectors, the determinant of the matrix of inner products. In the special case , the inner product is the square norm of the k-vector, given by the determinant of the Gramian matrix . This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on ΛkV. If ei, , form an orthonormal basis of V, then the vectors of the form constitute an orthonormal basis for Λk(V). With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for , , and , where is the musical isomorphism, the linear functional defined by for all . This property completely characterizes the inner product on the exterior algebra. Indeed, more generally for , , and , iteration of the above adjoint properties gives where now is the dual l-vector defined by for all . Bialgebra structure There is a correspondence between the graded dual of the graded algebra Λ(V) and alternating multilinear forms on V. The exterior algebra (as well as the symmetric algebra) inherits a bialgebra structure, and, indeed, a Hopf algebra structure, from the tensor algebra. See the article on tensor algebras for a detailed treatment of the topic. The exterior product of multilinear forms defined above is dual to a coproduct defined on Λ(V), giving the structure of a coalgebra. The coproduct is a linear function which is given by on elements v∈V. The symbol 1 stands for the unit element of the field K. Recall that so that the above really does lie in This definition of the coproduct is lifted to the full space Λ(V) by (linear) homomorphism. The correct form of this homomorphism is not what one might naively write, but has to be the one carefully defined in the coalgebra article. In this case, one obtains Expanding this out in detail, one obtains the following expression on decomposable elements: where the second summation is taken over all -shuffles. The above is written with a notational trick, to keep track of the field element 1: the trick is to write , and this is shuffled into various locations during the expansion of the sum over shuffles. The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right. Observe that the coproduct preserves the grading of the algebra. 
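As a small worked instance of the coproduct on decomposable elements, following the usual sign conventions (an assumption, since the text's own formula is elided), for vectors $v, w \in V$:

```latex
\Delta(v) = v \otimes 1 + 1 \otimes v, \qquad
\Delta(v \wedge w)
  = (v \wedge w) \otimes 1 + v \otimes w - w \otimes v + 1 \otimes (v \wedge w) .
```

The two middle terms record the $(1,1)$-shuffles, with the minus sign arising when $w$ is riffled past $v$.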
Extending to the full space Λ(V), one has The tensor symbol ⊗ used in this section should be understood with some caution: it is not the same tensor symbol as the one being used in the definition of the alternating product. Intuitively, it is perhaps easiest to think it as just another, but different, tensor product: it is still (bi-)linear, as tensor products should be, but it is the product that is appropriate for the definition of a bialgebra, that is, for creating the object Any lingering doubt can be shaken by pondering the equalities and , which follow from the definition of the coalgebra, as opposed to naive manipulations involving the tensor and wedge symbols. This distinction is developed in greater detail in the article on tensor algebras. Here, there is much less of a problem, in that the alternating product Λ clearly corresponds to multiplication in the bialgebra, leaving the symbol ⊗ free for use in the definition of the bialgebra. In practice, this presents no particular problem, as long as one avoids the fatal trap of replacing alternating sums of ⊗ by the wedge symbol, with one exception. One can construct an alternating product from ⊗, with the understanding that it works in a different space. Immediately below, an example is given: the alternating product for the dual space can be given in terms of the coproduct. The construction of the bialgebra here parallels the construction in the tensor algebra article almost exactly, except for the need to correctly track the alternating signs for the exterior algebra. In terms of the coproduct, the exterior product on the dual space is just the graded dual of the coproduct: where the tensor product on the right-hand side is of multilinear linear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, , where ε is the counit, as defined presently). The counit is the homomorphism that returns the 0-graded component of its argument. The coproduct and counit, along with the exterior product, define the structure of a bialgebra on the exterior algebra. With an antipode defined on homogeneous elements by , the exterior algebra is furthermore a Hopf algebra. Functoriality Suppose that V and W are a pair of vector spaces and is a linear map. Then, by the universal property, there exists a unique homomorphism of graded algebras such that In particular, Λ(f) preserves homogeneous degree. The k-graded components of Λ(f) are given on decomposable elements by Let The components of the transformation Λk(f) relative to a basis of V and W is the matrix of minors of f. In particular, if and V is of finite dimension n, then Λn(f) is a mapping of a one-dimensional vector space ΛnV to itself, and is therefore given by a scalar: the determinant of f. Exactness If is a short exact sequence of vector spaces, then is an exact sequence of graded vector spaces, as is Direct sums In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras: This is a graded isomorphism; i.e., In greater generality, for a short exact sequence of vector spaces , there is a natural filtration where for is spanned by elements of the form for and . The corresponding quotients admit a natural isomorphism given by In particular, if U is 1-dimensional then is exact, and if W is 1-dimensional then is exact. Applications Linear algebra In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. 
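In symbols (restating the formulas elided above), $\Lambda(f)$ acts on decomposable elements by

```latex
\Lambda^k(f)(x_1 \wedge x_2 \wedge \cdots \wedge x_k)
  = f(x_1) \wedge f(x_2) \wedge \cdots \wedge f(x_k) ,
```

and for $f : V \to V$ with $\dim V = n$, the top component $\Lambda^n(f) : \Lambda^n V \to \Lambda^n V$ is multiplication by the scalar $\det f$.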
For instance, it is well known that the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix (with a sign to track orientation). This suggests that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the minors of a matrix can be defined by looking at the exterior products of column vectors chosen $k$ at a time. These ideas can be extended not just to matrices but to linear transformations as well: the determinant of a linear transformation is the factor by which it scales the oriented volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation.

Technical details: Definitions

Let $V$ be an $n$-dimensional vector space over a field $K$ with basis $\{e_1, \dots, e_n\}$. For a linear operator $A : V \to V$, define $\Lambda^k(A)$ on simple tensors by $\Lambda^k(A)(v_1 \wedge \cdots \wedge v_k) = A v_1 \wedge \cdots \wedge A v_k$, and expand the definition linearly to all tensors. More generally, we can define related operators on simple tensors by choosing $k$ components on which $A$ would act, then summing up all results obtained from different choices. Since $\Lambda^n(V)$ is 1-dimensional with basis $e_1 \wedge \cdots \wedge e_n$, we can identify $\Lambda^n(A)$ with the unique number $\det A$ satisfying $\Lambda^n(A)(e_1 \wedge \cdots \wedge e_n) = (\det A)\,(e_1 \wedge \cdots \wedge e_n)$. One can also define the exterior transpose to be the unique operator satisfying the corresponding adjoint relation. These definitions are equivalent to the other versions.

Basic properties

All results obtained from other definitions of the determinant, trace and adjoint can be obtained from this definition (since these definitions are equivalent). Here are some basic properties related to these new definitions: the construction is $K$-linear. We have a canonical isomorphism between the exterior powers of $V$ and of its dual; however, there is no canonical isomorphism between $\Lambda^k(V)$ and $\Lambda^{n-k}(V)$ themselves. The entries of the transposed matrix of $\Lambda^k(A)$ are $k \times k$-minors of $A$. The characteristic polynomial of $A$ can be given in terms of the traces of the exterior powers $\Lambda^k(A)$.

Leverrier's algorithm

The coefficients of the terms in the characteristic polynomial also appear in these expressions. Leverrier's algorithm is an economical way of computing them; in the standard (Faddeev–LeVerrier) form of the recurrence: set $M_0 = 0$ and $c_n = 1$; for $k = 1, \dots, n$, compute $M_k = A M_{k-1} + c_{n-k+1} I$ and $c_{n-k} = -\tfrac{1}{k}\,\operatorname{tr}(A M_k)$.

Physics

In physics, many quantities are naturally represented by alternating operators. For example, if the motion of a charged particle is described by velocity and acceleration vectors in four-dimensional spacetime, then normalization of the velocity vector requires that the electromagnetic force must be an alternating operator on the velocity. Its six degrees of freedom are identified with the electric and magnetic fields.

Linear geometry

The decomposable $k$-vectors have geometric interpretations: the bivector $u \wedge v$ represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides $u$ and $v$. Analogously, the 3-vector $u \wedge v \wedge w$ represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges $u$, $v$, and $w$.

Projective geometry

Decomposable $k$-vectors in $\Lambda^k V$ correspond to weighted $k$-dimensional linear subspaces of $V$. In particular, the Grassmannian of $k$-dimensional subspaces of $V$, denoted $\mathrm{Gr}_k(V)$, can be naturally identified with an algebraic subvariety of the projective space $\mathbf{P}(\Lambda^k V)$. This is called the Plücker embedding.

Differential geometry

The exterior algebra has notable applications in differential geometry, where it is used to define differential forms.
Differential forms are mathematical objects that evaluate the length of vectors, areas of parallelograms, and volumes of higher-dimensional bodies, so they can be integrated over curves, surfaces and higher dimensional manifolds in a way that generalizes the line integrals and surface integrals from calculus. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree k is a linear functional on the k-th exterior power of the tangent space. As a consequence, the exterior product of multilinear forms defines a natural exterior product for differential forms. Differential forms play a major role in diverse areas of differential geometry. In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential graded algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a cochain complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds. Representation theory In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group; see fundamental representation. Superspace The exterior algebra over the complex numbers is the archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. A single element of the exterior algebra is called a supernumber or Grassmann number. The exterior algebra itself is then just a one-dimensional superspace: it is just the set of all of the points in the exterior algebra. The topology on this space is essentially the weak topology, the open sets being the cylinder sets. An -dimensional superspace is just the -fold product of exterior algebras. Lie algebra homology Let L be a Lie algebra over a field K, then it is possible to define the structure of a chain complex on the exterior algebra of L. This is a K-linear mapping defined on decomposable elements by The Jacobi identity holds if and only if , and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra L to be a Lie algebra. Moreover, in that case ΛL is a chain complex with boundary operator ∂. The homology associated to this complex is the Lie algebra homology. Homological algebra The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra. History The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of Ausdehnungslehre, or Theory of Extension. This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus for which he claimed priority over Grassmann. The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. 
It was thus a calculus, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms. In particular, this new development allowed for an axiomatic characterization of dimension, a property that had previously only been examined from the coordinate point of view. The import of this new theory of vectors and multivectors was lost to mid 19th century mathematicians, until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms. A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing. See also Exterior calculus identities Alternating algebra Symmetric algebra, the symmetric analog Clifford algebra, a generalization of exterior algebra using a nonzero quadratic form Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form Multilinear algebra Tensor algebra Geometric algebra Koszul complex Wedge sum Notes References Mathematical references Includes a treatment of alternating tensors and alternating forms, as well as a detailed discussion of Hodge duality from the perspective adopted in this article. This is the main mathematical reference for the article. It introduces the exterior algebra of a module over a commutative ring (although this article specializes primarily to the case when the ring is a field), including a discussion of the universal property, functoriality, duality, and the bialgebra structure. See §III.7 and §III.11. This book contains applications of exterior algebras to problems in partial differential equations. Rank and related concepts are developed in the early chapters. Chapter XVI sections 6–10 give a more elementary account of the exterior algebra, including duality, determinants and minors, and alternating forms. Contains a classical treatment of the exterior algebra as alternating tensors, and applications to differential geometry. Historical references (The Linear Extension Theory – A new Branch of Mathematics) alternative reference ; . Other references and further reading An introduction to the exterior algebra, and geometric algebra, with a focus on applications. Also includes a history section and bibliography. Includes applications of the exterior algebra to differential forms, specifically focused on integration and Stokes's theorem. The notation ΛkV in this text is used to mean the space of alternating k-forms on V; i.e., for Spivak ΛkV is what this article would call ΛkV∗. Spivak discusses this in Addendum 4. Includes an elementary treatment of the axiomatization of determinants as signed areas, volumes, and higher-dimensional volumes. Wendell H. Fleming (1965) Functions of Several Variables, Addison-Wesley. Chapter 6: Exterior algebra and differential calculus, pages 205–38. This textbook in multivariate calculus introduces the exterior algebra of differential forms adroitly into the calculus sequence for colleges. An introduction to the coordinate-free approach in basic finite-dimensional linear algebra, using exterior products. 
Chapter 10: The Exterior Product and Exterior Algebras
"The Grassmann method in projective geometry", a compilation of English translations of three notes by Cesare Burali-Forti on the application of exterior algebra to projective geometry
C. Burali-Forti, "Introduction to Differential Geometry, following the method of H. Grassmann", an English translation of an early book on the geometric applications of exterior algebras
"Mechanics, according to the principles of the theory of extension", an English translation of one of Grassmann's papers on the applications of exterior algebra
222333
https://en.wikipedia.org/wiki/Western%20Digital
Western Digital
Western Digital Corporation (WDC, commonly known as Western Digital or WD) is an American computer hard disk drive manufacturer and data storage company, headquartered in San Jose, California. It designs, manufactures and sells data technology products, including storage devices, data center systems and cloud storage services. Western Digital has a long history in the electronics industry as an integrated circuit maker and a storage products company. It is one of the largest computer hard disk drive manufacturers, along with producing SSDs and flash memory devices. Its competitors include the data management and storage companies Seagate Technology and Micron Technology. History 1970s Western Digital was founded on April 23, 1970, by Alvin B. Phillips, a Motorola employee, as General Digital, initially a manufacturer of MOS test equipment. It was originally based in Santa Ana, California, and would go on to become one of the largest technology firms headquartered in Orange County. It rapidly became a specialty semiconductor maker, with start-up capital provided by several individual investors and industrial giant Emerson Electric. Around July 1971, it adopted its current name and soon introduced its first product, the WD1402A UART. During the early 1970s, the company focused on making and selling calculator chips, and by 1975, Western Digital was the largest independent calculator chip maker in the world. The oil crisis of the mid-1970s and the bankruptcy of its biggest calculator customer, Bowmar Instrument, changed its fortunes, however, and in 1976 Western Digital declared Chapter 11 bankruptcy. After this, Emerson Electric withdrew their support of the company. Chuck Missler joined Western Digital as chairman and chief executive in June 1977, and became the largest shareholder of Western Digital. In 1973, Western Digital established its Malaysian plant, initially to manufacture semiconductors. Western Digital introduced several products during the late 1970s, including the MCP-1600 multi-chip, microcoded CPU. The MCP-1600 was used to implement DEC's LSI-11 system, the WD16, and their own Pascal MicroEngine microcomputer which ran the UCSD p-System Version III and UCSD Pascal. However, the WD integrated circuit that arguably drove Western's forward integration was the FD1771, one of the first single-chip floppy disk drive formatter/controllers, which could replace significant amounts of TTL logic. 1980s The FD1771 and its kin were Western Digital's first entry into the data storage industry; by the early 1980s, they were making hard disk drive controllers, and in 1983, they won the contract to provide IBM with controllers for the PC/AT. That controller, the WD1003, became the basis of the ATA interface (which Western Digital developed along with Compaq and Control Data Corporation's MPI division, now owned by Seagate Technology), starting in 1986. Throughout most of the 1980s, the family of controllers based on the WD1003 provided the bulk of Western Digital's revenues and profits, and for a time generated enormous corporate growth. Much of the mid-to-late 1980s saw an effort by Western Digital to use the profits from their ATA storage controllers to become a general-purpose OEM hardware supplier for the PC industry. As a result, Western Digital purchased a number of hardware companies. These included graphics cards (through its Paradise subsidiary, purchased 1986, which became Western Digital Imaging), core logic chipsets (by purchasing Faraday Electronics Inc. 
in 1987), SCSI controller chips for disk and tape devices (by purchasing ADSI in 1986), and networking (WD8003, WD8013 Ethernet and WD8003S StarLAN). They did well (especially Paradise, which produced one of the best VGA cards of the era), but storage-related chips and disk controllers were their biggest money makers. In 1986, they introduced the WD33C93 single-chip SCSI interface, which was used in the first 16-bit bus mastering SCSI host adapter, the WD7000 "FASST"; in 1987 they introduced the WD37C65, a single-chip implementation of the PC/AT's floppy disk controller circuitry, and the grandfather of modern super I/O chips; in 1988 they introduced the WD42C22 "Vanilla", the first single-chip ATA hard disk controller. 1988 also brought what would be the biggest change in Western Digital's history. That year, Western Digital bought the hard drive production assets of PC hardware maker Tandon; the first products of that union under Western Digital's own name were the "Centaur" series of ATA and XT attachment drives. 1990s By 1991, things were starting to slow down, as the PC industry moved from ST-506 and ESDI drives to ATA and SCSI and was thus buying fewer hard disk controller boards. That year saw the rise of Western Digital's Caviar drives, brand-new designs that used the latest in embedded servo and computerized diagnostic systems. The success of the Caviar drives eventually led Western Digital to sell off some of its divisions. Paradise was sold to Philips and has since disappeared. Its networking and floppy drive controller divisions went to SMC Networks, and its SCSI chip business went to Future Domain, which was later bought out by market leader Adaptec. Around 1995, the technological lead that the Caviar drives had enjoyed was eclipsed by newer offerings from other companies, especially Quantum Corp., and Western Digital fell into a slump. In 1994, Western Digital began producing hard drives at its Malaysian factory, employing 13,000 people. Products and ideas of this time did not go far. The Portfolio drive (a form factor model, developed with JT Storage) was a flop, as was the SDX hard disk to CD-ROM interface. Western Digital's drives started to slip further behind competing products, and quality began to suffer; system builders and PC enthusiasts who used to recommend Western Digital above all else were going to the competition, particularly Maxtor, whose products had improved significantly by the late 1990s. In 1998, in an attempt to turn the tide, Western Digital enlisted the help of IBM. This agreement gave Western Digital the rights to use certain IBM technologies, including giant magneto-resistive (GMR) heads, and access to IBM production facilities. The result was the Expert line of drives, introduced in early 1999. The idea worked, and Western Digital regained respect in the press and among users, despite a recall in 2000 (due to bad motor driver chips). Western Digital later broke ties with IBM. 2000s In 2001, Western Digital became the first manufacturer to offer mainstream ATA hard disk drives with 8 MB of disk buffer. At that time, most desktop hard disk drives had 2 MB of buffer. Western Digital labeled these 8 MB models as "Special Edition" and distinguished them with the JB code (the 2 MB models had the BB code). The first 8 MB cache drive was the 100 GB WD1000JB, followed by other models starting at 40 GB capacity. Western Digital marketed the JB models for use in cost-effective file servers.
In October 2001, Western Digital restated its prior-year results to reflect the adoption of SEC Staff Accounting Bulletin No. 101 and the reclassification of Connex and SANavigator results as discontinued operations. In 2003, Western Digital acquired most of the assets of bankrupt one-time market-leading magnetic hard drive read-write head developer Read-Rite Corporation. In the same year, Western Digital offered the first 10,000 rpm Serial ATA HDD, the WD360GD "Raptor", with a capacity of 36 GB and an average access time of less than six milliseconds. Soon after, the 74 GB WD740GD followed, which was also much quieter. In 2004, Western Digital redesigned its logo for the first time since 1997, with the design of the new logo focusing on the company's initials ("WD"). In 2005, Western Digital released a 150 GB version, the WD1500, which was also available in a special version with a transparent window enabling the user to see the drive's heads move over the platters while the drive read and wrote data. The Western Digital Raptor drives have a five-year warranty, making them a more attractive choice for inexpensive storage servers, where a large number of drives in constant use increases the likelihood of a drive failure. In 2006, Western Digital introduced its My Book line of mass-market external hard drives, which feature a compact book-like design. On October 7, 2007, Western Digital released several editions of a single 1 TB hard drive, the largest in its My Book line. In 2007, Western Digital acquired magnetic media maker Komag. In the same year, Western Digital adopted perpendicular recording technology in its line of notebook and desktop drives, which allowed it to produce notebook and desktop drives in the largest capacity classes of the time. Western Digital also started to produce the energy-efficient GP (Green Power) range of drives. In 2007, Western Digital announced the WD GP drive, touting a rotational speed "between 7200 and 5400 rpm", which is technically correct while also being misleading; the drive spins at 5405 rpm, and the Green Power spin speed is not variable. WD GP drives are programmed to unload their heads after a very short idle period. Many Linux installations write to the file system a few times a minute in the background. As a result, there may be 100 or more load cycles per hour, and the 300,000 load-cycle rating of a WD GP drive may be exceeded in less than a year. On April 21, 2008, Western Digital announced the next generation of its 10,000 rpm SATA WD Raptor series of hard drives. The new drives, called WD VelociRaptor, featured 300 GB capacity and platters enclosed in the IcePack, a mounting frame with a built-in heat sink. Western Digital said that the new drives were 35 percent faster than the previous generation. On September 12, 2008, Western Digital shipped a 500 GB notebook hard drive as part of its Scorpio Blue series. On January 27, 2009, Western Digital shipped the first 2 TB internal hard disk drive. On March 30, 2009, the company entered the solid-state drive market with the acquisition of SiliconSystems, Inc. The acquisition was ultimately unsuccessful: a few years later Western Digital discontinued all solid-state storage products based on SiliconSystems designs (the SiliconEdge and SiliconDrive families of SSDs and memory cards), though the underlying technology was later used in the development of other solid-state storage products, with larger developments following the 2016 acquisition of SanDisk.
On July 27, 2009, Western Digital announced the first 1 TB mobile hard disk drive, which shipped as both a Passport series portable USB drive and a Scorpio Blue series notebook drive. In October 2009, Western Digital announced the shipment of the first 3 TB internal hard disk drive, which had a 750 GB-per-platter density and a SATA interface. 2010s In March 2011, Western Digital agreed to acquire the storage unit of Hitachi, HGST, for about $4.3 billion, of which $3.5 billion was paid in cash and the rest with 25 million shares of Western Digital. In 2011, Western Digital established an R&D facility at its Malaysian plant at a cost of US$1.2 billion. In March 2012, Western Digital completed the acquisition of HGST and became the largest traditional hard drive manufacturer in the world. To address the requirements of regulatory agencies, in May 2012 Western Digital divested assets to manufacture and sell certain 3.5-inch hard drives for the desktop and consumer electronics markets to Toshiba, in exchange for one of Toshiba's 2.5-inch hard drive factories in Thailand. In December 2013, Western Digital stopped manufacturing parallel ATA hard disk drives for laptops (2.5-inch form factor) and desktop PCs (3.5-inch form factor). Until that time, it had been the last hard disk manufacturer still producing PATA hard disk drives, and the only one offering 250 GB and 320 GB PATA drives in the 2.5-inch form factor. In February 2014, Western Digital announced a new "Purple" line of hard disk drives for use in video surveillance systems, with capacities from 1 to 4 TB. They feature internal optimizations for applications that involve near-constant disk writing, and "AllFrame" technology, which is designed to reduce write errors. In October 2015, the Chinese Ministry of Commerce, which had previously required Western Digital to operate HGST autonomously, issued a decision allowing the company to begin integrating HGST into its main business, on the condition that it maintain the HGST brand and sales team for at least two more years. The HGST brand was phased out in 2018; since then, former HGST products have been branded Western Digital. In May 2016, Western Digital acquired SanDisk for US$19 billion. In 2016, HGST closed its Malaysian plant. In April 2017, Western Digital moved its headquarters from Irvine, California to HGST's headquarters in San Jose, California. In the summer of 2017, Western Digital licensed the Fusion-io/SanDisk ION Accelerator software to One Stop Systems. In August 2017, Western Digital bought cloud storage provider Upthere, with the intention of continuing to build out the service. In September 2017, Western Digital acquired Tegile Systems, a maker of flash memory storage arrays; it rebranded Tegile as IntelliFlash and sold it to DataDirect Networks in September 2019. In October 2017, Western Digital shipped the world's first 14 TB HDD, the helium-filled HGST Ultrastar Hs14. In December 2017, Western Digital reached an agreement with Toshiba about the sale of the jointly owned NAND production facility in Japan; in May 2018, Toshiba reached an agreement with the Bain consortium about the sale of that chip unit. In June 2018, Western Digital acquired Wearable, Inc., a small company based in the Chicago area that produced the SanDisk Wireless Drive and SanDisk Connect Wireless Stick, which were derived from Wearable Inc.'s AirStash wireless server platform.
In July 2018, Western Digital announced its plan to close its hard disk production facility in Kuala Lumpur to shift the company towards flash drive production, leaving the company with just two HDD production facilities, both in Thailand. The company ranked 158th on the 2018 Fortune 500 list of the largest United States corporations by revenue. In June 2019, Kioxia experienced a power cut at one of its factories in Yokkaichi, Japan, resulting in the loss of at least 6 exabytes of flash memory, with some sources estimating the loss as high as 15 exabytes. Western Digital used (and still uses) Kioxia's facilities for making its own flash memory chips. 2020s In November 2020, Western Digital introduced a new consumer SSD, the WD Black SN850 1TB. Using a proprietary NVMe version 1.4 controller ("G2"), it was reported to outperform Samsung's 980 Pro 1TB as well as other new-to-market SSDs containing the Phison E18 controller that arrived after the SN850 became available. The only higher-performing SSD at that time was Intel's Optane line, a non-consumer, workstation/server-oriented SSD costing over five times as much as the SN850. In June 2021, users reported that their My Book Live NAS drives, discontinued products last manufactured in 2013, had been erased, leading the company to advise that the devices be disconnected from the internet. In August 2021, Western Digital and Japanese memory-chip supplier Kioxia (formerly Toshiba Memory) began working out the details of a merger to be finalized in September 2021; in October of the same year, it became clear that the merger talks had stalled. In February 2022, Western Digital and Kioxia reported that contamination issues had affected the output of their flash memory joint-production factories, with Western Digital stating that at least 6.5 exabytes of memory output were affected. The Kitakami and Yokkaichi factories in Japan stopped production because of the contamination. Products Storage devices Western Digital's offerings include HDDs and SSDs for computing devices (e.g. PCs, security surveillance systems, gaming consoles and set-top boxes); NAND-flash embedded storage products for mobile devices, notebook PCs and other portable and IoT devices; and NAND flash memory wafers. Western Digital's embedded storage devices include the iNAND product line and custom embedded products. Western Digital also provides microSD and SD card products to OEMs only for automotive and industrial applications. Use case classes Western Digital color-codes certain storage devices based on their intended use case: WD Green drives are energy efficient and are currently only available as SSDs. The WD Green HDD series was discontinued in 2015 and merged into WD Blue. WD Purple hard drives are designed for write-heavy workloads, for instance security applications such as video recording. These drives feature AllFrame technology (which attempts to reduce video frame loss), time-limited error recovery, and support for the ATA streaming command set. WD brand Western Digital also sells external hard drives under the WD brand, with product families called My Passport, My Book, WD Elements, and Easystore. While traditionally these products have used HDDs, Western Digital has started to offer SSD versions, such as the My Passport SSD, its first portable SSD. Western Digital external hard drives with encryption software (sold under the My Passport brand) have been reported to have severe data protection faults and to be easy to decrypt.
As of 2019, the WD Elements line consists of WD Elements Portable (1–5 TB, 4.3 × 3.2 × 0.5 inches), WD Elements Desktop (3–14 TB, 5.3 × 1.8 × 6.5 inches), and WD Elements SE. SanDisk Under the SanDisk brand, Western Digital offers mobile storage products, cards and readers, USB flash drives, SSDs and MP3 players. Most of Western Digital's consumer flash memory products are offered through SanDisk. The SanDisk product family, including the Flash Drive and Base, is made specifically for use with the Apple iPhone and iPad. The 400 GB SanDisk Ultra microSDXC UHS-I card was designed primarily for use in Android smartphones that include an expansion slot. G-Technology Under the G-Technology brand, Western Digital offers HDD, SSD, platform and system products designed specifically for creative professionals. The G-Technology brand has partnerships with Apple, Atomos, and Intel. Other products After first offering the Western Digital Media Center in 2004 (which was actually only a storage device), Western Digital offered the WD TV series of products between 2008 and 2016. The WD TV devices functioned as a home theater PC, able to play videos, images, and music from USB drives or network locations. Western Digital offers the My Cloud series of products, which function as home media servers. In September 2015, Western Digital released My Cloud OS 3, a platform that enables connected HDDs to sync between PCs and mobile devices. Through Western Digital's acquisition of Upthere, the company offers personal cloud storage through the Upthere Home app and UpOS operating system. Western Digital sells data center hardware and software, including an enterprise-class Ultrastar product line that was previously sold under the HGST brand. Current hardware products include the 20 TB CMR helium-filled HC560, the 20 TB SMR helium-filled HC650, and the 6.4 TB U.2 NVMe SN840 SSD. Corporate affairs Western Digital Capital is Western Digital's investment arm. It has contributed funding for data technology companies such as Elastifile and Avere Systems. Lawsuits Lawsuits have been filed against various manufacturers, including Western Digital, related to the claimed capacity of their drives. The drives are labelled using the convention of 10³ (1,000) bytes to the kilobyte, resulting in a perceived capacity shortfall when reported by most operating systems, which tend to use 2¹⁰ (1,024) bytes to the kilobyte. While Western Digital maintained that it used "the indisputably correct industry standard for measuring and describing storage capacity", and that it "cannot be expected to reform the software industry", it agreed to settle in March 2006, offering affected customers a $30 refund in the form of backup and recovery software of the same value. In May 2020, Western Digital was sued for using shingled magnetic recording technology in its NAS line of consumer drives without explicitly informing consumers. The lawsuit alleged that SMR technology is not suitable for the advertised use of the drives in a RAID array and sought to end any use of SMR in NAS drives. Seagate, another data storage company and a direct competitor of Western Digital, stated that SMR is not suitable for NAS use and that Seagate uses only conventional magnetic recording in its NAS-oriented products. In response to the controversy, Western Digital introduced a new naming scheme, in which "WD Red Plus" describes disks using conventional magnetic recording and "WD Red" means SMR.
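The decimal-versus-binary discrepancy at the heart of the capacity lawsuits is simple arithmetic; the following short Python sketch makes the roughly nine percent gap for a drive labelled "1 TB" explicit:

# A drive labelled "1 TB" contains 10**12 bytes (the decimal convention used on the label).
advertised_bytes = 10**12

# An operating system counting in binary units divides by 2**40 bytes per unit (a tebibyte, TiB).
reported = advertised_bytes / 2**40

print(f"Label: 1.000 TB -> OS report: {reported:.3f} TiB")  # about 0.909, a ~9% perceived shortfall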
https://en.wikipedia.org/wiki/Public%20key%20certificate
Public key certificate
In cryptography, a public key certificate, also known as a digital certificate or identity certificate, is an electronic document used to prove the validity of a public key. The certificate includes information about the key, information about the identity of its owner (called the subject), and the digital signature of an entity that has verified the certificate's contents (called the issuer). If the signature is valid, and the software examining the certificate trusts the issuer, then it can use that key to communicate securely with the certificate's subject. In email encryption, code signing, and e-signature systems, a certificate's subject is typically a person or organization. However, in Transport Layer Security (TLS) a certificate's subject is typically a computer or other device, though TLS certificates may identify organizations or individuals in addition to their core role in identifying devices. TLS, sometimes called by its older name Secure Sockets Layer (SSL), is notable for being a part of HTTPS, a protocol for securely browsing the web. In a typical public-key infrastructure (PKI) scheme, the certificate issuer is a certificate authority (CA), usually a company that charges customers to issue certificates for them. By contrast, in a web of trust scheme, individuals sign each other's keys directly, in a format that performs a similar function to a public key certificate. The most common format for public key certificates is defined by X.509. Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such as Public Key Infrastructure (X.509) as defined in RFC 5280. Types of certificate TLS/SSL server certificate The Transport Layer Security (TLS) protocol – as well as its outdated predecessor, the Secure Sockets Layer (SSL) protocol – ensures that the communication between a client computer and a server is secure. The protocol requires the server to present a digital certificate, proving that it is the intended destination. The connecting client conducts certification path validation, ensuring that: The subject of the certificate matches the host name (not to be confused with the domain name) to which the client is trying to connect. A trusted certificate authority has signed the certificate. The Subject field of the certificate must identify the primary host name of the server as the Common Name. A certificate may be valid for multiple host names (e.g., a domain and its subdomains). Such certificates are commonly called Subject Alternative Name (SAN) certificates or Unified Communications Certificates (UCC). These certificates contain the Subject Alternative Name field, though many CAs also put the host names into the Subject Common Name field for backward compatibility. If some of the host names contain an asterisk (*), a certificate may also be called a wildcard certificate. Once the certification path validation is successful, the client can establish an encrypted connection with the server. Internet-facing servers, such as public web servers, must obtain their certificates from a trusted, public certificate authority (CA). TLS/SSL client certificate Client certificates authenticate the client connecting to a TLS service, for instance to provide access control. Because most services provide access to individuals, rather than devices, most client certificates contain an email address or personal name rather than a host name.
In addition, the certificate authority that issues the client certificate is usually the service provider to which the client connects, because it is the provider that needs to perform the authentication. While most web browsers support client certificates, the most common form of authentication on the Internet is a username and password pair. Client certificates are more common in virtual private networks (VPN) and Remote Desktop Services, where they authenticate devices. Email certificate In accordance with the S/MIME protocol, email certificates can both establish message integrity and encrypt messages. To establish encrypted email communication, the communicating parties must have their digital certificates in advance. Each must send the other one digitally signed email and opt to import the sender's certificate. Some publicly trusted certificate authorities provide email certificates, but more commonly S/MIME is used when communicating within a given organization, and that organization runs its own CA, which is trusted by participants in that email system. Self-signed and root certificates A self-signed certificate is a certificate with a subject that matches its issuer, and a signature that can be verified by its own public key. For most purposes, such a self-signed certificate is worthless. However, the digital certificate chain of trust starts with a self-signed certificate, called a "root certificate," "trust anchor," or "trust root." A certificate authority self-signs a root certificate to be able to sign other certificates. An intermediate certificate has a similar purpose to the root certificate; its only use is to sign other certificates. However, an intermediate certificate is not self-signed; it must be signed by a root certificate or another intermediate certificate. An end-entity or leaf certificate is any certificate that cannot sign other certificates. For instance, TLS/SSL server and client certificates, email certificates, code signing certificates, and qualified certificates are all end-entity certificates. Other certificates EMV certificate: EMV is a payment method based on a technical standard for payment cards, payment terminals and automated teller machines (ATM). EMV payment cards are preloaded with a card issuer certificate, signed by the EMV certificate authority to validate the authenticity of the payment card during the payment transaction. Code-signing certificate: Certificates can validate apps (or their binaries) to ensure they were not tampered with during delivery. Qualified certificate: A certificate identifying an individual, typically for electronic signature purposes. These are most commonly used in Europe, where the eIDAS regulation standardizes them and requires their recognition. Role-based certificate: Defined in the X.509 Certificate Policy for the Federal Bridge Certification Authority (FBCA), role-based certificates "identify a specific role on behalf of which the subscriber is authorized to act rather than the subscriber's name and are issued in the interest of supporting accepted business practices." Group certificate: Defined in the X.509 Certificate Policy for the Federal Bridge Certification Authority (FBCA), for "cases where there are several entities acting in one capacity, and where non-repudiation for transactions is not desired." Common fields These are some of the most common fields in certificates. Most certificates contain a number of fields not listed here.
Note that in terms of a certificate's X.509 representation, a certificate is not "flat" but contains these fields nested in various structures within the certificate. Serial Number: Used to uniquely identify the certificate within a CA's systems. In particular this is used to track revocation information. Subject: The entity a certificate belongs to: a machine, an individual, or an organization. Issuer: The entity that verified the information and signed the certificate. Not Before: The earliest time and date on which the certificate is valid. Usually set to a few hours or days prior to the moment the certificate was issued, to avoid clock skew problems. Not After: The time and date past which the certificate is no longer valid. Key Usage: The valid cryptographic uses of the certificate's public key. Common values include digital signature validation, key encipherment, and certificate signing. Extended Key Usage: The applications in which the certificate may be used. Common values include TLS server authentication, email protection, and code signing. Public Key: A public key belonging to the certificate subject. Signature Algorithm: This field names a hashing algorithm and an encryption algorithm. For example, "sha256RSA", where SHA-256 is the hashing algorithm and RSA is the encryption algorithm. Signature: The body of the certificate is hashed using the hashing algorithm given in the Signature Algorithm field, and the hash is then signed with the issuer's private key using the encryption algorithm given in the same field. Example This is an example of a decoded SSL/TLS certificate retrieved from SSL.com's website. The issuer's common name (CN) is shown as SSL.com EV SSL Intermediate CA RSA R3, identifying this as an Extended Validation (EV) certificate. Validated information about the website's owner (SSL Corp) is located in the Subject field. The X509v3 Subject Alternative Name field contains a list of domain names covered by the certificate. The X509v3 Extended Key Usage and X509v3 Key Usage fields show all appropriate uses.

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            72:14:11:d3:d7:e0:fd:02:aa:b0:4e:90:09:d4:db:31
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Texas, L=Houston, O=SSL Corp, CN=SSL.com EV SSL Intermediate CA RSA R3
        Validity
            Not Before: Apr 18 22:15:06 2019 GMT
            Not After : Apr 17 22:15:06 2021 GMT
        Subject: C=US, ST=Texas, L=Houston, O=SSL Corp/serialNumber=NV20081614243, CN=www.ssl.com/postalCode=77098/businessCategory=Private Organization/street=3100 Richmond Ave/jurisdictionST=Nevada/jurisdictionC=US
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:ad:0f:ef:c1:97:5a:9b:d8:1e ...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                keyid:BF:C1:5A:87:FF:28:FA:41:3D:FD:B7:4F:E4:1D:AF:A0:61:58:29:BD
            Authority Information Access:
                CA Issuers - URI:http://www.ssl.com/repository/SSLcom-SubCA-EV-SSL-RSA-4096-R3.crt
                OCSP - URI:http://ocsps.ssl.com
            X509v3 Subject Alternative Name:
                DNS:www.ssl.com, DNS:answers.ssl.com, DNS:faq.ssl.com, DNS:info.ssl.com, DNS:links.ssl.com, DNS:reseller.ssl.com, DNS:secure.ssl.com, DNS:ssl.com, DNS:support.ssl.com, DNS:sws.ssl.com, DNS:tools.ssl.com
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.1
                Policy: 1.2.616.1.113527.2.5.1.1
                Policy: 1.3.6.1.4.1.38064.1.1.1.5
                    CPS: https://www.ssl.com/repository
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, TLS Web Server Authentication
            X509v3 CRL Distribution Points:
                Full Name:
                    URI:http://crls.ssl.com/SSLcom-SubCA-EV-SSL-RSA-4096-R3.crl
            X509v3 Subject Key Identifier:
                E7:37:48:DE:7D:C2:E1:9D:D0:11:25:21:B8:00:33:63:06:27:C1:5B
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 87:75:BF:E7:59:7C:F8:8C:43:99 ...
                    Timestamp : Apr 18 22:25:08.574 2019 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:44:02:20:40:51:53:90:C6:A2 ...
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : A4:B9:09:90:B4:18:58:14:87:BB ...
                    Timestamp : Apr 18 22:25:08.461 2019 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:43:80:9E:19:90:FD ...
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 55:81:D4:C2:16:90:36:01:4A:EA ...
                    Timestamp : Apr 18 22:25:08.769 2019 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:21:00:C1:3E:9F:F0:40 ...
    Signature Algorithm: sha256WithRSAEncryption
        36:07:e7:3b:b7:45:97:ca:4d:6c ...

Usage in the European Union In the European Union, (advanced) electronic signatures on legal documents are commonly performed using digital signatures with accompanying identity certificates. However, only qualified electronic signatures (which require using a qualified trust service provider and signature creation device) are given the same power as a physical signature. Certificate authorities In the X.509 trust model, a certificate authority (CA) is responsible for signing certificates. These certificates act as an introduction between two parties, which means that a CA acts as a trusted third party. A CA processes requests from people or organizations requesting certificates (called subscribers), verifies the information, and potentially signs an end-entity certificate based on that information. To perform this role effectively, a CA needs to have one or more broadly trusted root certificates or intermediate certificates and the corresponding private keys. CAs may achieve this broad trust by having their root certificates included in popular software, or by obtaining a cross-signature from another CA delegating trust. Other CAs are trusted within a relatively small community, like a business, and are distributed by other mechanisms like Windows Group Policy. Certificate authorities are also responsible for maintaining up-to-date revocation information about certificates they have issued, indicating whether certificates are still valid. They provide this information through Online Certificate Status Protocol (OCSP) and/or Certificate Revocation Lists (CRLs). Some of the larger certificate authorities in the market include IdenTrust, DigiCert, and Sectigo.
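The fields decoded above can also be examined programmatically. The following is a minimal sketch using Python's standard ssl module; it retrieves the validated leaf certificate from a live server and prints a few of the common fields described earlier (www.ssl.com is used only to mirror the decoded example):

import socket
import ssl

hostname = "www.ssl.com"  # the site from the decoded example above

# create_default_context() enables certificate verification and host-name checking
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # the validated leaf certificate, as a dict

print("Subject:   ", dict(item[0] for item in cert["subject"]))
print("Issuer:    ", dict(item[0] for item in cert["issuer"]))
print("Not before:", cert["notBefore"])
print("Not after: ", cert["notAfter"])
print("SANs:      ", [name for typ, name in cert["subjectAltName"] if typ == "DNS"])

Because the default context performs certification path validation, the handshake itself fails if the chain does not lead to a trusted root or the host name does not match.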
Root programs Some major software platforms contain a list of certificate authorities that are trusted by default. This makes it easier for end-users to validate certificates, and easier for people or organizations that request certificates to know which certificate authorities can issue a certificate that will be broadly trusted. This is particularly important in HTTPS, where a web site operator generally wants to get a certificate that is trusted by nearly all potential visitors to their web site. The policies and processes a provider uses to decide which certificate authorities their software should trust are called root programs. The most influential root programs are: Microsoft Root Program Apple Root Program Mozilla Root Program Oracle Java root program Adobe AATL (Adobe Approved Trust List) and EUTL root programs (used for document signing) Browsers other than Firefox generally use the operating system's facilities to decide which certificate authorities are trusted. So, for instance, Chrome on Windows trusts the certificate authorities included in the Microsoft Root Program, while on macOS or iOS, Chrome trusts the certificate authorities in the Apple Root Program. Edge and Safari use their respective operating system trust stores as well, but each is only available on a single OS. Firefox uses the Mozilla Root Program trust store on all platforms. The Mozilla Root Program is operated publicly, and its certificate list is part of the open source Firefox web browser, so it is broadly used outside Firefox. For instance, while there is no common Linux Root Program, many Linux distributions, like Debian, include a package that periodically copies the contents of the Firefox trust list, which is then used by applications. Root programs generally provide a set of valid purposes with the certificates they include. For instance, some CAs may be considered trusted for issuing TLS server certificates, but not for code signing certificates. This is indicated with a set of trust bits in a root certificate storage system. Website security The most common use of certificates is for HTTPS-based web sites. A web browser validates that an HTTPS web server is authentic, so that the user can feel secure that their interaction with the web site has no eavesdroppers and that the web site is who it claims to be. This security is important for electronic commerce. In practice, a web site operator obtains a certificate by applying to a certificate authority with a certificate signing request. The certificate request is an electronic document that contains the web site name, company information and the public key (a minimal sketch of constructing such a request appears at the end of this section). The certificate provider signs the request, thus producing a public certificate. During web browsing, this public certificate is served to any web browser that connects to the web site and proves to the web browser that the provider believes it has issued a certificate to the owner of the web site. As an example, when a user connects to https://www.example.com/ with their browser, if the browser does not give any certificate warning message, then the user can in theory be sure that interacting with https://www.example.com/ is equivalent to interacting with the entity in contact with the email address listed in the public registrar under "example.com", even though that email address may not be displayed anywhere on the web site. No other surety of any kind is implied.
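As referenced above, a certificate signing request can be constructed in a few lines. The following sketch uses the third-party pyca/cryptography package; the key size, organization name, and example.com domains are illustrative placeholders, not a prescription:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.x509.oid import NameOID

# The key pair stays with the applicant; only the public half goes into the CSR.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    # Request coverage for several host names via Subject Alternative Name.
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("www.example.com"),
            x509.DNSName("example.com"),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())  # self-signature proving possession of the private key
)

print(csr.public_bytes(Encoding.PEM).decode())  # this PEM block is what is sent to the CA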
Beyond that, the relationship between the purchaser of the certificate, the operator of the web site, and the generator of the web site content may be tenuous and is not guaranteed. At best, the certificate guarantees uniqueness of the web site, provided that the web site itself has not been compromised (hacked) or the certificate issuing process subverted. A certificate provider can opt to issue three types of certificates, each requiring its own degree of vetting rigor. In order of increasing rigor (and naturally, cost) they are: Domain Validation, Organization Validation and Extended Validation. These rigors are loosely agreed upon by voluntary participants in the CA/Browser Forum. Validation levels Domain validation A certificate provider will issue a domain-validated (DV) certificate to a purchaser if the purchaser can demonstrate one vetting criterion: the right to administratively manage the affected DNS domain(s). Organization validation A certificate provider will issue an organization validation (OV) class certificate to a purchaser if the purchaser can meet two criteria: the right to administratively manage the domain name in question, and the organization's actual existence as a legal entity. A certificate provider publishes its OV vetting criteria through its certificate policy. Extended validation To acquire an Extended Validation (EV) certificate, the purchaser must persuade the certificate provider of its legal identity, including manual verification checks by a human. As with OV certificates, a certificate provider publishes its EV vetting criteria through its certificate policy. Until 2019, major browsers such as Chrome and Firefox generally offered users a visual indication of the legal identity when a site presented an EV certificate. This was done by showing the legal name before the domain, and a bright green color to highlight the change. Most browsers have since deprecated this feature, providing no visual difference to the user on the type of certificate used. This change followed security concerns raised by forensic experts and successful attempts to purchase EV certificates to impersonate famous organizations, demonstrating the ineffectiveness of these visual indicators and highlighting potential abuses. Weaknesses A web browser will give no warning to the user if a web site suddenly presents a different certificate, even if that certificate has a lower number of key bits, even if it has a different provider, and even if the previous certificate had an expiry date far into the future. Where certificate providers are under the jurisdiction of governments, those governments may have the freedom to order the provider to generate any certificate, such as for the purposes of law enforcement. Subsidiary wholesale certificate providers also have the freedom to generate any certificate. All web browsers come with an extensive built-in list of trusted root certificates, many of which are controlled by organizations that may be unfamiliar to the user. Each of these organizations is free to issue any certificate for any web site and has the guarantee that web browsers that include its root certificates will accept it as genuine. In this instance, end users must rely on the developer of the browser software to manage its built-in list of certificates and on the certificate providers to behave correctly and to inform the browser developer of problematic certificates.
While uncommon, there have been incidents in which fraudulent certificates have been issued: in some cases, the browsers have detected the fraud; in others, some time passed before browser developers removed these certificates from their software. The list of built-in certificates is also not limited to those provided by the browser developer: users (and to a degree applications) are free to extend the list for special purposes such as for company intranets. This means that if someone gains access to a machine and can install a new root certificate in the browser, that browser will recognize websites that use the inserted certificate as legitimate. For provable security, this reliance on something external to the system has the consequence that any public key certification scheme has to rely on some special setup assumption, such as the existence of a certificate authority. Usefulness versus unsecured web sites In spite of the limitations described above, certificate-authenticated TLS is considered mandatory by all security guidelines whenever a web site hosts confidential information or performs material transactions. This is because, in practice, despite the weaknesses described above, web sites secured by public key certificates are still more secure than unsecured http:// web sites. Standards The National Institute of Standards and Technology (NIST) Computer Security Division provides guidance documents for public key certificates: SP 800-32 Introduction to Public Key Technology and the Federal PKI Infrastructure SP 800-25 Federal Agency Use of Public Key Technology for Digital Signatures and Authentication See also Authorization certificate Pretty Good Privacy
https://en.wikipedia.org/wiki/Rootkit
Rootkit
A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or an area of its software that is not otherwise allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software. The term rootkit is a compound of "root" (the traditional name of the privileged account on Unix-like operating systems) and the word "kit" (which refers to the software components that implement the tool). The term "rootkit" has negative connotations through its association with malware. Rootkit installation can be automated, or an attacker can install it after having obtained root or administrator access. Obtaining this access is a result of a direct attack on a system, i.e. exploiting a known vulnerability (such as privilege escalation) or a password (obtained by cracking or social engineering tactics like "phishing"). Once installed, it becomes possible to hide the intrusion as well as to maintain privileged access. Full control over a system means that existing software can be modified, including software that might otherwise be used to detect or circumvent it. Rootkit detection is difficult because a rootkit may be able to subvert the software that is intended to find it. Detection methods include using an alternative and trusted operating system, behavioral-based methods, signature scanning, difference scanning, and memory dump analysis. Removal can be complicated or practically impossible, especially in cases where the rootkit resides in the kernel; reinstallation of the operating system may be the only available solution to the problem. When dealing with firmware rootkits, removal may require hardware replacement or specialized equipment. History The term rootkit or root kit originally referred to a maliciously modified set of administrative tools for a Unix-like operating system that granted "root" access. If an intruder could replace the standard administrative tools on a system with a rootkit, the intruder could obtain root access over the system whilst simultaneously concealing these activities from the legitimate system administrator. These first-generation rootkits were trivial to detect using uncompromised tools, such as Tripwire, that access the same information. Lane Davis and Steven Dake wrote the earliest known rootkit in 1990 for Sun Microsystems' SunOS UNIX operating system. In the lecture he gave upon receiving the Turing Award in 1983, Ken Thompson of Bell Labs, one of the creators of Unix, theorized about subverting the C compiler in a Unix distribution and discussed the exploit. The modified compiler would detect attempts to compile the Unix login command and generate altered code that would accept not only the user's correct password, but an additional "backdoor" password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler, and would insert the same exploits into the new compiler. A review of the source code for the login command or the updated compiler would not reveal any malicious code. This exploit was equivalent to a rootkit. The first documented computer virus to target the personal computer, discovered in 1986, used cloaking techniques to hide itself: the Brain virus intercepted attempts to read the boot sector, and redirected these to elsewhere on the disk, where a copy of the original boot sector was kept. Over time, DOS-virus cloaking methods became more sophisticated.
Advanced techniques included hooking low-level disk INT 13H BIOS interrupt calls to hide unauthorized modifications to files. The first malicious rootkit for the Windows NT operating system appeared in 1999: a trojan called NTRootkit created by Greg Hoglund. It was followed by HackerDefender in 2003. The first rootkit targeting Mac OS X appeared in 2009, while the Stuxnet worm was the first to target programmable logic controllers (PLC). Sony BMG copy protection rootkit scandal In 2005, Sony BMG published CDs with copy protection and digital rights management software called Extended Copy Protection, created by software company First 4 Internet. The software included a music player but silently installed a rootkit which limited the user's ability to access the CD. Software engineer Mark Russinovich, who created the rootkit detection tool RootkitRevealer, discovered the rootkit on one of his computers. The ensuing scandal raised the public's awareness of rootkits. To cloak itself, the rootkit hid from the user any file starting with "$sys$". Soon after Russinovich's report, malware appeared which took advantage of that vulnerability of affected systems. One BBC analyst called it a "public relations nightmare." Sony BMG released patches to uninstall the rootkit, but the uninstaller itself exposed users to an even more serious vulnerability. The company eventually recalled the CDs. In the United States, a class-action lawsuit was brought against Sony BMG. Greek wiretapping case 2004–05 The Greek wiretapping case 2004–05, also referred to as Greek Watergate, involved the illegal telephone tapping of more than 100 mobile phones on the Vodafone Greece network belonging mostly to members of the Greek government and top-ranking civil servants. The taps began sometime near the beginning of August 2004 and were removed in March 2005 without the perpetrators' identity ever being discovered. The intruders installed a rootkit targeting Ericsson's AXE telephone exchange. According to IEEE Spectrum, this was "the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch." The rootkit was designed to patch the memory of the exchange while it was running, enable wiretapping while disabling audit logs, patch the commands that list active processes and active data blocks, and modify the data block checksum verification command. A "backdoor" allowed an operator with sysadmin status to deactivate the exchange's transaction log, alarms and access commands related to the surveillance capability. The rootkit was discovered after the intruders installed a faulty update, which caused SMS texts to be undelivered, leading to an automated failure report being generated. Ericsson engineers were called in to investigate the fault and discovered the hidden data blocks containing the list of phone numbers being monitored, along with the rootkit and illicit monitoring software.
A small number of rootkits may be considered utility applications by their users: for example, a rootkit might cloak a CD-ROM-emulation driver, allowing video game users to defeat anti-piracy measures that require insertion of the original installation media into a physical optical drive to verify that the software was legitimately purchased. Rootkits and their payloads have many uses: Provide an attacker with full access via a backdoor, permitting unauthorized access with which to, for example, steal or falsify documents. One of the ways to carry this out is to subvert the login mechanism, such as the /bin/login program on Unix-like systems or GINA on Windows. The replacement appears to function normally, but also accepts a secret login combination that allows an attacker direct access to the system with administrative privileges, bypassing standard authentication and authorization mechanisms. Conceal other malware, notably password-stealing key loggers and computer viruses. Appropriate the compromised machine as a zombie computer for attacks on other computers. (The attack originates from the compromised system or network, instead of the attacker's system.) "Zombie" computers are typically members of large botnets that can, amongst other things, launch denial-of-service attacks, distribute e-mail spam, and conduct click fraud. In some instances, rootkits provide desired functionality, and may be installed intentionally on behalf of the computer user: Detect and prevent cheating in online games with software like Warden and GameGuard. Detect attacks, for example, in a honeypot. Enhance emulation software and security software. Alcohol 120% and Daemon Tools are commercial examples of non-hostile rootkits used to defeat copy-protection mechanisms such as SafeDisc and SecuROM. Kaspersky antivirus software also uses techniques resembling rootkits to protect itself from malicious actions. It loads its own drivers to intercept system activity, and then prevents other processes from doing harm to itself. Its processes are not hidden, but cannot be terminated by standard methods. Anti-theft protection: Laptops may have BIOS-based rootkit software that will periodically report to a central authority, allowing the laptop to be monitored, disabled or wiped of information in the event that it is stolen. Bypassing Microsoft Product Activation Types There are at least five types of rootkit, ranging from those at the lowest level in firmware (with the highest privileges), through to the least privileged user-based variants that operate in Ring 3. Hybrid combinations of these may occur, spanning, for example, user mode and kernel mode. User mode User-mode rootkits run in Ring 3, alongside other user applications, rather than as low-level system processes. They have a number of possible installation vectors to intercept and modify the standard behavior of application programming interfaces (APIs). Some inject a dynamically linked library (such as a .DLL file on Windows, or a .dylib file on Mac OS X) into other processes, and are thereby able to execute inside any target process to spoof it; others with sufficient privileges simply overwrite the memory of a target application. Injection mechanisms include: Use of vendor-supplied application extensions. For example, Windows Explorer has public interfaces that allow third parties to extend its functionality. Interception of messages. Debuggers. Exploitation of security vulnerabilities.
Function hooking or patching of commonly used APIs, for example, to hide a running process or file that resides on a filesystem. Kernel mode Kernel-mode rootkits run with the highest operating system privileges (Ring 0) by adding code or replacing portions of the core operating system, including both the kernel and associated device drivers. Most operating systems support kernel-mode device drivers, which execute with the same privileges as the operating system itself. As such, many kernel-mode rootkits are developed as device drivers or loadable modules, such as loadable kernel modules in Linux or device drivers in Microsoft Windows. This class of rootkit has unrestricted security access, but is more difficult to write. The complexity makes bugs common, and any bugs in code operating at the kernel level may seriously impact system stability, leading to discovery of the rootkit. One of the first widely known kernel rootkits was developed for Windows NT 4.0 and released in Phrack magazine in 1999 by Greg Hoglund. Kernel rootkits can be especially difficult to detect and remove because they operate at the same security level as the operating system itself, and are thus able to intercept or subvert the most trusted operating system operations. Any software, such as antivirus software, running on the compromised system is equally vulnerable. In this situation, no part of the system can be trusted. A rootkit can modify data structures in the Windows kernel using a method known as direct kernel object manipulation (DKOM). This method can be used to hide processes. A kernel-mode rootkit can also hook the System Service Descriptor Table (SSDT), or modify the gates between user mode and kernel mode, in order to cloak itself. Similarly for the Linux operating system, a rootkit can modify the system call table to subvert kernel functionality. It is common for a rootkit to create a hidden, encrypted filesystem in which it can hide other malware or original copies of files it has infected. Operating systems are evolving to counter the threat of kernel-mode rootkits. For example, 64-bit editions of Microsoft Windows now implement mandatory signing of all kernel-level drivers in order to make it more difficult for untrusted code to execute with the highest privileges in a system. Bootkits A kernel-mode rootkit variant called a bootkit can infect startup code like the Master Boot Record (MBR), Volume Boot Record (VBR), or boot sector, and in this way can be used to attack full disk encryption systems. An example of such an attack on disk encryption is the "evil maid attack", in which an attacker installs a bootkit on an unattended computer. The envisioned scenario is a maid sneaking into the hotel room where the victims left their hardware. The bootkit replaces the legitimate boot loader with one under their control. Typically the malware loader persists through the transition to protected mode when the kernel has loaded, and is thus able to subvert the kernel. For example, the "Stoned Bootkit" subverts the system by using a compromised boot loader to intercept encryption keys and passwords. In 2010, the Alureon rootkit successfully subverted the requirement for 64-bit kernel-mode driver signing in Windows 7 by modifying the master boot record.
Although not malware in the sense of doing something the user doesn't want, certain "Vista Loader" or "Windows Loader" software works in a similar way by injecting an ACPI SLIC (System Licensed Internal Code) table in the RAM-cached version of the BIOS during boot, in order to defeat the Windows Vista and Windows 7 activation process. This vector of attack was rendered useless in the (non-server) versions of Windows 8, which use a unique, machine-specific key for each system that can only be used by that one machine. Many antivirus companies provide free utilities and programs to remove bootkits. Hypervisor level Rootkits have been created as Type II hypervisors in academia as proofs of concept. By exploiting hardware virtualization features such as Intel VT or AMD-V, this type of rootkit runs in Ring -1 and hosts the target operating system as a virtual machine, thereby enabling the rootkit to intercept hardware calls made by the original operating system. Unlike normal hypervisors, they do not have to load before the operating system, but can load into an operating system before promoting it into a virtual machine. A hypervisor rootkit does not have to make any modifications to the kernel of the target to subvert it; however, that does not mean that it cannot be detected by the guest operating system. For example, timing differences may be detectable in CPU instructions. The "SubVirt" laboratory rootkit, developed jointly by Microsoft and University of Michigan researchers, is an academic example of a virtual-machine–based rootkit (VMBR), while Blue Pill software is another. In 2009, researchers from Microsoft and North Carolina State University demonstrated a hypervisor-layer anti-rootkit called Hooksafe, which provides generic protection against kernel-mode rootkits. Windows 10 introduced a new feature called "Device Guard", which takes advantage of virtualization to provide independent external protection of an operating system against rootkit-type malware. Firmware and hardware A firmware rootkit uses device or platform firmware to create a persistent malware image in hardware, such as a router, network card, hard drive, or the system BIOS. The rootkit hides in firmware, because firmware is not usually inspected for code integrity. John Heasman demonstrated the viability of firmware rootkits in both ACPI firmware routines and in a PCI expansion card ROM. In October 2008, criminals tampered with European credit-card-reading machines before they were installed. The devices intercepted and transmitted credit card details via a mobile phone network. In March 2009, researchers Alfredo Ortega and Anibal Sacco published details of a BIOS-level Windows rootkit that was able to survive disk replacement and operating system re-installation. A few months later they learned that some laptops are sold with a legitimate rootkit, known as Absolute CompuTrace or Absolute LoJack for Laptops, preinstalled in many BIOS images. This is an anti-theft technology system that researchers showed can be turned to malicious purposes. Intel Active Management Technology, part of Intel vPro, implements out-of-band management, giving administrators remote administration, remote management, and remote control of PCs with no involvement of the host processor or BIOS, even when the system is powered off.
Remote administration includes remote power-up and power-down, remote reset, redirected boot, console redirection, pre-boot access to BIOS settings, programmable filtering for inbound and outbound network traffic, agent presence checking, out-of-band policy-based alerting, and access to system information, such as hardware asset information, persistent event logs, and other information that is stored in dedicated memory (not on the hard drive) where it is accessible even if the OS is down or the PC is powered off. Some of these functions require the deepest level of rootkit, a second non-removable spy computer built around the main computer. Sandy Bridge and future chipsets have "the ability to remotely kill and restore a lost or stolen PC via 3G". Hardware rootkits built into the chipset can help recover stolen computers, remove data, or render them useless, but they also present privacy and security concerns of undetectable spying and redirection by management or hackers who might gain control. Installation and cloaking Rootkits employ a variety of techniques to gain control of a system; the type of rootkit influences the choice of attack vector. The most common technique leverages security vulnerabilities to achieve surreptitious privilege escalation. Another approach is to use a Trojan horse, deceiving a computer user into trusting the rootkit's installation program as benign; in this case, social engineering convinces a user that the rootkit is beneficial. The installation task is made easier if the principle of least privilege is not applied, since the rootkit then does not have to explicitly request elevated (administrator-level) privileges. Other classes of rootkits can be installed only by someone with physical access to the target system. Some rootkits may also be installed intentionally by the owner of the system or somebody authorized by the owner, e.g. for the purpose of employee monitoring, rendering such subversive techniques unnecessary. Some malicious rootkit installations are commercially driven, with a pay-per-install (PPI) compensation method typical for distribution. Once installed, a rootkit takes active measures to obscure its presence within the host system through subversion or evasion of standard operating system security tools and application programming interfaces (APIs) used for diagnosis, scanning, and monitoring. Rootkits achieve this by modifying the behavior of core parts of an operating system through loading code into other processes, the installation or modification of drivers, or kernel modules. Obfuscation techniques include concealing running processes from system-monitoring mechanisms and hiding system files and other configuration data. It is not uncommon for a rootkit to disable the event logging capacity of an operating system, in an attempt to hide evidence of an attack. Rootkits can, in theory, subvert any operating system activities. The "perfect rootkit" can be thought of as similar to a "perfect crime": one that nobody realizes has taken place. Rootkits also take a number of measures to ensure their survival against detection and "cleaning" by antivirus software, in addition to commonly installing into Ring 0 (kernel mode), where they have complete access to a system. These include polymorphism (changing so their "signature" is hard to detect), stealth techniques, regeneration, disabling or turning off anti-malware software, and not installing on virtual machines, where it may be easier for researchers to discover and analyze them.
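The API-interception idea behind this cloaking can be illustrated with a toy, in-process analogue. The following Python sketch hooks a single API inside its own process only, filtering out directory entries whose names begin with the "$sys$" marker that the Sony BMG rootkit used; a real user-mode rootkit patches APIs inside other processes, but the doctored-view principle is the same:

import os

HIDDEN_PREFIX = "$sys$"  # the cloaking marker described in the Sony BMG section above

_real_listdir = os.listdir  # keep a reference to the genuine API


def hooked_listdir(path="."):
    # Return the real directory listing minus the entries being cloaked.
    return [name for name in _real_listdir(path) if not name.startswith(HIDDEN_PREFIX)]


os.listdir = hooked_listdir  # install the hook; callers now receive a doctored view

open("$sys$secret.txt", "w").close()            # create a "hidden" file in the current directory
print("$sys$secret.txt" in os.listdir("."))     # False: the hook conceals it from callers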
Detection
The fundamental problem with rootkit detection is that if the operating system has been subverted, particularly by a kernel-level rootkit, it cannot be trusted to find unauthorized modifications to itself or its components. Actions such as requesting a list of running processes, or a list of files in a directory, cannot be trusted to behave as expected. In other words, rootkit detectors that work while running on infected systems are only effective against rootkits that have some defect in their camouflage, or that run with lower user-mode privileges than the detection software in the kernel. As with computer viruses, the detection and elimination of rootkits are an ongoing struggle between both sides of this conflict. Detection can take a number of different approaches, including looking for virus "signatures" (e.g. antivirus software), integrity checking (e.g. digital signatures), difference-based detection (comparison of expected vs. actual results), and behavioral detection (e.g. monitoring CPU usage or network traffic). For kernel-mode rootkits, detection is considerably more complex, requiring careful scrutiny of the System Call Table to look for hooked functions where the malware may be subverting system behavior, as well as forensic scanning of memory for patterns that indicate hidden processes. Unix rootkit detection offerings include Zeppoo, chkrootkit, rkhunter and OSSEC. For Windows, detection tools include Microsoft Sysinternals RootkitRevealer, Avast Antivirus, Sophos Anti-Rootkit, F-Secure, Radix, GMER, and WindowsSCOPE. Any rootkit detectors that prove effective ultimately contribute to their own ineffectiveness, as malware authors adapt and test their code to escape detection by well-used tools. Detection by examining storage while the suspect operating system is not operational can miss rootkits not recognized by the checking software, as the rootkit is not active and suspicious behavior is suppressed; conversely, conventional anti-malware software running with the rootkit operational may fail if the rootkit hides itself effectively.

Alternative trusted medium
The best and most reliable method for operating-system-level rootkit detection is to shut down the computer suspected of infection, and then to check its storage by booting from an alternative trusted medium (e.g. a "rescue" CD-ROM or USB flash drive). The technique is effective because a rootkit cannot actively hide its presence if it is not running.

Behavioral-based
The behavioral-based approach to detecting rootkits attempts to infer the presence of a rootkit by looking for rootkit-like behavior. For example, by profiling a system, differences in the timing and frequency of API calls or in overall CPU utilization can be attributed to a rootkit. The method is complex and is hampered by a high incidence of false positives. Defective rootkits can sometimes introduce very obvious changes to a system: the Alureon rootkit crashed Windows systems after a security update exposed a design flaw in its code. Logs from a packet analyzer, firewall, or intrusion prevention system may present evidence of rootkit behavior in a networked environment.
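One practical flavour of the difference-based detection listed above is a cross-view comparison: request the same information through two different channels and diff the answers. Below is a minimal, Linux-only sketch that compares process IDs enumerated directly from the /proc pseudo-filesystem with those reported by the ps utility; the commands and paths are standard, but treat the heuristic itself as illustrative, since processes starting or exiting between the two snapshots produce harmless one-off differences that a real tool would filter out by sampling repeatedly:

```python
import os
import subprocess

# View 1: PIDs taken directly from the /proc pseudo-filesystem.
proc_view = {int(name) for name in os.listdir("/proc") if name.isdigit()}

# View 2: PIDs as reported by the ps utility ("pid=" suppresses the header).
ps_output = subprocess.run(["ps", "-e", "-o", "pid="],
                           capture_output=True, text=True, check=True).stdout
ps_view = {int(field) for field in ps_output.split()}

# A PID present in /proc but absent from ps suggests user-mode hiding.
hidden = proc_view - ps_view
if hidden:
    print("PIDs visible in /proc but hidden from ps:", sorted(hidden))
else:
    print("No discrepancy between the two views.")
```

RootkitRevealer, discussed below, applied the same cross-view principle to the Windows file system and Registry.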
Signature-based
Antivirus products rarely catch all viruses in public tests (depending on what is used and to what extent), even though security software vendors incorporate rootkit detection into their products. Should a rootkit attempt to hide during an antivirus scan, a stealth detector may notice; if the rootkit attempts to temporarily unload itself from the system, signature detection (or "fingerprinting") can still find it. This combined approach forces attackers to implement counterattack mechanisms, or "retro" routines, that attempt to terminate antivirus programs. Signature-based detection methods can be effective against well-published rootkits, but less so against specially crafted, custom-designed rootkits.

Difference-based
Another method that can detect rootkits compares "trusted" raw data with "tainted" content returned by an API. For example, binaries present on disk can be compared with their copies within operating memory (in some operating systems, the in-memory image should be identical to the on-disk image), or the results returned from file system or Windows Registry APIs can be checked against raw structures on the underlying physical disks—however, in the case of the former, some valid differences can be introduced by operating system mechanisms like memory relocation or shimming. A rootkit may detect the presence of such a difference-based scanner or virtual machine (the latter being commonly used to perform forensic analysis), and adjust its behavior so that no differences can be detected. Difference-based detection was used by Russinovich's RootkitRevealer tool to find the Sony DRM rootkit.

Integrity checking
Code signing uses public-key infrastructure to check if a file has been modified since being digitally signed by its publisher. Alternatively, a system owner or administrator can use a cryptographic hash function to compute a "fingerprint" at installation time that can help to detect subsequent unauthorized changes to on-disk code libraries. However, unsophisticated schemes check only whether the code has been modified since installation time; subversion prior to that time is not detectable. The fingerprint must be re-established each time changes are made to the system: for example, after installing security updates or a service pack. The hash function creates a message digest, a relatively short code calculated from each bit in the file using an algorithm that produces large changes in the message digest from even small changes to the original file. By recalculating and comparing the message digest of the installed files at regular intervals against a trusted list of message digests, changes in the system can be detected and monitored—as long as the original baseline was created before the malware was added. More-sophisticated rootkits are able to subvert the verification process by presenting an unmodified copy of the file for inspection, or by making code modifications only in memory or in reconfiguration registers, which are later compared to a whitelist of expected values. The code that performs hash, compare, or extend operations must also be protected—in this context, the notion of an immutable root-of-trust holds that the very first code to measure security properties of a system must itself be trusted to ensure that a rootkit or bootkit does not compromise the system at its most fundamental level.
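As a minimal sketch of the integrity-checking idea, the following Python script fingerprints every file under a directory tree with SHA-256 and compares the result against a previously saved baseline. The script and file names are illustrative assumptions; and, as the text notes, the output is only meaningful if the baseline was recorded before any compromise, and a kernel-mode rootkit that filters file reads can defeat this very check:

```python
import hashlib
import json
import os
import sys

def fingerprint_tree(root):
    """Map every readable file under `root` to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
            except OSError:
                continue  # skip unreadable files; a real tool would log them
            digests[path] = h.hexdigest()
    return digests

if __name__ == "__main__":
    mode, root = sys.argv[1], sys.argv[2]  # "save" or "check", then a directory
    if mode == "save":
        with open("baseline.json", "w") as out:
            json.dump(fingerprint_tree(root), out)
    else:
        with open("baseline.json") as f:
            old = json.load(f)
        new = fingerprint_tree(root)
        for path in sorted(set(old) | set(new)):
            if old.get(path) != new.get(path):
                print("CHANGED:", path)
```

Run once as `python baseline.py save /usr/lib` on a known-clean system, then periodically as `python baseline.py check /usr/lib`.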
Memory dumps
Forcing a complete dump of virtual memory will capture an active rootkit (or a kernel dump in the case of a kernel-mode rootkit), allowing offline forensic analysis to be performed with a debugger against the resulting dump file, without the rootkit being able to take any measures to cloak itself. This technique is highly specialized, and may require access to non-public source code or debugging symbols. Memory dumps initiated by the operating system cannot always be used to detect a hypervisor-based rootkit, which is able to intercept and subvert the lowest-level attempts to read memory—a hardware device, such as one that implements a non-maskable interrupt, may be required to dump memory in this scenario. Virtual machines also make it easier to analyze the memory of a compromised machine from the underlying hypervisor, so some rootkits will avoid infecting virtual machines for this reason.

Removal
Manual removal of a rootkit is often extremely difficult for a typical computer user, but a number of security-software vendors offer tools to automatically detect and remove some rootkits, typically as part of an antivirus suite. Microsoft's monthly Windows Malicious Software Removal Tool is able to detect and remove some classes of rootkits. Also, Windows Defender Offline can remove rootkits, as it runs from a trusted environment before the operating system starts. Some antivirus scanners can bypass file system APIs, which are vulnerable to manipulation by a rootkit. Instead, they access raw file system structures directly, and use this information to validate the results from the system APIs to identify any differences that may be caused by a rootkit. There are experts who believe that the only reliable way to remove rootkits is to re-install the operating system from trusted media. This is because antivirus and malware removal tools running on an untrusted system may be ineffective against well-written kernel-mode rootkits. Booting an alternative operating system from trusted media can allow an infected system volume to be mounted and potentially safely cleaned and critical data to be copied off—or, alternatively, a forensic examination performed. Lightweight operating systems such as Windows PE, Windows Recovery Console, Windows Recovery Environment, BartPE, or Live Distros can be used for this purpose, allowing the system to be "cleaned". Even if the type and nature of a rootkit is known, manual repair may be impractical, while re-installing the operating system and applications is safer, simpler and quicker.

Defenses
System hardening represents one of the first layers of defense against a rootkit, to prevent it from being able to install. Applying security patches, implementing the principle of least privilege, reducing the attack surface and installing antivirus software are some standard security best practices that are effective against all classes of malware. New secure boot specifications such as Unified Extensible Firmware Interface (UEFI) Secure Boot have been designed to address the threat of bootkits, but even these are vulnerable if the security features they offer are not utilized. For server systems, remote server attestation using technologies such as Intel Trusted Execution Technology (TXT) provides a way of verifying that servers remain in a known good state. For example, Microsoft BitLocker's encryption of data-at-rest verifies that servers are in a known "good state" on bootup. PrivateCore vCage is a software offering that secures data-in-use (memory) to avoid bootkits and rootkits by verifying servers are in a known "good" state on bootup. The PrivateCore implementation works in concert with Intel TXT and locks down server system interfaces to avoid potential bootkits and rootkits.
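Tying the removal advice back to signature scanning: a tool run from rescue media can sweep the suspect volume for files whose hashes match known-bad indicators, safe in the knowledge that the rootkit is not running and cannot filter what the scanner reads. A minimal sketch, assuming the infected volume has been mounted read-only at /mnt/suspect from a trusted boot environment and that a list of known-bad SHA-256 digests is supplied from elsewhere (the entry shown is a placeholder, not a real indicator):

```python
import hashlib
import os
import sys

# Known-bad SHA-256 digests; the entry below is a placeholder only.
KNOWN_BAD = {"0" * 64}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

root = sys.argv[1] if len(sys.argv) > 1 else "/mnt/suspect"  # assumed mount point
for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if sha256_of(path) in KNOWN_BAD:
                print("known-bad file:", path)
        except OSError:
            pass  # device nodes, broken links, unreadable files
```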
See also
Computer security conference
Host-based intrusion detection system
Man-in-the-middle attack
The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the System

Notes
References
Further reading
External links

Types of malware
Privilege escalation exploits
Cryptographic attacks
Cyberwarfare
224672
https://en.wikipedia.org/wiki/Dianne%20Feinstein
Dianne Feinstein
Dianne Goldman Berman Feinstein (born Dianne Emiel Goldman; June 22, 1933) is an American politician who serves as the senior United States senator from California, a seat she has held since 1992. A member of the Democratic Party, she was mayor of San Francisco from 1978 to 1988. Born in San Francisco, Feinstein graduated from Stanford University in 1955. In the 1960s, she worked in local government in San Francisco. Feinstein was elected to the San Francisco Board of Supervisors in 1969. She served as the board's first female president in 1978, during which time the assassinations of Mayor George Moscone and City Supervisor Harvey Milk by Dan White drew national attention. Feinstein succeeded Moscone as mayor and became the first woman to serve in that position. During her tenure, she led the renovation of the city's cable car system and oversaw the 1984 Democratic National Convention. Despite a failed recall attempt in 1983, Feinstein was a very popular mayor and was named the most effective mayor in the country by City & State in 1987. After losing a race for governor in 1990, Feinstein won a 1992 special election to the U.S. Senate. The special election was triggered by the resignation of Pete Wilson, who defeated her in the 1990 gubernatorial election. Despite being elected on the same ballot as her peer Barbara Boxer, Feinstein became California's first female U.S. senator, as she was elected in a special election and sworn in before Boxer. She became California's senior senator a few weeks later in 1993 when Alan Cranston retired. Feinstein has been reelected five times, and in the 2012 election received 7.86 million votes, the most popular votes in any U.S. Senate election in history. Feinstein authored the 1994 Federal Assault Weapons Ban, which expired in 2004. She introduced a new assault weapons bill in 2013 that failed to pass. Feinstein is the first woman to have chaired the Senate Rules Committee and the Senate Intelligence Committee, and the first woman to have presided over a U.S. presidential inauguration. She was the ranking member of the Senate Judiciary Committee from 2017 to 2021 and had chaired the International Narcotics Control Caucus from 2009 to 2015. Feinstein is the oldest sitting U.S. senator. In March 2021, Feinstein became the longest-serving U.S. senator from California, surpassing Hiram Johnson. Upon Barbara Mikulski's retirement in January 2017, Feinstein became the longest-tenured female senator currently serving; should she serve through November 5, 2022, Feinstein will surpass Mikulski's record as the longest-tenured female senator. In January 2021, Feinstein filed the initial Federal Election Commission paperwork needed to seek reelection in 2024, when she will be 91. Feinstein's staff later clarified that this was due to election law technicalities, and did not indicate her intentions in 2024.

Early life and education
Feinstein was born Dianne Emiel Goldman in San Francisco to Leon Goldman, a surgeon, and his wife Betty (née Rosenburg), a former model. Her paternal grandparents were Jewish immigrants from Poland. Her maternal grandparents, the Rosenburgs, were from Saint Petersburg, Russia. While they were of German-Jewish ancestry, they practiced the Russian Orthodox (Christian) faith, as was required for Jews in Saint Petersburg. Christianity was passed down to Feinstein's mother, who insisted on her transfer from a Jewish day school to a prestigious local Catholic school, but Feinstein lists her religion as Judaism.
She graduated from Convent of the Sacred Heart High School in 1951 and from Stanford University in 1955 with a Bachelor of Arts in history.

Early political career
Feinstein was a fellow at the Coro Foundation in San Francisco from 1955 to 1956. Governor Pat Brown appointed her to the California Women's Parole Board in 1960. She served on the board until 1966.

San Francisco Board of Supervisors
Feinstein was elected to the San Francisco Board of Supervisors in 1969. She remained on the board for nine years. During her tenure on the Board of Supervisors, she unsuccessfully ran for mayor of San Francisco twice: in 1971 against Mayor Joseph Alioto, and in 1975, when she lost the contest for a runoff slot (against George Moscone) by one percentage point to Supervisor John Barbagelata. Because of her position, Feinstein became a target of the New World Liberation Front, an anti-capitalist terrorist group that carried out bombings in California in the 1970s. In 1976 the NWLF placed a bomb on the windowsill of her home that failed to explode. The group later shot out the windows of a beach house she owned. Feinstein was elected president of the San Francisco Board of Supervisors in 1978 despite initial opposition from Quentin L. Kopp.

Mayor of San Francisco
On November 27, 1978, Mayor George Moscone and Supervisor Harvey Milk were assassinated by former supervisor Dan White. Feinstein became acting mayor, as she was president of the Board of Supervisors. Supervisors John Molinari, Ella Hill Hutch, Ron Pelosi, Robert Gonzales, and Gordon Lau endorsed Feinstein for an appointment as mayor by the Board of Supervisors. Gonzales initially ran to be appointed by the Board of Supervisors as mayor, but dropped out. The Board of Supervisors voted six to two to appoint Feinstein as mayor. She was inaugurated by Chief Justice Rose Bird of the Supreme Court of California on December 4, 1978, becoming San Francisco's first female mayor. Molinari was selected to replace Feinstein as president of the Board of Supervisors by a vote of eight to two. One of Feinstein's first challenges as mayor was the state of the San Francisco cable car system, which was shut down for emergency repairs in 1979; an engineering study concluded that it needed comprehensive rebuilding at a cost of $60 million. Feinstein helped win federal funding for the bulk of the work. The system closed for rebuilding in 1982 and was completed just in time for the 1984 Democratic National Convention. Feinstein also oversaw policies to increase the number of high-rise buildings in San Francisco. Feinstein was seen as a relatively moderate Democrat in one of the country's most liberal cities. As a supervisor, she was considered part of the centrist bloc that included White and generally opposed Moscone. As mayor, Feinstein angered the city's large gay community by vetoing domestic partner legislation in 1982. In the 1980 presidential election, while a majority of Bay Area Democrats continued to support Senator Ted Kennedy's primary challenge to President Jimmy Carter even after it was clear Kennedy could not win, Feinstein strongly supported the Carter–Mondale ticket. She was given a high-profile speaking role on the opening night of the August Democratic National Convention, urging delegates to reject the Kennedy delegates' proposal to "open" the convention, thereby allowing delegates to ignore their states' popular vote, a proposal that was soundly defeated.
In the run-up to the 1984 Democratic National Convention, there was considerable media and public speculation that Mondale might pick Feinstein as his running mate. He chose Geraldine Ferraro instead. Also in 1984, Feinstein proposed banning handguns in San Francisco, and became subject to a recall attempt organized by the White Panther Party. She won the recall election and finished her second term as mayor on January 8, 1988. Feinstein revealed sensitive details about the hunt for serial killer Richard Ramirez at a 1985 press conference, antagonizing detectives by publicizing details of his crimes known only to law enforcement, and thus jeopardizing their investigation. City and State magazine named Feinstein the nation's "Most Effective Mayor" in 1987. She served on the Trilateral Commission during the 1980s.

Gubernatorial election
Feinstein made an unsuccessful bid for governor of California in 1990. She won the Democratic Party's nomination, but lost the general election to Republican Senator Pete Wilson, who resigned from the Senate to assume the governorship. In 1992, Feinstein was fined $190,000 for failure to properly report campaign contributions and expenditures in that campaign.

U.S. Senate career
Elections
Feinstein won the November 3, 1992, special election to fill the Senate seat vacated a year earlier when Wilson resigned to take office as governor. In the primary, she had defeated California State Controller Gray Davis. The special election was held at the same time as the general election for U.S. president and other offices. Barbara Boxer was elected at the same time to the Senate seat being vacated by Alan Cranston. Because Feinstein was elected to an unexpired term, she became a senator as soon as the election was certified in November, while Boxer did not take office until the expiration of Cranston's term in January; thus Feinstein became California's senior senator, even though she was elected at the same time as Boxer and Boxer had previous congressional service. Feinstein also became the first female Jewish senator in the United States, though Boxer is also Jewish. Feinstein and Boxer were also the first female pair of U.S. senators to represent any state at the same time. Feinstein was reelected in 1994, 2000, 2006, 2012, and 2018. In 2012, she set the record for the most popular votes in any U.S. Senate election in history, with 7.86 million, making her the first Senate candidate to receive 7 million votes in an election. The record was previously held by Boxer, who received 6.96 million votes in her 2004 reelection, and before that by Feinstein in 2000 and 1992, when she became the first Democrat to get more than 5 million votes in a Senate race. In October 2017, Feinstein declared her intention to run for reelection in 2018. She lost the endorsement of the California Democratic Party's executive board, which opted to support State Senator Kevin de León, but finished first in the state's "jungle primary" and was reelected in the November 6 general election. Feinstein is the oldest sitting U.S. senator. On March 28, 2021, she became the longest-serving U.S. senator from California, surpassing Hiram Johnson. Upon Barbara Mikulski's retirement in January 2017, Feinstein became the longest-tenured female U.S. senator currently serving. Should she serve through November 5, 2022, Feinstein will become the longest-serving woman in U.S. Senate history.
In January 2021, Feinstein filed the initial Federal Election Commission paperwork needed to seek reelection in 2024, when she will be 91.

Committee assignments
Feinstein is the first and only woman to have chaired the Senate Rules Committee (2007–09) and the Select Committee on Intelligence (2009–15).
Committee on Appropriations
 Subcommittee on Agriculture, Rural Development, Food and Drug Administration, and Related Agencies
 Subcommittee on Commerce, Justice, Science, and Related Agencies
 Subcommittee on Defense
 Subcommittee on Energy and Water Development (Ranking Member, 116th Congress; Chair, 117th Congress)
 Subcommittee on Interior, Environment, and Related Agencies
 Subcommittee on Transportation, Housing and Urban Development, and Related Agencies
Committee on the Judiciary (Ranking Member, 115th and 116th Congresses)
 Subcommittee on Crime and Terrorism
 Subcommittee on Immigration, Border Security, and Refugees
 Subcommittee on Privacy, Technology and the Law
 Subcommittee on Human Rights and the Law (Chair, 117th Congress)
Committee on Rules and Administration (Chair, 110th Congress)
Select Committee on Intelligence (Chair, 111th, 112th, 113th Congresses)

Caucus memberships
Afterschool Caucuses
Congressional NextGen 9-1-1 Caucus
Senate New Democrat Coalition (defunct)

Political positions
According to the Los Angeles Times, Feinstein emphasized her centrism when she first ran for statewide offices in the 1990s, at a time when California was more conservative. Over time, she has moved left of center as California became one of the most Democratic states, although she has never joined the ranks of progressives, and was once a member of the Senate's moderate, now-defunct Senate New Democrat Coalition.

Military
While delivering the commencement address at Stanford Stadium on June 13, 1994, Feinstein said:

In 2017, she criticized the banning of transgender enlistments in the military under the Trump administration. Feinstein voted for Trump's $675 billion defense budget bill for FY 2019.

National security
Feinstein voted for the extension of the Patriot Act and the FISA provisions in 2012.

Health care
Feinstein has supported the Affordable Care Act, repeatedly voting to defeat initiatives aimed against it. She has voted to regulate tobacco as a drug; expand the Children's Health Insurance Program; override the president's veto of adding 2 to 4 million children to SCHIP eligibility; increase the Medicaid rebate for producing generic drugs; negotiate bulk purchases for Medicare prescription drugs; allow re-importation of prescription drugs from Canada; allow patients to sue HMOs and collect punitive damages; cover prescription drugs under Medicare; and means-test Medicare. She has voted against the Paul Ryan budget's Medicare choice, tax and spending cuts, and against allowing tribal Indians to opt out of federal healthcare. Feinstein's congressional voting record was rated 88% by the American Public Health Association (APHA), the figure ostensibly reflecting the percentage of time the senator voted the organization's preferred position. At an April 2017 town hall meeting in San Francisco, Feinstein said, "[i]f single-payer health care is going to mean the complete takeover by the government of all health care, I am not there." During a news conference at the University of California, San Diego in July 2017, she estimated that Democratic opposition would prove sufficient to defeat Republican attempts to repeal the ACA.
Feinstein wrote in an August 2017 op-ed that Trump could secure health care reform if he compromised with Democrats: "We now know that such a closed process on a major issue like health care doesn't work. The only path forward is a transparent process that allows every senator to bring their ideas to the table."

Capital punishment
When Feinstein first ran for statewide office in 1990, she favored capital punishment. In 2004, she called for the death penalty in the case of San Francisco police officer Isaac Espinoza, who was killed while on duty. By 2018, she opposed capital punishment.

Energy and environment
Feinstein achieved a score of 100% from the League of Conservation Voters in 2017. Her lifetime average score is 90%. Feinstein co-sponsored (with Oklahoma Republican Tom Coburn) an amendment to the Economic Development Revitalization Act of 2011 that eliminated the Volumetric Ethanol Excise Tax Credit. The Senate passed the amendment on June 16, 2011. Introduced in 2004, the subsidy provided a 45-cent-per-gallon credit on pure ethanol, and a 54-cent-per-gallon tariff on imported ethanol. These subsidies had resulted in an annual expenditure of $6 billion. In February 2019, when youth associated with the Sunrise Movement confronted Feinstein about why she did not support the Green New Deal, she told them "there's no way to pay for it" and that it could not pass a Republican-controlled Senate. In a tweet following the confrontation, Feinstein said that she remained committed "to enact real, meaningful climate change legislation."

Supreme Court nominations
In September 2005, Feinstein was one of five Democratic senators on the Senate Judiciary Committee to vote against Supreme Court nominee John Roberts, saying that Roberts had "failed to state his positions on such social controversies as abortion and the right to die". Feinstein stated in January 2006 that she would vote against Supreme Court nominee Samuel Alito, though she expressed disapproval of a filibuster: "When it comes to filibustering a Supreme Court appointment, you really have to have something out there, whether it's gross moral turpitude or something that comes to the surface. This is a man I might disagree with, [but] that doesn't mean he shouldn't be on the court." On July 12, 2009, Feinstein stated her belief that the Senate would confirm Supreme Court nominee Sonia Sotomayor, praising her for her experience and for overcoming "adversity and disadvantage". After President Obama nominated Merrick Garland to the Supreme Court in March 2016, Feinstein met with Garland on April 6 and later called on Republicans to do "this institution the credit of sitting down and meeting with him". In February 2017, Feinstein requested that Supreme Court nominee Neil Gorsuch provide information on cases in which he had assisted with decision-making regarding litigation or crafting strategy. In mid-March, she sent Gorsuch a letter stating her request had not been met. Feinstein formally announced her opposition to his nomination on April 3, citing Gorsuch's "record at the Department of Justice, his tenure on the bench, his appearance before the Senate and his written questions for the record". Following the nomination of Brett Kavanaugh to the Supreme Court of the United States, Feinstein received a July 30, 2018, letter from Christine Blasey Ford in which Ford accused Kavanaugh of having sexually assaulted her in the 1980s. Ford requested that her allegation be kept confidential.
Feinstein did not refer the allegation to the FBI until September 14, 2018, after the Senate Judiciary Committee had completed its hearings on Kavanaugh's nomination and "after leaks to the media about [the Ford allegation] had reached a 'fever pitch'". Feinstein faced "sharp scrutiny" for her decision to keep quiet about the Ford allegation for several weeks; she responded that she kept the letter and Ford's identity confidential because Ford had requested it. After an additional hearing and a supplemental FBI investigation, Kavanaugh was confirmed to the Supreme Court on October 6, 2018. Feinstein announced she would step down from her position on the Judiciary Committee after pressure from progressives over her performance at the Supreme Court nomination hearings of Justice Amy Coney Barrett in October 2020. Articles in The New Yorker and The New York Times cited unnamed Democratic senators and aides expressing concern over her advancing age and ability to lead the committee.

Weapons sales
In September 2016, Feinstein backed the Obama administration's plan to sell more than $1.15 billion worth of weapons to Saudi Arabia.

Mass surveillance and citizens' privacy
Feinstein co-sponsored PIPA on May 12, 2011. She met with representatives of technology companies, including Google and Facebook, in January 2012. A Feinstein spokesperson said she "is doing all she can to ensure that the bill is balanced and protects the intellectual property concerns of the content community without unfairly burdening legitimate businesses such as Internet search engines". Following her 2012 vote to extend the Patriot Act and the FISA provisions, and after the 2013 mass surveillance disclosures involving the National Security Agency (NSA), Feinstein promoted and supported measures to continue the information collection programs. Feinstein and Saxby Chambliss also defended the NSA's request to Verizon for all the metadata about phone calls made within the U.S. and from the U.S. to other countries. They said the information gathered by intelligence on the phone communications was used to connect phone lines to terrorists and that it did not contain the content of the phone calls or messages. Foreign Policy wrote that she had a "reputation as a staunch defender of NSA practices and [of] the White House's refusal to stand by collection activities targeting foreign leaders". In October 2013, Feinstein criticized the NSA for monitoring telephone calls of foreign leaders friendly to the U.S. In November 2013, she promoted the FISA Improvements Act bill, which included a "backdoor search provision" that allows intelligence agencies to continue certain warrantless searches as long as they are logged and "available for review" to various agencies. In June 2013, Feinstein called Edward Snowden a "traitor" after his leaks went public. In October 2013, she said she stood by that assessment. While praising the NSA, Feinstein accused the CIA of snooping on Senate committee computers and removing files, saying, "[t]he CIA did not ask the committee or its staff if the committee had access to the internal review or how we obtained it. Instead, the CIA just went and searched the committee's computer." She claimed the "CIA's search may well have violated the separation of powers principles embodied in the United States Constitution". After the 2016 FBI–Apple encryption dispute, Feinstein and Richard Burr sponsored a bill that would likely criminalize all forms of strong encryption in electronic communication between citizens.
The bill would require technology companies to design their encryption so that they can provide law enforcement with user data in an "intelligible format" when required to do so by court order. In 2020, Feinstein co-sponsored the EARN IT Act, which seeks to create a 19-member committee to decide a list of best practices websites must follow in order to be protected by Section 230 of the Communications Decency Act. Critics argue that the EARN IT Act would effectively outlaw end-to-end encryption, depriving users of secure, private communications tools.

Assault weapons ban
Feinstein introduced the Federal Assault Weapons Ban, which became law in 1994 and expired in 2004. In January 2013, about a month after the Sandy Hook Elementary School shooting, she and Representative Carolyn McCarthy proposed a bill that would "ban the sale, transfer, manufacturing or importation of 150 specific firearms including semiautomatic rifles or pistols that can be used with detachable or fixed ammunition magazines that hold more than 10 rounds and have specific military-style features, including pistol grips, grenade launchers or rocket launchers". The bill would have exempted 900 models of guns used for sport and hunting. Feinstein said of the bill, "The common thread in each of these shootings is the gunman used a semi-automatic assault weapon or large-capacity ammunition magazines. Military assault weapons only have one purpose, and in my opinion, it's for the military." The bill failed on a Senate vote of 60 to 40.

Marijuana legalization
Feinstein has opposed a number of reforms to cannabis laws at the state and federal level. In 2016 she opposed Proposition 64, the Adult Use of Marijuana Act, to legalize recreational cannabis in California. In 1996 she opposed Proposition 215 to legalize the medical use of cannabis in California. In 2015 she was the only Democrat at a Senate hearing to vote against the Rohrabacher–Farr amendment, legislation that limits the enforcement of federal law in states that have legalized medical cannabis. Feinstein cited her belief that cannabis is a gateway drug in voting against the amendment. In 2018, Feinstein softened her views on marijuana and cosponsored the STATES Act, legislation that would protect states from federal interference regarding both medical and recreational use. She also supported legislation in 2015 to allow medical cannabis to be recommended to veterans in states where its use is legal.

Immigration
In September 2017, after Attorney General Jeff Sessions announced the rescinding of the Deferred Action for Childhood Arrivals program, Feinstein acknowledged that the program's legality was questionable, citing this as a reason for Congress to pass a law in its place. In her opening remarks at a January 2018 Senate Judiciary Committee hearing, she said she was concerned the Trump administration's decision to terminate temporary protected status might be racially motivated, based on comments Trump made denigrating African countries, Haiti, and El Salvador.

Iran
Feinstein announced her support for the Iran nuclear deal framework in July 2015, tweeting that the deal would usher in "unprecedented & intrusive inspections to verify cooperation" on the part of Iran. On June 7, 2017, Feinstein and Senator Bernie Sanders issued dual statements urging the Senate to forgo a vote for sanctions on Iran in response to the Tehran attacks that occurred earlier in the day.
In July 2017, Feinstein voted for the Countering America's Adversaries Through Sanctions Act, which grouped together sanctions against Iran, Russia and North Korea.

Israel
In September 2016, in advance of UN Security Council resolution 2334 condemning Israeli settlements in the occupied Palestinian territories, Feinstein signed an AIPAC-sponsored letter urging Obama to veto "one-sided" resolutions against Israel. Feinstein opposed Trump's decision to recognize Jerusalem as Israel's capital, saying, "Recognizing Jerusalem as Israel's capital, or relocating our embassy to Jerusalem, will spark violence and embolden extremists on both sides of the debate."

North Korea
During a July 2017 appearance on Face the Nation after North Korea conducted a second test of an intercontinental ballistic missile, Feinstein said the country had proven itself a danger to the U.S. She also expressed her disappointment with China's lack of response. Responding to reports that North Korea had achieved successful miniaturization of nuclear warheads, Feinstein issued an August 8, 2017, statement insisting that isolation of North Korea had proven ineffective and that Trump's rhetoric was not helping resolve potential conflict. She also called for the U.S. to "quickly engage North Korea in a high-level dialogue without any preconditions". In September 2017, after Trump's first speech to the United Nations General Assembly, in which he threatened North Korea, Feinstein released a statement disagreeing with his remarks: "Trump's bombastic threat to destroy North Korea and his refusal to present any positive pathways forward on the many global challenges we face are severe disappointments."

China
Feinstein supports a conciliatory approach between China and Taiwan and fostered increased dialogue between high-level Chinese representatives and U.S. senators during her first term as senator. When asked about her relationship with Beijing, Feinstein said, "I sometimes say that in my last life maybe I was Chinese." Feinstein has criticized Beijing's missile tests near Taiwan and has called for the dismantlement of missiles pointed at the island. She promoted stronger business ties between China and Taiwan over confrontation, and suggested that the U.S. patiently "use two-way trade across Taiwan Strait as a platform for more political dialogue and closer ties". She believes that deeper cross-strait economic integration "will one day lead to political integration and will ultimately provide the solution" to the Taiwan issue. On July 27, 2018, reports surfaced that a Chinese staff member, who had worked as Feinstein's personal driver, gofer and liaison to the Asian-American community for 20 years, had been reporting to China's Ministry of State Security. According to the reports, the FBI had contacted Feinstein five years earlier to warn her about the employee. The employee was later interviewed by authorities and forced by Feinstein to retire. No criminal charges were filed against them.

Torture
Feinstein has served on the Senate's Select Committee on Intelligence since before 9/11, and her time on the committee has coincided with the Senate Report on Pre-war Intelligence on Iraq and the debates on the torture/"enhanced interrogation" of terrorists and alleged terrorists. On the Senate floor on December 9, 2014, the day parts of the Senate Intelligence Committee report on CIA torture were released to the public, Feinstein called the government's detention and interrogation program a "stain on our values and on our history".
Fusion GPS interview transcript release
On January 9, 2018, Feinstein caused a stir when, as ranking member of the Senate Judiciary Committee, she released a transcript of its August 2017 interview with Fusion GPS co-founder Glenn Simpson about the dossier regarding connections between Trump's campaign and the Russian government. She did this unilaterally after the committee's chairman, Chuck Grassley, refused to release the transcript.

Presidential politics
During the 1980 presidential election, Feinstein served on President Jimmy Carter's steering committee in California and as a Carter delegate to the Democratic National Convention. She was selected to serve as one of the four chairs of the 1980 Democratic National Convention. Feinstein endorsed former Vice President Walter Mondale during the 1984 presidential election. She and Democratic National Committee chairman Charles Manatt signed a contract in 1983, making San Francisco the host of the 1984 Democratic National Convention. As a superdelegate in the 2008 Democratic presidential primaries, Feinstein said she would support Clinton for the nomination. But after Barack Obama became the presumptive nominee, she fully backed his candidacy. Days after Obama amassed enough delegates to win the nomination, Feinstein lent her Washington, D.C., home to Clinton and Obama for a private one-on-one meeting. She did not attend the 2008 Democratic National Convention in Denver because she had fallen and broken her ankle earlier in the month. Feinstein chaired the United States Congress Joint Committee on Inaugural Ceremonies and acted as mistress of ceremonies, introducing each participant at the 2009 presidential inauguration. She is the first woman to have presided over a U.S. presidential inauguration. Ahead of the 2016 presidential election, Feinstein was one of 16 female Democratic senators to sign an October 20, 2013, letter endorsing Hillary Clinton for president. As the 2020 presidential election approached, Feinstein indicated her support for former Vice President Joe Biden. This came as a surprise to many pundits, due to the potential candidacy of fellow California senator Kamala Harris, of whom Feinstein said, "I'm a big fan of Sen. Harris, and I work with her. But she's brand-new here, so it takes a little bit of time to get to know somebody."

Awards and honors
Feinstein was awarded the honorary degree of Doctor of Laws from Golden Gate University in San Francisco on June 4, 1977. She was awarded the Legion of Honour by France in 1984. Feinstein received the Woodrow Wilson Award for public service from the Woodrow Wilson Center of the Smithsonian Institution on November 3, 2001, in Los Angeles. In 2002, Feinstein won the American Medical Association's Nathan Davis Award for "the Betterment of the Public Health". She was named one of The Forward 50 in 2015.

Personal life
Feinstein has been married three times. She married Jack Berman (died 2002), who was then working in the San Francisco District Attorney's Office, in 1956. She and Berman divorced three years later. Their daughter, Katherine Feinstein Mariano (born 1957), was the presiding judge of the San Francisco Superior Court for 12 years, through 2012. In 1962, shortly after beginning her career in politics, Feinstein married her second husband, neurosurgeon Bertram Feinstein, who died of colon cancer in 1978. Feinstein was then married to investment banker Richard C. Blum from 1980 until his death from cancer in 2022.
In 2003, Feinstein was ranked the fifth-wealthiest senator, with an estimated net worth of $26 million. Her net worth increased to between $43 million and $99 million by 2005. Her 347-page financial-disclosure statement, characterized by the San Francisco Chronicle as "nearly the size of a phone book", claims to draw clear lines between her assets and her husband's, with many of her assets in blind trusts. Feinstein had an artificial cardiac pacemaker inserted at George Washington University Hospital in January 2017. In the fall of 2020, following Ruth Bader Ginsburg's death and the confirmation hearings for Supreme Court Justice Amy Coney Barrett, there was concern about Feinstein's ability to continue performing her job. She said there was no cause for concern and that she had no plans to leave the Senate.

In mass media
The 2019 film The Report, about the Senate Intelligence Committee investigation into the CIA's use of torture, extensively features Feinstein, portrayed by Annette Bening.

See also
Rosalind Wiener Wyman, co-chair of Feinstein political campaigns
Women in the United States Senate
2020 congressional insider trading scandal

References

Additional sources
Roberts, Jerry (1994). Dianne Feinstein: Never Let Them See You Cry, HarperCollins.
Talbot, David (2012). Season of the Witch: Enchantment, Terror and Deliverance in the City of Love, New York: Simon and Schuster. 480 p.
Weiss, Mike (2010). Double Play: The Hidden Passions Behind the Double Assassination of George Moscone and Harvey Milk, Vince Emery Productions.

External links
Senator Dianne Feinstein official U.S. Senate website
Campaign website
Membership at the Council on Foreign Relations
Statements Op-ed archives at Project Syndicate
Dianne Feinstein's Opening Remarks at the 2009 Presidential Inauguration at AmericanRhetoric.com, video, audio and text

1933 births
20th-century American politicians
20th-century American women politicians
21st-century American politicians
21st-century American women politicians
Activists from California
Schools of the Sacred Heart alumni
American gun control activists
American people of German-Jewish descent
American people of Polish-Jewish descent
American people of Russian-Jewish descent
American women activists
California Democrats
Democratic Party United States senators from California
Female United States senators
Women in California politics
Jewish activists
Jewish mayors of places in the United States
Jewish United States senators
Jewish women politicians
Living people
Mayors of San Francisco
Members of the Council on Foreign Relations
San Francisco Board of Supervisors members
Women city councillors in California
Stanford University alumni
United States senators from California
Women mayors of places in California
Jewish American people in California politics
21st-century American Jews
225169
https://en.wikipedia.org/wiki/Windows%20Media%20Video
Windows Media Video
Windows Media Video (WMV) is a series of video codecs and their corresponding video coding formats developed by Microsoft. It is part of the Windows Media framework. WMV consists of three distinct codecs: the original video compression technology, known simply as WMV, was designed for Internet streaming applications as a competitor to RealVideo; the other two compression technologies, WMV Screen and WMV Image, cater for specialized content. After standardization by the Society of Motion Picture and Television Engineers (SMPTE), WMV version 9 was adapted for physical-delivery formats such as HD DVD and Blu-ray Disc and became known as VC-1. Microsoft also developed a digital container format called Advanced Systems Format to store video encoded by Windows Media Video.

History
In 2003, Microsoft drafted a video compression specification based on its WMV 9 format and submitted it to SMPTE for standardization. The standard was officially approved in March 2006 as SMPTE 421M, better known as VC-1, thus making the WMV 9 format an open standard. VC-1 became one of the three video formats for the Blu-ray video disc, along with H.262/MPEG-2 Part 2 and H.264/MPEG-4 AVC.

Container format
A WMV file uses the Advanced Systems Format (ASF) container format to encapsulate the encoded multimedia content. While ASF can encapsulate multimedia in encodings other than those the WMV file standard specifies, such ASF files should use the .asf file extension and not the .wmv extension. The ASF container can optionally support digital rights management using a combination of elliptic curve cryptography key exchange, the DES block cipher, a custom block cipher, the RC4 stream cipher and the SHA-1 hashing function. Although WMV is generally packed into the ASF container format, it can also be put into the Matroska container format (with file extension .mkv) or the AVI container format (extension .avi). One common way to store WMV in an AVI file is to use the WMV 9 Video Compression Manager (VCM) codec implementation.

Video compression formats
Windows Media Video
Windows Media Video (WMV) is the most recognized video compression format within the WMV family. Usage of the term WMV often refers to the Microsoft Windows Media Video format only. Its main competitors are MPEG-4 AVC, AVS, RealVideo, and MPEG-4 ASP. The first version of the format, WMV 7, was introduced in 1999 and was built upon Microsoft's implementation of MPEG-4 Part 2. Continued proprietary development led to newer versions of the format, but the bit stream syntax was not frozen until WMV 9. While all versions of WMV support variable bit rate, average bit rate, and constant bit rate, WMV 9 introduced several important features including native support for interlaced video, non-square pixels, and frame interpolation. WMV 9 also introduced a new profile titled Windows Media Video 9 Professional, which is activated automatically whenever the video resolution exceeds 300,000 pixels (e.g., 528 px × 576 px, 640 px × 480 px or 768 px × 432 px and beyond) and the bit rate exceeds 1,000 kbit/s. It is targeted towards high-definition video content, at resolutions such as 720p and 1080p. The Simple and Main profile levels in WMV 9 are compliant with the same profile levels in the VC-1 specification. The Advanced Profile in VC-1 is implemented in a new WMV format called Windows Media Video 9 Advanced Profile. It improves compression efficiency for interlaced content and is made transport-independent, allowing it to be encapsulated in an MPEG transport stream or RTP packet format. The Advanced Profile format is not, however, compatible with previous WMV 9 formats.
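Whatever the codec variant, the bitstream normally travels in the ASF container described above, so a program can cheaply recognize a WMV/ASF file by checking for the well-known GUID of the ASF Header Object in the first 16 bytes. A minimal Python sketch; the GUID value is the standard ASF signature, while the function and file names are illustrative:

```python
# First 16 bytes of every ASF file: the ASF Header Object GUID
# 75B22630-668E-11CF-A6D9-00AA0062CE6C, serialized little-endian.
ASF_HEADER_GUID = bytes.fromhex("3026b2758e66cf11a6d900aa0062ce6c")

def looks_like_asf(path):
    """Return True if the file begins with the ASF Header Object GUID."""
    with open(path, "rb") as f:
        return f.read(16) == ASF_HEADER_GUID

print(looks_like_asf("clip.wmv"))  # "clip.wmv" is an illustrative file name
```

Note that this only identifies the container; determining which WMV codec is inside requires parsing the stream-properties objects deeper in the header.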
WMV is a mandatory video format for PlaysForSure-certified online stores and devices, as well as Portable Media Center devices. The Microsoft Zune, Xbox 360, and Windows Mobile-powered devices with Windows Media Player, as well as many uncertified devices, support the format. WMV HD mandates the use of WMV 9 for its certification program, at quality levels specified by Microsoft. WMV used to be the only supported video format for the Microsoft Silverlight platform, but the H.264 format is also supported starting with version 3.

Windows Media Video Screen
Windows Media Video Screen (WMV Screen) is a family of video formats that specialise in screencast content. Its encoders can capture live screen content, or convert video from third-party screen-capture programs into WMV 9 Screen files. The formats work best when the source material is mainly static and contains a small color palette. One of the uses for the format is computer step-by-step demonstration videos. The first version of the format was WMV 7 Screen. The second version, WMV 9 Screen, supports VBR encoding in addition to CBR. Additionally there is MSA1 (also known as the "MS ATC Screen codec" or "MSS3"), which is used in Live Meeting 2007. FourCCs for the formats are MSS1, MSS2 and MSA1.

Windows Media Video Image
Windows Media Video Image (WMV Image) is a video slideshow format. The format works by applying timing, panning and transition effects to a series of images during playback. The codec achieves a higher compression ratio and image quality than WMV 9 for still images, as files encoded with WMV Image store static images rather than full-motion video. Since the format relies on the decoder (player) to generate video frames in real time, playing WMV Image files even at moderate resolutions (e.g. 30 frames per second at 1024 px × 768 px) requires heavy computer processing. The latest version of the format, WMV 9.1 Image, used by Photo Story 3, features additional transformation effects, but is not compatible with the original WMV 9 Image format. Hardware support for WMV Image is available from Portable Media Centers and Windows Mobile-powered devices with Windows Media Player 10 Mobile. Since no known domestic DVD player supports this format, users of Photo Story 3 wishing to generate material capable of being played in a DVD player must first convert to MPEG-2 before burning a DVD (average file sizes in MPEG-2 are 5 to 6 times that of the .wmv file).

Audio compression formats
The audio format used in conjunction with Windows Media Video is typically some version of Windows Media Audio, or in rarer cases, the deprecated Sipro ACELP.net audio format. Microsoft recommends that ASF files containing non-Windows Media formats use the generic .asf file extension.

Players
Software that can play WMV files includes Windows Media Player, RealPlayer, MPlayer, Media Player Classic, VLC Media Player and K-Multimedia Player. The Microsoft Zune media management software supports the WMV format, but uses a Zune-specific variation of Windows Media DRM, which is used by PlaysForSure. Many third-party players exist for various platforms such as Linux that use the FFmpeg implementation of the WMV format. On the Macintosh platform, Microsoft released a PowerPC version of Windows Media Player for Mac OS X in 2003, but further development of the software then ceased.
From January 2006 to May 2014, Microsoft endorsed and distributed the third-party Flip4Mac, a QuickTime component developed by Telestream that allowed Macintosh users to play WMV files in any player that used the QuickTime framework. Telestream ended sales of Flip4Mac on 1 July 2019 and officially ended support on 28 June 2020.

Encoders
Many programs can export video in WMV format; a few examples are Windows Movie Maker, Windows Media Encoder, Microsoft Expression Encoder, Sorenson Squeeze, Sony Vegas Pro, AVS Video Editor, VSDC Free Video Editor, Telestream Episode, Telestream FlipFactory, and FFmpeg. Programs that encode using the WMV Image format include Windows Media Encoder, AVS Video Editor, and Photo Story.

Digital rights management
While none of the WMV formats themselves contain any digital rights management facilities, the ASF container format, in which a WMV stream may be encapsulated, can. Windows Media DRM, which can be used in conjunction with WMV, supports time-limited subscription video services such as those offered by CinemaNow. Windows Media DRM, a component of PlaysForSure and Windows Media Connect, is supported on many modern portable video devices and streaming media clients such as the Xbox 360.

Criticism
WMV has been the subject of numerous complaints from users and the press. Users dislike the digital rights management system which is sometimes attached to WMV files. In 2007, the loss of the ability to restore licenses for WMV files in Windows Media Player 11 was not positively received.

See also
JPEG XR (formerly HD Photo), an image file format and codec developed by Microsoft

References

External links
Description of the algorithm used for Windows Media encryption
Demonstration of WMV 9 delivering 720p video at 1.8 Mbit/s ABR
Demonstration of WMV 9 delivering 1080p video at 10 Mbit/s ABR
History of Windows Media Player (archived 2009)

Video codecs
Microsoft proprietary codecs
Microsoft Windows multimedia technology
Digital rights management systems
226947
https://en.wikipedia.org/wiki/CNBC%20Europe
CNBC Europe
CNBC Europe (referred to on air simply as CNBC) is a business and financial news television channel which airs across Europe. The station is based in London, where it shares the Adrian Smith-designed 10 Fleet Place building with organisations including Dow Jones & Company. Along with CNBC Asia, the channel is operated by the Singapore-headquartered CNBC subsidiary company CNBC International, which is in turn wholly owned by NBCUniversal. As the most viewed pan-European financial TV channel according to the 2010 EMS survey, the broadcaster reaches over 100 million households across the continent. CNBC Europe produces four hours of live programming each weekday and airs reports and content for its global sister stations and the outlets of NBC News.

History
1990s
CNBC Europe began broadcasts in March 1996, as a wholly owned subsidiary of NBC. On 9 December 1997, the channel announced that it would merge with the Dow Jones news channel in Europe, European Business News (EBN), which had been on air since 1995. The merger took place in February 1998, after which the channel became officially known as "CNBC Europe – A Service of NBC and Dow Jones".

2000s
CNBC Europe has generally leaned on the on-air graphical look of its U.S. counterpart. However, in June 2003, it revamped a number of its programmes, taking many of them away from the U.S. formats. CNBC Europe re-launched its on-air image in September 2004, but instead of adapting the U.S. title sequences for programmes, designed all of its title sequences itself from scratch (while still using the U.S. music adopted in September 2003). In July 2005, NBC Universal announced that it would be acquiring the Dow Jones stake in CNBC Europe, subject to required regulatory clearances. On 30 December 2005, CNBC Europe became a wholly owned subsidiary of NBC Universal. Dow Jones continues to provide content to the channel. On 1 January 2006, in line with this, the channel dropped the "A Service of NBC Universal and Dow Jones" tagline. On 18 September 2006, CNBC Europe debuted a new graphics package, similar to that used by its U.S. counterpart (first seen in the United States on 19 December 2005). Like CNBC Asia (which debuted a new graphics package similar to CNBC U.S. and Europe on 30 October 2006), it elected to keep the previous theme music (CNBC Asia did so until March 2007). In addition, CNBC Europe also elected to keep its September 2004 opening titles for most programmes. The channel adopted a new schedule on 26 March 2007 which included a new pan-regional programme, Capital Connection. New title sequences were given to Power Lunch Europe and Europe Tonight to coincide with changes to the form and content of those programmes, but unlike CNBC Asia, no other changes were made to the channel's on-air look on this date (although Capital Connection uses CNBC Asia's new graphics, as it is produced by that channel).
On 1 December 2008 the channel relaunched its flagship programme Squawk Box Europe, with a new look not derived from CNBC U.S. at all. At the same time a third line was added to the ticker detailing general news stories. On 15 December 2008 the channel announced that the long-running show Power Lunch Europe would be removed from the schedule and replaced, in Ireland and the United Kingdom only, with a 12-week run of Strictly Money, a new programme focussing specifically on UK issues. This marked the creation of a new UK/Ireland opt-out for CNBC Europe. The new schedule aired from 12 January 2009, with Strictly Money remaining in the schedule until its cancellation in March 2011. CNBC Europe debuted new lower thirds, completely different from those of its sister U.S. and Asian channels, on 27 July 2009.
2010s–2020s
On 22 January 2010, the station ended its encryption on digital satellite television in the UK to increase its viewer footprint to an estimated 11 million households. The channel was subsequently added to Freesat on 23 February 2010. A significantly revamped studio was unveiled in May 2011 along with a new format for various programmes. The network was formally merged with CNBC Asia in December 2011 to form a new Singapore-based company, CNBC International, to manage the two stations. As a result of the merger, CNBC Asia managing director Satpal Brainch was appointed to lead the new company, with his European counterpart Mick Buckley leaving his post. On 31 March 2014, CNBC Europe launched in widescreen (16:9) and changed its lower thirds to match the on-air style of its sister CNBC Asia channel, which also launched in widescreen on the same day. The new look also saw the removal of the on-screen clock, which CNBC Europe had shown during live European and American programming since the channel was launched. This new on-air style did not carry over to CNBC US, which continued to use the old on-air style. CNBC US would ultimately follow with its own launch in 16:9 widescreen on 13 October 2014. An on-screen clock returned on this day (13 October), but it was a world clock with the time from various financial capitals shown on a rotating basis. CNBC Europe's current on-air style (based on the US design used since 13 October 2014) was launched on 9 March 2015, exactly a month after its sister Asia channel's. On 10 November 2015, CNBC announced cutbacks to its international television operation, including the closure of its Paris and Tokyo bureaus, and a two-hour reduction in local programming from London (to be filled with more programming from the U.S. feed). The cuts, which resulted in the layoff of 15 employees, came as part of a wider focus on providing European market coverage via digital platforms, such as the CNBC website. The programming cutbacks from London took effect on 4 January 2016. Only two programmes, Squawk Box Europe and the European version of Street Signs (the latter debuted on the same day), are produced out of CNBC Europe's Fleet Place studios in London. On 1 February 2019, CNBC Europe launched free-to-air in HD on Astra 28.2°E; on 19 June 2021, its free-to-air HD frequency on Astra 28.2°E changed to 12.168 GHz. On 12 November 2020, CNBC Europe also launched free-to-air in HD on Hot Bird 13°E.
Ratings
Unlike its American sister station, CNBC Europe does not have its ratings measured on a daily basis: the channel resigned its membership of the UK's Broadcasters' Audience Research Board in September 2004 in protest at its refusal to incorporate out-of-home viewing into its audience figures. The network instead focuses its viewership measure strictly on the top 20% income bracket, where figures are compiled as part of Synovate's European Media and Marketing Survey (EMS). CNBC Europe's monthly viewership grew steadily from 1.7 million to 6.7 million in the decade after its 1998 merger with European Business News, with annual growth coming in at around 10%. In the EMS survey covering 2010, the network's monthly reach was reported to be 6.8 million.
Programming
European Business Day
Current programming
CNBC Europe produces live business day programming from 7h to 11h CET. The major business day programmes, all broadcast from London, are:
Squawk Box Europe – Geoff Cutmore, Steve Sedgwick & Karen Tso
Street Signs – Joumanna Bercetche
Decision Time (for live coverage of UK and European Central Bank lender rate announcements) – Joumanna Bercetche
In addition, CNBC Europe produces other business-related programmes. These programmes are premiered at 23h CET and repeated at various times over the weekend. These are:
Access: Middle East
The Edge
Marketing Media Money
The CNBC Conversation
During the business day, the CNBC Europe Ticker is displayed during both programmes and commercials, providing information on share prices from the leading European stock exchanges (this means that advertisements on CNBC Europe are formatted differently from those on most television channels, taking up only part of the screen). When programming from CNBC Asia is shown, that network's ticker is displayed. A stack (or bug) providing index and commodity prices was displayed in the bottom right-hand corner of the screen until December 2005, when it was replaced with a strip across the top of the screen (in line with the other CNBC channels). The ticker was decreased in size at the same time. The bug was moved back to the bottom right-hand corner of the screen on 13 October 2014.
Past programming
Rebroadcasts of CNBC U.S. and CNBC Asia
In addition to its own programming, CNBC Europe also broadcasts live almost all of the business day programming from CNBC U.S. Worldwide Exchange, Squawk on the Street, TechCheck, Fast Money Halftime Report, Power Lunch and Closing Bell are all broadcast in their entirety. Squawk Box is also now shown in full right across Europe, but prior to March 2011 only the final two hours of the show were available to viewers in the UK and Ireland because CNBC Europe broadcast Strictly Money to UK and Irish viewers. However, on the day when CNBC Europe broadcasts its coverage of the monthly announcements of the UK and European Central Bank lender rates, only the first hour of Squawk Box is shown on CNBC Europe. Squawk Alley was originally not shown because it clashed with European Closing Bell, until the latter show was cancelled on 18 December 2015 (Squawk Alley has since been replaced by TechCheck, which debuted 12 April 2021).
Fast Money is occasionally seen on CNBC Europe, such as during major events, as a way of providing the channel with continued live programming. Mad Money has yet to be seen on CNBC Europe, and The News with Shepard Smith is currently shown between November and March on a four-hour tape delay to fill the one-hour gap between the end of Street Signs and the start of Capital Connection, created by Europe not being on Daylight Saving Time. While the U.S. markets are open, the CNBC Europe Ticker is modified to carry U.S. share prices. A break filler, consisting of HotBoards (CNBC's custom stock price graphs), is often broadcast during U.S. programming, owing to the increased number of advertising breaks. In addition, for many years a recorded Europe Update, a 90-second run-down of the European closing prices, was broadcast during the evening, and for a time in 2013 this concept was extended into daytime, when CNBC Europe broadcast brief European updates twice an hour while the network was carrying CNBC U.S.'s Squawk programmes. These segments were broadcast live and, as with the recorded evening updates, were inserted into commercial breaks. Europe Update has now been discontinued and has been replaced with an insert detailing current items on CNBC Europe's website. The channel also broadcasts live the majority of CNBC Asia's output. However, broadcasts of CNBC Asia's live programming had been scaled back in the late 2000s as the channel had broadcast teleshopping and, latterly, poker programming overnight. During the period when poker was shown, CNBC Europe only broadcast the final hour (final two hours between April and October) of Asian programming, apart from late Sunday night/early Monday morning when the channel broadcast CNBC Asia's full morning line-up. In 2009, the majority of Asian programming was reinstated, although the entire broadcast day of CNBC Asia is still only shown on Sunday night/early Monday morning.
Other programmes
For two hours each weeknight and all weekend the channel does not air live business programming. The weeknight non-business output runs from 10 pm until midnight UK time and consists of an edition of a weekly business magazine show, an edition of Late Night with Seth Meyers and a live broadcast of NBC Nightly News with Lester Holt. At the weekend, programming consists of weekly business magazine programmes such as On the Money and Managing Asia, news and current affairs, sport, several editions of chat show Late Night with Seth Meyers, paid religious programming and special programmes, such as CNBC on Assignment, dedicated to the world of financial news and politics. The channel also broadcasts four hours of sports programming under the banner of CNBC Sports. The block airs on Saturday and Sunday between 10 am and 2 pm UK time. The middle two hours are devoted to highlights of the US PGA Golf Tour, with the rest made up of other highlights and of the magazine show Mobil 1 The Grid. The channel airs some programmes from sister network NBC. These include NBC talk show Late Night with Seth Meyers and NBC Nightly News with Lester Holt. The channel also airs NBC's Sunday morning political talk show Meet the Press, showing it a few hours after its live broadcast.
Paid programming
CNBC Europe carries paid religious programmes. They are shown in a two-hour block on Sundays, between 7am and 9am, with one 30-minute programme broadcast on Saturday mornings. Previously, the channel had given over much of the overnight hours to teleshopping.
Most teleshopping output was broadcast at the weekend, although for a time in the mid-2000s teleshopping was broadcast overnight during the week. Teleshopping ended on CNBC Europe in the early 2010s.
Former programming
CNBC Life
In February 2008 a weekend nine-hour CNBC Life strand was launched. This slot, which ran during the afternoon and evening, incorporated the already established weekend afternoon coverage of sports such as PGA Tour golf, tennis and yachting with new programming which included travel programmes produced by the Travel Channel, output from The Luxury Channel, news and current affairs broadcasts, as well as programmes from sister channels, such as The Tonight Show and Meet the Press. In September 2010 CNBC Europe began airing a 20-part series of operas and ballets on Sunday afternoons under the title of CNBC Performance; it ran until the end of January 2011 and was repeated during the rest of 2011. From 2012 CNBC Life began to be wound down in favour of a schedule more focused on the channel's core remit of business programming, and the lifestyle, travel and CNBC Performance elements started to be removed from the schedule. The CNBC Life branding finally disappeared in 2018.
Simulcasts of MSNBC
The channel used to air American news channel MSNBC during weekend overnights and during the afternoon on American public holidays. CNBC Europe also carried MSNBC during major non-business-related breaking news. By the end of the 2000s, CNBC Europe had stopped showing MSNBC. Standard weekend programming replaced the overnight broadcasts, and on American bank holidays CNBC Europe now broadcasts replays of its weekly magazine programmes. Coverage of non-business-related breaking news now comes from either CNBC U.S. or NBC News.
Extended programming
In the past CNBC Europe has broadcast extended European programming on U.S. bank holidays. In the mid-2000s this took the form of an extended edition of Power Lunch Europe; during 2009 and 2010 CNBC broadcast Strictly Money to the whole of Europe; and in 2012 and 2013 the network broadcast a three-hour edition of Worldwide Exchange and a two-hour edition of European Closing Bell. In 2014 and 2015, CNBC Europe did not broadcast any extended programming on U.S. bank holidays, although on many of the 2016 American bank holidays, CNBC Europe broadcast two-hour editions of Street Signs. Since the start of 2016, CNBC Europe has broadcast almost all of the CNBC US live business day schedule. Previously, the full schedule had only been seen on Europe-wide bank holidays which were regular working days in the United States (CNBC Asia produced Worldwide Exchange on those days) and between Christmas and the new year, as CNBC Europe produces less European programming at this time. On the day each month when the bank lending rates are announced, CNBC Europe broadcasts Decision Time, which airs between 1300 CET and 1500 CET. The channel provides extra programming during the annual January gathering in Davos of the World Economic Forum. In addition to coverage during its regular programmes, the channel broadcasts a daily one-hour special programme beginning at 1600 CET. The channel also occasionally opts out of American programming for one-off interviews and/or special coverage of a specific event.
Simulcasts outside Europe
All of CNBC Europe's live programming is broadcast in its entirety in the U.S. on CNBC World, and Squawk Box Europe and Street Signs are shown on CNBC Asia.
The CNBC Europe ticker is seen on CNBC World but not on CNBC Asia and CNBC U.S.
Presenters
Current anchors and correspondents
Staff are based in London unless otherwise stated.
Joumanna Bercetche
Geoff Cutmore
Hadley Gamble (Abu Dhabi)
Rosanna Lockwood
Steve Sedgwick – also CNBC Europe's OPEC reporter
Julianna Tatelbaum
Karen Tso
Annette Weisbach (Frankfurt) – also CNBC's European Central Bank reporter
Contributors
Tania Bryer
Past anchors and reporters
Becky Anderson (now with CNN International)
Beccy Barr (formerly Rebecca Meehan; later with BBC North West; left the television industry in July 2019, now a firefighter)
Louisa Bojesen (left 28 April 2017)
Julia Chatterley (later with Bloomberg Television and now with CNN International)
Ros Childs (now with Australia's ABC News Channel)
Emma Crosby (now with Sky News)
Anna Edwards (now with Bloomberg Europe)
Raymond Frenken (was Amsterdam market reporter and EU correspondent)
Wilfred Frost (later with CNBC US; left 16 February 2022)
Yousef Gamal El-Din (now with Bloomberg)
Aaron Heslehurst (now with BBC World News)
Simon Hobbs (later with CNBC US; left in July 2016)
John Holland (was Frankfurt bureau chief)
Guy Johnson (now with Bloomberg Europe)
Shellie Karabell (Paris bureau chief 1999–2004; Time Warner Cable, Forbes.com)
Susan Li (moved to CNBC US; left in August 2017, now with Fox Business Network)
Willem Marx (now with NBC News as a London-based correspondent)
Ed Mitchell
Seema Mody (rejoined CNBC US in September 2015)
Stéphane Pedrazzi (now at BFM Business)
Nigel Roberts
Carolin Roth
Patricia Szarvas (now a moderator, media coach and writer)
Silvia Wadhwa
Ross Westgate (now with Infinity Creative Media)
Affiliate channels and partnerships
There is a feed of CNBC Europe for Scandinavian countries called CNBC Nordic. It shows identical programmes to CNBC Europe but has a ticker focussing on Scandinavian stock exchanges. The channel also operates a separate feed for the United Kingdom. Before late 2008 this was used only occasionally, usually for advertising purposes. The network has since begun to actively market the feed to potential advertisers, and at the start of 2009 its first UK-specific programming, Strictly Money, began, initially as a 12-week experiment, but the programme continued to air until March 2011. Now the only UK-specific programming is the occasional weekend teleshopping broadcast. Viewers in Ireland also receive this feed. The following European channels also fall under the CNBC brand:
CNBC-e, the defunct Turkish version of CNBC. This was unique in the CNBC family in that, after business day hours, it broadcast popular general entertainment programmes and films, plus children's programming from Nickelodeon. Owned and operated under license by Doğuş Holding; NBCUniversal's share was acquired in 2015 by Discovery Communications and the channel was renamed TLC.
Class CNBC (formerly CFN-CNBC), the Italian version of the network, operated in conjunction with Class Editori and Mediaset.
CNBC Arabiya, the Arabic version of the channel. Owned and operated under license by Middle East Business News.
On 10 July 2007, CNBC Europe announced the creation of a new Polish business channel, TVN CNBC Biznes, operated under license by TVN. The channel launched on 3 September, and shares resources with CNBC Europe through a permanent link to their London headquarters. In December 2003, CNBC Europe signed an agreement with German television news channel N24 to provide regular updates from the Frankfurt Stock Exchange.
Correspondents Silvia Wadhwa, Patricia Szarvas and Annette Weisbach report throughout the day in German. In June 2008 the channel also began producing thrice-daily video reports in German for the website of Focus magazine.
Other services
CNBC Europe is narrowcast in London's black cabs on the Cabvision network. Since 2005, CNBC Europe has also produced the monthly magazine CNBC Business (formerly named CNBC European Business) in conjunction with Ink Publishing. The magazine is aimed at senior business people and business travellers.
References
External links
CNBC global channels
Television channels in the Netherlands
Television channels in Flanders
Television channels in Belgium
Television channels in the United Kingdom
Television channels and stations established in 1996
228547
https://en.wikipedia.org/wiki/Mozilla%20Thunderbird
Mozilla Thunderbird
Mozilla Thunderbird is a free and open-source cross-platform email client, personal information manager, news client, RSS and chat client developed by the Mozilla Foundation and operated by its subsidiary MZLA Technologies Corporation. The project strategy was originally modeled after that of Mozilla's Firefox web browser.
Features
Thunderbird is an email, newsgroup, news feed, and chat (XMPP/IRC) client with personal information manager (PIM) functionality, inbuilt since version 78.0 and previously available from the Lightning calendar extension. Additional features are available from extensions.
Message management
Thunderbird manages multiple email, newsgroup, and news feed accounts and supports multiple identities within accounts. Features such as quick search, saved search folders ("virtual folders"), advanced message filtering, message grouping, and tags help manage and find messages. On Linux-based systems, system mail (movemail) accounts were supported until version 91.0. Thunderbird provides basic support for system-specific new email notifications and can be extended with advanced notification support using an add-on.
Junk filtering
Thunderbird incorporates a Bayesian spam filter, a whitelist based on the included address book, and can also understand classifications by server-based filters such as SpamAssassin.
Extensions and themes
Extensions allow the addition of features through the installation of XPInstall modules (known as "XPI" or "zippy" installation) via the add-ons website, which also features update functionality to keep extensions current. Thunderbird supports a variety of themes for changing its overall look and feel. These packages of CSS and image files can be downloaded via the add-ons website at Mozilla Add-ons.
Standards support
Thunderbird follows industry standards for email:
POP: basic email retrieval protocol.
IMAP: Thunderbird has implemented many of the capabilities in IMAP, in addition to adding its own extensions and the de facto standards by Google and Apple.
LDAP address auto-completion.
S/MIME: inbuilt support for email encryption and signing using X.509 keys provided by a centralised certificate authority.
OpenPGP: inbuilt support for email encryption and signing since version 78.2.1, while older versions used extensions such as Enigmail.
For web feeds (e.g. news aggregators), it supports Atom and RSS. For chat, it supports the IRC and XMPP protocols. For newsgroups, it uses NNTP and supports NNTPS.
File formats supported
Thunderbird provides pluggable mailbox format support, but this feature is not yet enabled due to related work in progress. The mailbox formats supported are:
mbox – Unix mailbox format (one file holding many emails)
maildir – known as maildir-lite (one file per email); "there are still many bugs", so this is disabled by default.
Thunderbird also uses Mork and (since version 3) MozStorage (which is based on SQLite) for its internal database. Mork was due to be replaced with MozStorage in Thunderbird 3.0, but the 8.0 release still uses the Mork file format.
Big file linking
Since version 38, Thunderbird has integrated support for automatic linking of large files instead of attaching them directly to the mail message.
HTML formatting and code insertion
Thunderbird provides a WYSIWYG editor for composing messages formatted with HTML (the default). The delivery format auto-detect feature will send unformatted messages as plain text (controlled by a user preference). Certain special formatting like subscript, superscript and strikethrough is available from the Format menu. The Insert > HTML menu provides the ability to edit the HTML source code of the message. There is basic support for HTML template messages, which are stored in a dedicated templates folder for each account. A generic sketch of the resulting message structure appears below.
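The following is a minimal, generic sketch of that plain-text/HTML pairing using Python's standard email library. It illustrates the multipart/alternative structure a client like Thunderbird emits, not Thunderbird's internal code; the addresses and content are placeholders.

```python
# Generic sketch of a multipart/alternative message: the structure a mail
# client such as Thunderbird produces when a message carries both a
# plain-text and an HTML body. Addresses and content are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Status report"
msg["From"] = "sender@example.org"
msg["To"] = "recipient@example.org"

# Plain-text part: what a plain-text-only delivery falls back to.
msg.set_content("Meeting moved to 3 pm.")
# HTML alternative: the formatted version composed in a WYSIWYG editor.
msg.add_alternative("<p>Meeting moved to <strong>3 pm</strong>.</p>",
                    subtype="html")

print(msg)  # serialises as multipart/alternative with both parts
```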
Limitations and known issues
As with any software, there may be limitations on the number and sizes of files and objects represented. For example, POP3 folders are subject to filesystem design limitations, such as maximum file sizes on filesystems that do not have large-file support, as well as possible limitations on long filenames, and other issues.
Cross-platform support
Thunderbird runs on a variety of platforms. Releases available on the primary distribution site support the following operating systems:
Linux
Windows
macOS
Unofficial ports are available for:
FreeBSD
OpenBSD
Ports of older versions are available for OS/2 (including ArcaOS and eComStation). The source code is freely available and can be compiled to run on a variety of other architectures and operating systems.
Internationalization and localization
With contributors all over the world, Thunderbird has been translated into more than 65 languages, although email addresses are currently limited to ASCII local parts. Thunderbird does not yet support SMTPUTF8 (RFC 6531) or Email Address Internationalization.
Security
Thunderbird provides security features such as TLS/SSL connections to IMAP and SMTP servers. It also offers inbuilt support for secure email with digital signing and message encryption through OpenPGP (using public and private keys) or S/MIME (using certificates). Any of these security features can take advantage of smartcards with the installation of additional extensions. Other security features may be added through extensions. Up to version 68, the Enigmail extension was required for OpenPGP support (now inbuilt). Optional security protections also include disabling the loading of remote images within messages, enabling only specific media types (sanitizer), and disabling JavaScript. The French military uses Thunderbird and contributes to its security features, which are claimed to match the requirements for NATO's closed messaging system.
History
Originally launched as Minotaur shortly after Phoenix (the original name for Mozilla Firefox), the project failed to gain momentum. With the success of Firefox, however, demand increased for a mail client to go with it, and the work on Minotaur was revived under the new name of Thunderbird and migrated to the new toolkit developed by the Firefox team. On December 7, 2004, version 1.0 was released, and received more than 500,000 downloads in its first three days of release, and 1,000,000 in ten days. Significant work on Thunderbird restarted with the announcement that from version 1.5 onward the main Mozilla suite would be designed around separate applications using this new toolkit. This contrasts with the previous all-in-one approach, allowing users to mix and match the Mozilla applications with alternatives. The original Mozilla Suite continues to be developed as SeaMonkey. On December 23, 2004, Project Lightning was announced, which tightly integrated calendar functionality (scheduling, tasks, etc.) into Thunderbird. Lightning supports the full range of calendar mechanisms and protocols supported by the Mozilla Calendar infrastructure, just as with modern (post-0.2) Sunbird.
On October 11, 2006, Qualcomm and the Mozilla Foundation announced that "future versions of Eudora will be based upon the same technology platform as the open source Mozilla Thunderbird email program." The project was code-named Penelope. In late 2006, Debian rebranded Thunderbird as Icedove due to trademark and copyright reasons; it was the second Mozilla product Debian rebranded, after Firefox became Iceweasel. On July 26, 2007, the Mozilla Foundation announced that Thunderbird would be developed by an independent organization, because the Mozilla Corporation (a subsidiary of the foundation) was focusing on Mozilla Firefox development. On September 17, 2007, the Mozilla Foundation announced the funding of a new internet communications initiative with David Ascher of ActiveState. The purpose of this initiative was "to develop Internet communications software based on the Thunderbird product, code, and brand". On February 19, 2008, Mozilla Messaging started operations as a subsidiary of the Mozilla Foundation responsible for the development of email and similar communications. Its initial focus was on the then-upcoming version of Thunderbird 3. Alpha preview releases of Thunderbird 3 were codenamed "Shredder". On April 4, 2011, Mozilla Messaging was merged into the Mozilla Labs group of the Mozilla Foundation. On July 6, 2012, a confidential memo from Jb Piacentino, the Thunderbird managing director at Mozilla, was leaked and published to TechCrunch. The memo, which had been slated for release on July 9, 2012, indicated that Mozilla would be dropping the priority of Thunderbird development, moving some of the team off the project and leaving further development of new features up to the community, because the continuous effort to extend Thunderbird's feature set had been mostly fruitless. A subsequent article by the Executive Chair of Mozilla, Mitchell Baker, stated Mozilla's decision to transition Thunderbird to a new release and governance model: Mozilla would offer only "Extended Support Releases", delivering security and maintenance updates, while allowing the community to take over the development of new features. On November 25, 2014, Kent James of the volunteer-led Thunderbird Council announced on the Thunderbird blog that active contributors to Thunderbird had gathered at the Mozilla office in Toronto to discuss the future of the application. They decided that more staff were required to work full-time on Thunderbird so that the Thunderbird team could release a stable and reliable product and make progress on features that had been frequently requested by the community. On December 1, 2015, Mozilla Executive Chair Mitchell Baker announced in a company-wide memo that Thunderbird development needed to be uncoupled from Firefox's infrastructure. She referred to Thunderbird developers spending large efforts responding to changes in Mozilla technologies, while Firefox was paying a tax to support Thunderbird development, and said that she did not believe Thunderbird had the potential for the "industry-wide impact" that Firefox had. Mozilla remained interested in having a role in Thunderbird, but sought more assistance to help with development.
At the same time, it was announced that the Mozilla Foundation would provide at least a temporary legal and financial home for the Thunderbird project. On May 9, 2017, Philipp Kewisch announced that the Mozilla Foundation would continue to serve as the legal and fiscal home for the Thunderbird project, but that Thunderbird would migrate off Mozilla Corporation infrastructure, separating the operational aspects of the project. Mozilla continued to support Thunderbird's development, and the development team expanded by adding several new members and overhauled security and the user interface. The interim/beta versions Thunderbird 57 and 58, released in late 2017, began to make changes influenced by Firefox Quantum, including a new "Photon" user interface. Despite the removal in Firefox Quantum of support for XUL-based legacy add-ons in favor of WebExtensions, the stable/ESR release of Thunderbird 60 in mid-2018 continued to support them, although most would require updates, and it did not support WebExtensions except for themes. In 2018, according to Mozilla, work was underway for Thunderbird 63 to support WebExtensions and to continue to "somewhat" support legacy add-ons. With the release of Thunderbird 68 in August 2019, only WebExtension add-ons are supported; legacy add-ons can still be used if a special "legacy mode" is enabled, but even then the add-on has to be converted. Instead of upgrading to WebExtension technology, the Thunderbird version of the popular OpenPGP add-on Enigmail was retired and its functionality was largely integrated into Thunderbird itself. Mainly for licensing reasons, this is no longer based on GnuPG, but on the RNP library, which has more liberal licensing terms. On January 28, 2020, the Mozilla Foundation announced that the project would henceforth be operating from a new wholly owned subsidiary, MZLA Technologies Corporation, in order to explore offering products and services that were not previously possible and to collect revenue through partnerships and non-charitable donations. As of version 78.7.1, Thunderbird no longer allows installation of add-ons that use legacy WebExtensions; only MailExtensions are now compatible with Thunderbird. MailExtensions are WebExtensions but with "some added features specific to Thunderbird". The main improvements and new features of version 91 are:
user interface improvements to the calendar display, the message compose window and the ordering of accounts;
improved performance;
a new context menu in the compose window;
an updated printing UI;
a majorly revamped account setup wizard;
import and export of Thunderbird profiles;
automatic suggestion of replacements for discontinued or incompatible add-ons;
support for CardDAV address books;
the PDF.js viewer included by default;
updated enterprise policies and printing dialog;
the default IRC server for new chat accounts set to Libera Chat;
encryption of emails to BCC recipients;
native support for Apple Silicon CPUs.
Releases
Thunderbird development releases occur in three stages, called Beta, Earlybird, and Daily, which correspond to Firefox's Beta, Aurora, and Nightly stages. The release dates and Gecko versions are exactly the same as Firefox's; for example, Firefox 7 and Thunderbird 7 were both released on September 27, 2011, and were both based on Gecko 7.0.
References
External links
2003 software
Cross-platform free software
Email client software for Linux
Free email software
Free multilingual software
Free software programmed in C++
Free Usenet clients
News aggregator software
MacOS email clients
OS/2 software
Portable software
Software that uses XUL
Software using the Mozilla license
Unix Internet software
Windows email clients
Software that uses SQLite
229436
https://en.wikipedia.org/wiki/Tropospheric%20scatter
Tropospheric scatter
Tropospheric scatter, also known as troposcatter, is a method of communicating with microwave radio signals over considerable distances – often several hundred kilometres, depending on frequency of operation, equipment type, terrain, and climate factors. This method of propagation uses the tropospheric scatter phenomenon, where radio waves at UHF and SHF frequencies are randomly scattered as they pass through the upper layers of the troposphere. Radio signals are transmitted in a narrow beam aimed just above the horizon in the direction of the receiver station. As the signals pass through the troposphere, some of the energy is scattered back toward the Earth, allowing the receiver station to pick up the signal. Normally, signals in the microwave frequency range travel in straight lines, and so are limited to line-of-sight applications, in which the receiver can be 'seen' by the transmitter; communication distances are then limited by the visual horizon to a few tens of kilometres. Troposcatter allows microwave communication beyond the horizon. It was developed in the 1950s and used for military communications until communications satellites largely replaced it in the 1970s. Because the troposphere is turbulent and has a high proportion of moisture, the tropospheric scatter radio signals are refracted and consequently only a tiny proportion of the transmitted radio energy is collected by the receiving antennas. Transmission frequencies in the low microwave range are best suited for tropospheric scatter systems, as at these frequencies the wavelength of the signal interacts well with the moist, turbulent areas of the troposphere, improving signal-to-noise ratios.
Overview
Discovery
Prior to World War II, prevailing radio physics theory predicted a relationship between frequency and diffraction that suggested radio signals would follow the curvature of the Earth, but that the strength of the effect would fall off rapidly, especially at higher frequencies. However, during the war there were numerous incidents in which high-frequency radar signals were able to detect targets at ranges far beyond the theoretical calculations. In spite of these repeated instances of anomalous range, the matter was never seriously studied. In the immediate post-war era, the limitation on television construction was lifted in the United States and millions of sets were sold. This drove an equally rapid expansion of new television stations. Based on the same calculations used during the war, the Federal Communications Commission (FCC) arranged frequency allocations for the new VHF and UHF channels to avoid interference between stations. To everyone's surprise, interference was common, even between widely separated stations. As a result, licenses for new stations were put on hold in what is known as the "television freeze" of 1948. Bell Labs was among the many organizations that began studying this effect, and concluded it was a previously unknown type of reflection off the tropopause. This was limited to higher frequencies, in the UHF and microwave bands, which is why it had not been seen prior to the war, when these frequencies were beyond the ability of existing electronics. Although the vast majority of the signal went through the troposphere and on to space, the tiny amount that was reflected was useful if combined with powerful transmitters and very sensitive receivers. In 1952, Bell began experiments with Lincoln Labs, the MIT-affiliated radar research lab.
Using Lincoln's powerful microwave transmitters and Bell's sensitive receivers, they built several experimental systems to test a variety of frequencies and weather effects. When Bell Canada heard of the system, they felt it might be useful for a new communications network in Labrador and took one of the systems there for cold weather testing. In 1954 the results from both test series were complete, and construction began on the first troposcatter system, the Pole Vault system, which linked Pinetree Line radar systems along the coast of Labrador. Using troposcatter reduced the number of stations from 50 microwave relays scattered through the wilderness to only 10, all located at the radar stations. In spite of their higher unit costs, the new network cost half as much to build as a relay system. Pole Vault was quickly followed by similar systems like White Alice, relays on the Mid-Canada Line and the DEW Line, and during the 1960s, across the Atlantic Ocean and Europe as part of NATO's ACE High system.
Use
The propagation losses are very high; only about one trillionth (10^−12) of the transmit power is available at the receiver (a back-of-the-envelope illustration appears at the end of this section). This demands the use of antennas with extremely large antenna gain. The original Pole Vault system used large parabolic reflector dish antennas, but these were soon replaced by billboard antennas, which were somewhat more robust – an important quality given that these systems were often found in harsh locales. Paths were established at distances of hundreds of kilometres, requiring very large antennas and high-power amplifiers; these were analogue systems capable of transmitting a few voice channels. Troposcatter systems have evolved over the years. With communication satellites used for long-distance communication links, current troposcatter systems are employed over shorter distances than previous systems, use smaller antennas and amplifiers, and have much higher bandwidth capabilities; the distances achievable depend on the climate, terrain, and data rate required, and high data rates can be achieved with today's technology. Tropospheric scatter is a fairly secure method of propagation, as dish alignment is critical, making it extremely difficult to intercept the signals, especially if transmitted across open water; this makes such systems highly attractive to military users. Military systems have tended to be 'thin-line' tropo – so called because only a narrow-bandwidth 'information' channel was carried on the tropo system, generally up to 32 analogue voice channels. Modern military systems are "wideband", as they operate 4–16 Mbit/s digital data channels. Civilian troposcatter systems, such as the British Telecom (BT) North Sea oil communications network, required higher-capacity 'information' channels than were available using HF (high frequency, 3 to 30 MHz) radio signals before satellite technology was available. The BT systems, based at Scousburgh in the Shetland Islands, Mormond Hill in Aberdeenshire and Row Brow near Scarborough, were capable of transmitting and receiving 156 analogue channels of data and telephony to/from North Sea oil production platforms, using frequency-division multiplexing (FDMX) to combine the channels.
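To make the loss figure quoted above concrete, here is a back-of-the-envelope sketch. The 2 kW transmitter power is an assumed, illustrative value rather than a figure from any specific system.

```python
# Illustration of the quoted troposcatter loss: if only ~one trillionth
# (1e-12) of the transmitted power reaches the receiver, even a multi-
# kilowatt transmitter yields a nanowatt-scale signal, which is why very
# high-gain antennas and sensitive receivers are required.
import math

TX_POWER_W = 2_000.0    # assumed transmitter power (2 kW, illustrative)
PATH_FRACTION = 1e-12   # "one trillionth" from the text

rx_power_w = TX_POWER_W * PATH_FRACTION
rx_power_dbm = 10 * math.log10(rx_power_w / 1e-3)  # convert watts to dBm

print(f"received power ≈ {rx_power_w * 1e9:.1f} nW ({rx_power_dbm:.0f} dBm)")
# -> received power ≈ 2.0 nW (-57 dBm)
```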
Because of the nature of the turbulence in the troposphere, quadruple diversity propagation paths were used to ensure reliability of the service, equating to about 3 minutes of downtime due to propagation drop-out per month. The quadruple space and polarisation diversity systems needed two separate dish antennas (spaced several metres apart) and two differently polarised feed horns – one using vertical polarisation, the other using horizontal polarisation. This ensured that at least one signal path was open at any one time. The signals from the four different paths were recombined in the receiver, where a phase corrector removed the phase differences of each signal. Phase differences were caused by the different path lengths of each signal from transmitter to receiver. Once phase-corrected, the four signals could be combined additively.
Tropospheric scatter communications networks
The tropospheric scatter phenomenon has been used to build both civilian and military communication links in a number of parts of the world, including:
Allied Command Europe Highband (ACE High), NATO military radiocommunication and early warning system throughout Europe from the Norwegian–Soviet border to the Turkish–Soviet border.
BT (British Telecom), United Kingdom – Shetland to Mormond Hill
Fernmeldeturm Berlin, Torfhaus–Berlin and Clenze–Berlin during the Cold War
Portugal Telecom, Serra de Nogueira (northeastern Portugal) to Artzamendi (southwestern France)
CNCP Telecommunications, Tsiigehtchic to Galena Hill, Keno City
Hay River – Port Radium – Lady Franklin Point
Guanabo to Florida City
Project Offices – AT&T Corporation. Project Offices is the name sometimes used to refer to several structurally dependable facilities maintained by the AT&T Corporation in the Mid-Atlantic states since the mid-20th century to house an ongoing, non-public company project. Since the inception of the Project Offices program, the company has chosen not to disclose the exact nature of business conducted at Project Offices. However, it has described them as central facilities. Locations include:
Pittsboro, North Carolina
Buckingham, Virginia
Charlottesville, Virginia
Leesburg, Virginia
Hagerstown, Maryland
Texas Towers – air defence radars. The Texas Towers were a set of three radar facilities off the eastern seaboard of the United States which were used for surveillance by the United States Air Force during the Cold War. Modeled on the offshore oil drilling platforms first employed off the Texas coast, they were in operation from 1958 to 1963.
{| class="wikitable" style="text-align: left;"
|-
!Tower ID
!Location
!Staffing unit
!Mainland station
!Notes
|-
| TT-1
| Cashes Ledge off New Hampshire coast
|
|
| Not built
|-
| TT-2
| Georges Bank off Cape Cod
| 762d Radar Squadron
| North Truro Air Force Station
| Decommissioned 1963
|-
| TT-3
| Nantucket Shoals
| 773d Radar Squadron
| Montauk AFS
| Decommissioned 1963
|-
| TT-4
| Off Long Beach Island, New Jersey
| 646th Radar Squadron
| Highlands Air Force Station
| Collapsed (1961)
|-
| TT-5
| Browns Bank south of Nova Scotia
|
|
| Not built
|}
Mid Canada Line, a series of five stations (070, 060, 050, 415, 410) in Ontario and Quebec around the lower Hudson Bay.
Pinetree Line, Pole Vault, a series of fourteen stations providing communications for eastern seaboard radar stations of the US/Canadian Pinetree Line, running from N-31 Frobisher Bay, Baffin Island to St. John's, Newfoundland and Labrador.
White Alice/DEW Line/DEW Training (Cold War era), a former military and civil communications network with eighty stations stretching up the western seaboard from Port Hardy, Vancouver Island north to Barter Island (BAR), west to Shemya, Alaska (SYA) in the Aleutian Islands (just a few hundred miles from the Soviet Union) and east across arctic Canada to Greenland. Note that not all stations were troposcatter, but many were. It also included a training facility for the White Alice/DEW Line tropo-scatter network located between Pecatonica, Illinois and Streator, Illinois.
DEW Line (post-Cold War era), several tropo-scatter networks providing communications for the extensive air-defence radar chain in the far north of Canada and the US.
North Atlantic Radio System (NARS), NATO air-defence network stretching from RAF Fylingdales, via Mormond Hill, UK, Sornfelli (Faroe Islands) and Höfn, Iceland to Keflavik DYE-5, Rockville.
European Tropospheric Scatter – Army (ET-A), a US Army network from RAF Fylingdales to a network in Germany and a single station in France (Maison Fort). The network became active in 1966.
486L Mediterranean Communications System (MEDCOM), a network covering the European coast of the Mediterranean Sea from San Pablo, Spain in the west to Incirlik Air Base, Turkey in the east, with headquarters at Ringstead in Dorset, England. Commissioned by the US Air Force.
Royal Air Force, communications to British Forces Germany, running from Swingate in Kent to Lammersdorf in Germany.
Troposphären-Nachrichtensystem Bars, a Warsaw Pact tropo-scatter network stretching from near Rostock in the DDR (Deutsche Demokratische Republik) through Czechoslovakia, Hungary, Poland, Byelorussia USSR, Ukraine USSR, Romania and Bulgaria.
TRRL SEVER, a Soviet network stretching across the USSR.
A single section from Srinagar, Kashmir, India to Dangara, Tajikistan, USSR.
Indian Air Force, part of an air defence network covering major air bases, radar installations and missile sites in northern and central India. The network is being phased out, to be replaced with more modern fibre-optic based communication systems.
Peace Ruby, Spellout, Peace Net, an air-defence network set up by the United States prior to the 1979 Islamic Revolution. Spellout built a radar and communications network in the north of Iran; Peace Ruby built another air-defence network in the south; and Peace Net integrated the two networks.
A tropo-scatter system linking Al Manamah, Bahrain to Dubai, United Arab Emirates.
Royal Air Force of Oman, a tropo-scatter communications system providing military communications to the former SOAF – Sultan of Oman's Air Force (now RAFO – Royal Air Force of Oman) across the Sultanate of Oman.
Royal Saudi Air Force, a Royal Saudi Air Force tropo-scatter network linking major airbases and population centres in Saudi Arabia.
Yemen, a single system linking Sana'a with Sa'dah.
BACK PORCH and IWCS, two networks run by the United States linking military bases in Thailand and South Vietnam.
Phil-Tai-Oki, a system linking Taiwan with the Philippines and Okinawa.
Cable & Wireless Caribbean network
A troposcatter link was established by Cable & Wireless in 1960, linking Barbados with Port of Spain, Trinidad. The network was extended further south to Georgetown, Guyana in 1965.
Japanese Troposcatter Networks, two networks linking Japanese islands from north to south.
Tactical troposcatter communication systems
As well as the permanent networks detailed above, there have been many tactical transportable systems produced by several countries:
Soviet / Russian troposcatter systems
MNIRTI R-423-1 Brig-1 / R-423-2A Brig-2A / R-423-1KF
MNIRTI R-444 Eshelon / R-444-7,5 Eshelon D
MNIRTI R-420 Atlet-D
NIRTI R-417 Baget / R-417S Baget S
NPP Radiosvyaz R-412 A/B/F/S TORF
MNIRTI R-410 / R-410-5,5 / R-410-7,5 Atlet / Albatros
MNIRTI R-408 / R-408M Baklan
People's Republic of China (PRC), People's Liberation Army (PLA) troposcatter systems
CETC TS-504 Troposcatter Communication System
CETC TS-510/GS-510 Troposcatter Communication System
Western troposcatter systems
AN/TRC-97 Troposcatter Communication System
AN/TRC-170 Tropospheric Scatter Microwave Radio Terminal
AN/GRC-201 Troposcatter Communication System
The U.S. Army and Air Force use tactical tropospheric scatter systems developed by Raytheon for long-haul communications. The systems come in two configurations: the original "heavy tropo" and a newer "light tropo". The systems provide four multiplexed group channels and trunk encryption, and 16 or 32 local analog phone extensions. The U.S. Marine Corps also uses the same device, albeit an older version.
See also
Radio propagation
Non-line-of-sight propagation
Microwave
ACE High – Cold War-era NATO European troposcatter network
White Alice Communications System – Cold War-era Alaskan tropospheric communications link
List of White Alice Communications System sites
TV-FM DX
Distant Early Warning Line
References
Citations
Bibliography
External links
Russian tropospheric relay communication network
Troposcatter communication network maps
Tropospheric Scatter Communications – the essentials
Radio frequency propagation
Atmospheric optical phenomena
230338
https://en.wikipedia.org/wiki/Robert%20Morris%20%28cryptographer%29
Robert Morris (cryptographer)
Robert H. Morris Sr. (July 25, 1932 – June 26, 2011) was an American cryptographer and computer scientist.
Family and education
Morris was born in Boston, Massachusetts. His parents were Walter W. Morris, a salesman, and Helen Kelly Morris, a homemaker. He received a bachelor's degree in mathematics from Harvard University in 1957 and a master's degree in applied mathematics from Harvard in 1958. He married Anne Farlow, and they had three children together: Robert Tappan Morris (author of the 1988 Morris worm), Meredith Morris, and Benjamin Morris.
Bell Labs
From 1960 until 1986, Morris was a researcher at Bell Labs and worked on Multics and later Unix. Together with Douglas McIlroy, he created the M6 macro processor in FORTRAN IV, which was later ported to Unix. Using the TMG compiler-compiler, Morris, together with McIlroy, developed an early implementation of the PL/I compiler, called EPL, for the Multics project. The pair also contributed a version of the runoff text-formatting program for Multics. Morris's contributions to early versions of Unix include the math library, the bc programming language, the program crypt, and the password encryption scheme used for user authentication. The encryption scheme (invented by Roger Needham) was based on using a trapdoor function (now called a key derivation function) to compute hashes of user passwords, which were stored in the file /etc/passwd; analogous techniques, relying on different functions, are still in use today.
National Security Agency
In 1986, Morris began work at the National Security Agency (NSA). He served as chief scientist of the NSA's National Computer Security Center, where he was involved in the production of the Rainbow Series of computer security standards, and retired from the NSA in 1994. He once told a reporter that, while at the NSA, he helped the FBI decode encrypted evidence. There is a description of Morris in Clifford Stoll's book The Cuckoo's Egg. Many readers of Stoll's book remember Morris for giving Stoll a challenging mathematical puzzle (originally due to John H. Conway) in the course of their discussions on computer security: what is the next number in the sequence 1 11 21 1211 111221? (known as the look-and-say sequence; a short computational sketch appears at the end of this article). Stoll chose not to include the answer to this puzzle in The Cuckoo's Egg, to the frustration of many readers. Robert Morris died in Lebanon, New Hampshire.
Quotes
Rule 1 of cryptanalysis: check for plaintext.
The three golden rules to ensure computer security are: do not own a computer; do not power it on; and do not use it.
Selected publications
(with Fred T. Grampp) UNIX Operating System Security, AT&T Bell Laboratories Technical Journal, 63, part 2, #8 (October 1984), pp. 1649–1672.
References
External links
Dennis Ritchie: "Dabbling in the Cryptographic World" tells the story of cryptographic research he performed with Morris and why that research was never published.
Modern cryptographers
1932 births
2011 deaths
Scientists at Bell Labs
Computer security academics
Harvard University alumni
National Security Agency cryptographers
People from Boston
Multics people
Unix people
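The look-and-say puzzle mentioned above is mechanical once the "read it aloud" rule is known; the following short sketch generates the sequence (the function name is ours, for illustration only).

```python
# The look-and-say puzzle Morris posed to Stoll: each term "reads out" the
# previous one (1 -> "one 1" -> 11 -> "two 1s" -> 21 -> ...).
from itertools import groupby

def look_and_say(term: str) -> str:
    """Return the next term: the run-length description of the input."""
    return "".join(f"{len(list(run))}{digit}" for digit, run in groupby(term))

term = "1"
for _ in range(6):
    print(term)
    term = look_and_say(term)
# prints 1, 11, 21, 1211, 111221, 312211 -- so the next number is 312211
```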
230355
https://en.wikipedia.org/wiki/4DTV
4DTV
4DTV is a proprietary broadcasting standard and technology for digital cable broadcasting and C-band/Ku-band satellite dishes from Motorola, using General Instrument's DigiCipher II for encryption. It can tune in both analog VideoCipher 2 and digital DCII satellite channels.
History
4DTV technology was originally developed in 1997 (the same year as DigiCipher) by General Instrument/NextLevel and Motorola, now a division of ARRIS. The 4DTV format is contemporary with the DVB-based digital television broadcast standard, but its completion came before that of DVB, and thus it is similar to but incompatible with the DVB standard. The DigiCipher 2 encryption system is used on digital channels much as the VideoCipher and VideoCipher II systems were used for analog encrypted transmissions. As analogue VideoCipher II channels were switched to digital, all of the remaining VCII-encrypted channels were transitioned to DigiCipher II on the satellites that carried either in-the-clear or VideoCipher II/II+/RS-encrypted channels. On December 31, 2010, Motorola abandoned support for 4DTV, 13 years after it was developed. This caused all receivers to redirect to AMC-18 (known as W5/X4 on the 4DTV system) instead of the other satellites carrying analog/VCII channels. On August 24, 2016, at 9:18 AM EST, Headend In The Sky (the provider of 4DTV/DigiCipher II programming) transitioned to DVB-S2 (MPEG-2/256 QAM), ending support for 4DTV on that date.
Technical specifications
Usage
4DTV is designed for C-band/Ku-band satellite dishes (both TVRO and direct-broadcast) in conjunction with the DigiCipher II system (for digital standard-definition/high-definition signals) and the VideoCipher II system (for analog signals). It was also used on Canada's Shaw Direct (previously known as Star Choice) until 2017, when standard-definition broadcasting ended in favour of HDTV exclusively, making the receivers obsolete.
Receiver/Decoders
4DTV receivers were designed to receive analog NTSC channels and feeds, in the clear or VideoCipher II encrypted (except the DSR-905), as well as digital DigiCipher 2 channels, as a TVRO satellite system on both C-band and Ku-band satellite dishes. Four models were available, either new or refurbished:
DSR-920 (discontinued as of 2003)
DSR-921 (discontinued as of 2003)
DSR-922 (made available in Fall 2000; discontinued)
DSR-905 (designed to work with analog 4DTV receivers; can only receive DigiCipher II channels; discontinued)
High definition access
The HDD-200 is a peripheral for 4DTV used to access high-definition channels via the Multi-Media Access Port. This peripheral is no longer in production.
Programming providers
In the United States, National Programming Service, LLC (NPS) was the primary provider of subscription programming to 4DTV and C-band/Ku-band users. They ceased operations as of December 26, 2010 after a controversial, failed attempt to convert all of their customers over to Dish Network. The largest providers became Satellite Receivers, Ltd. (SRL) and Skyvision, who sold programming from the Headend In The Sky (HITS) service by Comcast and continued to do so in 2011 and beyond. The HITS services use the Comcast Subscription Authorization Center (SAC) for the channels broadcast on the AMC 18 satellite located at 105 degrees West (the W5 or X4 tile on 4DTV). In Canada, Dr. Sat became the primary provider for HITS subscription services offered on C-band after Satellite Communications Source ceased operations. Due to the removal of 4DTV/DigiCipher II channels on August 24, 2016, there are no more programming providers for 4DTV in the United States and Canada. However, Shaw Direct still offers DigiCipher II programming in Canada, but not HITS programming.
Advantages
The 4DTV makes use of first-generation digital master feeds on several satellites and hundreds of channels; a high-quality signal is therefore received, compared to other programming options that are typically compressed and re-uplinked. Being a C-band system, 4DTV has the advantages of signal stability, a large satellite footprint and virtually no rain fade. Rain fade is a problem for services such as Dish Network and DirecTV, since they re-uplink on the Ku and Ka bands.
Disadvantages
The master feeds for the many channels available can be scattered amongst multiple satellites. The actuator must slowly rotate the large dish into the desired satellite's signal path, after which there is a further short delay for signal acquisition and lock. This procedure makes rapid "channel surfing" impossible outside the HITS-provided channels.
References
Digital television
Motorola products
230489
https://en.wikipedia.org/wiki/Lorentz%20group
Lorentz group
In physics and mathematics, the Lorentz group is the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena. The Lorentz group is named for the Dutch physicist Hendrik Lorentz. For example, the following laws, equations, and theories respect Lorentz symmetry:
The kinematical laws of special relativity
Maxwell's field equations in the theory of electromagnetism
The Dirac equation in the theory of the electron
The Standard Model of particle physics
The Lorentz group expresses the fundamental symmetry of space and time of all known fundamental laws of nature. In general relativity physics, in cases involving small enough regions of spacetime where gravitational variances are negligible, physical laws are Lorentz invariant in the same manner as in special relativity physics.
Basic properties
The Lorentz group is a subgroup of the Poincaré group – the group of all isometries of Minkowski spacetime. Lorentz transformations are, precisely, the isometries that leave the origin fixed. Thus, the Lorentz group is an isotropy subgroup of the isometry group of Minkowski spacetime. For this reason, the Lorentz group is sometimes called the homogeneous Lorentz group while the Poincaré group is sometimes called the inhomogeneous Lorentz group. Lorentz transformations are examples of linear transformations; general isometries of Minkowski spacetime are affine transformations. Mathematically, the Lorentz group may be described as the indefinite orthogonal group O(1,3), the matrix Lie group that preserves the quadratic form (t, x, y, z) ↦ t^2 − x^2 − y^2 − z^2 on R^4 (the defining matrix condition is written out at the end of this section). This quadratic form is, when put in matrix form (see classical orthogonal group), interpreted in physics as the metric tensor of Minkowski spacetime. The Lorentz group is a six-dimensional noncompact non-abelian real Lie group that is not connected. The four connected components are not simply connected. The identity component (i.e., the component containing the identity element) of the Lorentz group is itself a group, and is often called the restricted Lorentz group, denoted SO+(1,3). The restricted Lorentz group consists of those Lorentz transformations that preserve both the orientation of space and the direction of time. Its fundamental group has order 2, and its universal cover, the indefinite spin group Spin(1,3), is isomorphic to both the special linear group SL(2, C) and to the symplectic group Sp(2, C). These isomorphisms allow the Lorentz group to act on a large number of mathematical structures important to physics, most notably the spinors. Thus, in relativistic quantum mechanics and in quantum field theory, it is very common to call SL(2, C) the Lorentz group, with the understanding that SO+(1,3) is a specific representation (the vector representation) of it. The biquaternions, popular in geometric algebra, are also isomorphic to SL(2, C). The restricted Lorentz group also arises as the point symmetry group of a certain ordinary differential equation.
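As a compact restatement of the invariance property above (standard in the literature; the sign convention diag(1, −1, −1, −1) is assumed here), membership in O(1,3) can be written as a single matrix condition:

\[
\eta = \operatorname{diag}(1,-1,-1,-1), \qquad
\Lambda \in \mathrm{O}(1,3) \iff \Lambda^{\mathsf T}\,\eta\,\Lambda = \eta .
\]

Taking determinants gives \((\det \Lambda)^2 = 1\), so \(\det \Lambda = \pm 1\); this is one of the two binary invariants used below to separate the four connected components.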
The four connected components can be categorized by two transformation properties its elements have: some elements are reversed under time-inverting Lorentz transformations, for example, a future-pointing timelike vector would be inverted to a past-pointing vector; some elements have orientation reversed by improper Lorentz transformations, for example, certain vierbeins (tetrads). Lorentz transformations that preserve the direction of time are called orthochronous. The subgroup of orthochronous transformations is often denoted O+(1,3). Those that preserve orientation are called proper, and as linear transformations they have determinant +1. (The improper Lorentz transformations have determinant −1.) The subgroup of proper Lorentz transformations is denoted SO(1,3). The subgroup of all Lorentz transformations preserving both orientation and direction of time is called the proper, orthochronous Lorentz group or restricted Lorentz group, and is denoted by SO+(1, 3). (Note that some authors refer to SO(1,3) or even O(1,3) when they actually mean SO+(1, 3).) The set of the four connected components can be given a group structure as the quotient group O(1,3)/SO+(1,3), which is isomorphic to the Klein four-group. Every element in O(1,3) can be written as the product of a proper, orthochronous transformation and an element of the discrete group {1, P, T, PT}, exhibiting O(1,3) as a semidirect product, where P and T are the parity and time reversal operators: P = diag(1, −1, −1, −1) T = diag(−1, 1, 1, 1). Thus an arbitrary Lorentz transformation can be specified as a proper, orthochronous Lorentz transformation along with a further two bits of information, which pick out one of the four connected components. This pattern is typical of finite-dimensional Lie groups. Restricted Lorentz group The restricted Lorentz group is the identity component of the Lorentz group, which means that it consists of all Lorentz transformations that can be connected to the identity by a continuous curve lying in the group. The restricted Lorentz group is a connected normal subgroup of the full Lorentz group with the same dimension, in this case with dimension six. The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which are rotations in a hyperbolic space that includes a time-like direction). Since every proper, orthochronous Lorentz transformation can be written as a product of a rotation (specified by 3 real parameters) and a boost (also specified by 3 real parameters), it takes 6 real parameters to specify an arbitrary proper orthochronous Lorentz transformation. This is one way to understand why the restricted Lorentz group is six-dimensional. (See also the Lie algebra of the Lorentz group.) The set of all rotations forms a Lie subgroup isomorphic to the ordinary rotation group SO(3). The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-colinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.) A boost in some direction, or a rotation about some axis, generates a one-parameter subgroup. Surfaces of transitivity If a group G acts on a space V, then a surface S ⊂ V is a surface of transitivity if S is invariant under G, i.e., gS ⊆ S for every g ∈ G, and for any two points x, y ∈ S there is a g ∈ G such that gx = y. By definition of the Lorentz group, it preserves the quadratic form Q(x) = x₀² − x₁² − x₂² − x₃². The surfaces of transitivity of the orthochronous Lorentz group O+(1,3) acting on spacetime are the following: The set Q(x) = const > 0 with x₀ > 0 is the upper branch of a hyperboloid of two sheets.
Points on this sheet are separated from the origin by a future time-like vector. The set Q(x) = const > 0 with x₀ < 0 is the lower branch of this hyperboloid; points on this sheet are the past time-like vectors. The set Q(x) = 0 with x₀ > 0 is the upper branch of the light cone, the future light cone. The set Q(x) = 0 with x₀ < 0 is the lower branch of the light cone, the past light cone. The set Q(x) = const < 0 is a hyperboloid of one sheet; points on this sheet are space-like separated from the origin. The origin x₀ = x₁ = x₂ = x₃ = 0. These surfaces are 3-dimensional, so the images are not faithful, but they are faithful for the corresponding facts about O+(1,2). For the full Lorentz group, the surfaces of transitivity are only four since the transformation T takes an upper branch of a hyperboloid (cone) to a lower one and vice versa. These observations constitute a good starting point for finding all infinite-dimensional unitary representations of the Lorentz group, in fact, of the Poincaré group, using the method of induced representations. One begins with a "standard vector", one for each surface of transitivity, and then asks which subgroup preserves these vectors. These subgroups are called little groups by physicists. The problem is then essentially reduced to the easier problem of finding representations of the little groups. For example, a standard vector in one of the hyperbolas of two sheets could be suitably chosen as (m, 0, 0, 0). For each m ≠ 0, the vector pierces exactly one sheet. In this case the little group is SO(3), the rotation group, all of whose representations are known. The precise infinite-dimensional unitary representation under which a particle transforms is part of its classification. Not all representations can correspond to physical particles (as far as is known). Standard vectors on the one-sheeted hyperbolas would correspond to tachyons. Particles on the light cone are photons, and more hypothetically, gravitons. The "particle" corresponding to the origin is the vacuum. Homomorphisms and isomorphisms Several other groups are either homomorphic or isomorphic to the restricted Lorentz group SO+(1, 3). These homomorphisms play a key role in explaining various phenomena in physics. The special linear group SL(2,C) is a double covering of the restricted Lorentz group. This relationship is widely used to express the Lorentz invariance of the Dirac equation and the covariance of spinors. The symplectic group Sp(2,C) is isomorphic to SL(2,C); it is used to construct Weyl spinors, as well as to explain how spinors can have a mass. The spin group Spin(1,3) is isomorphic to SL(2,C); it is used to explain spin and spinors in terms of the Clifford algebra, thus making it clear how to generalize the Lorentz group to general settings in Riemannian geometry, including theories of supergravity and string theory. The restricted Lorentz group is isomorphic to the projective special linear group PSL(2,C) which is, in turn, isomorphic to the Möbius group, the symmetry group of conformal geometry on the Riemann sphere. This relationship is central to the classification of the subgroups of the Lorentz group according to an earlier classification scheme developed for the Möbius group. The Weyl representation The Weyl representation or spinor map is a pair of surjective homomorphisms from SL(2,C) to SO+(1,3). They form a matched pair under parity transformations, corresponding to left and right chiral spinors. One may define an action of SL(2,C) on Minkowski spacetime by writing a point of spacetime as a two-by-two Hermitian matrix in the form X = t I + x σ_x + y σ_y + z σ_z in terms of Pauli matrices.
This presentation, the Weyl presentation, satisfies det X = t² − x² − y² − z². Therefore, one has identified the space of Hermitian matrices (which is four-dimensional, as a real vector space) with Minkowski spacetime, in such a way that the determinant of a Hermitian matrix is the squared length of the corresponding vector in Minkowski spacetime. An element S ∈ SL(2,C) acts on the space of Hermitian matrices via X ↦ S X S†, where S† is the Hermitian transpose of S. This action preserves the determinant, and so SL(2,C) acts on Minkowski spacetime by (linear) isometries. The parity-inverted form of the above is X̄ = t I − x σ_x − y σ_y − z σ_z, which transforms as X̄ ↦ (S†)⁻¹ X̄ S⁻¹. That this is the correct transformation follows by noting that X X̄ = (t² − x² − y² − z²) I remains invariant under the above pair of transformations. These maps are surjective, and the kernel of either map is the two-element subgroup {±I}. By the first isomorphism theorem, the quotient group PSL(2,C) = SL(2,C) / {±I} is isomorphic to SO+(1,3). The parity map swaps these two coverings. It corresponds to Hermitian conjugation being an automorphism of SL(2,C). These two distinct coverings correspond to the two distinct chiral actions of the Lorentz group on spinors. The non-overlined form corresponds to right-handed spinors transforming as ψ_R ↦ S ψ_R, while the overlined form corresponds to left-handed spinors transforming as ψ_L ↦ (S†)⁻¹ ψ_L. It is important to observe that this pair of coverings does not survive quantization; when quantized, this leads to the peculiar phenomenon of the chiral anomaly. The classical (i.e. non-quantized) symmetries of the Lorentz group are broken by quantization; this is the content of the Atiyah–Singer index theorem. Notational conventions In physics, it is conventional to denote a Lorentz transformation as Λ^μ_ν, thus showing the matrix with spacetime indexes. A four-vector can be created from the Pauli matrices in two different ways: as x = x^μ σ_μ and as x̄ = x^μ σ̄_μ, where σ_μ = (I, σ_x, σ_y, σ_z) and σ̄_μ = (I, −σ_x, −σ_y, −σ_z). The two forms are related by a parity transformation. Given a Lorentz transformation x^μ ↦ Λ^μ_ν x^ν, the double-covering of the orthochronous Lorentz group by SL(2,C) given above can be written as S (x^ν σ_ν) S† = (Λ^μ_ν x^ν) σ_μ. Dropping the components x^ν, this takes the form S σ_ν S† = Λ^μ_ν σ_μ. The parity conjugate form is (S†)⁻¹ σ̄_ν S⁻¹ = Λ^μ_ν σ̄_μ. Proof That the above is the correct form for indexed notation is not immediately obvious, partly because, when working in indexed notation, it is quite easy to accidentally confuse a Lorentz transform with its inverse, or its transpose. This confusion arises because the identity η Λ^T η = Λ⁻¹ is difficult to recognize when written in indexed form. Lorentz transforms are not tensors under Lorentz transformations! Thus a direct proof of this identity is useful, for establishing its correctness. It can be demonstrated by starting with the identity σ_2 (σ_μ)^T σ_2 = σ̄_μ, where the σ_μ are just the usual Pauli matrices together with the identity, (·)^T is the matrix transpose, and conjugation by σ_2 implements complex conjugation on them. Written in terms of four-vectors, this identity relates x̄ to x; applying the transformation law for x and taking one more transpose then yields the parity-conjugate transformation law stated above. The symplectic group The symplectic group Sp(2,C) is isomorphic to SL(2,C). This isomorphism is constructed so as to preserve a symplectic bilinear form on C², that is, to leave the form invariant under Lorentz transformations. This may be articulated as follows. The symplectic group is defined as Sp(2,C) = {S ∈ GL(2,C) : S^T ω S = ω}, where ω = [[0, 1], [−1, 0]]. Other common notations for this element are ε or iσ_2; sometimes J is used, but this invites confusion with the idea of almost complex structures, which are not the same, as they transform differently.
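To make the spinor map concrete, here is a small numerical sketch in Python with NumPy (helper names such as spinor_map are ours): it builds the Hermitian matrix X from a four-vector, applies X ↦ S X S†, and reads off the resulting element of SO+(1,3), checking that the Minkowski metric is preserved and that S and −S have the same image:

    import numpy as np

    # Pauli matrices together with the identity, for the Weyl presentation
    # X = t*I + x*sigma_x + y*sigma_y + z*sigma_z described above.
    I2 = np.eye(2, dtype=complex)
    SX = np.array([[0, 1], [1, 0]], dtype=complex)
    SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
    SZ = np.array([[1, 0], [0, -1]], dtype=complex)

    def to_hermitian(v):
        t, x, y, z = v
        return t * I2 + x * SX + y * SY + z * SZ

    def from_hermitian(X):
        # Invert the identification; coefficients come from trace inner products.
        return np.real([np.trace(X @ M) / 2 for M in (I2, SX, SY, SZ)])

    def spinor_map(S):
        """The image of S in SO+(1,3): the 4x4 matrix of the action X -> S X S†."""
        cols = [from_hermitian(S @ to_hermitian(e) @ S.conj().T) for e in np.eye(4)]
        return np.array(cols).T

    # A boost along z with rapidity eta corresponds to S = diag(e^{eta/2}, e^{-eta/2}).
    eta = 0.9
    S = np.diag([np.exp(eta / 2), np.exp(-eta / 2)]).astype(complex)
    L = spinor_map(S)

    # det X = t^2 - x^2 - y^2 - z^2 is preserved, so L is a Lorentz transformation:
    g = np.diag([1.0, -1.0, -1.0, -1.0])
    assert np.allclose(L.T @ g @ L, g)
    # S and -S have the same image: the covering is two-to-one.
    assert np.allclose(spinor_map(-S), L)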
Given a pair of Weyl spinors (two-component spinors) u and v, the invariant bilinear form is conventionally written as ⟨u, v⟩ = u^T ω v = u₁v₂ − u₂v₁. This form is invariant under the Lorentz group, so that for S ∈ SL(2,C) one has ⟨Su, Sv⟩ = ⟨u, v⟩. This defines a kind of "scalar product" of spinors, and is commonly used to define a Lorentz-invariant mass term in Lagrangians. There are several notable properties to be called out that are important to physics. One is that ω⁻¹ = ω^T = −ω, and so ω² = −I. The defining relation can be written as S^T ω S = ω, which closely resembles the defining relation for the Lorentz group Λ^T η Λ = η, where η is the metric tensor for Minkowski space and of course, Λ ∈ SO(1,3) as before. Covering groups Since SL(2,C) is simply connected, it is the universal covering group of the restricted Lorentz group SO+(1, 3). By restriction, there is a homomorphism SU(2) → SO(3). Here, the special unitary group SU(2), which is isomorphic to the group of unit norm quaternions, is also simply connected, so it is the covering group of the rotation group SO(3). Each of these covering maps is a twofold cover in the sense that precisely two elements of the covering group map to each element of the quotient. One often says that the restricted Lorentz group and the rotation group are doubly connected. This means that the fundamental group of each group is isomorphic to the two-element cyclic group Z2. Twofold coverings are characteristic of spin groups. Indeed, in addition to the double coverings Spin+(1, 3) = SL(2, C) → SO+(1, 3) Spin(3) = SU(2) → SO(3) we have the double coverings Pin(1, 3) → O(1, 3) Spin(1, 3) → SO(1, 3) Spin+(1, 2) = SU(1, 1) → SO(1, 2) These spinorial double coverings are constructed from Clifford algebras. Topology The left and right groups in the double covering SU(2) → SO(3) are deformation retracts of the left and right groups, respectively, in the double covering SL(2,C) → SO+(1,3). But the homogeneous space SO+(1,3)/SO(3) is homeomorphic to hyperbolic 3-space H3, so we have exhibited the restricted Lorentz group as a principal fiber bundle with fibers SO(3) and base H3. Since the latter is homeomorphic to R3, while SO(3) is homeomorphic to three-dimensional real projective space RP3, we see that the restricted Lorentz group is locally homeomorphic to the product of RP3 with R3. Since the base space is contractible, this can be extended to a global homeomorphism. Generators of boosts and rotations The Lorentz group can be thought of as a subgroup of the diffeomorphism group of R4 and therefore its Lie algebra can be identified with vector fields on R4. In particular, the vectors that generate isometries on a space are its Killing vectors, which provides a convenient alternative to the left-invariant vector field for calculating the Lie algebra. We can write down a set of six generators: vector fields on R4 generating the three rotations iJ_x, iJ_y, iJ_z, and vector fields on R4 generating the three boosts iK_x, iK_y, iK_z. It may be helpful to briefly recall here how to obtain a one-parameter group from a vector field, written in the form of a first order linear partial differential operator such as L = −y ∂/∂x + x ∂/∂y. The corresponding initial value problem (consider x and y as functions of a scalar λ and solve the flow equations with some initial conditions) is dx/dλ = −y, dy/dλ = x, with x(0) = x₀, y(0) = y₀. The solution can be written x(λ) = x₀ cos λ − y₀ sin λ, y(λ) = x₀ sin λ + y₀ cos λ, or in matrix form, where we easily recognize the one-parameter matrix group of rotations exp(i λ J_z) about the z axis. Differentiating with respect to the group parameter λ and setting λ = 0 in that result, we recover the standard matrix, which corresponds to the vector field we started with. This illustrates how to pass between matrix and vector field representations of elements of the Lie algebra.
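The passage between generators and one-parameter subgroups described above can be carried out numerically with a matrix exponential. A minimal sketch in Python with NumPy and SciPy (the 4×4 generator matrices below act on coordinates ordered (t, x, y, z), and the variable names are ours):

    import numpy as np
    from scipy.linalg import expm

    # Generator of rotations about the z axis: the vector field -y d/dx + x d/dy.
    Jz = np.zeros((4, 4))
    Jz[1, 2], Jz[2, 1] = -1.0, 1.0

    # Generator of boosts along the z axis: mixes the t and z coordinates.
    Kz = np.zeros((4, 4))
    Kz[0, 3] = Kz[3, 0] = 1.0

    lam = 0.4
    R = expm(lam * Jz)   # one-parameter subgroup of rotations about z
    B = expm(lam * Kz)   # one-parameter subgroup of boosts along z

    # Compare with the closed forms obtained by solving the flow equations:
    c, s = np.cos(lam), np.sin(lam)
    ch, sh = np.cosh(lam), np.sinh(lam)
    R_expected = np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])
    B_expected = np.array([[ch, 0, 0, sh], [0, 1, 0, 0], [0, 0, 1, 0], [sh, 0, 0, ch]])
    assert np.allclose(R, R_expected) and np.allclose(B, B_expected)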
The exponential map plays this special role not only for the Lorentz group but for Lie groups in general. Reversing the procedure in the previous section, we see that the Möbius transformations that correspond to our six generators arise from exponentiating respectively η/2 (for the three boosts) or iθ/2 (for the three rotations) times the three Pauli matrices. Conjugacy classes Because the restricted Lorentz group SO+(1, 3) is isomorphic to the Möbius group PSL(2,C), its conjugacy classes also fall into five classes: Elliptic transformations Hyperbolic transformations Loxodromic transformations Parabolic transformations The trivial identity transformation In the article on Möbius transformations, it is explained how this classification arises by considering the fixed points of Möbius transformations in their action on the Riemann sphere, which corresponds here to null eigenspaces of restricted Lorentz transformations in their action on Minkowski spacetime. An example of each type is given in the subsections below, along with the effect of the one-parameter subgroup it generates (e.g., on the appearance of the night sky). The Möbius transformations are the conformal transformations of the Riemann sphere (or celestial sphere). Conjugating the following examples with an arbitrary element of SL(2,C) yields arbitrary elliptic, hyperbolic, loxodromic, and parabolic (restricted) Lorentz transformations, respectively. The effect on the flow lines of the corresponding one-parameter subgroups is to transform the pattern seen in the examples by some conformal transformation. For example, an elliptic Lorentz transformation can have any two distinct fixed points on the celestial sphere, but points still flow along circular arcs from one fixed point toward the other. The other cases are similar. Elliptic An elliptic element of SL(2,C) is P₁ = diag(e^{iθ/2}, e^{−iθ/2}) and has fixed points ξ = 0, ∞. Writing the action as ξ ↦ e^{iθ} ξ and collecting terms, the spinor map converts this to a (restricted) Lorentz transformation. This transformation then represents a rotation about the z axis, exp(iθJ_z). The one-parameter subgroup it generates is obtained by taking θ to be a real variable, the rotation angle, instead of a constant. The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same two fixed points, the North and South poles. The transformations move all other points around latitude circles so that this group yields a continuous counterclockwise rotation about the z axis as θ increases. The angle doubling evident in the spinor map is a characteristic feature of spinorial double coverings. Hyperbolic A hyperbolic element of SL(2,C) is P₂ = diag(e^{η/2}, e^{−η/2}) and has fixed points ξ = 0, ∞. Under stereographic projection from the Riemann sphere to the Euclidean plane, the effect of this Möbius transformation is a dilation from the origin. The spinor map converts this to a Lorentz transformation. This transformation represents a boost along the z axis with rapidity η. The one-parameter subgroup it generates is obtained by taking η to be a real variable, instead of a constant. The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same fixed points (the North and South poles), and they move all other points along longitudes away from the South pole and toward the North pole. Loxodromic A loxodromic element of SL(2,C) is P₃ = P₂P₁ = diag(e^{(η+iθ)/2}, e^{−(η+iθ)/2}) and has fixed points ξ = 0, ∞.
The spinor map converts this to a Lorentz transformation. The one-parameter subgroup this generates is obtained by replacing η + iθ with any real multiple of this complex constant. (If η, θ vary independently, then a two-dimensional abelian subgroup is obtained, consisting of simultaneous rotations about the z axis and boosts along the z-axis; in contrast, the one-dimensional subgroup discussed here consists of those elements of this two-dimensional subgroup such that the rapidity of the boost and angle of the rotation have a fixed ratio.) The corresponding continuous transformations of the celestial sphere (excepting the identity) all share the same two fixed points (the North and South poles). They move all other points away from the South pole and toward the North pole (or vice versa), along a family of curves called loxodromes. Each loxodrome spirals infinitely often around each pole. Parabolic A parabolic element of SL(2,C) is P₄ = [[1, α], [0, 1]] for a complex constant α, and has the single fixed point ξ = ∞ on the Riemann sphere. Under stereographic projection, it appears as an ordinary translation along the real axis. The spinor map converts this to a matrix representing a Lorentz transformation; this generates a two-parameter abelian subgroup, which is obtained by considering α a complex variable rather than a constant. The corresponding continuous transformations of the celestial sphere (except for the identity transformation) move points along a family of circles that are all tangent at the North pole to a certain great circle. All points other than the North pole itself move along these circles. Parabolic Lorentz transformations are often called null rotations. Since these are likely to be the least familiar of the four types of nonidentity Lorentz transformations (elliptic, hyperbolic, loxodromic, parabolic), it is illustrated here how to determine the effect of an example of a parabolic Lorentz transformation on Minkowski spacetime. The matrix given above yields an explicit transformation of the coordinates (t, x, y, z). Now, without loss of generality, pick Im(α) = 0. Differentiating this transformation with respect to the now real group parameter α and evaluating at α = 0 produces the corresponding vector field (first order linear partial differential operator). Apply this to a function f(t, x, y, z), and demand that it stays invariant, i.e., it is annihilated by this transformation. The solution of the resulting first order linear partial differential equation can be expressed in the form f = F(c₁, c₂, c₃), where F is an arbitrary smooth function. The arguments of F give three rational invariants describing how points (events) move under this parabolic transformation, as they themselves do not move. Choosing real values for the constants c₁, c₂, c₃ on the right hand sides yields three conditions, and thus specifies a curve in Minkowski spacetime. This curve is an orbit of the transformation. The form of the rational invariants shows that these flowlines (orbits) have a simple description: suppressing the inessential coordinate, each orbit is the intersection of a null plane with a hyperboloid. The case c₃ = 0 has the hyperboloid degenerate to a light cone, with the orbits becoming parabolas lying in corresponding null planes. A particular null line lying on the light cone is left invariant; this corresponds to the unique (double) fixed point on the Riemann sphere mentioned above. The other null lines through the origin are "swung around the cone" by the transformation.
Following the motion of one such null line as α increases corresponds to following the motion of a point along one of the circular flow lines on the celestial sphere, as described above. Choosing Re(α) = 0 instead produces similar orbits, now with the roles of x and y interchanged. Parabolic transformations lead to the gauge symmetry of massless particles (like photons) with helicity |h| ≥ 1. In the above explicit example, a massless particle moving in the z direction, so with 4-momentum P = (p, 0, 0, p), is not affected at all by the combination of x-boost and y-rotation defined below, in the "little group" of its motion. This is evident from the explicit transformation law discussed: like any light-like vector, P itself is now invariant; i.e., all traces or effects of α have disappeared, and c₁ = c₂ = c₃ = 0 in the special case discussed. (The other similar generator, together with it and J_z, comprises altogether the little group of the lightlike vector, isomorphic to E(2).) Appearance of the night sky This isomorphism has the consequence that Möbius transformations of the Riemann sphere represent the way that Lorentz transformations change the appearance of the night sky, as seen by an observer who is maneuvering at relativistic velocities relative to the "fixed stars". Suppose the "fixed stars" live in Minkowski spacetime and are modeled by points on the celestial sphere. Then a given point on the celestial sphere can be associated with ξ, a complex number that corresponds to the point on the Riemann sphere, and can be identified with a null vector (a light-like vector) in Minkowski space or, in the Weyl representation (the spinor map), with a corresponding Hermitian matrix. The set of real scalar multiples of this null vector, called a null line through the origin, represents a line of sight from an observer at a particular place and time (an arbitrary event we can identify with the origin of Minkowski spacetime) to various distant objects, such as stars. Then the points of the celestial sphere (equivalently, lines of sight) are identified with certain Hermitian matrices. Lie algebra As with any Lie group, a useful way to study many aspects of the Lorentz group is via its Lie algebra. Since the Lorentz group SO(1,3) is a matrix Lie group, its Lie algebra so(1,3) is an algebra of matrices, which may be computed as follows. If g is the diagonal matrix with diagonal entries (1, −1, −1, −1), then the Lie algebra o(1,3) consists of the 4×4 matrices X such that g X^T g = −X. Explicitly, so(1,3) consists of matrices of the form [[0, a, b, c], [a, 0, d, e], [b, −d, 0, f], [c, −e, −f, 0]], where a, b, c, d, e, f are arbitrary real numbers. This Lie algebra is six dimensional. The subalgebra of so(1,3) consisting of elements in which a, b, and c equal zero is isomorphic to so(3). Note that the full Lorentz group O(1,3), the proper Lorentz group SO(1,3) and the proper orthochronous Lorentz group SO+(1,3) all have the same Lie algebra, which is typically denoted so(1,3). Since the identity component of the Lorentz group is isomorphic to the quotient of SL(2,C) by the finite subgroup {±I} (see the section above on the connection of the Lorentz group to the Möbius group), the Lie algebra of the Lorentz group is isomorphic to the Lie algebra sl(2,C). Note that sl(2,C) is three dimensional when viewed as a complex Lie algebra, but six dimensional when viewed as a real Lie algebra. Generators of the Möbius group Another generating set arises via the isomorphism to the Möbius group. The following table lists the six generators, in which: The first column gives a generator of the flow under the Möbius action (after stereographic projection from the Riemann sphere) as a real vector field on the Euclidean plane.
The second column gives the corresponding one-parameter subgroup of Möbius transformations. The third column gives the corresponding one-parameter subgroup of Lorentz transformations (the image under our homomorphism of the preceding one-parameter subgroup). The fourth column gives the corresponding generator of the flow under the Lorentz action as a real vector field on Minkowski spacetime. Notice that the generators consist of Two parabolics (null rotations) One hyperbolic (boost in the ∂z direction) Three elliptics (rotations about the x, y, z axes, respectively) Let us verify one line in this table, the rotation about the z axis. Start with the generator iJ_z = iσ_z/2 and exponentiate: exp(iθσ_z/2) = diag(e^{iθ/2}, e^{−iθ/2}). This element of SL(2,C) represents the one-parameter subgroup of (elliptic) Möbius transformations ξ ↦ e^{iθ} ξ. Next, the corresponding vector field on C (thought of as the image of S2 under stereographic projection) is obtained by differentiating with respect to θ. Writing ξ = u + iv, this becomes the vector field −v ∂/∂u + u ∂/∂v on R2. Returning to our element of SL(2,C), writing out the action and collecting terms, we find that the image under the spinor map is the element of SO+(1,3) that rotates the x, y coordinates through the angle θ. Differentiating with respect to θ at θ = 0 yields the corresponding vector field on R4, −y ∂/∂x + x ∂/∂y. This is evidently the generator of counterclockwise rotation about the z axis. Subgroups of the Lorentz group The subalgebras of the Lie algebra of the Lorentz group can be enumerated, up to conjugacy, from which the closed subgroups of the restricted Lorentz group can be listed, up to conjugacy. (See the book by Hall cited below for the details.) These can be readily expressed in terms of the generators given in the table above. The one-dimensional subalgebras of course correspond to the four conjugacy classes of elements of the Lorentz group: a parabolic generator spans a one-parameter subalgebra of parabolics SO(0,1), a boost generator spans a one-parameter subalgebra of boosts SO(1,1), a rotation generator spans a one-parameter subalgebra of rotations SO(2), and a boost together with a rotation about the same axis (in any fixed ratio) generates a one-parameter subalgebra of loxodromic transformations. (Strictly speaking the last corresponds to infinitely many classes, since distinct ratios give different classes.) The two-dimensional subalgebras are: the two parabolic generators span an abelian subalgebra consisting entirely of parabolics; a parabolic generator and a suitable boost generate a nonabelian subalgebra isomorphic to the Lie algebra of the affine group Aff(1); a boost and a rotation about the same axis generate an abelian subalgebra consisting of boosts, rotations, and loxodromics all sharing the same pair of fixed points. The three-dimensional subalgebras use the Bianchi classification scheme: one family generates a Bianchi V subalgebra, isomorphic to the Lie algebra of Hom(2), the group of euclidean homotheties; one generates a Bianchi VII_0 subalgebra, isomorphic to the Lie algebra of E(2), the euclidean group; one, where the parameter a is nonzero, generates a Bianchi VII_a subalgebra; one generates a Bianchi VIII subalgebra, isomorphic to the Lie algebra of SL(2,R), the group of isometries of the hyperbolic plane; and one generates a Bianchi IX subalgebra, isomorphic to the Lie algebra of SO(3), the rotation group. The Bianchi types refer to the classification of three-dimensional Lie algebras by the Italian mathematician Luigi Bianchi. The four-dimensional subalgebras are all conjugate to one that generates a subalgebra isomorphic to the Lie algebra of Sim(2), the group of Euclidean similitudes. The subalgebras form a lattice (see the figure), and each subalgebra generates by exponentiation a closed subgroup of the restricted Lie group. From these, all subgroups of the Lorentz group can be constructed, up to conjugation, by multiplying by one of the elements of the Klein four-group.
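The commutation relations behind these subalgebra statements can be checked directly on 4×4 generator matrices. A short Python/NumPy sketch (the helper names are ours; generators act on coordinates ordered (t, x, y, z)):

    import numpy as np

    def comm(A, B):
        return A @ B - B @ A

    def J(i):
        """Generator of rotations about spatial axis i = 1, 2, 3."""
        M = np.zeros((4, 4))
        j, k = [(2, 3), (3, 1), (1, 2)][i - 1]
        M[j, k], M[k, j] = -1.0, 1.0
        return M

    def K(i):
        """Generator of boosts along spatial axis i = 1, 2, 3."""
        M = np.zeros((4, 4))
        M[0, i] = M[i, 0] = 1.0
        return M

    # Standard so(1,3) commutation relations:
    assert np.allclose(comm(J(1), J(2)), J(3))    # [Jx, Jy] = Jz: rotations close
    assert np.allclose(comm(J(1), K(2)), K(3))    # [Jx, Ky] = Kz
    assert np.allclose(comm(K(1), K(2)), -J(3))   # [Kx, Ky] = -Jz: boosts do not close
    # A boost and a rotation about the same axis commute, spanning the
    # two-dimensional abelian subalgebra of the loxodromic family above:
    assert np.allclose(comm(K(3), J(3)), np.zeros((4, 4)))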
As with any connected Lie group, the coset spaces of the closed subgroups of the restricted Lorentz group, or homogeneous spaces, have considerable mathematical interest. A few, brief descriptions: The group Sim(2) is the stabilizer of a null line, i.e., of a point on the Riemann sphere—so the homogeneous space SO+(1,3)/Sim(2) is the Kleinian geometry that represents conformal geometry on the sphere S2. The (identity component of the) Euclidean group SE(2) is the stabilizer of a null vector, so the homogeneous space SO+(1,3)/SE(2) is the momentum space of a massless particle; geometrically, this Kleinian geometry represents the degenerate geometry of the light cone in Minkowski spacetime. The rotation group SO(3) is the stabilizer of a timelike vector, so the homogeneous space SO+(1,3)/SO(3) is the momentum space of a massive particle; geometrically, this space is none other than three-dimensional hyperbolic space H3. Generalization to higher dimensions The concept of the Lorentz group has a natural generalization to spacetime of any number of dimensions. Mathematically, the Lorentz group of (n+1)-dimensional Minkowski space is the indefinite orthogonal group O(n,1) of linear transformations of Rn+1 that preserves the quadratic form x₁² + x₂² + ⋯ + xₙ² − x²ₙ₊₁. The group O(1, n) preserves the quadratic form x₁² − x₂² − ⋯ − x²ₙ₊₁. It is isomorphic to O(n,1) but enjoys greater popularity in mathematical physics, primarily because the algebra of the Dirac equation, and more generally, spinors and Clifford algebras, are "more natural" with this signature. Many of the properties of the Lorentz group in four dimensions (where n = 3) generalize straightforwardly to arbitrary n. For instance, the Lorentz group O(n,1) has four connected components, and it acts by conformal transformations on the celestial (n−1)-sphere in (n+1)-dimensional Minkowski space. The identity component SO+(n,1) is an SO(n)-bundle over hyperbolic n-space Hn. The low-dimensional cases n = 1 and n = 2 are often useful as "toy models" for the physical case n = 3, while higher-dimensional Lorentz groups are used in physical theories such as string theory that posit the existence of hidden dimensions. The Lorentz group O(n,1) is also the isometry group of n-dimensional de Sitter space dSn, which may be realized as the homogeneous space O(n,1)/O(n−1,1). In particular O(4,1) is the isometry group of the de Sitter universe dS4, a cosmological model. See also Notes References Reading List Emil Artin (1957) Geometric Algebra, chapter III: Symplectic and Orthogonal Geometry via Internet Archive, covers orthogonal groups O(p,q). A canonical reference; see chapters 1–6 for representations of the Lorentz group. An excellent resource for Lie theory, fiber bundles, spinorial coverings, and many other topics. See Lecture 11 for the irreducible representations of SL(2,C). See Chapter 6 for the subalgebras of the Lie algebra of the Lorentz group. See Section 1.3 for a beautifully illustrated discussion of covering spaces. See Section 3D for the topology of rotation groups. §41.3 (Dover reprint edition.) An excellent reference on Minkowski spacetime and the Lorentz group. See Chapter 3 for a superbly illustrated discussion of Möbius transformations. Lie groups Special relativity Group theory Hendrik Lorentz
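As a concrete companion to the higher-dimensional generalization described above, this Python/NumPy sketch (helper names ours) checks the defining property Λ^T g Λ = g for boosts in O(n,1) for several values of n:

    import numpy as np

    def metric(n):
        """Metric of O(n,1): n spacelike directions and one timelike direction
        (placed last), matching the form x_1^2 + ... + x_n^2 - x_{n+1}^2."""
        g = np.eye(n + 1)
        g[n, n] = -1.0
        return g

    def boost(n, axis, rapidity):
        """Boost mixing the spatial coordinate `axis` with the time coordinate."""
        ch, sh = np.cosh(rapidity), np.sinh(rapidity)
        L = np.eye(n + 1)
        L[axis, axis] = L[n, n] = ch
        L[axis, n] = L[n, axis] = sh
        return L

    # The defining property Lambda^T g Lambda = g holds in any dimension n.
    for n in (1, 2, 3, 9):
        g, L = metric(n), boost(n, 0, 1.2)
        assert np.allclose(L.T @ g @ L, g)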
230657
https://en.wikipedia.org/wiki/DigiCipher%202
DigiCipher 2
DigiCipher 2, or simply DCII, is a proprietary standard format of digital signal transmission and it doubles as an encryption standard with MPEG-2/MPEG-4 signal video compression used on many communications satellite television and audio signals. The DCII standard was originally developed in 1997 by General Instrument, which later became the Home and Network Mobility division of Motorola; that business passed to Google with its purchase of Motorola Mobility (announced in August 2011), and the Home portion of the division was subsequently sold to Arris. The original attempt for a North American digital signal encryption and compression standard was DigiCipher 1, which was used most notably in the now-defunct PrimeStar medium-power direct broadcast satellite (DBS) system during the early 1990s. The DCII standard predates wide acceptance of DVB-based digital terrestrial television compression (although not cable or satellite DVB) and therefore is incompatible with the DVB standard. Approximately 70% of newer first-generation digital cable networks in North America use the 4DTV/DigiCipher 2 format. The use of DCII is most prevalent in North American digital cable television set-top boxes. DCII is also used on Motorola's 4DTV digital satellite television tuner and Shaw Direct's DBS receiver. The DigiCipher 2 encryption standard was reverse engineered in 2016. Technical specifications DigiCipher II uses QPSK and BPSK at the same time. The primary difference between DigiCipher 2 and DVB lies in how each standard handles SI metadata, or System Information: whereas DVB reserves packet identifiers from 16 to 31 for metadata, DigiCipher reserves only packet identifier 8187 for its master guide table, which acts as a look-up table for all other metadata tables. DigiCipher 2 also extends the MPEG program number that is assigned for each service in a transport stream with the concept of a virtual channel number, whereas the DVB system never defined this type of remapping, preferring to use a registry of network identifiers to further differentiate program numbers from those used in other transport streams. (There are also private non-standard additions to DVB that add virtual channel remapping using logical channel numbers.) Also unlike DVB, all text used in descriptors can be compressed using standard Huffman coding, which saves on broadcast bandwidth and loading times. DigiCipher II uses Dolby Digital AC-3 audio for all channels; MPEG-1 Layer II audio is not supported. External links Technical page on digital satellite signals Historical Perspective: HBO Implements Scrambling References Cryptographic protocols Digital television History of television Television terminology Conditional-access television broadcasting
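To illustrate the PID-based lookup just described: in an MPEG transport stream the 13-bit packet identifier occupies parts of the second and third bytes of each 188-byte packet, so a demultiplexer can filter for the DigiCipher II master guide table on PID 8187 (0x1FFB). A minimal sketch in Python (function names are ours; a byte-aligned stream is assumed):

    TS_PACKET_SIZE = 188
    MGT_PID = 8187  # 0x1FFB, where DigiCipher II carries its master guide table

    def packet_pid(packet):
        """Extract the 13-bit packet identifier from an MPEG-TS packet."""
        if len(packet) != TS_PACKET_SIZE or packet[0] != 0x47:
            raise ValueError("not a valid MPEG transport stream packet")
        return ((packet[1] & 0x1F) << 8) | packet[2]

    def master_guide_packets(stream):
        """Yield the packets carrying the master guide table, the lookup
        table for all other DigiCipher II metadata tables."""
        for off in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
            packet = stream[off:off + TS_PACKET_SIZE]
            if packet_pid(packet) == MGT_PID:
                yield packet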
230777
https://en.wikipedia.org/wiki/Security%20protocol%20notation
Security protocol notation
In cryptography, security (engineering) protocol notation, also known as protocol narrations and Alice & Bob notation, is a way of expressing a protocol of correspondence between entities of a dynamic system, such as a computer network. In the context of a formal model, it allows reasoning about the properties of such a system. The standard notation consists of a set of principals (traditionally named Alice, Bob, Charlie, and so on) who wish to communicate. They may have access to a server S, shared keys K, timestamps T, and can generate nonces N for authentication purposes. A simple example might be the following: A → B : {X}_{K_{A,B}} This states that Alice intends a message for Bob consisting of a plaintext X encrypted under shared key K_{A,B}. Another example might be the following: B → A : {N_B}_{K_A} This states that Bob intends a message for Alice consisting of a nonce N_B encrypted using the public key of Alice. A key with two subscripts, K_{A,B}, is a symmetric key shared by the two corresponding individuals. A key with one subscript, K_A, is the public key of the corresponding individual. A private key is represented as the inverse of the public key. The notation specifies only the operation and not its semantics — for instance, private key encryption and signature are represented identically. We can express more complicated protocols in such a fashion. See Kerberos as an example. Some sources refer to this notation as Kerberos Notation. Some authors consider the notation used by Steiner, Neuman, & Schiller as a notable reference. Several models exist to reason about security protocols in this way, one of which is BAN logic. Security protocol notation inspired many of the programming languages used in choreographic programming. References Cryptography
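The notation maps naturally onto symbolic (Dolev–Yao style) terms, which is roughly how formal tools represent protocol narrations internally. A minimal, hypothetical sketch in Python (all class names are ours) encoding the two example steps above as data:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SharedKey:
        a: str
        b: str          # K_{A,B}: symmetric key shared by principals a and b

    @dataclass(frozen=True)
    class PublicKey:
        owner: str      # K_A: public key of one principal

    @dataclass(frozen=True)
    class Nonce:
        owner: str      # N_B: fresh value generated by a principal

    @dataclass(frozen=True)
    class Enc:
        payload: object
        key: object     # {payload}_key: encryption as an opaque constructor

    @dataclass(frozen=True)
    class Message:
        sender: str
        receiver: str
        body: object    # one narration step "A -> B : body"

    # The two example steps from the text:
    step1 = Message("A", "B", Enc("X", SharedKey("A", "B")))    # A -> B : {X}_{K_{A,B}}
    step2 = Message("B", "A", Enc(Nonce("B"), PublicKey("A")))  # B -> A : {N_B}_{K_A}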
231088
https://en.wikipedia.org/wiki/Multichannel%20Multipoint%20Distribution%20Service
Multichannel Multipoint Distribution Service
Multichannel Multipoint Distribution Service (MMDS), formerly known as Broadband Radio Service (BRS) and also known as Wireless Cable, is a wireless telecommunications technology, used for general-purpose broadband networking or, more commonly, as an alternative method of cable television programming reception. MMDS is used in Australia, Barbados, Belarus, Bolivia, Brazil, Cambodia, Canada, Czech Republic, Dominican Republic, Iceland, India, Kazakhstan, Kyrgyzstan, Lebanon, Mexico, Nepal, Nigeria, Pakistan, Panama, Portugal (including Madeira), Russia, Slovakia, Sri Lanka, Sudan, Thailand, Ukraine, United States, Uruguay and Vietnam. It is most commonly used in sparsely populated rural areas, where laying cables is not economically viable, although some companies have also offered MMDS services in urban areas, most notably in Ireland, until they were phased out in 2016. Technology The BRS band uses microwave frequencies from 2.3 GHz to 2.5 GHz. Reception of BRS-delivered television and data signals is done with a rooftop microwave antenna. The antenna is attached to a down-converter or transceiver to receive and transmit the microwave signals and convert them to frequencies compatible with standard TV tuners (much as on satellite dishes, where the signals are converted down to frequencies more compatible with standard TV coaxial cabling); some antennas use an integrated down-converter or transceiver. Digital TV channels can then be decoded with a standard cable set-top box, or directly for TVs with integrated digital tuners. Internet data can be received with a standard DOCSIS cable modem connected to the same antenna and transceiver. The MMDS band is separated into 33 6-MHz "channels" (31 in the USA), which may be licensed to cable companies offering service in different areas of a country. The concept was to allow entities to own several channels and multiplex several television and radio services, and later Internet data, onto each channel using digital technology. Just as with digital cable channels, each channel is capable of 30.34 Mbit/s with 64QAM modulation, and 42.88 Mbit/s with 256QAM modulation. Due to forward error correction and other overhead, actual throughput is around 27 Mbit/s for 64QAM and 38 Mbit/s for 256QAM. The newer BRS band plan makes changes to channel size and licensing in order to accommodate new WiMAX TDD fixed and mobile equipment, and reallocated frequencies from 2150–2162 MHz to the AWS band. These changes may not be compatible with the frequencies and channel sizes required for operating traditional MMDS or DOCSIS based equipment. MMDS and DOCSIS+ Local Multipoint Distribution Service (LMDS) and BRS have adapted DOCSIS (Data Over Cable Service Interface Specification) from the cable modem world. The version of DOCSIS modified for wireless broadband is known as DOCSIS+. Data-transport security is accomplished under BRS by encrypting traffic flows between the broadband wireless modem and the WMTS (Wireless Modem Termination System) located in the base station of the provider's network using Triple DES. DOCSIS+ reduces theft-of-service vulnerabilities under BRS by requiring that the WMTS enforce encryption, and by employing an authenticated client/server key-management protocol in which the WMTS controls distribution of keying material to broadband wireless modems. LMDS and BRS wireless modems utilize the DOCSIS+ key-management protocol to obtain authorization and traffic encryption material from a WMTS, and to support periodic reauthorization and key refresh.
The key-management protocol uses X.509 digital certificates, RSA public key encryption, and Triple DES encryption to secure key exchanges between the wireless modem and the WMTS. MMDS provided significantly greater range than LMDS. MMDS has largely been superseded by the newer 802.16 WiMAX standard, approved in 2004. MMDS was sometimes expanded to Multipoint Microwave Distribution System or Multi-channel Multi-point Distribution System. All three phrases refer to the same technology. Current status In the United States, WATCH Communications (based in Lima, Ohio), Eagle Vision (based in Kirksville, MO), and several other companies offer MMDS-based wireless cable television, Internet access, and IP-based telephone services. In certain areas, BRS is being deployed for use as wireless high-speed Internet access, mostly in rural areas where other types of high-speed internet are either unavailable (such as cable or DSL) or prohibitively expensive (such as satellite internet). CommSPEED is a major vendor in the US market for BRS-based internet. AWI Networks (formerly Sky-View Technologies) operates a number of MMDS sites delivering high-speed Internet, VoIP telephone, and digital TV services in the Southwestern U.S. In 2010, AWI began upgrading its infrastructure to DOCSIS 3.0 hardware, along with new microwave transmission equipment allowing higher modulation rates such as 256QAM. This has enabled download speeds in excess of 100 Mbit/s, over distances up to 35 miles from the transmission site. In the early days of MMDS, it was known as "Wireless Cable" and was used in a variety of investment scams that still surface today. Frequent solicitations of Wireless Cable fraud schemes were often heard on talk radio shows like The Sonny Bloch Show in the mid-1990s. Several US telephone companies attempted television services via this system in the mid-1990s: the Tele-TV venture of Bell Atlantic, NYNEX and Pacific Bell, and the rival Americast consortium of Ameritech, BellSouth, SBC, SNET and GTE. The Tele-TV operation was only launched from 1999 to 2001 by Pacific Bell (the merged Bell Atlantic/NYNEX never launched a service), while Americast also petered out by that time, albeit mainly in GTE and BellSouth areas; the systems operated by Ameritech utilized standard wired cable. In the Canadian provinces of Manitoba and British Columbia, Craig Wireless operates a wireless cable and internet service (MMDS) for rural and remote customers. In Saskatchewan, Sasktel operated an MMDS system under the name Wireless Broadband Internet (WBBI) for rural internet access until it was shut down in 2014 and replaced with an LTE-TDD system due to reallocation of the radio spectrum by Industry Canada. In Mexico, the 2.5 GHz band spectrum was reclaimed by the government in order to allow newer and better wireless data services. Hence, MAS TV (formerly known as MVS Multivision) had to relinquish the concessions for TV broadcast and shut down its MMDS pay TV services in 2014 after 25 years of service. In Ireland, since 1990, UPC Ireland (previously Chorus and NTL Ireland) offered MMDS TV services almost nationwide. The frequency band initially allocated was 2500–2690 MHz (the "2.6 GHz band"), consisting of 22–23 8-MHz analogue channels; digital TV was restricted to 2524–2668 MHz, consisting of 18 8-MHz digital channels. Two digital TV standards were used: DVB-T/MPEG-2 in the old Chorus franchise area and DVB-C/MPEG-2 in the old NTL franchise area.
The existing licences were to expire 18 April 2014, but ComReg, the Irish communications regulator, extended the licences for a further two years to 18 April 2016, at which date they expired together with all associated spectrum rights of use. The 2.6 GHz band spectrum will be auctioned off so that, when the existing MMDS licences expire, new rights of use can be issued on a service- and technology-neutral basis (by means of new licences). As a result, holders of the new rights of use may choose to provide any service capable of being delivered using 2.6 GHz spectrum. For instance, they could distribute television programming content, subject to complying with the relevant technical conditions and with any necessary broadcasting content authorisations, or they could adopt some other use. In Iceland, since November 2006, Vodafone Iceland has run Digital Ísland (Digital Iceland), the broadcasting system for 365 (media corporation), previously operated by 365 Broadcast Media. Digital Ísland offers digital MMDS television services using DVB-T technology alongside a few analogue channels. The MMDS frequency range extends from 2500–2684 MHz, for a total of 23 8-MHz channels, of which 21 are considered usable for broadcasting in Iceland. Analogue MMDS broadcasting began in 1993, moving to digital in 2004. In Brazil, the shutdown of the MMDS technology started in 2012 to release the frequencies for the 2500–2600 MHz LTE-UTRAN band, which would make the service infeasible; the national shutdown was planned to be finished at the end of 2012, and as of 2013 the service had already been shut down in most cities. In the Dominican Republic, Wind Telecom started operations using MMDS technology in 2008, making it a pioneer in such deployments there. The company uses the DVB standard for its digital television transmissions. See also Federal Communications Commission (FCC) References External links FCC BRS EBS Homepage What is MMDS? Vodafone Digital Ísland MMDS Íslenska Fjarskiptahandbókin Digital Ísland info Network access Microwave bands Educational television
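The channel capacities quoted in the Technology section follow directly from the QAM symbol rates used in 6 MHz North American cable channels. A small Python sketch (the symbol rates and the roughly 11% overhead figure are approximate assumptions, chosen to match the numbers in the text):

    SYMBOL_RATES = {"64QAM": 5.057e6, "256QAM": 5.360e6}  # symbols per second
    BITS_PER_SYMBOL = {"64QAM": 6, "256QAM": 8}

    def raw_bitrate(mod):
        return SYMBOL_RATES[mod] * BITS_PER_SYMBOL[mod]

    def usable_bitrate(mod, overhead=0.11):
        # Roughly 11% is lost to forward error correction and framing.
        return raw_bitrate(mod) * (1 - overhead)

    for mod in ("64QAM", "256QAM"):
        print(f"{mod}: raw {raw_bitrate(mod)/1e6:.2f} Mbit/s,"
              f" usable ~{usable_bitrate(mod)/1e6:.0f} Mbit/s")
    # -> 64QAM: raw 30.34 Mbit/s, usable ~27 Mbit/s
    # -> 256QAM: raw 42.88 Mbit/s, usable ~38 Mbit/s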
231284
https://en.wikipedia.org/wiki/CAN%20bus
CAN bus
A Controller Area Network (CAN bus) is a robust vehicle bus standard designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it can also be used in many other contexts. For each device, the data in a frame is transmitted sequentially but in such a way that if more than one device transmits at the same time, the highest priority device can continue while the others back off. Frames are received by all devices, including by the transmitting device. History Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986 at the Society of Automotive Engineers (SAE) conference in Detroit, Michigan. The first CAN controller chips were introduced by Intel in 1987, and shortly thereafter by Philips. Released in 1991, the Mercedes-Benz W140 was the first production vehicle to feature a CAN-based multiplex wiring system. Bosch published several versions of the CAN specification and the latest is CAN 2.0 published in 1991. This specification has two parts; part A is for the standard format with an 11-bit identifier, and part B is for the extended format with a 29-bit identifier. A CAN device that uses 11-bit identifiers is commonly called CAN 2.0A and a CAN device that uses 29-bit identifiers is commonly called CAN 2.0B. These standards are freely available from Bosch along with other specifications and white papers. In 1993, the International Organization for Standardization (ISO) released the CAN standard ISO 11898 which was later restructured into two parts; ISO 11898-1 which covers the data link layer, and ISO 11898-2 which covers the CAN physical layer for high-speed CAN. ISO 11898-3 was released later and covers the CAN physical layer for low-speed, fault-tolerant CAN. The physical layer standards ISO 11898-2 and ISO 11898-3 are not part of the Bosch CAN 2.0 specification. These standards may be purchased from the ISO. Bosch is still active in extending the CAN standards. In 2012, Bosch released CAN FD 1.0 or CAN with Flexible Data-Rate. This specification uses a different frame format that allows a different data length as well as optionally switching to a faster bit rate after the arbitration is decided. CAN FD is compatible with existing CAN 2.0 networks so new CAN FD devices can coexist on the same network with existing CAN devices. CAN bus is one of five protocols used in the on-board diagnostics (OBD)-II vehicle diagnostics standard. The OBD-II standard has been mandatory for all cars and light trucks sold in the United States since 1996. The EOBD standard has been mandatory for all petrol vehicles sold in the European Union since 2001 and all diesel vehicles since 2004. Applications Passenger vehicles, trucks, buses (combustion vehicles and electric vehicles) Agricultural equipment Electronic equipment for aviation and navigation Industrial automation and mechanical control Elevators, escalators Building automation Medical instruments and equipment Pedelecs Model railways/railroads Ships and other maritime applications Lighting control systems 3D Printers Automotive The modern automobile may have as many as 70 electronic control units (ECU) for various subsystems. Traditionally, the biggest processor is the engine control unit. 
Others are used for autonomous driving, advanced driver assistance systems (ADAS), transmission, airbags, antilock braking/ABS, cruise control, electric power steering, audio systems, power windows, doors, mirror adjustment, battery and recharging systems for hybrid/electric cars, etc. Some of these form independent subsystems, but communications among others are essential. A subsystem may need to control actuators or receive feedback from sensors. The CAN standard was devised to fill this need. One key advantage is that interconnection between different vehicle systems can allow a wide range of safety, economy and convenience features to be implemented using software alone: functionality that would add cost and complexity if such features were "hard wired" using traditional automotive electrics. Examples include: Auto start/stop: Various sensor inputs from around the vehicle (speed sensors, steering angle, air conditioning on/off, engine temperature) are collated via the CAN bus to determine whether the engine can be shut down when stationary for improved fuel economy and emissions. Electric park brakes: The "hill hold" functionality takes input from the vehicle's tilt sensor (also used by the burglar alarm) and the road speed sensors (also used by the ABS, engine control and traction control) via the CAN bus to determine if the vehicle is stopped on an incline. Similarly, inputs from seat belt sensors (part of the airbag controls) are fed from the CAN bus to determine if the seat belts are fastened, so that the parking brake will automatically release upon moving off. Parking assist systems: When the driver engages reverse gear, the transmission control unit can send a signal via the CAN bus to activate both the parking sensor system and the door control module for the passenger side door mirror to tilt downward to show the position of the curb. The CAN bus also takes inputs from the rain sensor to trigger the rear windscreen wiper when reversing. Auto lane assist/collision avoidance systems: The inputs from the parking sensors are also used by the CAN bus to feed outside proximity data to driver assist systems such as lane departure warning, and more recently, these signals travel through the CAN bus to actuate brake-by-wire in active collision avoidance systems. Auto brake wiping: Input is taken from the rain sensor (used primarily for the automatic windscreen wipers) via the CAN bus to the ABS module to initiate an imperceptible application of the brakes whilst driving to clear moisture from the brake rotors. Some high-performance Audi and BMW models incorporate this feature. Sensors can be placed in the most suitable location, and their data used by several ECUs. For example, outdoor temperature sensors (traditionally placed in the front) can be placed in the outside mirrors, avoiding heating by the engine, and the data used by the engine, the climate control, and the driver display. In recent years, the LIN bus (Local Interconnect Network) standard has been introduced to complement CAN for non-critical subsystems such as air-conditioning and infotainment, where data transmission speed and reliability are less critical.
Manufacturers including NISMO aim to use CAN bus data to recreate real-life racing laps in the videogame Gran Turismo 6 using the game's GPS Data Logger function, which would then allow players to race against real laps. Johns Hopkins University's Applied Physics Laboratory's Modular Prosthetic Limb (MPL) uses a local CAN bus to facilitate communication between servos and microcontrollers in the prosthetic arm. Teams in the FIRST Robotics Competition widely use CAN bus to communicate between the roboRIO and other robot control modules. The CueScript teleprompter range uses the CAN bus protocol over coaxial cable to connect its CSSC – Desktop Scroll Control to the main unit. The CAN bus protocol is widely implemented due to its fault tolerance in electrically noisy environments such as model railroad sensor feedback systems by major commercial Digital Command Control system manufacturers and various open source digital model railroad control projects. Architecture Physical organization CAN is a multi-master serial bus standard for connecting electronic control units (ECUs), also known as nodes (automotive electronics is a major application domain). Two or more nodes are required on the CAN network to communicate. A node may range from simple digital logic, e.g. a PLD, via FPGA, up to an embedded computer running extensive software. Such a computer may also be a gateway allowing a general-purpose computer (like a laptop) to communicate over a USB or Ethernet port to the devices on a CAN network. All nodes are connected to each other through a physically conventional two-wire bus. The wires are a twisted pair with a 120 Ω (nominal) characteristic impedance. This bus uses differential wired-AND signals. Two signals, CAN high (CANH) and CAN low (CANL), are either driven to a "dominant" state with CANH > CANL, or not driven and pulled by passive resistors to a "recessive" state with CANH ≤ CANL. A 0 data bit encodes a dominant state, while a 1 data bit encodes a recessive state, supporting a wired-AND convention, which gives nodes with lower ID numbers priority on the bus. ISO 11898-2, also called high-speed CAN (bit speeds up to 1 Mbit/s on CAN, 5 Mbit/s on CAN-FD), uses a linear bus terminated at each end with 120 Ω resistors. High-speed CAN signaling drives the CANH wire towards 3.5 V and the CANL wire towards 1.5 V when any device is transmitting a dominant (0), while if no device is transmitting a dominant, the terminating resistors passively return the two wires to the recessive (1) state with a nominal differential voltage of 0 V. (Receivers consider any differential voltage of less than 0.5 V to be recessive.) The dominant differential voltage is a nominal 2 V. The dominant common mode voltage (CANH+CANL)/2 must be within 1.5 to 3.5 V of common, while the recessive common mode voltage must be within ±12 V of common. ISO 11898-3, also called low-speed or fault-tolerant CAN (up to 125 kbit/s), uses a linear bus, star bus or multiple star buses connected by a linear bus, and is terminated at each node by a fraction of the overall termination resistance. The overall termination resistance should be close to, but not less than, 100 Ω. Low-speed fault-tolerant CAN signaling operates similarly to high-speed CAN, but with larger voltage swings.
The dominant state is transmitted by driving CANH towards the device power supply voltage (5 V or 3.3 V) and CANL towards 0 V when transmitting a dominant (0), while the termination resistors pull the bus to a recessive state with CANH at 0 V and CANL at 5 V. This allows a simpler receiver which just considers the sign of CANH−CANL. Both wires must be able to handle −27 to +40 V without damage. Electrical properties With both high-speed and low-speed CAN, the speed of the transition is faster when a recessive-to-dominant transition occurs, since the CAN wires are being actively driven. The speed of the dominant-to-recessive transition depends primarily on the length of the CAN network and the capacitance of the wire used. High-speed CAN is usually used in automotive and industrial applications where the bus runs from one end of the environment to the other. Fault-tolerant CAN is often used where groups of nodes need to be connected together. The specifications require the bus be kept within a minimum and maximum common mode bus voltage, but do not define how to keep the bus within this range. The CAN bus must be terminated. The termination resistors are needed to suppress reflections as well as return the bus to its recessive or idle state. High-speed CAN uses a 120 Ω resistor at each end of a linear bus. Low-speed CAN uses resistors at each node. Other types of terminations may be used, such as the Terminating Bias Circuit defined in ISO 11783. A Terminating Bias Circuit provides power and ground in addition to the CAN signaling on a four-wire cable. This provides automatic electrical bias and termination at each end of each bus segment. An ISO 11783 network is designed for hot plug-in and removal of bus segments and ECUs. Nodes Each node requires: A central processing unit, microprocessor, or host processor. The host processor decides what the received messages mean and what messages it wants to transmit. Sensors, actuators and control devices can be connected to the host processor. A CAN controller, often an integral part of the microcontroller. Receiving: the CAN controller stores the received serial bits from the bus until an entire message is available, which can then be fetched by the host processor (usually by the CAN controller triggering an interrupt). Sending: the host processor sends the transmit message(s) to a CAN controller, which transmits the bits serially onto the bus when the bus is free. A transceiver, defined by the ISO 11898-2/3 medium access unit (MAU) standards. Receiving: it converts the data stream from CAN bus levels to levels that the CAN controller uses. It usually has protective circuitry to protect the CAN controller. Transmitting: it converts the data stream from the CAN controller to CAN bus levels. Each node is able to send and receive messages, but not simultaneously. A message or frame consists primarily of the ID (identifier), which represents the priority of the message, and up to eight data bytes. A CRC, acknowledge slot [ACK] and other overhead are also part of the message. The improved CAN FD extends the length of the data section to up to 64 bytes per frame. The message is transmitted serially onto the bus using a non-return-to-zero (NRZ) format and may be received by all nodes. The devices that are connected by a CAN network are typically sensors, actuators, and other control devices. These devices are connected to the bus through a host processor, a CAN controller, and a CAN transceiver.
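The wired-AND behavior described above, in which any node transmitting a dominant 0 overrides all nodes transmitting a recessive 1, can be captured in a toy model. This Python sketch (ours, purely illustrative) states the electrical rule on which the arbitration scheme of the next section rests:

    def bus_level(transmitted_bits):
        """Resulting bus level when several nodes transmit at once:
        0 = dominant (actively driven), 1 = recessive (passively returned)."""
        return min(transmitted_bits)  # any dominant 0 overrides recessive 1s

    assert bus_level([1, 1, 1]) == 1  # all recessive: bus stays recessive
    assert bus_level([1, 0, 1]) == 0  # a single dominant transmitter wins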
Data transmission

CAN data transmission uses a lossless bitwise arbitration method of contention resolution. This arbitration method requires all nodes on the CAN network to be synchronized to sample every bit on the CAN network at the same time. This is why some call CAN synchronous. Unfortunately the term synchronous is imprecise, since the data is transmitted in an asynchronous format, namely without a clock signal.

The CAN specifications use the terms "dominant" bits and "recessive" bits, where dominant is a logical 0 (actively driven to a voltage by the transmitter) and recessive is a logical 1 (passively returned to a voltage by a resistor). The idle state is represented by the recessive level (logical 1). If one node transmits a dominant bit and another node transmits a recessive bit then there is a collision and the dominant bit "wins". This means there is no delay to the higher-priority message, and the node transmitting the lower-priority message automatically attempts to re-transmit six bit clocks after the end of the dominant message. This makes CAN very suitable as a real-time prioritized communications system.

The exact voltages for a logical 0 or 1 depend on the physical layer used, but the basic principle of CAN requires that each node listen to the data on the CAN network, including the transmitting node(s) itself (themselves). If a logical 1 is transmitted by all transmitting nodes at the same time, then a logical 1 is seen by all of the nodes, including both the transmitting node(s) and receiving node(s). If a logical 0 is transmitted by all transmitting node(s) at the same time, then a logical 0 is seen by all nodes. If a logical 0 is being transmitted by one or more nodes, and a logical 1 is being transmitted by one or more nodes, then a logical 0 is seen by all nodes, including the node(s) transmitting the logical 1. When a node transmits a logical 1 but sees a logical 0, it realizes that there is contention and it quits transmitting. By using this process, any node that transmits a logical 1 when another node transmits a logical 0 "drops out" or loses the arbitration. A node that loses arbitration re-queues its message for later transmission and the CAN frame bit-stream continues without error until only one node is left transmitting. This means that the node that transmits the first 1 loses arbitration. Since the 11-bit (or 29-bit for CAN 2.0B) identifier is transmitted by all nodes at the start of the CAN frame, the node with the lowest identifier transmits more zeros at the start of the frame, and that is the node that wins the arbitration or has the highest priority.

For example, consider an 11-bit ID CAN network, with two nodes with IDs of 15 (binary representation, 00000001111) and 16 (binary representation, 00000010000). If these two nodes transmit at the same time, each will first transmit the start bit then transmit the first six zeros of their ID with no arbitration decision being made. When the 7th ID bit is transmitted, the node with the ID of 16 transmits a 1 (recessive) for its ID, and the node with the ID of 15 transmits a 0 (dominant) for its ID. When this happens, the node with the ID of 16 knows it transmitted a 1, but sees a 0 and realizes that there is a collision and it lost arbitration. Node 16 stops transmitting, which allows the node with ID of 15 to continue its transmission without any loss of data. The node with the lowest ID will always win the arbitration, and therefore has the highest priority.
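The effect of this bitwise arbitration can be illustrated with a short simulation. The sketch below is a simplification that models only the identifier bits (ignoring the start bit, bit stuffing and timing), but it reproduces the wired-AND behaviour described above:

    # Simplified model of CAN bitwise arbitration: dominant (0) overrides
    # recessive (1), and a node that sends a 1 but sees a 0 drops out.
    def arbitrate(ids, width=11):
        contenders = set(ids)
        for bit in range(width - 1, -1, -1):          # MSB is transmitted first
            sent = {i: (i >> bit) & 1 for i in contenders}
            bus = min(sent.values())                  # wired-AND: any 0 wins the bus
            # Nodes that transmitted 1 but observe 0 lose arbitration and back off.
            contenders = {i for i in contenders if sent[i] == bus}
        (winner,) = contenders                        # unique IDs guarantee one winner
        return winner

    print(arbitrate([15, 16]))   # -> 15, the lower identifier (higher priority)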
Bit rates up to 1 Mbit/s are possible at network lengths below 40 m. Decreasing the bit rate allows longer network distances (e.g., 500 m at 125 kbit/s). The improved CAN FD standard allows increasing the bit rate after arbitration and can increase the speed of the data section by a factor of up to ten or more over the arbitration bit rate.

ID allocation

Message IDs must be unique on a single CAN bus, otherwise two nodes would continue transmission beyond the end of the arbitration field (ID), causing an error.

In the early 1990s, the choice of IDs for messages was done simply on the basis of identifying the type of data and the sending node; however, as the ID is also used as the message priority, this led to poor real-time performance. In those scenarios, a low CAN bus utilization of around 30% was commonly required to ensure that all messages would meet their deadlines. However, if IDs are instead determined based on the deadline of the message (the lower the numerical ID, the higher the message priority), then bus utilization of 70 to 80% can typically be achieved before any message deadlines are missed.

Bit timing

All nodes on the CAN network must operate at the same nominal bit rate, but noise, phase shifts, oscillator tolerance and oscillator drift mean that the actual bit rate might not be the nominal bit rate. Since a separate clock signal is not used, a means of synchronizing the nodes is necessary. Synchronization is important during arbitration since the nodes in arbitration must be able to see both their transmitted data and the other nodes' transmitted data at the same time. Synchronization is also important to ensure that variations in oscillator timing between nodes do not cause errors.

Synchronization starts with a hard synchronization on the first recessive-to-dominant transition after a period of bus idle (the start bit). Resynchronization occurs on every recessive-to-dominant transition during the frame. The CAN controller expects the transition to occur at a multiple of the nominal bit time. If the transition does not occur at the exact time the controller expects it, the controller adjusts the nominal bit time accordingly.

The adjustment is accomplished by dividing each bit into a number of time slices called quanta, and assigning some number of quanta to each of the four segments within the bit: synchronization, propagation, phase segment 1 and phase segment 2. The number of quanta the bit is divided into can vary by controller, and the number of quanta assigned to each segment can be varied depending on bit rate and network conditions. A transition that occurs before or after it is expected causes the controller to calculate the time difference and lengthen phase segment 1 or shorten phase segment 2 by this time. This effectively adjusts the timing of the receiver to the transmitter to synchronize them. This resynchronization process is done continuously at every recessive-to-dominant transition to ensure the transmitter and receiver stay in sync. Continuously resynchronizing reduces errors induced by noise, and allows a receiving node that was synchronized to a node which lost arbitration to resynchronize to the node which won arbitration.

Layers

The CAN protocol, like many networking protocols, can be decomposed into the following abstraction layers:

Application layer
Object layer
  Message filtering
  Message and status handling
Transfer layer

Most of the CAN standard applies to the transfer layer.
The transfer layer receives messages from the physical layer and transmits those messages to the object layer. The transfer layer is responsible for bit timing and synchronization, message framing, arbitration, acknowledgement, error detection and signaling, and fault confinement. It performs:

Fault confinement
Error detection
Message validation
Acknowledgement
Arbitration
Message framing
Transfer rate and timing
Information routing

Physical layer

CAN bus (ISO 11898-1:2003) originally specified the link layer protocol with only abstract requirements for the physical layer, e.g., asserting the use of a medium with multiple access at the bit level through the use of dominant and recessive states. The electrical aspects of the physical layer (voltage, current, number of conductors) were specified in ISO 11898-2:2003, which is now widely accepted. However, the mechanical aspects of the physical layer (connector type and number, colors, labels, pin-outs) have yet to be formally specified. As a result, an automotive ECU will typically have a particular (often custom) connector with various sorts of cables, of which two are the CAN bus lines. Nonetheless, several de facto standards for mechanical implementation have emerged, the most common being the 9-pin D-sub type male connector with the following pin-out:

pin 2: CAN-Low (CAN−)
pin 3: GND (ground)
pin 7: CAN-High (CAN+)
pin 9: CAN V+ (power)

This de facto mechanical standard for CAN could be implemented with the node having both male and female 9-pin D-sub connectors electrically wired to each other in parallel within the node. Bus power is fed to a node's male connector and the bus draws power from the node's female connector. This follows the electrical engineering convention that power sources are terminated at female connectors. Adoption of this standard avoids the need to fabricate custom splitters to connect two sets of bus wires to a single D connector at each node. Such nonstandard (custom) wire harnesses (splitters) that join conductors outside the node reduce bus reliability, eliminate cable interchangeability, reduce compatibility of wiring harnesses, and increase cost.

The absence of a complete physical layer specification (mechanical in addition to electrical) freed the CAN bus specification from the constraints and complexity of physical implementation. However, it left CAN bus implementations open to interoperability issues due to mechanical incompatibility. In order to improve interoperability, many vehicle makers have generated specifications describing a set of allowed CAN transceivers in combination with requirements on the parasitic capacitance on the line. The allowed parasitic capacitance includes both capacitors and ESD protection (ESD against ISO 7637-3). In addition to parasitic capacitance, 12 V and 24 V systems do not have the same requirements in terms of maximum line voltage. Indeed, during jump start events light vehicle lines can go up to 24 V while truck systems can go as high as 36 V. New solutions are coming on the market allowing the same components to be used for both CAN and CAN FD.

Noise immunity on ISO 11898-2:2003 is achieved by maintaining the differential impedance of the bus at a low level with low-value resistors (120 ohms) at each end of the bus. However, when dormant, a low-impedance bus such as CAN draws more current (and power) than other voltage-based signaling busses.
On CAN bus systems, balanced line operation, where current in one signal line is exactly balanced by current in the opposite direction in the other signal, provides an independent, stable 0 V reference for the receivers. Best practice dictates that CAN bus balanced pair signals be carried in twisted pair wires in a shielded cable to minimize RF emission and reduce interference susceptibility in the already noisy RF environment of an automobile.

ISO 11898-2 provides some immunity to common-mode voltage between transmitter and receiver by having a 0 V rail running along the bus to maintain a high degree of voltage association between the nodes. Also, in the de facto mechanical configuration mentioned above, a supply rail is included to distribute power to each of the transceiver nodes. The design provides a common supply for all the transceivers. The actual voltage to be applied by the bus and which nodes apply to it are application-specific and not formally specified. Common practice node design provides each node with transceivers which are optically isolated from their node host and derive a 5 V linearly regulated supply voltage for the transceivers from the universal supply rail provided by the bus. This usually allows operating margin on the supply rail sufficient to allow interoperability across many node types. Typical values of supply voltage on such networks are 7 to 30 V. However, the lack of a formal standard means that system designers are responsible for supply rail compatibility.

ISO 11898-2 describes the electrical implementation formed from a multi-dropped single-ended balanced line configuration with resistor termination at each end of the bus. In this configuration a dominant state is asserted by one or more transmitters switching the CAN− to supply 0 V and (simultaneously) switching CAN+ to the +5 V bus voltage, thereby forming a current path through the resistors that terminate the bus. As such the terminating resistors form an essential component of the signalling system and are included not just to limit wave reflection at high frequency.

During a recessive state the signal lines and resistor(s) remain in a high-impedance state with respect to both rails. Voltages on both CAN+ and CAN− tend (weakly) towards a voltage midway between the rails. A recessive state is present on the bus only when none of the transmitters on the bus is asserting a dominant state.

During a dominant state the signal lines and resistor(s) move to a low-impedance state with respect to the rails so that current flows through the resistor. CAN+ voltage tends to +5 V and CAN− tends to 0 V.

Irrespective of signal state the signal lines are always in a low-impedance state with respect to one another by virtue of the terminating resistors at the ends of the bus. This signalling strategy differs significantly from other balanced line transmission technologies such as RS-422/3, RS-485, etc. which employ differential line drivers/receivers and use a signalling system based on the differential-mode voltage of the balanced line crossing a notional 0 V. Multiple access on such systems normally relies on the media supporting three states (active high, active low and inactive tri-state) and is dealt with in the time domain. Multiple access on CAN bus is achieved by the electrical logic of the system supporting just two states that are conceptually analogous to a 'wired AND' network.
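From a receiver's point of view, this two-state scheme reduces to a differential comparison. A minimal sketch using the nominal high-speed CAN figures quoted earlier (a differential below about 0.5 V reads as recessive; the sample voltages are nominal examples, not measurements):

    # Decide the bus state from sampled CANH/CANL voltages on a high-speed CAN link.
    # Receivers treat a differential voltage below ~0.5 V as recessive (logical 1).
    def bus_state(canh: float, canl: float) -> int:
        return 0 if (canh - canl) >= 0.5 else 1   # 0 = dominant, 1 = recessive

    print(bus_state(3.5, 1.5))  # driven dominant: ~2 V differential -> 0
    print(bus_state(2.5, 2.5))  # idle/recessive: ~0 V differential  -> 1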
Frames

A CAN network can be configured to work with two different message (or "frame") formats: the standard or base frame format (described in CAN 2.0 A and CAN 2.0 B), and the extended frame format (described only by CAN 2.0 B). The only difference between the two formats is that the "CAN base frame" supports a length of 11 bits for the identifier, and the "CAN extended frame" supports a length of 29 bits for the identifier, made up of the 11-bit identifier ("base identifier") and an 18-bit extension ("identifier extension"). The distinction between CAN base frame format and CAN extended frame format is made by using the IDE bit, which is transmitted as dominant in case of an 11-bit frame, and transmitted as recessive in case of a 29-bit frame. CAN controllers that support extended frame format messages are also able to send and receive messages in CAN base frame format. All frames begin with a start-of-frame (SOF) bit that denotes the start of the frame transmission.

CAN has four frame types:

Data frame: a frame containing node data for transmission
Remote frame: a frame requesting the transmission of a specific identifier
Error frame: a frame transmitted by any node detecting an error
Overload frame: a frame to inject a delay between data or remote frames

Data frame

The data frame is the only frame for actual data transmission. There are two message formats:

Base frame format: with 11 identifier bits
Extended frame format: with 29 identifier bits

The CAN standard requires that the implementation must accept the base frame format and may accept the extended frame format, but must tolerate the extended frame format.

Base frame format

The base frame consists of the following fields (bit values are described for the CAN-LO signal): the start-of-frame bit, the 11-bit identifier, the remote transmission request (RTR) bit, the identifier extension (IDE) bit, a reserved bit, the 4-bit data length code (DLC), 0 to 8 bytes of data, the 15-bit CRC, a CRC delimiter, the ACK slot, an ACK delimiter, and the 7-bit end-of-frame field.

Extended frame format

The extended frame inserts, after the 11-bit identifier field (A), a substitute remote request (SRR) bit and the IDE bit, followed by the 18-bit identifier extension field (B), the RTR bit, two reserved bits, and then the same DLC, data, CRC, acknowledgement and end-of-frame fields as the base format. The two identifier fields (A and B) combine to form the 29-bit identifier.

Remote frame

Generally data transmission is performed on an autonomous basis with the data source node (e.g., a sensor) sending out a data frame. It is also possible, however, for a destination node to request the data from the source by sending a remote frame. There are two differences between a data frame and a remote frame. Firstly the RTR bit is transmitted as a dominant bit in the data frame, and secondly in the remote frame there is no data field. The DLC field indicates the data length of the requested message (not the transmitted one). I.e.,

RTR = 0 ; DOMINANT in data frame
RTR = 1 ; RECESSIVE in remote frame

In the event of a data frame and a remote frame with the same identifier being transmitted at the same time, the data frame wins arbitration due to the dominant RTR bit following the identifier.

Error frame

The error frame consists of two different fields:

The first field is given by the superposition of ERROR FLAGS (6–12 dominant/recessive bits) contributed from different stations.
The following second field is the ERROR DELIMITER (8 recessive bits).

There are two types of error flags:

Active Error Flag: six dominant bits – transmitted by a node detecting an error on the network that is in error state "error active".
Passive Error Flag: six recessive bits – transmitted by a node detecting an active error frame on the network that is in error state "error passive".

There are two error counters in CAN:

Transmit error counter (TEC)
Receive error counter (REC)

When TEC or REC is greater than 127 and less than 255, a Passive Error frame will be transmitted on the bus.
When both TEC and REC are less than 128, an Active Error frame will be transmitted on the bus. When TEC is greater than 255, the node enters the Bus Off state, where no frames will be transmitted.

Overload frame

The overload frame contains the two bit fields Overload Flag and Overload Delimiter. There are two kinds of overload conditions that can lead to the transmission of an overload flag:

The internal conditions of a receiver, which require a delay of the next data frame or remote frame.
Detection of a dominant bit during intermission.

An overload frame due to case 1 is only allowed to be started at the first bit time of an expected intermission, whereas overload frames due to case 2 start one bit after detecting the dominant bit. The Overload Flag consists of six dominant bits. Its overall form corresponds to that of the active error flag. The overload flag's form destroys the fixed form of the intermission field. As a consequence, all other stations also detect an overload condition and on their part start transmission of an overload flag. The Overload Delimiter consists of eight recessive bits, and is of the same form as the error delimiter.

ACK slot

The acknowledge slot is used to acknowledge the receipt of a valid CAN frame. Each node that receives the frame, without finding an error, transmits a dominant level in the ACK slot and thus overrides the recessive level of the transmitter. If a transmitter detects a recessive level in the ACK slot, it knows that no receiver found a valid frame. A receiving node may transmit a recessive level to indicate that it did not receive a valid frame, but another node that did receive a valid frame may override this with a dominant level. The transmitting node cannot know that the message has been received by all of the nodes on the CAN network. Often, the mode of operation of the device is to re-transmit unacknowledged frames over and over. This may lead to eventually entering the "error passive" state.

Interframe spacing

Data frames and remote frames are separated from preceding frames by a bit field called interframe space. Interframe space consists of at least three consecutive recessive (1) bits. Following that, if a dominant bit is detected, it will be regarded as the "start of frame" bit of the next frame. Overload frames and error frames are not preceded by an interframe space, and multiple overload frames are not separated by an interframe space. Interframe space contains the bit fields intermission and bus idle, and suspend transmission for error-passive stations which have been the transmitter of the previous message.

Bit stuffing

To ensure enough transitions to maintain synchronization, a bit of opposite polarity is inserted after five consecutive bits of the same polarity. This practice is called bit stuffing, and is necessary due to the non-return-to-zero (NRZ) coding used with CAN. The stuffed data frames are destuffed by the receiver. All fields in the frame are stuffed with the exception of the CRC delimiter, ACK field and end of frame, which are of fixed size and are not stuffed. In the fields where bit stuffing is used, six consecutive bits of the same polarity (111111 or 000000) are considered an error. An active error flag can be transmitted by a node when an error has been detected. The active error flag consists of six consecutive dominant bits and violates the rule of bit stuffing. Bit stuffing means that data frames may be larger than one would expect by simply enumerating the bits listed above.
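The stuffing rule lends itself to a compact implementation. The following sketch stuffs and destuffs a string of bits; real controllers do this in hardware and, as noted above, exempt the fixed-form fields:

    # Insert a bit of opposite polarity after five consecutive identical bits
    # (stuffing), and strip those inserted bits again on reception (destuffing).
    def stuff(bits: str) -> str:
        out, run, prev = [], 0, None
        for b in bits:
            out.append(b)
            run = run + 1 if b == prev else 1
            prev = b
            if run == 5:
                stuffed = "1" if b == "0" else "0"
                out.append(stuffed)        # inserted stuff bit
                prev, run = stuffed, 1     # the stuff bit starts a new run
        return "".join(out)

    def destuff(bits: str) -> str:
        out, run, prev, i = [], 0, None, 0
        while i < len(bits):
            b = bits[i]
            out.append(b)
            run = run + 1 if b == prev else 1
            prev = b
            if run == 5:
                i += 1                         # the next bit is a stuff bit: drop it
                if i < len(bits):
                    prev, run = bits[i], 1     # the dropped bit restarts the run
            i += 1
        return "".join(out)

    example = "11111000011110000"
    print(stuff(example))                      # -> 111110000011111000001
    assert destuff(stuff(example)) == example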
The maximum increase in size of a CAN frame (base format) after bit stuffing occurs in the case

11111000011110000...

which is stuffed as (a stuffing bit is inserted after each run of five identical bits):

111110000011111000001...

The stuffing bit itself may be the first of the five consecutive identical bits, so in the worst case there is one stuffing bit per four original bits. The size of a base frame with n data bytes is bounded by

8n + 44 + floor((34 + 8n − 1) / 4)

since 8n + 44 is the size of the frame before stuffing, in the worst case one bit will be added every four original bits after the first one (hence the −1 in the numerator) and, because of the layout of the bits of the header, only 34 of the 44 non-data bits can be subject to bit stuffing.

An undesirable side effect of the bit stuffing scheme is that a small number of bit errors in a received message may corrupt the destuffing process, causing a larger number of errors to propagate through the destuffed message. This reduces the level of protection that would otherwise be offered by the CRC against the original errors. This deficiency of the protocol has been addressed in CAN FD frames by the use of a combination of fixed stuff bits and a counter that records the number of stuff bits inserted.

CAN lower-layer standards

The ISO 11898 series specifies the physical and data link layers (levels 1 and 2 of the ISO/OSI model) of the serial communication technology called Controller Area Network, which supports distributed real-time control and multiplexing for use within road vehicles. There are several CAN physical layer and other standards:

ISO 11898-1:2015 specifies the data link layer (DLL) and physical signalling of the controller area network (CAN). This document describes the general architecture of CAN in terms of hierarchical layers according to the ISO reference model for open systems interconnection (OSI) established in ISO/IEC 7498-1, and provides the characteristics for setting up an interchange of digital information between modules implementing the CAN DLL, with detailed specification of the logical link control (LLC) sublayer and medium access control (MAC) sublayer.

ISO 11898-2:2016 specifies the high-speed (transmission rates of up to 1 Mbit/s) medium access unit (MAU), and some medium dependent interface (MDI) features (according to ISO 8802-3), which comprise the physical layer of the controller area network. ISO 11898-2 uses a two-wire balanced signalling scheme. It is the most used physical layer in vehicle powertrain applications and industrial control networks.

ISO 11898-3:2006 specifies the low-speed, fault-tolerant, medium-dependent interface for setting up an interchange of digital information between electronic control units of road vehicles equipped with CAN at transmission rates above 40 kbit/s up to 125 kbit/s.

ISO 11898-4:2004 specifies time-triggered communication in CAN (TTCAN). It is applicable to setting up a time-triggered interchange of digital information between electronic control units (ECUs) of road vehicles equipped with CAN, and specifies the frame synchronisation entity that coordinates the operation of both logical link and media access controls in accordance with ISO 11898-1, to provide the time-triggered communication schedule.

ISO 11898-5:2007 specifies the CAN physical layer for transmission rates up to 1 Mbit/s for use within road vehicles. It describes the medium access unit functions as well as some medium dependent interface features according to ISO 8802-2.
This represents an extension of ISO 11898-2, dealing with new functionality for systems requiring low-power consumption features while there is no active bus communication.

ISO 11898-6:2013 specifies the CAN physical layer for transmission rates up to 1 Mbit/s for use within road vehicles. It describes the medium access unit functions as well as some medium dependent interface features according to ISO 8802-2. This represents an extension of ISO 11898-2 and ISO 11898-5, specifying a selective wake-up mechanism using configurable CAN frames.

ISO 16845-1:2016 provides the methodology and abstract test suite necessary for checking the conformance of any CAN implementation of the CAN specified in ISO 11898-1.

ISO 16845-2:2018 establishes test cases and test requirements to realize a test plan verifying whether a CAN transceiver with implemented selective wake-up functions conforms to the specified functionalities. The kind of testing defined in ISO 16845-2:2018 is known as conformance testing.

CAN-based higher-layer protocols

As the CAN standard does not include tasks of application layer protocols, such as flow control, device addressing, and transportation of data blocks larger than one message, and above all, application data, many implementations of higher-layer protocols were created. Several are standardized for a business area, although all can be extended by each manufacturer. For passenger cars, each manufacturer has its own standard. CAN in Automation (CiA) is the international users' and manufacturers' organization that develops and supports CAN-based higher-layer protocols and their international standardization. Among these specifications are:

Standardized approaches

ARINC 812 or ARINC 825 (aviation industry)
CANopen – CiA 301/302-2 and EN 50325-4 (industrial automation)
IEC 61375-3-3 (use of CANopen in rail vehicles)
DeviceNet (industrial automation)
EnergyBus – CiA 454 and IEC 61851-3 (battery–charger communication)
ISOBUS – ISO 11783 (agriculture)
ISO-TP – ISO 15765-2 (transport protocol for automotive diagnostics)
MilCAN (military vehicles)
NMEA 2000 – IEC 61162-3 (marine industry)
SAE J1939 (in-vehicle network for buses and trucks)
SAE J2284 (in-vehicle networks for passenger cars)
Unified Diagnostic Services (UDS) – ISO 14229 (automotive diagnostics)
LeisureCAN – open standard for the leisure craft/vehicle industry

Other approaches

CANaerospace – Stock (for the aviation industry)
CAN Kingdom – Kvaser (embedded control systems)
CCP/XCP (automotive ECU calibration)
GMLAN – General Motors (for General Motors)
RV-C – RVIA (used for recreational vehicles)
SafetyBUS p – Pilz (used for industrial automation)
UAVCAN (aerospace and robotics)
CSP (CubeSat Space Protocol)
VSCP (Very Simple Control Protocol), a free automation protocol suitable for all sorts of automation tasks

CANopen Lift

The CANopen Special Interest Group (SIG) "Lift Control", which was founded in 2001, develops the CANopen application profile CiA 417 for lift control systems. It works on extending the features, improving the technical content and ensuring that the current legal standards for lift control systems are met. The first version of CiA 417 was published (available for CiA members) in summer 2003, version 2.0 in February 2010, version 2.1.0 in July 2012, version 2.2.0 in December 2015, and version 2.3.1 in February 2020. Jörg Hellmich (ELFIN GmbH) is the chairman of this SIG and manages a wiki of the CANopen lift community with content about CANopen lift.
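Whether standardized like CANopen or proprietary, these higher-layer protocols are ultimately built on exchanging raw CAN frames. A minimal sketch of that exchange in software, using the open-source python-can library (mentioned under Development tools below) with a virtual bus and a made-up identifier in place of real hardware:

    import can  # python-can; the "virtual" interface needs no real hardware

    # Two bus handles on the same virtual channel stand in for two ECUs.
    tx = can.Bus(interface="virtual", channel="demo")
    rx = can.Bus(interface="virtual", channel="demo")

    # An 11-bit (base format) frame with two data bytes; the ID is illustrative.
    msg = can.Message(arbitration_id=0x0F, data=[0x12, 0x34], is_extended_id=False)
    tx.send(msg)

    received = rx.recv(timeout=1.0)  # blocks until a frame arrives or times out
    print(hex(received.arbitration_id), bytes(received.data))

    tx.shutdown()
    rx.shutdown()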
Security

CAN is a low-level protocol and does not support any security features intrinsically. There is also no encryption in standard CAN implementations, which leaves these networks open to man-in-the-middle frame interception. In most implementations, applications are expected to deploy their own security mechanisms; e.g., to authenticate incoming commands or the presence of certain devices on the network. Failure to implement adequate security measures may result in various sorts of attacks if an opponent manages to insert messages on the bus. While passwords exist for some safety-critical functions, such as modifying firmware, programming keys, or controlling antilock brake actuators, these systems are not implemented universally and have a limited number of seed/key pairs.

Development tools

When developing or troubleshooting the CAN bus, examination of hardware signals can be very important. Logic analyzers and bus analyzers are tools which collect, analyse, decode and store signals so people can view the high-speed waveforms at their leisure. There are also specialist tools such as CAN bus monitors.

A CAN bus monitor is an analysis tool, often a combination of hardware and software, used during development of hardware making use of the CAN bus. Typically the CAN bus monitor will listen to the traffic on the CAN bus in order to display it in a user interface. Often the CAN bus monitor offers the possibility to simulate CAN bus activity by sending CAN frames to the bus. The CAN bus monitor can therefore be used to validate expected CAN traffic from a given device or to simulate CAN traffic in order to validate the reaction from a given device connected to the CAN bus. The python-can library provides both passive (monitoring) and active (control) access to the CAN bus on a wide range of platforms.

Licensing

Bosch holds patents on the technology, though those related to the original protocol have now expired. Manufacturers of CAN-compatible microprocessors pay license fees to Bosch for use of the CAN trademark and any of the newer patents related to CAN FD, and these are normally passed on to the customer in the price of the chip. Manufacturers of products with custom ASICs or FPGAs containing CAN-compatible modules need to pay a fee for the CAN Protocol License if they wish to use the CAN trademark or CAN FD capabilities.

See also

CANopen – Communication protocol for embedded systems
CANpie – Open source device driver for CAN
CAN FD – New implementation of CAN with a faster transmission rate
can4linux – Open source Linux device driver for CAN
FlexCAN – An alternative implementation
FlexRay – High-speed alternative to CAN
Local Interconnect Network – A low-cost alternative
OBD-II PIDs – List of parameter IDs
SAE J1939 – Communication protocol for trucks and buses
SocketCAN – A set of open source CAN drivers and a networking stack contributed by Volkswagen Research to the Linux kernel

References

External links

Bosch specification (old document, slightly ambiguous/unclear in some points, superseded by the standard)
Bosch CAN FD Specification Version 1.0
Controller Area Network (CAN) Schedulability Analysis: Refuted, Revisited and Revised
Pinouts for common CAN bus connectors
A webpage about CAN in automotive
Controller Area Network (CAN) Schedulability Analysis with FIFO Queues
Controller Area Network (CAN) Implementation Guide
Freeware Bit-Timing calculator for Windows, supports a lot of microcontrollers, e.g. Atmel, STM32, Microchip, Renesas, ... (ZIP file)
Free e-learning module "Introduction to CAN"
ARINC-825 Tutorial (video) from Excalibur Systems Inc.
Website of CiA
CAN Newsletter Online
Understanding and Using the Controller Area Network from UC Berkeley
CAN Protocol Tutorial
ESD protection for CAN bus and CAN FD

Computer networks Serial buses Industrial computing Industrial automation Robert Bosch GmbH
Mobile payment
Mobile payment (also referred to as mobile money, mobile money transfer, and mobile wallet) generally refers to payment services operated under financial regulation and performed from or via a mobile device. Instead of paying with cash, cheque, or credit cards, a consumer can use a mobile phone to pay for a wide range of services and digital or hard goods. Although the concept of using non-coin-based currency systems has a long history, it is only in the 21st century that the technology to support such systems has become widely available.

Mobile payment is being adopted all over the world in different ways. The first patent exclusively defining a "Mobile Payment System" was filed in 2000.

In developing countries mobile payment solutions have been deployed as a means of extending financial services to the community known as the "unbanked" or "underbanked", which is estimated to be as much as 50% of the world's adult population, according to Financial Access' 2009 Report "Half the World is Unbanked". These payment networks are often used for micropayments. The use of mobile payments in developing countries has attracted public and private funding by organizations such as the Bill & Melinda Gates Foundation, United States Agency for International Development and Mercy Corps.

Mobile payments are becoming a key instrument for payment service providers (PSPs) and other market participants, in order to achieve new growth opportunities, according to the European Payments Council (EPC). The EPC states that "new technology solutions provide a direct improvement to the operations efficiency, ultimately resulting in cost savings and in an increase in business volume".

Models

There are four primary models for mobile payments:

Bank-centric model
Operator-centric model
Collaborative model
Independent service provider (ISP) model

In the bank- or operator-centric models, a bank or the operator is the central node of the model, manages the transactions and distributes the property rights. In the collaborative model, the financial intermediaries and telephone operators collaborate in the managing tasks and cooperatively share the proprietary rights. In the ISP model, a trusted third party operates as an independent and "neutral" intermediary between financial agents and operators; Apple Pay and PayPal are the ISPs most frequently associated with this model. There can also be combinations of two models, such as the operator/bank co-operation emerging in Haiti.

Financial institutions and credit card companies, as well as Internet companies such as Google, a number of mobile communication companies such as mobile network operators, major telecommunications infrastructure providers such as w-HA from Orange, and smartphone multinationals such as Ericsson and BlackBerry, have implemented mobile payment solutions.

Mobile wallets

A mobile wallet is an app that contains the user's debit and credit card information, letting them pay for goods and services digitally with their mobile devices. Notable mobile wallets include:

Alipay
Apple Pay
BHIM
Cloud QuickPass
Google Pay
Gyft
LG Pay
Mi Pay
Line Pay
Samsung Pay
Venmo
WeChat Pay
Touch 'n Go eWallet
PhonePe
Paytm
Amazon Pay

Credit card

A simple mobile web payment system can also include a credit card payment flow allowing a consumer to enter their card details to make purchases. This process is familiar, but any entry of details on a mobile phone is known to reduce the success rate (conversion) of payments.
In addition, if the payment vendor can automatically and securely identify customers, then card details can be recalled for future purchases, turning credit card payments into simple single click-to-buy transactions and giving higher conversion rates for additional purchases. However, there are concerns regarding information and payment privacy when cards are used during online transactions. If a website is not secure, for example, then personal credit card information can leak online.

Carrier billing

The consumer uses the mobile billing option during checkout at an e-commerce site (such as an online gaming site) to make a payment. After two-factor authentication involving the consumer's mobile number and a PIN or one-time password (often abbreviated as OTP), the consumer's mobile account is charged for the purchase. It is a true alternative payment method that does not require the use of credit/debit cards or pre-registration at an online payment solution such as PayPal, thus bypassing banks and credit card companies altogether. This type of mobile payment method, which is prevalent in Asia, provides the following benefits:

Security – two-factor authentication and a risk management engine prevent fraud.
Convenience – no pre-registration and no new mobile software is required.
Easy – it is just another option during the checkout process.
Fast – most transactions are completed in less than 10 seconds.
Proven – 70% of all digital content purchased online in some parts of Asia uses the direct mobile billing method.

Remote payment by SMS and credit card tokenization

Even as the volume of Premium SMS transactions has flattened, many cloud-based payment systems continue to use SMS for presentment, authorization, and authentication, while the payment itself is processed through existing payment networks such as credit and debit card networks. These solutions combine the ubiquity of the SMS channel with the security and reliability of existing payment infrastructure. Since SMS lacks end-to-end encryption, such solutions employ higher-level security strategies known as 'tokenization' and 'target removal', whereby payment occurs without transmitting any sensitive account details, username, password, or PIN.

To date, point-of-sale mobile payment solutions have not relied on SMS-based authentication as a payment mechanism, but remote payments such as bill payments, seat upgrades on flights, and membership or subscription renewals are commonplace.

In comparison to premium short code programs, which often exist in isolation, relationship marketing and payment systems are often integrated with CRM, ERP, marketing-automation platforms, and reservation systems. Many of the problems inherent in premium SMS have been addressed by solution providers. Remembering keywords is not required since sessions are initiated by the enterprise to establish a transaction-specific context. Reply messages are linked to the proper session and authenticated either synchronously through a very short expiry period (every reply is assumed to be to the last message sent) or by tracking sessions according to varying reply addresses and/or reply options.

Direct operator billing

Direct operator billing, also known as mobile content billing, WAP billing, and carrier billing, requires integration with the mobile network operator. It provides certain benefits:

Mobile network operators already have a billing relationship with consumers, and the payment is added to their bill.
Provides instantaneous payment
Protects payment details and consumer identity
Better conversion rates
Reduced customer support costs for merchants
Alternative monetization option in countries where credit card usage is low

One of the drawbacks is that the payout rate will often be much lower than with other mobile payment options. Examples from a popular provider:

92% with PayPal
85 to 86% with credit card
45 to 91.7% with operator billing in the US, UK and some smaller European countries, but usually around 60%

More recently, direct operator billing is being deployed in an in-app environment, where mobile application developers are taking advantage of the one-click payment option that direct operator billing provides for monetising mobile applications. This is a logical alternative to credit card and Premium SMS billing.

In 2012, Ericsson and Western Union partnered to expand the direct operator billing market, making it possible for mobile operators to include Western Union mobile money transfers as part of their mobile financial service offerings. Given the international reach of both companies, the partnership is meant to accelerate the interconnection between the m-commerce market and the existing financial world.

Contactless near-field communication

Near-field communication (NFC) is used mostly in paying for purchases made in physical stores or transportation services. A consumer using a special mobile phone equipped with a smartcard waves his/her phone near a reader module. Most transactions do not require authentication, but some require authentication using a PIN before the transaction is completed. The payment could be deducted from a pre-paid account or charged to a mobile or bank account directly.

Mobile payment via NFC faces significant challenges for wide and fast adoption, due to lack of supporting infrastructure, a complex ecosystem of stakeholders, and standards. Some phone manufacturers and banks, however, are enthusiastic. Ericsson and Aconite are examples of businesses that make it possible for banks to create consumer mobile payment applications that take advantage of NFC technology.

NFC vendors in Japan are closely related to mass-transit networks, like the Mobile Suica used since 28 January 2006 on the JR East rail network. The mobile wallet Osaifu-Keitai system, used since 2004 for Mobile Suica and many others including Edy and nanaco, has become the de facto standard method for mobile payments in Japan. Its core technology, Mobile FeliCa IC, is partially owned by Sony, NTT DoCoMo and JR East. Mobile FeliCa utilizes Sony's FeliCa technology, which itself is the de facto standard for contactless smart cards in the country.

NFC was first used in public transport by China Unicom and the Yucheng Transportation Card in the tramways and buses of Chongqing on 19 January 2009, then in those of Nice on 21 May 2010, and then in Seoul after its introduction in Korea by the discount retailer Homeplus in March 2010; it was tested and then adopted or added to the existing systems in Tokyo from May 2010 to the end of 2012. After an experiment in the metro of Rennes in 2007, the NFC standard was implemented for the first time in a metro network by China Unicom in Beijing on 31 December 2010.

Other NFC vendors, mostly in Europe, use contactless payment over mobile phones to pay for on- and off-street parking in specially demarcated areas. Parking wardens may enforce the parking by license plate, transponder tags, or barcode stickers.
In Europe, the first experiments with mobile payment took place in Germany over six months from May 2005, with deferred payment at the end of each month, on the tramways and buses of Hanau with the Nokia 3220 using the NFC standard of Philips and Sony.

In France, immediate contactless payment was trialled over six months from October 2005 in some Cofinoga shops (Galeries Lafayette, Monoprix) and Vinci car parks in Caen, with a Samsung NFC smartphone provided by Orange in collaboration with Philips Semiconductors. For the first time, thanks to "Fly Tag", the system also allowed users to receive audiovisual information, such as bus timetables or cinema trailers, from the services concerned. From 19 November 2007 to 2009, this experiment was extended in Caen to more services and three additional mobile phone operators (Bouygues Telecom, SFR and NRJ Mobile), and in Strasbourg; on 5 November 2007, Orange and the transport companies SNCF and Keolis joined forces for a two-month experiment on smartphones in the metro, buses and TER trains in Rennes.

After a test conducted from October 2005 to November 2006 with 27 users, on 21 May 2010 the transport authority of Nice, Régie Lignes d'Azur, became the first public transport provider in Europe to permanently add to its offer contactless payment on its tramway and bus network, either with an NFC bank card or a smartphone application, notably on the Samsung Player One (with the same mobile phone operators as in Caen and Strasbourg), as well as on-board validation of tickets and the loading of these tickets onto the smartphone, in addition to the contactless season-ticket card. This service was likewise trialled and then implemented for NFC smartphones on 18 and 25 June 2013, respectively, in the tramways and buses of Caen and Strasbourg. In the Paris transport network, after four months of testing from November 2006 with Bouygues Telecom and 43 people, and finally with users from July 2018, contactless mobile payment and direct validation on the turnstile readers with a smartphone were adopted on 25 September 2019 in collaboration with the companies Orange, Samsung, Wizway Solutions, Worldline and Conduent.

First conceptualized in the early 2010s, the technology has also seen commercial use in this century in Scandinavia and Estonia. End users benefit from the convenience of being able to pay for parking from the comfort of their car with their mobile phone, and parking operators are not obliged to invest in either existing or new street-based parking infrastructure. Parking wardens maintain order in these systems by license plate, transponder tags or barcode stickers, or they read a digital display in the same way as they read a pay-and-display receipt.

Other vendors use a combination of both NFC and a barcode on the mobile device for mobile payment, because many mobile devices on the market do not yet support NFC.

Others

QR code payments

A QR code is a square two-dimensional bar code. QR codes have been in use since 1994. Originally used to track products in warehouses, QR codes were designed to replace the older one-dimensional bar codes. The older bar codes just represent numbers, which can be looked up in a database and translated into something meaningful. QR, or "quick response", bar codes were designed to contain the meaningful information directly in the bar code.
QR codes can be of two main categories:

The QR code is presented on the mobile device of the person paying and scanned by a POS or another mobile device of the payee
The QR code is presented by the payee, in a static or one-time-generated fashion, and is scanned by the person executing the payment

Mobile self-checkout allows one to scan a QR code or barcode of a product inside a brick-and-mortar establishment in order to purchase the product on the spot. This theoretically eliminates or reduces the incidence of long checkout lines, even at self-checkout kiosks.

Cloud-based mobile payments

Google, PayPal, GlobalPay and GoPago use a cloud-based approach to in-store mobile payment. The cloud-based approach places the mobile payment provider in the middle of the transaction, which involves two separate steps. First, a cloud-linked payment method is selected and payment is authorized via NFC or an alternative method. During this step, the payment provider automatically covers the cost of the purchase with issuer-linked funds. Second, in a separate transaction, the payment provider charges the purchaser's selected, cloud-linked account in a card-not-present environment to recoup its losses on the first transaction.

Audio signal-based payments

The audio channel of the mobile phone is another wireless interface that is used to make payments. Several companies have created technology to use the acoustic features of cell phones to support mobile payments and other applications that are not chip-based. Technologies like near sound data transfer (NSDT), data over voice and NFC 2.0 produce audio signatures that the microphone of the cell phone can pick up to enable electronic transactions.

Direct carrier/bank co-operation

In the T-Cash model, the mobile phone and the phone carrier are the front-end interface to the consumers. The consumer can purchase goods, transfer money to a peer, cash out, and cash in. A 'mini wallet' account can be opened as simply as entering *700# on the mobile phone, presumably by depositing money at a participating local merchant and supplying the mobile phone number. Presumably, other transactions are similarly accomplished by entering special codes and the phone number of the other party on the consumer's mobile phone.

Magnetic secure transmission

In magnetic secure transmission (MST), a smartphone emits a magnetic signal that resembles the one created by swiping a magnetic credit card through a traditional credit card terminal. No changes to the terminal or a new terminal are required.

Bank transfer systems

Swish is the name of a system established in Sweden. It was established through a collaboration between major banks in 2012 and has been very successful, with 66 percent of the population as users in 2017. It is mainly used for peer-to-peer payments between private individuals, but is also used for church collections, street vendors and small businesses. A person's account is tied to his or her phone number, and the connection between the phone number and the actual bank account number is registered in the internet bank. The electronic identification system mobile BankID, issued by several Swedish banks, is used to verify the payment. Users with a simple phone or without the app can still receive money if the phone number is registered in the internet bank. Like many other mobile payment systems, its main obstacle is getting people to register and download the app, but it has managed to reach a critical mass and it has become part of everyday life for many Swedes.
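The core bookkeeping behind such phone-number-keyed transfer services can be sketched as follows; the class, phone numbers and in-memory balances are illustrative assumptions, and a real system sits behind bank APIs, authentication such as mobile BankID, and settlement infrastructure:

    # Toy ledger for a Swish-style service: accounts are keyed by phone number.
    class PhoneLedger:
        def __init__(self):
            self.balances = {}  # phone number -> balance in whole currency units

        def register(self, phone: str, opening_balance: int = 0) -> None:
            self.balances[phone] = opening_balance

        def transfer(self, sender: str, recipient: str, amount: int) -> None:
            # A real deployment would authenticate and authorize both parties
            # before this instant, atomic settlement step.
            if self.balances.get(sender, 0) < amount:
                raise ValueError("insufficient funds")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balances.get(recipient, 0) + amount

    ledger = PhoneLedger()
    ledger.register("+46700000001", 100)
    ledger.register("+46700000002")
    ledger.transfer("+46700000001", "+46700000002", 40)
    print(ledger.balances)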
Swedish payments company Trustly also enables mobile bank transfers, but is used mainly for business-to-consumer transactions that occur solely online. If an e-tailer integrates with Trustly, its customers can pay directly from their bank account. As opposed to Swish, users don't need to register a Trustly account or download software to pay with it.

The Danish MobilePay and Norwegian Vipps are also popular in their countries. They use direct and instant bank transfers, but also, for users not connected to a participating bank, credit card billing.

In India, a new direct bank transfer system has emerged called the Unified Payments Interface (UPI). This system enables users to transfer money to other users and businesses in real time directly from their bank accounts. Users download a UPI-supporting app from an app store on their Android or iOS device, link and verify their mobile number with the bank account by sending one outgoing SMS to the app provider, create a virtual payment address (VPA) which auto-generates a QR code, and then set a banking PIN by generating an OTP for secure transactions. The VPA and QR code ensure ease of use and privacy, enabling peer-to-peer (P2P) transactions without revealing any user details. Fund transfers can then be initiated to other users or businesses. Settlement of funds happens in real time, i.e. money is debited from the payer's bank account and credited to the recipient's bank account in real time. The UPI service works 24x7, including weekends and holidays. This is slowly becoming a very popular service in India and was processing monthly payments worth approximately $10 billion as of October 2018.

In Poland, Blik is a mobile payment system created in February 2015 by the Polish Payment Standard (PSP) company. To pay with Blik, a user needs a smartphone, a personal bank account and the mobile application of one of the banks that cooperate with it. The principle of operation is to generate a six-digit code in the bank's mobile application. The Blik code is used only to connect the parties to the transaction. It is an identifier that associates the user and a specific bank at a given moment. For two minutes, it points to a specific mobile application to which, through the string of numbers, a request to accept a transaction in a specific store or ATM is sent. Blik allows users to pay in online and brick-and-mortar stores, make transfers to a phone number, and withdraw money from ATMs.

Mobile payment service provider model

There are four potential mobile payment models:

Operator-centric model: The mobile operator acts independently to deploy mobile payment service. The operator could provide an independent mobile wallet separate from the user's mobile account (airtime). A large deployment of the operator-centric model is severely challenged by the lack of connection to existing payment networks. The mobile network operator must handle the interfacing with the banking network to provide advanced mobile payment services in banked and under-banked environments. Pilot projects using this model have been launched in emerging countries, but they did not cover most of the mobile payment service use cases. Payments were limited to remittance and airtime top-up.

Bank-centric model: A bank deploys mobile payment applications or devices to customers and ensures merchants have the required point-of-sale (POS) acceptance capability.
Mobile network operators are used as simple carriers; they bring their experience in providing quality of service (QoS) assurance.

Collaboration model: This model involves collaboration among banks, mobile operators and a trusted third party.

Peer-to-peer model: The mobile payment service provider acts independently from financial institutions and mobile network operators to provide mobile payment.

See also

Contactless payment
Cryptocurrency wallet
Diem (digital currency)
Digital wallets
Electronic money
Financial cryptography
Mobile ticketing
Point of sale
Point-of-sale malware
SMS banking
Universal credit card

References

Financial technology
Multilayer switch
A multilayer switch (MLS) is a computer networking device that switches on OSI layer 2 like an ordinary network switch and provides extra functions on higher OSI layers. The MLS was invented by engineers at Digital Equipment Corporation.

Switching technologies are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using fast, hardware-based methods. Switching uses different kinds of network switches. A standard switch is known as a layer-2 switch and is commonly found in nearly any LAN. Layer-3 or layer-4 switches require advanced technology (see managed switch) and are more expensive, and thus are usually only found in larger LANs or in special network environments.

Multilayer switch

Multilayer switching combines layer 2, 3 and 4 switching technologies and provides high-speed scalability with low latency. Multilayer switching can move traffic at wire speed and also provide layer-3 routing. There is no performance difference between forwarding at different layers because the routing and switching is all hardware-based: routing decisions are made by specialized ASICs with the help of content-addressable memory.

Multilayer switching can make routing and switching decisions based on the following:

MAC address in a data link frame
Protocol field in the data link frame
IP address in the network layer header
Protocol field in the network layer header
Port numbers in the transport layer header

MLSs implement QoS in hardware. A multilayer switch can prioritize packets by the 6-bit differentiated services code point (DSCP). These 6 bits were originally used for type of service. The following four mappings are normally available in an MLS:

From OSI layer 2, 3 or 4 to IP DSCP (for IP packets) or IEEE 802.1p
From IEEE 802.1p to IP DSCP
From IP DSCP to IEEE 802.1p
From VLAN IEEE 802.1p to port egress queue

MLSs are also able to route IP traffic between VLANs like a common router. The routing is normally as quick as switching (at wire speed).

Layer-2 switching

Layer-2 switching uses the MAC address of the host's network interface cards (NICs) to decide where to forward frames. Layer-2 switching is hardware-based, which means switches use application-specific integrated circuits (ASICs) to build and maintain the forwarding information base and to perform packet forwarding at wire speed. One way to think of a layer-2 switch is as a multiport bridge.

Layer-2 switching is highly efficient because no modification to the frame is required. Encapsulation of the packet changes only when the data packet passes through dissimilar media (such as from Ethernet to FDDI). Layer-2 switching is used for workgroup connectivity and network segmentation (breaking up collision domains). This allows a flatter network design with more network segments than traditional networks joined by repeater hubs and routers.

Layer-2 switches have the same limitations as bridges. Bridges break up collision domains, but the network remains one large broadcast domain, which can cause performance issues and limits the size of a network. Broadcasts and multicasts, along with the slow convergence of spanning tree, can cause major problems as the network grows. Because of these problems, layer-2 switches cannot completely replace routers. Bridges are good if a network is designed by the 80/20 rule: users spend 80 percent of their time on their local segment.

Layer-3 switching

A layer-3 switch can perform some or all of the functions normally performed by a router.
Most network switches, however, are limited to supporting a single type of physical network, typically Ethernet, whereas a router may support different kinds of physical networks on different ports. Layer-3 switching is based solely on the (destination) IP address stored in the header of the IP datagram (layer-4 switching may use other information in the header). The difference between a layer-3 switch and a router is the way the device makes the routing decision. Traditionally, routers use microprocessors to make forwarding decisions in software, while the switch performs only hardware-based packet switching (by specialized ASICs with the help of content-addressable memory). However, many routers now also have advanced hardware functions to assist with forwarding.

The main advantage of layer-3 switches is the potential for lower network latency, as a packet can be routed without making extra network hops to a router. For example, connecting two distinct segments (e.g. VLANs) with a router to a standard layer-2 switch requires passing the frame to the switch (first L2 hop), then to the router (second L2 hop) where the packet inside the frame is routed (L3 hop), and then passed back to the switch (third L2 hop). A layer-3 switch accomplishes the same task without the need for a router (and therefore additional hops) by making the routing decision itself, i.e. the packet is routed to another subnet and switched to the destination network port simultaneously.

Because many layer-3 switches offer the same functionality as traditional routers, they can be used as cheaper, lower-latency replacements in some networks. Layer-3 switches can perform the following actions that can also be performed by routers:

determine paths based on logical addressing
check and recompute layer-3 header checksums
examine and update the time to live (TTL) field
process and respond to any option information
update Simple Network Management Protocol (SNMP) managers with Management Information Base (MIB) information

The benefits of layer-3 switching include the following:

fast hardware-based packet forwarding with low latency
lower per-port cost compared to pure routers
flow accounting
quality of service (QoS)

IEEE has developed hierarchical terminology that is useful in describing forwarding and switching processes. Network devices without the capability to forward packets between subnetworks are called end systems (ESs, singular ES), whereas network devices with these capabilities are called intermediate systems (ISs). ISs are further divided into those that communicate only within their routing domain (intradomain ISs) and those that communicate both within and between routing domains (interdomain ISs). A routing domain is generally considered a portion of an internetwork under common administrative authority, regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems.

A common layer-3 capability is an awareness of IP multicast through IGMP snooping. With this awareness, a layer-3 switch can increase efficiency by delivering the traffic of a multicast group only to ports where the attached device has signaled that it wants to listen to that group. Layer-3 switches typically support IP routing between VLANs configured on the switch. Some layer-3 switches support the routing protocols that routers use to exchange information about routes between networks.
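The forwarding decision such a switch performs in hardware can be sketched in software as a longest-prefix match over a routing table; the table entries and port names below are illustrative:

    import ipaddress

    # Longest-prefix match: pick the most specific route whose prefix contains
    # the destination address, as a layer-3 forwarding engine does in hardware.
    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "port1",
        ipaddress.ip_network("10.1.0.0/16"): "port2",
        ipaddress.ip_network("0.0.0.0/0"): "uplink",   # default route always matches
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        best = max((net for net in routes if addr in net),
                   key=lambda net: net.prefixlen)
        return routes[best]

    print(next_hop("10.1.2.3"))   # -> port2 (the /16 beats the /8)
    print(next_hop("192.0.2.1"))  # -> uplink (only the default route matches)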
Layer-4 switching

Layer-4 switching means hardware-based layer-3 switching technology that can also consider the type of network traffic (for example, distinguishing between UDP and TCP). Layer-4 switching provides additional datagram inspection by reading the port numbers found in the transport layer header to make routing decisions (i.e. ports used by HTTP, FTP and VoIP). These port numbers are found in RFC 1700 and reference the upper-layer protocol, program, or application. Using layer-4 switching, the network administrator can configure a layer-4 switch to prioritize data traffic by application. Layer-4 information can also be used to help make routing decisions. For example, extended access lists can filter packets based on layer-4 port numbers. Another example is accounting information gathered by open standards such as sFlow.

A layer-4 switch can use information in the transport-layer protocols to make forwarding decisions. Principally this refers to an ability to use source and destination port numbers in TCP and UDP communications to allow, block and prioritize communications.

Layer 4–7 switch, web switch, or content switch

Some switches can use packet information up to OSI layer 7; these may be called layer 4–7 switches, content switches, web switches or application switches. Content switches are typically used for load balancing among groups of servers. Load balancing can be performed on HTTP, HTTPS, VPN, or any TCP/IP traffic using a specific port. Load balancing often involves destination network address translation, so that the client of the load-balanced service is not fully aware of which server is handling its requests. Some layer 4–7 switches can perform network address translation (NAT) at wire speed. Content switches can often be used to perform standard operations such as SSL encryption and decryption to reduce the load on the servers receiving the traffic, or to centralize the management of digital certificates.

Layer-7 switching is a technology used in a content delivery network. Some applications require that repeated requests from a client are directed at the same application server. Since the client isn't generally aware of which server it spoke to earlier, content switches define a notion of stickiness. For example, requests from the same source IP address are directed to the same application server each time. Stickiness can also be based on SSL IDs, and some content switches can use cookies to provide this functionality.

Layer-4 load balancer

The router operates on the transport layer and makes decisions on where to send the packets. Modern load-balancing routers can use different rules to make decisions on where to route traffic. This can be based on least load, fastest response times, or simply balancing requests out to multiple destinations providing the same services. This is also a redundancy method, so if one machine is not up, the router will not send traffic to it. The router may also have NAT capability with port and transaction awareness, performing a form of port translation to send incoming packets to one or more machines that are hidden behind a single IP address.

Layer 7

Layer-7 switches may distribute the load based on uniform resource locators (URLs), or by using some installation-specific technique to recognize application-level transactions. A layer-7 switch may include a web cache and participate in a content delivery network (CDN).
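The simplest form of the "stickiness" described above can be sketched in a few lines of Python: hash the client's source IP so that repeated requests from the same client deterministically land on the same backend. The backend addresses are hypothetical, and real content switches additionally consult health checks, SSL session IDs, or cookies.

import hashlib

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical server pool

def pick_backend(src_ip):
    # A stable hash of the source IP maps each client to one backend,
    # so the same client is "stuck" to the same server on every request.
    digest = hashlib.sha256(src_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

print(pick_backend("198.51.100.7"))  # the same client always gets this backend
print(pick_backend("203.0.113.9"))

Hashing rather than round-robin is what provides the stickiness: no per-client state needs to be stored, at the cost of less even load distribution when a few clients dominate the traffic.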
See also

Application delivery controller
Bridge router
Multiprotocol Label Switching (MPLS)
Residential gateway

References

External links

What is the difference between a Layer-3 switch and a router?
Multilayer Switching
235124
https://en.wikipedia.org/wiki/Index%20of%20cryptography%20articles
Index of cryptography articles
Articles related to cryptography include: A A5/1 • A5/2 • ABA digital signature guidelines • ABC (stream cipher) • Abraham Sinkov • Acoustic cryptanalysis • Adaptive chosen-ciphertext attack • Adaptive chosen plaintext and chosen ciphertext attack • Advantage (cryptography) • ADFGVX cipher • Adi Shamir • Advanced Access Content System • Advanced Encryption Standard • Advanced Encryption Standard process • Adversary • AEAD block cipher modes of operation • Affine cipher • Agnes Meyer Driscoll • AKA (security) • Akelarre (cipher) • Alan Turing • Alastair Denniston • Al Bhed language • Alex Biryukov • Alfred Menezes • Algebraic Eraser • Algorithmically random sequence • Alice and Bob • All-or-nothing transform • Alphabetum Kaldeorum • Alternating step generator • American Cryptogram Association • AN/CYZ-10 • Anonymous publication • Anonymous remailer • Antoni Palluth • Anubis (cipher) • Argon2 • ARIA (cipher) • Arlington Hall • Arne Beurling • Arnold Cipher • Array controller based encryption • Arthur Scherbius • Arvid Gerhard Damm • Asiacrypt • Atbash • Attack model • Auguste Kerckhoffs • Authenticated encryption • Authentication • Authorization certificate • Autokey cipher • Avalanche effect B B-Dienst • Babington Plot • Baby-step giant-step • Bacon's cipher • Banburismus • Bart Preneel • BaseKing • BassOmatic • BATON • BB84 • Beale ciphers • BEAR and LION ciphers • Beaufort cipher • Beaumanor Hall • Bent function • Berlekamp–Massey algorithm • Bernstein v. United States • BestCrypt • Biclique attack • BID/60 • BID 770 • Bifid cipher • Bill Weisband • Binary Goppa code • Biometric word list • Birthday attack • Bit-flipping attack • BitTorrent protocol encryption • Biuro Szyfrów • Black Chamber • Blaise de Vigenère • Bletchley Park • Blind credential • Blinding (cryptography) • Blind signature • Block cipher • Block cipher mode of operation • Block size (cryptography) • Blowfish (cipher) • Blum Blum Shub • Blum–Goldwasser cryptosystem • Bomba (cryptography) • Bombe • Book cipher • Books on cryptography • Boomerang attack • Boris Hagelin • Bouncy Castle (cryptography) • Broadcast encryption • Bruce Schneier • Brute-force attack • Brute Force: Cracking the Data Encryption Standard • Burrows–Abadi–Needham logic • Burt Kaliski C C2Net • C-36 (cipher machine) • C-52 (cipher machine) • Caesar cipher • Camellia (cipher) • CAPICOM • Capstone (cryptography) • Cardan grille • Card catalog (cryptology) • Carlisle Adams • CAST-128 • CAST-256 • Cayley–Purser algorithm • CBC-MAC • CCM mode • CCMP • CD-57 • CDMF • Cellular Message Encryption Algorithm • Centiban • Central Security Service • Centre for Applied Cryptographic Research • Central Bureau • Certicom • Certificate authority • Certificate-based encryption • Certificateless cryptography • Certificate revocation list • Certificate signing request • Certification path validation algorithm • Chaffing and winnowing • Challenge-Handshake Authentication Protocol • Challenge–response authentication • Chosen-ciphertext attack • Chosen-plaintext attack • CIKS-1 • Cipher disk • Cipher runes • Cipher security summary • CipherSaber • Ciphertext expansion • Ciphertext indistinguishability • Ciphertext-only attack • Ciphertext stealing • CIPHERUNICORN-A • CIPHERUNICORN-E • Classical cipher • Claude Shannon • Claw-free permutation • Cleartext • CLEFIA • Clifford Cocks • Clipper chip • Clock (cryptography) • Clock drift • CMVP • COCONUT98 • Codebook • Code (cryptography) • Code talker • Codress message • Cold boot attack • Collision attack • Collision resistance • 
Colossus computer • Combined Cipher Machine • Commitment scheme • Common Scrambling Algorithm • Communications security • Communications Security Establishment • Communication Theory of Secrecy Systems • Comparison of disk encryption software • Comparison of SSH clients • Completeness (cryptography) • Complexity trap • Computational Diffie–Hellman assumption • Computational hardness assumption • Computer insecurity • Computer and network surveillance • COMSEC equipment • Conch (SSH) • Concrete security • Conel Hugh O'Donel Alexander • Confidentiality • Confusion and diffusion • Content-scrambling system • Controlled Cryptographic Item • Corkscrew (program) • Correlation immunity • COSIC • Covert channel • Cover (telecommunications) • Crab (cipher) • Cramer–Shoup cryptosystem • CRAM-MD5 • CRHF • Crib (cryptanalysis) • CrossCrypt • Crowds (anonymity network) • Crypt (C) • Cryptanalysis • Cryptanalysis of the Enigma • Cryptanalysis of the Lorenz cipher • Cryptanalytic computer • Cryptex • Cryptico • Crypto AG • Crypto-anarchism • Crypto API (Linux) • Microsoft CryptoAPI • CryptoBuddy • Cryptochannel • CRYPTO (conference) • Cryptogram • Cryptographically Generated Address • Cryptographically secure pseudorandom number generator • Cryptographically strong • Cryptographic Application Programming Interface • Cryptographic hash function • Cryptographic key types • Cryptographic Message Syntax • Cryptographic primitive • Cryptographic protocol • Cryptographic Service Provider • Cryptographie indéchiffrable • Cryptography • Cryptography in Japan • Cryptography newsgroups • Cryptography standards • Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age • Cryptologia • Cryptology ePrint Archive • Cryptology Research Society of India • Cryptomathic • Cryptome • Cryptomeria cipher • Cryptonomicon • CrypTool • Crypto phone • Crypto-society • Cryptosystem • Cryptovirology • CRYPTREC • CS-Cipher • Curve25519 • Curve448 • Custom hardware attack • Cycles per byte • Cyclometer • Cypherpunk • Cyrillic Projector D D'Agapeyeff cipher • Daniel J. 
Bernstein • Data Authentication Algorithm • Data Encryption Standard • Datagram Transport Layer Security • David Chaum • David Kahn • David Naccache • David Wagner • David Wheeler (computer scientist) • Davies attack • Davies–Meyer hash • DEAL • Decipherment • Decisional Diffie–Hellman assumption • Decorrelation theory • Decrypt • DeCSS • Defence Signals Directorate • Degree of anonymity • Delegated Path Discovery • Delegated Path Validation • Deniable encryption • Derek Taunt • Derived unique key per transaction • DES Challenges • DES supplementary material • DES-X • Deterministic encryption • DFC (cipher) • Dictionary attack • Differential cryptanalysis • Differential-linear attack • Differential power analysis • Diffie–Hellman key exchange • Diffie–Hellman problem • DigiCipher 2 • Digital Fortress • Digital rights management • Digital signature • Digital Signature Algorithm • Digital signature forgery • Digital timestamping • Digital watermarking • Dilly Knox • Dining cryptographers problem • Diplomatic bag • Direct Anonymous Attestation • Discrete logarithm • Disk encryption • Disk encryption hardware • Disk encryption software • Distance-bounding protocol • Distinguishing attack • Distributed.net • DMA attack • dm-crypt • Dmitry Sklyarov • DomainKeys • Don Coppersmith • Dorabella Cipher • Double Ratchet Algorithm • Doug Stinson • Dragon (cipher) • DRYAD • Dual_EC_DRBG • E E0 (cipher) • E2 (cipher) • E4M • EAP-AKA • EAP-SIM • EAX mode • ECC patents • ECHELON • ECRYPT • Edouard Fleissner von Wostrowitz • Edward Hebern • Edward Scheidt • Edward Travis • EFF DES cracker • Efficient Probabilistic Public-Key Encryption Scheme • EKMS • Electronic Communications Act 2000 • Electronic money • Electronic signature • Electronic voting • ElGamal encryption • ElGamal signature scheme • Eli Biham • Elizebeth Friedman • Elliptic-curve cryptography • Elliptic-curve Diffie–Hellman • Elliptic Curve DSA • EdDSA • Elliptic curve only hash • Elonka Dunin • Encrypted function • Encrypted key exchange • Encrypting File System • Encryption • Encryption software • Enigmail • Enigma machine • Enigma rotor details • Entrust • Ernst Fetterlein • eSTREAM • Étienne Bazeries • Eurocrypt • EuroCrypt • Export of cryptography • Extensible Authentication Protocol F Fast Software Encryption • Fast syndrome-based hash • FEA-M • FEAL • Feige–Fiat–Shamir identification scheme • Feistel cipher • Félix Delastelle • Fialka • Filesystem-level encryption • FileVault • Fill device • Financial cryptography • FIPS 140 • FIPS 140-2 • Firefly (key exchange protocol) • FISH (cipher) • Fish (cryptography) • Floradora • Fluhrer, Mantin and Shamir attack • Format-preserving encryption • Fortezza • Fort George G. Meade • Fortuna (PRNG) • Four-square cipher • Franciszek Pokorny • Frank A. Stevenson • Frank Rowlett • Freenet • FreeOTFE • FreeS/WAN • Frequency analysis • Friedrich Kasiski • Fritz-chip • FROG • FROSTBURG • FTP over SSH • Full disk encryption • Full Domain Hash • F. W. 
Winterbotham G Galois/Counter Mode • Gardening (cryptanalysis) • GCHQ Bude • GCHQ CSO Morwenstow • GDES • Generic Security Services Application Program Interface • George Blakley • George Scovell • GGH encryption scheme • GGH signature scheme • Gilbert Vernam • GMR (cryptography) • GNU Privacy Guard • GnuTLS • Goldwasser–Micali cryptosystem • Gordon Welchman • GOST (block cipher) • GOST (hash function) • Government Communications Headquarters • Government Communications Security Bureau • Grain (cipher) • Grand Cru (cipher) • Great Cipher • Grill (cryptology) • Grille (cryptography) • Group-based cryptography • Group signature • Grover's algorithm • Gustave Bertrand • Gwido Langer H H.235 • HAIFA construction • HAIPE • Hans Dobbertin • Hans-Thilo Schmidt • Hard-core predicate • Hardware random number generator • Hardware security module • Harold Keen • Harry Hinsley • Harvest (computer) • HAS-160 • Hash-based cryptography • Hashcash • Hash chain • Hash function security summary • Hash list • Hasty Pudding cipher • HAVAL • HC-256 • HC-9 • Heath Robinson (codebreaking machine) • Hebern rotor machine • Henri Braquenié • Henryk Zygalski • Herbert Yardley • Hidden Field Equations • Hideki Imai • Hierocrypt • High-bandwidth Digital Content Protection • Higher-order differential cryptanalysis • Hill cipher • History of cryptography • HMAC • HMAC-based One-time Password algorithm (HOTP) • Horst Feistel • Howard Heys • Https • Hugo Hadwiger • Hugo Koch • Hushmail • Hut 6 • Hut 8 • HX-63 • Hybrid cryptosystem • Hyperelliptic curve cryptography • Hyper-encryption I Ian Goldberg • IBM 4758 • ICE (cipher) • ID-based cryptography • IDEA NXT • Identification friend or foe • IEEE 802.11i • IEEE P1363 • I. J. Good • Illegal prime • Impossible differential cryptanalysis • Index of coincidence • Indifferent chosen-ciphertext attack • Indistinguishability obfuscation • Indocrypt • Information leakage • Information Security Group • Information-theoretic security • Initialization vector • Integer factorization • Integral cryptanalysis • Integrated Encryption Scheme • Integrated Windows Authentication • Interlock protocol • Intermediate certificate authorities • International Association for Cryptologic Research • International Data Encryption Algorithm • Internet Key Exchange • Internet Security Association and Key Management Protocol • Interpolation attack • Invisible ink • IPsec • Iraqi block cipher • ISAAC (cipher) • ISO 19092-2 • ISO/IEC 9797 • Ivan Damgård J Jacques Stern • JADE (cypher machine) • James Gillogly • James H. Ellis • James Massey • Jan Graliński • Jan Kowalewski • Japanese naval codes • Java Cryptography Architecture • Jefferson disk • Jennifer Seberry • Jerzy Różycki • Joan Daemen • Johannes Trithemius • John Herivel • John Kelsey (cryptanalyst) • John R. F. Jeffreys • John Tiltman • Jon Lech Johansen • Josef Pieprzyk • Joseph Desch • Joseph Finnegan (cryptographer) • Joseph Mauborgne • Joseph Rochefort • Journal of Cryptology • Junger v. 
Daley K Kaisa Nyberg • Kalyna (cipher) • Kasiski examination • KASUMI • KCDSA • KeePass • Kerberos (protocol) • Kerckhoffs's principle • Kevin McCurley (cryptographer) • Key-agreement protocol • Key authentication • Key clustering • Key (cryptography) • Key derivation function • Key distribution center • Key escrow • Key exchange • Keyfile • Key generation • Key generator • Key management • Key-recovery attack • Key schedule • Key server (cryptographic) • Key signature (cryptography) • Keysigning • Key signing party • Key size • Key space (cryptography) • Keystream • Key stretching • Key whitening • KG-84 • KHAZAD • Khufu and Khafre • Kiss (cryptanalysis) • KL-43 • KL-51 • KL-7 • Kleptography • KN-Cipher • Knapsack problem • Known-key distinguishing attack • Known-plaintext attack • KnownSafe • KOI-18 • KOV-14 • Kryha • Kryptos • KSD-64 • Kupyna • Kuznyechik • KW-26 • KW-37 • KY-3 • KY-57 • KY-58 • KY-68 • KYK-13 L Lacida • Ladder-DES • Lamport signature • Lars Knudsen • Lattice-based cryptography • Laurance Safford • Lawrie Brown • LCS35 • Leo Marks • Leonard Adleman • Leon Battista Alberti • Leo Rosen • Leslie Yoxall • LEVIATHAN (cipher) • LEX (cipher) • Libelle (cipher) • Linear cryptanalysis • Linear-feedback shift register • Link encryption • List of ciphertexts • List of cryptographers • List of cryptographic file systems • List of cryptographic key types • List of cryptology conferences • List of telecommunications encryption terms • List of people associated with Bletchley Park • List of SFTP clients • List of SFTP server software • LOKI • LOKI97 • Lorenz cipher • Louis W. Tordella • Lsh • Lucifer (cipher) • Lyra2 M M6 (cipher) • M8 (cipher) • M-209 • M-325 • M-94 • MacGuffin (cipher) • Madryga • MAGENTA • Magic (cryptography) • Maksymilian Ciężki • Malcolm J. Williamson • Malleability (cryptography) • Man-in-the-middle attack • Marian Rejewski • MARS (cryptography) • Martin Hellman • MaruTukku • Massey–Omura cryptosystem • Matt Blaze • Matt Robshaw • Max Newman • McEliece cryptosystem • mcrypt • MD2 (cryptography) • MD4 • MD5 • MD5CRK • MDC-2 • MDS matrix • Mean shortest distance • Meet-in-the-middle attack • Mental poker • Mercury (cipher machine) • Mercy (cipher) • Meredith Gardner • Merkle signature scheme • Merkle–Damgård construction • Merkle–Hellman knapsack cryptosystem • Merkle's Puzzles • Merkle tree • MESH (cipher) • Message authentication • Message authentication code • Message forgery • MI8 • Michael Luby • MICKEY • Microdot • Military Cryptanalysis (book) (William F. 
Friedman) • Military Cryptanalytics • Mimic function • Mirror writing • MISTY1 • Mitsuru Matsui • MMB (cipher) • Mod n cryptanalysis • MQV • MS-CHAP • MUGI • MULTI-S01 • MultiSwap • Multivariate cryptography N National Communications Centre • National Cryptologic Museum • National Security Agency • National Cipher Challenge • Navajo I • Neal Koblitz • Needham–Schroeder protocol • Negligible function • NEMA (machine) • NESSIE • Network Security Services • Neural cryptography • New Data Seal • NewDES • N-Hash • Nicolas Courtois • Niederreiter cryptosystem • Niels Ferguson • Nigel de Grey • Nihilist cipher • Nikita Borisov • Nimbus (cipher) • NIST hash function competition • Nonlinear-feedback shift register • NOEKEON • Non-malleable codes • Noreen • Nothing up my sleeve number • NSA cryptography • NSA encryption systems • NSA in fiction • NSAKEY • NSA Suite A Cryptography • NSA Suite B Cryptography • NT LAN Manager • NTLMSSP • NTRUEncrypt • NTRUSign • Null cipher • Numbers station • NUSH • NTRU O Oblivious transfer • OCB mode • Oded Goldreich • Off-the-Record Messaging • Okamoto–Uchiyama cryptosystem • OMI cryptograph • OMNI (SCIP) • One-key MAC • One-time pad • One-time password • One-way compression function • One-way function • Onion routing • Online Certificate Status Protocol • OP-20-G • OpenPGP card • OpenSSH • OpenSSL • Openswan • OpenVPN • Operation Ruthless • Optimal asymmetric encryption padding • Over the Air Rekeying (OTAR) • OTFE • Otway–Rees protocol P Padding (cryptography) • Padding oracle attack • Paillier cryptosystem • Pairing-based cryptography • Panama (cryptography) • Partitioning cryptanalysis • Passive attack • Passphrase • Password • Password-authenticated key agreement • Password cracking • Password Hashing Competition • Paul Kocher • Paulo Pancatuccio • Paulo S. L. M. 
Barreto • Paul van Oorschot • PBKDF2 • PC Bruno • Pepper (cryptography) • Perfect forward secrecy • Perforated sheets • Permutation cipher • Peter Gutmann (computer scientist) • Peter Junger • Peter Twinn • PGP Corporation • PGPDisk • PGPfone • Phelix • Phil Zimmermann • Photuris (protocol) • Physical security • Physical unclonable function • Pig Latin • Pigpen cipher • Pike (cipher) • Piling-up lemma • Pinwheel (cryptography) • Piotr Smoleński • Pirate decryption • PKC (conference) • PKCS • PKCS 11 • PKCS 12 • PKIX • Plaintext • Plaintext-aware encryption • Playfair cipher • Plugboard • PMAC (cryptography) • Poem code • Pohlig–Hellman algorithm • Point-to-Point Tunneling Protocol • Pointcheval–Stern signature algorithm • Poly1305 • Polyalphabetic cipher • Polybius square • Portex • Post-quantum cryptography • Post-Quantum Cryptography Standardization • Power analysis • Preimage attack • Pre-shared key • Pretty Good Privacy • Printer steganography • Privacy-enhanced Electronic Mail • Private Communications Technology • Private information retrieval • Probabilistic encryption • Product cipher • Proof-of-work system • Protected Extensible Authentication Protocol • Provable security • Provably secure cryptographic hash function • Proxy re-encryption • Pseudo-Hadamard transform • Pseudonymity • Pseudorandom function • Pseudorandom number generator • Pseudorandom permutation • Public key certificate • Public-key cryptography • Public key fingerprint • Public key infrastructure • PURPLE • PuTTY • Py (cipher) Q Q (cipher) • Qrpff • QUAD (cipher) • Quadratic sieve • Quantum coin flipping • Quantum cryptography • Quantum digital signature • Quantum fingerprinting • Quantum key distribution R Rabbit (cipher) • Rabin cryptosystem • Rabin–Williams encryption • RadioGatún • Rail fence cipher • Rainbow table • Ralph Merkle • Rambutan (cryptography) • Random function • Randomness tests • Random number generator attack • Random oracle • RC2 • RC4 • RC5 • RC6 • Rebound attack • Reciprocal cipher • Red/black concept • REDOC • Red Pike (cipher) • Reflector (cipher machine) • Regulation of Investigatory Powers Act 2000 • Reihenschieber • Rekeying (cryptography) • Related-key attack • Replay attack • Reservehandverfahren • Residual block termination • Rijndael key schedule • Rijndael S-box • Ring signature • RIPEMD • Rip van Winkle cipher • Robert Morris (cryptographer) • Robot certificate authority • Rockex • Rolf Noskwith • Ron Rivest • Room 40 • Root certificate • Ross J. 
Anderson • Rossignols • ROT13 • Rotor machine • RSA RSA • RSA-100 • RSA-1024 • RSA-110 • RSA-120 • RSA-129 • RSA-130 • RSA-140 • RSA-150 • RSA-1536 • RSA-155 • RSA-160 • RSA-170 • RSA-180 • RSA-190 • RSA-200 • RSA-2048 • RSA-210 • RSA-220 • RSA-230 • RSA-232 • RSA-240 • RSA-250 • RSA-260 • RSA-270 • RSA-280 • RSA-290 • RSA-300 • RSA-309 • RSA-310 • RSA-320 • RSA-330 • RSA-340 • RSA-350 • RSA-360 • RSA-370 • RSA-380 • RSA-390 • RSA-400 • RSA-410 • RSA-420 • RSA-430 • RSA-440 • RSA-450 • RSA-460 • RSA-470 • RSA-480 • RSA-490 • RSA-500 • RSA-576 • RSA-617 • RSA-640 • RSA-704 • RSA-768 • RSA-896 • RSA-PSS • RSA Factoring Challenge • RSA problem • RSA Secret-Key Challenge • RSA Security • Rubber-hose cryptanalysis • Running key cipher • Russian copulation S S-1 block cipher • SAFER • Salsa20 • Salt (cryptography) • SAM card • Security Support Provider Interface • SAML • SAVILLE • SC2000 • Schnorr group • Schnorr signature • Schoof–Elkies–Atkin algorithm • SCIP • Scott Vanstone • Scrambler • Scramdisk • Scream (cipher) • Scrypt • Scytale • Seahorse (software) • SEAL (cipher) • Sean Murphy (cryptographer) • SECG • Secret broadcast • Secret decoder ring • Secrets and Lies (Schneier) • Secret sharing • Sectéra Secure Module • Secure access module • Secure channel • Secure Communication based on Quantum Cryptography • Secure copy • Secure cryptoprocessor • Secure Electronic Transaction • Secure Hash Algorithms • Secure Hypertext Transfer Protocol • Secure key issuing cryptography • Secure multi-party computation • Secure Neighbor Discovery • Secure Real-time Transport Protocol • Secure remote password protocol • Secure Shell • Secure telephone • Secure Terminal Equipment • Secure voice • SecurID • Security association • Security engineering • Security level • Security parameter • Security protocol notation • Security through obscurity • Security token • SEED • Selected Areas in Cryptography • Self-certifying File System • Self-shrinking generator • Self-signed certificate • Semantic security • Serge Vaudenay • Serpent (cipher) • Session key • SHACAL • Shafi Goldwasser • SHA-1 • SHA-2 • SHA-3 • Shared secret • SHARK • Shaun Wylie • Shor's algorithm • Shrinking generator • Shugborough inscription • Side-channel attack • Siemens and Halske T52 • SIGABA • SIGCUM • SIGINT • Signal Protocol • Signal Intelligence Service • Signcryption • SIGSALY • SILC (protocol) • Silvio Micali • Simple Authentication and Security Layer • Simple public-key infrastructure • Simple XOR cipher • S/KEY • Skein (hash function) • Skipjack (cipher) • Slide attack • Slidex • Small subgroup confinement attack • S/MIME • SM4 algorithm (formerly SMS4) • Snake oil (cryptography) • Snefru • SNOW • Snuffle • SOBER-128 • Solitaire (cipher) • Solomon Kullback • SOSEMANUK • Special Collection Service • Spectr-H64 • SPEKE (cryptography) • Sponge function • SPNEGO • Square (cipher) • Ssh-agent • Ssh-keygen • SSH File Transfer Protocol • SSLeay • Stafford Tavares • Standard model (cryptography) • Station CAST • Station HYPO • Station-to-Station protocol • Statistical cryptanalysis • Stefan Lucks • Steganalysis • Steganography • Straddling checkerboard • Stream cipher • Stream cipher attacks • Strong cryptography • Strong RSA assumption • Stuart Milner-Barry • STU-II • STU-III • Stunnel • Substitution box • Substitution cipher • Substitution–permutation network • Superencryption • Supersingular isogeny key exchange • Swedish National Defence Radio Establishment • SWIFFT • SXAL/MBAL • Symmetric-key algorithm • SYSKEY T Tabula recta • Taher 
Elgamal • Tamper resistance • Tcpcrypt • Television encryption • TEMPEST • Template:Cryptographic software • Temporal Key Integrity Protocol • Testery • Thawte • The Alphabet Cipher • The Code Book • The Codebreakers • The Gold-Bug • The Magic Words are Squeamish Ossifrage • Theory of Cryptography Conference • The world wonders • Thomas Jakobsen • Three-pass protocol • Threshold shadow scheme • TICOM • Tiger (cryptography) • Timeline of cryptography • Time/memory/data tradeoff attack • Time-based One-time Password algorithm (TOTP) • Timing attack • Tiny Encryption Algorithm • Tom Berson • Tommy Flowers • Topics in cryptography • Tor (anonymity network) • Torus-based cryptography • Traffic analysis • Traffic-flow security • Traitor tracing • Transmission security • Transport Layer Security • Transposition cipher • Trapdoor function • Trench code • Treyfer • Trifid cipher • Triple DES • Trivium (cipher) • TrueCrypt • Truncated differential cryptanalysis • Trusted third party • Turing (cipher) • TWINKLE • TWIRL • Twofish • Two-square cipher • Type 1 encryption • Type 2 encryption • Type 3 encryption • Type 4 encryption • Typex U UES (cipher) • Ultra • UMAC • Unbalanced Oil and Vinegar • Undeniable signature • Unicity distance • Universal composability • Universal one-way hash function (UOWHF) V Venona project • Verifiable secret sharing • Verisign • Very smooth hash • VEST • VIC cipher • VideoCrypt • Vigenère cipher • Vincent Rijmen • VINSON • Virtual private network • Visual cryptography • Voynich manuscript W Wadsworth's cipher • WAKE • WLAN Authentication and Privacy Infrastructure • Watermark (data file) • Watermarking attack • Weak key • Web of trust • Whirlpool (hash function) • Whitfield Diffie • Wide Mouth Frog protocol • Wi-Fi Protected Access • William F. Friedman • William Montgomery (cryptographer) • WinSCP • Wired Equivalent Privacy • Wireless Transport Layer Security • Witness-indistinguishable proof • Workshop on Cryptographic Hardware and Embedded Systems • World War I cryptography • World War II cryptography • W. T. Tutte X X.509 • XDH assumption • Xiaoyun Wang • XML Encryption • XML Signature • xmx • XSL attack • XTEA • XTR • Xuejia Lai • XXTEA 10-00-00-00-00 Y Yarrow algorithm • Y-stations • Yuliang Zheng Z Zeroisation • Zero-knowledge password proof • Zero-knowledge proof • Zfone • Zodiac (cipher) • ZRTP • Zimmermann–Sassaman key-signing protocol • Zimmermann Telegram See also Outline of cryptography – an analytical list of articles and terms. Books on cryptography – an annotated list of suggested readings. List of cryptographers – an annotated list of cryptographers. Important publications in cryptography – some cryptography papers in computer science. WikiProject Cryptography – discussion and resources for editing cryptography articles. Cryptography lists and comparisons Cryptography Cryptography
235585
https://en.wikipedia.org/wiki/Rubber-hose%20cryptanalysis
Rubber-hose cryptanalysis
In cryptography, rubber-hose cryptanalysis is a euphemism for the extraction of cryptographic secrets (e.g. the password to an encrypted file) from a person by coercion or torture, such as beating that person with a rubber hose (hence the name), in contrast to a mathematical or technical cryptanalytic attack.

Details

According to Amnesty International and the UN, many countries in the world routinely torture people. It is therefore logical to assume that at least some of those countries use (or would be willing to use) some form of rubber-hose cryptanalysis. In practice, psychological coercion can prove as effective as physical torture. Non-violent but highly intimidating methods include tactics such as the threat of harsh legal penalties. The incentive to cooperate may be some form of plea bargain, such as an offer to drop or reduce criminal charges against a suspect in return for full co-operation with investigators. Alternatively, in some countries threats may be made to prosecute as co-conspirators (or inflict violence upon) close relatives (e.g. spouse, children, or parents) of the person being questioned unless they co-operate.

In some contexts, rubber-hose cryptanalysis may not be a viable attack because of a need to decrypt data covertly; information such as a password may lose its value if it is known to have been compromised. It has been argued that one of the purposes of strong cryptography is to force adversaries to resort to less covert attacks. The earliest known use of the term was on the sci.crypt newsgroup, in a message posted on 16 October 1990 by Marcus J. Ranum, alluding to corporal punishment.

Although the term is used tongue-in-cheek, its implications are serious: in modern cryptosystems, the weakest link is often the human user. A direct attack on a cipher algorithm, or on the cryptographic protocols used, is likely to be much more expensive and difficult than targeting the people who use or manage the system. Thus, many cryptosystems and security systems are designed with special emphasis on keeping human vulnerability to a minimum. For example, in public-key cryptography, the defender may hold the key to encrypt the message, but not the decryption key needed to decipher it. The problem here is that the defender may be unable to convince the attacker to stop coercion. In plausibly deniable encryption, a second key is created which unlocks a second, convincing but relatively harmless message (for example, apparently personal writings expressing "deviant" thoughts or desires of some type that are lawful but taboo), so the defender can prove to have handed over the keys while the attacker remains unaware of the primary hidden message. In this case, the designer's expectation is that the attacker will not realize this and will forgo threats or actual torture. The risk, however, is that the attacker may be aware of deniable encryption and will assume the defender knows more than one key, meaning the attacker may refuse to stop coercing the defender even if one or more keys are revealed, on the assumption that the defender is still withholding additional keys which hold additional information.
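The two-key idea can be illustrated with a deliberately toy construction (not a real deniable-encryption scheme): with a one-time pad, XOR lets the defender compute, after the fact, a "decoy" key that decrypts the same ciphertext to a harmless message. The message strings and variable names below are ours.

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

secret = b"the real secret "
decoy  = b"my harmless note"   # must be the same length as the secret

key = os.urandom(len(secret))          # genuine one-time-pad key
ciphertext = xor(secret, key)

# Construct a second key such that ciphertext XOR decoy_key == decoy.
decoy_key = xor(ciphertext, decoy)

assert xor(ciphertext, key) == secret        # the real key reveals the secret
assert xor(ciphertext, decoy_key) == decoy   # the decoy key reveals the decoy

As the paragraph above notes, the scheme's weakness is not mathematical: an attacker who suspects that such decoy keys exist has no reason to stop coercing after the first key is surrendered.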
In law

In some jurisdictions, statutes assume the opposite: that human operators know (or have access to) such things as session keys, an assumption which parallels that made by rubber-hose practitioners. An example is the United Kingdom's Regulation of Investigatory Powers Act, which makes it a crime not to surrender encryption keys on demand from a government official authorized by the act. According to the Home Office, the burden of proof that an accused person is in possession of a key rests on the prosecution; moreover, the act contains a defence for operators who have lost or forgotten a key, and they are not liable if they are judged to have done what they can to recover a key.

Possible case

In the lead-up to the 2017 Kenyan general election, the head of information, communication, and technology at the Independent Electoral and Boundaries Commission, Christopher Msando, was murdered. He had played a major role in developing the new voting system for the election. His body showed apparent marks of torture, and there were concerns that the murderers had tried to get password information out of him.

In popular culture

In xkcd comic #538 (Security), a crypto nerd imagines in the first panel that, thanks to his advanced encryption, the crackers will be ultimately defeated. In the second panel the author suggests that in the real world, people who want access to this information would simply use torture to coerce the nerd into giving them the password.

xkcd 538: Security
Explain xkcd 538: Security

See also

Rubberhose (encrypted filesystem)

References
237720
https://en.wikipedia.org/wiki/Malbolge
Malbolge
Malbolge () is a public domain esoteric programming language invented by Ben Olmstead in 1998, named after the eighth circle of hell in Dante's Inferno, the Malebolge. It was specifically designed to be almost impossible to use, via a counter-intuitive 'crazy operation', base-three arithmetic, and self-altering code. It builds on the difficulty of earlier, challenging esoteric languages (such as Brainfuck and Befunge), but takes this aspect to the extreme, playing on the entangled histories of computer science and encryption. Despite this design, it is possible to write useful Malbolge programs.

Programming in Malbolge

Malbolge was very difficult to understand when it arrived. It took two years for the first Malbolge program to appear; the author himself has never written a Malbolge program. The first program was not written by a human being: it was generated by a beam search algorithm designed by Andrew Cooke and implemented in Lisp. Later, Lou Scheffer posted a cryptanalysis of Malbolge and provided a program to copy its input to its output. He also saved the original interpreter and specification after the original site stopped functioning, and offered a general strategy for writing programs in Malbolge as well as some thoughts on its Turing completeness. Olmstead believed Malbolge to be a linear bounded automaton. There is a discussion about whether one can implement sensible loops in Malbolge; it took many years before the first non-terminating one was introduced. A correct 99 Bottles of Beer program, which deals with non-trivial loops and conditions, was not announced for seven years; the first correct one was by Hisashi Iizawa in 2005. Hisashi Iizawa et al. also proposed a guide for programming in Malbolge for the purpose of obfuscation for software protection. In 2020, GitHub user kspalaiologos made a working Lisp interpreter in Malbolge Unshackled.

Example programs

Hello, World!

This program displays "Hello, World.".

(=<`#9]~6ZY327Uv4-QsqpMn&+Ij"'E%e{Ab~w=_:]Kw%o44Uqp0/Q?xNvL:`H%c#DD2^WV>gY;dts76qKJImZkj

echo program

This program reads a string from a user and prints that string, similar to Unix echo.

(=BA#9"=<;:3y7x54-21q/p-,+*)"!h%B0/. ~P< <:(8& 66#"!~}|{zyxwvu gJk

Design

Malbolge is machine language for a ternary virtual machine, the Malbolge interpreter. The standard interpreter and the official specification do not match perfectly. One difference is that the interpreter stops execution on data outside the 33–126 range. Although this was initially considered a bug in the interpreter, Ben Olmstead stated that it was intended and there was in fact "a bug in the specification".

Registers

Malbolge has three registers, a, c, and d. When a program starts, the value of all three registers is zero. a stands for 'accumulator'; it is set to the value written by all write operations on memory and is used for standard I/O. c, the code pointer, is special: it points to the current instruction. d is the data pointer. It is automatically incremented after each instruction, but the location it points to is used for the data manipulation commands.

Pointer notation

d can hold a memory address; [d] is register indirect: the value stored at that address. [c] is similar.

Memory

The virtual machine has 59,049 (3^10) memory locations that can each hold a ten-trit ternary number. Each memory location has an address from 0 to 59048 and can hold a value from 0 to 59048. Incrementing past this limit wraps back to zero. The language uses the same memory space for both data and instructions.
This was influenced by how hardware such as the x86 architecture worked.

Before a Malbolge program starts, the first part of memory is filled with the program. All whitespace in the program is ignored and, to make programming more difficult, everything else in the program must start out as one of the instructions below. The rest of memory is filled by using the crazy operation (see below) on the previous two addresses ([m] = crz [m - 2], [m - 1]). Memory filled this way will repeat every twelve addresses (the individual ternary digits will repeat every three or four addresses, so a group of ternary digits is guaranteed to repeat every twelve).

In 2007, Ørjan Johansen created Malbolge Unshackled, a version of Malbolge which does not have the arbitrary memory limit. The hope was to create a Turing-complete language while keeping as much in the spirit of Malbolge as possible. No other rules are changed, and all Malbolge programs that do not reach the memory limit are completely functional.

Instructions

Malbolge has eight instructions. Malbolge figures out which instruction to execute by taking the value [c], adding the value of c to it, and taking the remainder when this is divided by 94. The final result tells the interpreter what to do (the table below is reconstructed from the published specification):

([c] + c) % 94   Effect
4                c = [d] (unconditional jump)
5                output the value of a as an ASCII character (mod 256)
23               read a character of input into a
39               rotate [d] right by one ternary digit; a = [d]
40               d = [d]
62               perform the crazy operation on a and [d]; store the result in both a and [d]
68               no operation
81               end execution

After each instruction is executed, the guilty instruction gets encrypted (see below) so that it will not do the same thing next time, unless a jump just happened. Right after a jump, Malbolge will instead encrypt the innocent instruction just prior to the one it jumped to. Then, the values of both c and d are increased by one and the next instruction is executed.

Crazy operation

For each ternary digit of both inputs, use the following table to get a ternary digit of the result. The table is reconstructed here from the worked example that follows, which exercises all nine digit pairs; rows are indexed by the first input's digit and columns by the second's:

crz   0  1  2
0     1  0  0
1     1  0  2
2     2  2  1

For example, crz 0001112220, 0120120120 gives 1001022211.

Encipherment

After an instruction is executed, the value at [c] (without anything added to it) will be replaced with itself mod 94. Then, the result is enciphered with one of the following two equivalent methods.

Method 1

Find the result below. Store the ASCII code of the character below it at [c].

0000000000111111111122222222223333333333444444444455555555556666666666777777777788888888889999
0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123
----------------------------------------------------------------------------------------------
9m<.TVac`uY*MK'X~xDl}REokN:#?G"i@5z]&gqtyfr$(we4{WP)H-Zn,[%\3dL+Q;>U!pJS72FhOA1CB6v^=I_0/8|jsb

Method 2

Find the result below. Store the encrypted version at [c]. Lou Scheffer's cryptanalysis of Malbolge mentions six different cycles in the permutation. They are listed here:

33 ⇒ 53 ⇒ 45 ⇒ 119 ⇒ 78 ⇒ 49 ⇒ 87 ⇒ 48 ⇒ 123 ⇒ 71 ⇒ 83 ⇒ 94 ⇒ 57 ⇒ 91 ⇒ 106 ⇒ 77 ⇒ 65 ⇒ 59 ⇒ 92 ⇒ 115 ⇒ 82 ⇒ 118 ⇒ 107 ⇒ 75 ⇒ 104 ⇒ 89 ⇒ 56 ⇒ 44 ⇒ 40 ⇒ 121 ⇒ 35 ⇒ 93 ⇒ 98 ⇒ 84 ⇒ 61 ⇒ 100 ⇒ 97 ⇒ 46 ⇒ 101 ⇒ 99 ⇒ 86 ⇒ 95 ⇒ 109 ⇒ 88 ⇒ 47 ⇒ 52 ⇒ 72 ⇒ 55 ⇒ 110 ⇒ 126 ⇒ 64 ⇒ 81 ⇒ 54 ⇒ 90 ⇒ 124 ⇒ 34 ⇒ 122 ⇒ 63 ⇒ 43 ⇒ 36 ⇒ 38 ⇒ 113 ⇒ 108 ⇒ 39 ⇒ 116 ⇒ 69 ⇒ 112 ⇒ 68 ⇒ 33 ...
37 ⇒ 103 ⇒ 117 ⇒ 111 ⇒ 120 ⇒ 58 ⇒ 37 ...
41 ⇒ 102 ⇒ 96 ⇒ 60 ⇒ 51 ⇒ 41 ...
42 ⇒ 114 ⇒ 125 ⇒ 105 ⇒ 42 ...
50 ⇒ 80 ⇒ 66 ⇒ 62 ⇒ 76 ⇒ 79 ⇒ 67 ⇒ 85 ⇒ 73 ⇒ 50 ...
70 ⇒ 74 ⇒ 70 ...

These cycles can be used to create loops that do different things each time and that eventually become repetitive. Lou Scheffer used this idea to create a Malbolge program (included in his cryptanalysis linked below) that repeats anything the user inputs.
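Because the crazy-operation table above was reconstructed from a single worked example, it is worth checking mechanically. The following Python sketch implements crz over ten-trit values and verifies it against that example; the function names, the trit-conversion helpers, and the memory-fill helper are ours, and the operand order follows the article's "[m] = crz [m - 2], [m - 1]" convention.

CRZ = [
    [1, 0, 0],   # first input's trit is the row,
    [1, 0, 2],   # second input's trit is the column
    [2, 2, 1],
]

def to_trits(n, width=10):
    # Ten-trit representation, most significant trit first.
    trits = []
    for _ in range(width):
        trits.append(n % 3)
        n //= 3
    return trits[::-1]

def from_trits(trits):
    n = 0
    for t in trits:
        n = n * 3 + t
    return n

def crz(x, y):
    return from_trits([CRZ[a][b] for a, b in zip(to_trits(x), to_trits(y))])

# Check against the article's example: crz 0001112220, 0120120120 = 1001022211.
a = from_trits([0, 0, 0, 1, 1, 1, 2, 2, 2, 0])
b = from_trits([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
assert to_trits(crz(a, b)) == [1, 0, 0, 1, 0, 2, 2, 2, 1, 1]

def fill_memory(mem):
    # Fill the rest of the 59,049 cells as [m] = crz([m-2], [m-1]);
    # assumes the loaded program occupies at least two cells.
    while len(mem) < 59049:
        mem.append(crz(mem[-2], mem[-1]))
    return mem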
Variants

Malbolge is not Turing-complete, due to its memory limits. However, it otherwise has sequential execution, repetition, and conditional execution. Several attempts have been made to create Turing-complete versions of Malbolge:

Malbolge-T is a theoretical version of Malbolge that resets the input/output stream upon reaching the end, allowing for unbounded programs. Malbolge-T would be backward compatible with Malbolge.

Malbolge Unshackled is a variation intended to be Turing-complete, allowing for programs of any length. However, due to command variations to allow for values above 257, valid Malbolge programs will not necessarily run correctly in Malbolge Unshackled.

Popular culture

In the television series Elementary, during the episode "The Leviathan" (season 1, episode 10), a clue written on a coffee order is described as having been written in Malbolge. It appears to be a small modification of the more verbose "Hello World" example shown above.

In the Leverage: Redemption episode "The Golf Job" (season 1, episode 12), an SMS auto-reply reads "Breanna is unavailable Thursday through Sunday or until she masters Malbolge code."

In the Billions episode "The Limitless Shit" (season 5, episode 7), a programming analyst at Axe Capital explains that he "... used to dick around with Malbolge, but just for fun."

See also

INTERCAL
Obfuscated code

References

External links

Malbolge interpreter (C source code)
Malbolge interpreter, debugger, assembler and example Malbolge Assembly code (Java source code)
Treatise on writing Malbolge programs; takes Scheffer's analysis a bit further
A project devoted to presenting programs written in Malbolge
238781
https://en.wikipedia.org/wiki/ThinkPad
ThinkPad
ThinkPad is a line of business-oriented laptop computers and tablets designed, developed and marketed by Lenovo, and formerly IBM. The line was originally sold by IBM until 2005, when a part of the company's business was acquired by Lenovo. ThinkPads have a distinct black, boxy design language, inspired by a Japanese bento lunchbox, which originated in 1990 and is still used in some models. Most models also feature a red-colored trackpoint on the keyboard, which has become an iconic and distinctive design characteristic associated with the ThinkPad line.

The ThinkPad line was first developed at the IBM Yamato Facility in Japan, and the first ThinkPads were released in October 1992. The line has seen significant success in the business market. ThinkPad laptops have been used in outer space and for many years were the only laptops certified for use on the International Space Station. ThinkPads have also for several years been one of the preferred laptops used by the United Nations.

History

The ThinkPad was developed to compete with Toshiba and Compaq, which had created the first two portable notebooks, with an emphasis on sales to the Harvard Business School. The task of creating a notebook was given to the Yamato Facility in Japan, headed by Arimasa Naitoh, a Japanese engineer and product designer who had joined IBM in the 1970s and is now known as the "Father of the ThinkPad".

The name "ThinkPad" was a product of IBM's corporate history and culture. Thomas J. Watson, Sr. first introduced "THINK" as an IBM slogan in the 1920s. With every minicomputer and mainframe IBM installed (almost all were leased, not sold), a blue plastic sign was placed atop the operator's console, with the text "Think" printed on an aluminium plate. For decades IBM had also distributed small notepads with the word "THINK" emblazoned on a brown leatherette cover to customers and employees. The name "ThinkPad" was suggested by IBM employee Denny Wainwright, who had one such notepad in his pocket. The name was opposed by the IBM corporate naming committee, as all names for IBM computers were numeric at that time, but "ThinkPad" was kept due to praise from journalists and the public.

Early models

In April 1992, IBM announced the first ThinkPad model, the 700, later renamed the 700T after the release of three newer models, the 300, (new) 700 and 700C, in October 1992. The 700T was a tablet computer. This machine was the first product produced under IBM's new "differentiated product personality" strategy, a collaboration between Richard Sapper and Tom Hardy, head of the corporate IBM Design Program. Development of the 700C also involved a close working relationship between Sapper and Kazuhiko Yamazaki, lead notebook designer at IBM's Yamato Design Center in Japan and liaison between Sapper and Yamato engineering. This 1990–1992 "pre-Internet" collaboration between Italy and Japan was facilitated by a special Sony digital communications system that transmitted high-resolution images over telephone lines. This system was established in several key global Design Centers by Hardy so IBM designers could communicate visually more effectively and interact directly with Sapper for advice on their projects. For his innovative design management leadership during ThinkPad development, Hardy was named "Innovator of the Year 1992" by PC Magazine. The first ThinkPad tablet, a PenPoint-based device formally known as the IBM 2521 ThinkPad, was positioned as a developer's release.
The ThinkPad tablet became available for purchase by the general public in October of the same year. IBM marketed the ThinkPad creatively, through methods such as early customer pilot programs, numerous pre-launch announcements, and an extensive loaner program designed to showcase the product's strengths and weaknesses, including loaning a machine to archaeologists excavating the ancient Egyptian city of Leontopolis. The resulting report documented the ThinkPad's excellent performance under difficult conditions: "The ThinkPad is an impressive machine, rugged enough to be used without special care in the worst conditions Egypt has to offer." The first ThinkPads were very successful, collecting more than 300 awards for design and quality.

Acquisition by Lenovo

In 2005, technology company Lenovo purchased the IBM personal computer business, and the ThinkPad as a flagship brand along with it. Speaking about the purchase of IBM's personal computer division, Liu Chuanzhi said, "We benefited in three ways from the IBM acquisition. We got the ThinkPad brand, IBM's more advanced PC manufacturing technology and the company's international resources, such as its global sales channels and operation teams. These three elements have shored up our sales revenue in the past several years." Although Lenovo acquired the right to use the IBM brand name for five years after its acquisition of IBM's personal computer business, Lenovo only used it for three years. Today Lenovo manufactures and markets Think-branded products, while IBM is mostly responsible for overseeing servicing and repairs for the Think line of products produced by Lenovo. Both IBM and Lenovo play a key role in the design of their "Think"-branded products. Most of the Think line of products are designed at the Yamato Labs in Japan.

Manufacturing

The majority of ThinkPad computers since the 2005 acquisition of the brand by Lenovo have been made in Mexico, Slovakia, India and China. Lenovo also employs around 300 people at a combined manufacturing and distribution centre near its American headquarters; each device made in this facility is labelled with a red-white-and-blue sticker proclaiming "Whitsett, North Carolina." In 2012, Lenovo produced a short run of special-edition anniversary ThinkPads in Yonezawa, Yamagata, in partnership with NEC, as part of a larger goal to move manufacturing away from China and into Japan. In 2014, although sales rose 5.6 percent from the previous year, Lenovo lost its position as the top commercial notebook maker. However, the company celebrated a milestone in 2015 with the shipment of the 100 millionth unit of its ThinkPad line.

Design

The design language of the ThinkPad has remained very similar throughout the entire lifetime of the brand. Almost all models are solid black inside and out, with a boxy, right-angled external case design. Some newer Lenovo models incorporate more curved surfaces in their design. Many ThinkPads have incorporated magnesium, carbon-fiber reinforced plastic or titanium into their chassis. The industrial design concept was created in 1990 by Italy-based designer Richard Sapper, a corporate design consultant of IBM and, since 2005, Lenovo. The design was based on the concept of a traditional Japanese bento lunchbox, which reveals its nature only after being opened.
According to later interviews, Sapper also characterized the simple ThinkPad form as being as elementary as a plain black cigar box, with similar proportions, observing that it likewise offers a 'surprise' when opened. Since 1992, the ThinkPad design has been regularly updated, developed and refined over the years by Sapper and the respective teams at IBM and later Lenovo. On the occasion of the 20th anniversary of the ThinkPad's introduction, David Hill authored and designed a commemorative book about ThinkPad design titled ThinkPad Design: Spirit & Essence.

Features and technologies

Several unique features have appeared in the ThinkPad line, such as drive protection, the pointing stick, and TPM chips. While few features remain unique to the series, several laptop technologies originated on ThinkPads:

Current

Lenovo Vantage
Known early on as "IBM Access" and later as "ThinkVantage", Lenovo Vantage is a suite of computer management applications. This software gives additional support for system management (backup, encryption, system driver installation and upgrades, system monitoring, and more). Some of its older features have since been replaced by built-in Windows 10 features.

TPM chips
IBM was the first company to support a TPM module. Modern ThinkPads still have this feature.

ThinkShutter
ThinkShutter is the branding of a webcam privacy shutter present in some ThinkPad notebook computers. It is a simple mechanical sliding cover that allows the user to obstruct the webcam's view. Some add-on webcams and other laptop brands provide a similar feature. IdeaPad notebooks carry the TrueBlock branding for their privacy shutters.

Spill-resistant keyboards
Some ThinkPad models have a keyboard membrane and drain holes (P series, classic T series and T###p models), and some have a solid rubber or plastic membrane (like the X1 series and current T and X series) without drain holes.

UltraNav
The first ThinkPad 700 was equipped with the signature TrackPoint red dot pointing stick invented by Ted Selker. By 2000 the trackpad pointer had become more popular for laptops due to innovations by Synaptics, so IBM introduced UltraNav, a complementary combination of TrackPoint and touchpad designed by Dave Sawin, Hiroaki Yasuda, Fusanobo Nakamura, and Mitsuo Horiuchi to please all users.

A roll cage frame and stainless steel hinges with 180° or 360° opening angle
The "roll cage" is an internal frame designed to minimize motherboard flex (current P series and T##p series); all other high-end models use a magnesium composite case. The display modules lack magnesium frames, and some 2012–2016 models have a common issue with a cracked plastic lid. Hinges that open to 180° are typical; 360° hinges are a basic feature of the Yoga line.

OLED screens
Introduced in 2018 as a high-end display option for some models.

The Active Protection System
An option for some ThinkPads that still use the 2.5" drive bay. These systems use an accelerometer sensor to detect when a ThinkPad is falling and shut down the hard disk drive to prevent damage (see the sketch after this list).

Biometric fingerprint reader and NFC smart card reader options
The fingerprint reader was introduced as an option by IBM in 2004; ThinkPads were among the first laptops to include this feature.

Internal WWAN modules and Wi-Fi 3x3 MIMO
Mobile broadband support is a common feature on most ThinkPad models after 2006; support for 3x3 MIMO is a common feature on most high-end models.

Some additional features (docking stations, UltraBay, accessory support) are listed in the Accessories section.
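The Active Protection System logic mentioned above can be sketched in a few lines: an accelerometer reading near zero g on all axes suggests free fall, so the drive heads should be parked before impact. This is a simplified illustration; the sampling interface, thresholds, and sample counts here are hypothetical, not Lenovo's implementation.

FREE_FALL_G = 0.3    # total acceleration (in g) below this suggests free fall
TRIGGER_SAMPLES = 3  # require several consecutive samples to ignore mere bumps

def monitor(samples):
    consecutive = 0
    for (x, y, z) in samples:  # accelerations in g along each axis
        magnitude = (x * x + y * y + z * z) ** 0.5
        consecutive = consecutive + 1 if magnitude < FREE_FALL_G else 0
        if consecutive >= TRIGGER_SAMPLES:
            return "park heads"
    return "keep spinning"

# A laptop resting flat reads about 1 g on one axis; a falling one reads
# near 0 g on all axes, so the last three samples below trigger parking.
print(monitor([(0.0, 0.0, 1.0), (0.02, 0.01, 0.05),
               (0.01, 0.0, 0.04), (0.0, 0.02, 0.03)]))  # -> "park heads"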
Past

ThinkLight
An external keyboard light, since replaced by internal keyboard backlights: an LED located at the top of the LCD screen which illuminates the keyboard from above.

ThinkBridge
A feature of some 2013–2018 T, W and X series ThinkPads only: an internal secondary battery (the successor to the secondary UltraBay battery) that supports hot-swapping of the primary battery.

7-row keyboards
Original IBM keyboard design (1992–2012): the original keyboard offered in the ThinkPad line until 2012, when it was swapped out for the chiclet-style keyboard used today.
IBM TrackWrite keyboard design: a unique keyboard designed by John Karidis, introduced by IBM in 1995 and used in the ThinkPad 701 series. When the machine is closed, the keyboard folds inwards, making the machine more compact; when the machine is open and in use, it slides out, giving the user a normal-sized keyboard. That keyboard, referred to as a butterfly keyboard, is widely considered a design masterpiece and is in the permanent collection of the Museum of Modern Art in New York City. The ThinkPad 760 series also included an unusual keyboard design: the keyboard was elevated by two arms riding on small rails on the side of the screen, tilting the keyboard to achieve a more ergonomic design.
The 7-row design was replaced by the chiclet-style keyboard (2012–current), adopted by Lenovo in 2012 over the original IBM keyboard design. It does not support the ThinkLight, instead using a keyboard backlight. (Some ThinkPad models during the transition period between the classic IBM design and the Lenovo chiclet design could be outfitted with both the backlit chiclet-style keyboard and the ThinkLight.)

FlexView AFFS or IPS screens
A line of high-end displays introduced in 2004, with wide viewing angles and optional high resolution (up to 15" 1600x1200 or, rarely, 2048x1536 pixels). Partially dropped in 2008 (after the partial collapse of display supplier BOE-Hydis), and reintroduced as an ordinary IPS screen option in 2013.

Batteries

Some Lenovo laptops (such as the X230, W530 and T430) block third-party batteries. Lenovo calls this feature "Battery Safeguard". It was first introduced on some models in May 2012. Laptops with this feature scan for security chips that only ThinkPad-branded batteries contain. Affected ThinkPads flash a message stating "Genuine Lenovo Battery Not Attached" when third-party batteries are used.

Operating systems

The ThinkPad has shipped with Microsoft Windows from its inception to the present day. Alongside MS-DOS, Windows 3.1x was the default operating system on the original ThinkPad 700. IBM and Microsoft's joint operating system, known as OS/2, although not as popular, was also made available as an option from the ThinkPad 700 in 1992, and was officially supported until the T43 in 2005.

IBM took its first steps toward ThinkPads with an alternative operating system when it quietly certified the 390 model for SUSE Linux in November 1998. The company released its first Linux-based unit with the ThinkPad A20m in July 2000. This model, along with the closely released A21m, T21 and T22 models, came preinstalled with Caldera OpenLinux. IBM shifted away from preinstalled Linux on the ThinkPad after 2002, but continued to support other distributions such as Red Hat Linux, SUSE Linux Enterprise, and Turbolinux by means of customer installations on the A30, A30p and A31p models. This continued through the Lenovo transition with the T60p, until September 2007.
The following year, ThinkPads began shipping with Linux again, when the R61 and T61 were released with SUSE Linux Enterprise as an option. This was short-lived, as Lenovo discontinued that practice in 2009; ThinkPad hardware nevertheless continued to be certified for Linux. In 2020, Lenovo shifted into much heavier support of Linux when it announced that the ThinkPad X1 Carbon Gen 8, the P1 Gen 2, and the P53 would come with Fedora Linux as an option. This was the first time that Fedora Linux was made available as a preinstalled option from a major hardware vendor. Following that, Lenovo began making Ubuntu available as a preinstalled option across nearly thirty different notebook and desktop models, and Fedora Linux on all of its P series lineup.

A small number of ThinkPads come preinstalled with Google's Chrome OS. On these devices, Chrome OS is the only officially supported operating system; installing Windows or other operating systems requires putting the device into developer mode.

Use in space

ThinkPads have been used heavily in space programs. NASA purchased more than 500 ThinkPad 750 laptops for flight qualification, software development, and crew training, and astronaut (and senator) John Glenn used ThinkPad laptops on his spaceflight mission STS-95 in 1998. ThinkPad models used on Shuttle missions include:

ThinkPad 750 (first use in December 1993, supporting the Hubble repair mission)
ThinkPad 750C
ThinkPad 755C
ThinkPad 760ED
ThinkPad 760XD (ISS Portable Computing System)
ThinkPad 770
ThinkPad A31p (ISS Portable Computing System)
ThinkPad T61p
ThinkPad P52
ThinkPad T490
ThinkPad P15

The ThinkPad 750 flew aboard the Space Shuttle Endeavour during a mission to repair the Hubble Space Telescope on 2 December 1993, running a NASA test program which checked whether radiation in the space environment caused memory anomalies or other unexpected problems. ThinkPads were also used in conjunction with a joystick for the Portable In-Flight Landing Operations Trainer (PILOT).

ThinkPads have also been used on space stations. At least three ThinkPad 750C were left in the Spektr module of Mir when it depressurized, and the 755C and 760ED were used as part of the Shuttle–Mir Program, the 760ED without modifications. Additionally, for several decades ThinkPads were the only laptops certified for use on the International Space Station. ThinkPads used aboard the Space Shuttle and International Space Station feature safety and operational improvements for the environment they must operate in. Modifications include Velcro tape to attach to surfaces, upgrades to the CPU and video card cooling fans to accommodate microgravity (in which warmer air does not rise) and the lower density of the cabin air, and an adapter for the station's 28-volt DC power.

Throughout 2006, a ThinkPad A31p was used in the Service Module Central Post of the International Space Station, and seven ThinkPad A31p laptops were in service in orbit aboard the station. As of 2010, the Space Station was equipped with ThinkPad A31 computers and 32 ThinkPad T61p laptops. All laptops aboard the ISS are connected to the station's LAN via Wi-Fi and are connected to the ground at 3 Mbit/s up and 10 Mbit/s down, comparable to home DSL connection speeds. Since a new contract with HP in 2016 provided a small number of modified ZBook laptops for ISS use, ThinkPads are no longer the only laptops flown on the ISS, but they remain the predominant laptop present there.
ThinkPads in the United Nations

For several years ThinkPads have been one of the preferred laptop brands used by the United Nations. The models found in the UN today include:
L480
T480
T480s
T14
X1 Carbon Gen 6, Gen 7 and Gen 8
X380 Yoga
X390 Yoga

Certain ThinkVision monitors (T24v) are also used with ThinkPad docking stations.

Popularity

ThinkPads have enjoyed cult popularity for many years. There are large communities on the Internet dedicated to the line, where people discuss the machines and share photos and videos of their own ThinkPads. Older ThinkPad models remain popular among enthusiasts and collectors, who still see them as durable, highly usable machines despite no longer being modern; they have gained a reputation for being reliable, even "indestructible". Newer models are also still popular among consumers and businesses (as of 2021), though Lenovo has received some backlash in recent years for the apparent declining quality of the ThinkPad line (as well as its other lines in general), with many customers unhappy with the build quality and reliability, or lack thereof, of their devices. Aftermarket parts have been developed for some models, such as the X60 and X200, for which custom motherboards with more modern processors have been created.

In January 2015, Lenovo celebrated the sale of one hundred million ThinkPads, and announced some new ThinkPad products for the occasion.

Reviews and awards

Laptop Magazine in 2006 called the ThinkPad's keyboard the highest-quality laptop computer keyboard available. The ThinkPad was ranked first in reliability and support in PC Magazine's 2007 survey, was the PC Magazine 2006 Reader's Choice for PC-based laptops, and ranked number one in support for PC-based laptops. The ThinkPad series was the first product to receive PC World's Hall of Fame award. The Enderle Group's Rob Enderle said that the constant thing about the ThinkPad is that the "brand stands for quality" and that "they build the best keyboard in the business." The ThinkPad X Tablet series was PC Magazine Editor's Choice for tablet PCs. The ThinkPad X60s was ranked number one in ultraportable laptops by PC World; it lasted 8 hours and 21 minutes on a single charge with its 8-cell battery, and the X60s series appeared on PC World's Top 100 Products of 2006. The 2005 PC World Reliability and Service survey ranked ThinkPad products ahead of all other brands for reliability; in the 2004 survey, they were ranked second (behind eMachines). Lenovo was named the most environment-friendly company in the electronics industry by Greenpeace in 2007, but had dropped to 14th place of 17 as of October 2010. The IBM/Lenovo ThinkPad T60p received the Editor's Choice award for Mobile Graphic Workstation from PC Magazine, and the ThinkPad X60 was the PC Magazine Editor's Choice among ultra-portable laptops. The Lenovo ThinkPad T400 series was on PC World's Top 100 Products of 2009.

Current model lines

ThinkPad Yoga (2013–current)

The ThinkPad Yoga is an Ultrabook-class convertible device that functions as both a laptop and tablet computer. The Yoga gets its name from the consumer-oriented IdeaPad Yoga line of computers with the same form factor. The ThinkPad Yoga has a backlit keyboard that flattens when flipped into tablet mode.
This was accomplished on the 1st-generation X1 Yoga with a platform surrounding the keys that rises until it is level with the keyboard buttons, a locking mechanism that prevents key presses, and feet that pop out to prevent the keyboard from resting directly on flat surfaces. On later X1 Yoga generations, the keys themselves retract into the chassis, so the computer rests on fixed small pads. The touchpad is disabled in this configuration. Lenovo implemented this design in response to complaints about its earlier Yoga 13 and 11 models being awkward to use in tablet mode; a reinforced hinge was required to implement it. Other than its convertible form factor, the ThinkPad Yoga retains standard ThinkPad features such as a black magnesium-reinforced chassis, island keyboard, a red TrackPoint, and a large touchpad.

Tablets

ThinkPad Tablet

Released in August 2011, the ThinkPad Tablet is the first in Lenovo's line of business-oriented tablets with the ThinkPad brand. The tablet has been described by Gadget Mix as a premium business tablet. Since the Tablet is primarily business-oriented, it includes features for security, such as anti-theft software, the ability to remotely disable the tablet, SD card encryption, layered data encryption, and Cisco Virtual Private Network (VPN) support. Additionally, the ThinkPad Tablet is able to run software such as IBM's Lotus Notes Traveler. The stylus could be used to write notes on the Tablet, which also included software to convert this handwritten content to text. Another feature on the Tablet was a drag-and-drop utility designed to take advantage of the Tablet's touch capabilities, which could be used to transfer data between USB devices, internal storage, or an SD card. Slashgear summarized the ThinkPad Tablet by saying, "The stylus and the styling add up to a distinctive slate that doesn't merely attempt to ape Apple's iPad."

ThinkPad Tablet 2

To celebrate the 20th anniversary of the ThinkPad, Lenovo held a large party in New York where it announced several products, including the Tablet 2. Lenovo said that the ThinkPad Tablet 2 would be available on 28 October 2012, when Windows 8 was released. The ThinkPad Tablet 2 runs the Windows 8 Professional operating system and can run any desktop software compatible with that version of Windows. The Tablet 2 is based on the Clover Trail version of the Intel Atom processor, customized for tablets, and has 2 gigabytes of RAM and a 64 GB SSD. The Tablet 2 has a 10.1-inch IPS display with a 16:9 aspect ratio. In a preview, CNET wrote, "Windows 8 looked readable and functional, both in Metro and standard Windows-based interfaces." A mini-HDMI port is included for video output, and an 8-megapixel rear camera and a 2-megapixel front camera are included, along with a noise-canceling microphone, to facilitate video conferencing.

ThinkPad 8

Announced and released in January 2014, the ThinkPad 8 is based on Intel's Bay Trail Atom Z3770 processor, with 2 GB of RAM and up to 128 GB of built-in storage. The ThinkPad 8 has an 8.3-inch IPS display with a 16:10 aspect ratio. Other features include an aluminum chassis, a micro-HDMI port, an 8-megapixel rear camera (with flash), and optional 4G connectivity. It runs Windows 8 as its operating system.
ThinkPad 10

Announced in May 2014, the Lenovo ThinkPad 10 is a successor to the ThinkPad Tablet 2 and was scheduled to launch in the summer of 2014 along with accessories such as a docking station and external detachable magnetic keyboards. It used Windows 8.1 Pro as its operating system. It was available in 64 and 128 GB variants with a 1.6 GHz quad-core Intel Atom Bay Trail processor and 2 GB or 4 GB of RAM, and it optionally supported both 3G and 4G (LTE). The display was paired with a stylus pen.

ThinkPad X1 Tablet

The ThinkPad X1 Tablet is a fanless tablet powered by Core M CPUs. It is available with 4, 8 or 16 GB of LPDDR3 RAM and a SATA or PCIe NVMe SSD of up to 1 TB. It has an IPS screen and supports touch and pen input.

ThinkPad 11e (2014–current)

The ThinkPad 11e is a "budget" laptop computer for schools and students, with an 11-inch screen and without a TrackPoint. The 11e Yoga is a convertible version of the 11e.

E Series (2011–current)

The E Series is a low-cost ThinkPad line designed for small-business mass-market requirements, and currently contains only 14" and 15" sub-lines. The E Series replaced Lenovo's Edge Series, though in some countries (as of May 2019) models are still offered under both the ThinkPad Edge and E Series names. The E Series also lacks materials such as magnesium and carbon fibre in its construction, which other members of the ThinkPad family enjoy.

L Series (2010–current)

The L Series replaced the former R Series, and is positioned as a mid-range ThinkPad offering with mainstream Intel Core i3/i5/i7 CPUs. The L Series has three sub-lines: the long-running 14" and 15.6" lines (which launched with two models, the L412 and the L512, in 2010), and, as of 2018, a 13" L380, which replaces the ThinkPad 13.

T series (2000–current)

The T series is the most popular and most well-known ThinkPad line. As the successor of the 600 series, it historically had high-end features, such as magnesium alloy roll-cages, high-density IPS screens known as FlexView (discontinued after the T60 series), 7-row keyboards, screen latches, the Lenovo UltraBay, and the ThinkLight. Models included both 14.1-inch and 15.4-inch displays available in 4:3 and 16:10 aspect ratios. In 2012, the entire ThinkPad line was given a complete overhaul, with modifications such as the removal of the separate TrackPoint buttons (removed with the xx40 series in 2014, then reintroduced with the xx50 series in 2015) and the removal of the separate audio control buttons, the screen latch, and the LED indicator lights. Models starting from the xx40 series featured the Power Bridge battery system, which combined a lower-capacity built-in battery with a higher-capacity external battery, enabling the user to swap the external battery without putting the computer into hibernation. However, beginning with the 2019 xx90 series models, the external battery was removed in favor of a single internal battery. Non-widescreen displays are also no longer available, with the 16:9 aspect ratio as the only remaining choice. The Tx20 series ThinkPads came in two editions: 15" (T520) and 14" (T420). These are the last ThinkPads to use the classic 7-row keyboard, with the exception of the Lenovo ThinkPad 25th anniversary edition released on 5 October 2017, which was based on the ThinkPad T470. As can be seen, the purpose of the T series has shifted slightly over time.
Initially, the T series was meant to have high-end business features and carry a 10–20% price premium over other ThinkPads. Starting with the T400, the T series became less of a high-end business laptop and more of a mobile workstation, similar to the W-series or P-series ThinkPads, achieving performance close to the W-series but in a 5–10% smaller profile. In 2013, the T440 introduced another major shift: the T series became more of an overall office machine than a mobile workstation. By today's standards, the T series is thicker than most of its competitors.

X Series (2000–current)

The X Series is the main high-end ultraportable ThinkPad line, offering a lightweight, highly portable laptop with moderate performance. The current sub-lines of the X series include the 13" X13 (mainstream) and X13 Yoga (convertible sub-line); the 14" X1 Carbon (premium sub-line) and X1 Yoga (premium convertible sub-line); and the 15" X1 Extreme (premium sub-line). The daughter line includes the X1 Tablet (not to be confused with the 2005–2013 X Series tablets).

The current mainstream "workhorse" models are the X13 and X13 Yoga, the 13" successors of the classic, discontinued 12" line of X Series ThinkPads. The forerunners of the premium thin-and-light line were the 13.3" ThinkPad models (the X300/X301), with an UltraBay CD-ROM drive and a removable battery; they have since been replaced by the modern premium X1-series ultrabook line, comprising the X1 Carbon, X1 Yoga, and X1 Extreme sub-series.

Discontinued mainstream lines such as the 12" X200(s), X201(s), and X220 models could be ordered with all of the high-end ThinkPad features (such as the TrackPoint, ThinkLight, a 7-row keyboard, a docking port, a hot-swappable HDD, a solid magnesium case and an optional slice battery). The discontinued 12.5" X220 and X230 still featured a roll cage, a ThinkLight, and an optional premium IPS display (the first IPS display on a non-tablet ThinkPad since the T60p), but the 7-row keyboard was offered only with the X220, and both lacked the lid latch mechanism present on the previous X200 and X201 versions. The discontinued slim 12" line contained only the X200s and X201s, with low-power CPUs and high-resolution displays, and the X230s, with low-power CPUs. The 12.5" X series ThinkPads (the X240 and later) had a more simplified design, and the last 12" model, the X280, had only the TrackPoint feature, a partially magnesium case and a simplified docking port.

The obsolete low-cost 11.6" netbook line, the X100e and X120e, were all plastic, lacking both the latch and the ThinkLight, and using a variant of the island keyboard (known as a chiclet keyboard) found on the Edge series. The X100e was also offered in red, and in blue and white in some countries. These were more like high-end netbooks, whereas the X200 series were full ultraportables, featuring Intel Core (previously Core 2 and Celeron) series CPUs rather than AMD netbook CPUs.

The X Series models with a "tablet" suffix are an outdated variant of the 12" X Series models, with low-voltage CPUs and a flip-screen resistive touchscreen. These include the traditional ThinkPad features, and have been noted for using a higher-quality AFFS-type screen with better viewing angles compared to the screens used on other ThinkPads.

P Series (2015–current)

The P Series line of laptops replaced Lenovo's W Series and reintroduced 17" screens to the ThinkPad line.
The P Series (excluding models with an 's' suffix) is designed for engineers, architects, animators, and similar users, and comes with a variety of high-end options. All P Series models include fingerprint readers, and the series features dedicated magnesium roll cages, more indicator LED lights, and high-resolution displays.

Z series (2022)

The Z series currently consists of two models: the 13-inch Z13 and the 16-inch Z16. It was introduced in January 2022 and scheduled to become available for purchase in May 2022; the Z13 model starts at $1549, while the Z16 model starts at $2099. The series is marketed towards business customers, as well as a generally younger audience. The Verge wrote: "Lenovo is trying to make ThinkPads cool to the kids. The company has launched the ThinkPad Z-series, a thin and light ThinkPad line with funky colors, eco-friendly packaging, and a distinctly modern look." The series features a sleek, contemporary, thin metal design, which differs greatly from other recent, more traditional-looking ThinkPad models. The Z13 model was introduced in three new colors—black, silver, and black vegan leather with bronze accents—while the Z16 is only available in one of them, silver. The laptops are equipped with new AMD Ryzen Pro processors. Other notable features include 1080p webcams, OLED displays, new redesigned touchpads, spill-resistant keyboards, Dolby Atmos speaker systems, and Windows 11 with Windows Hello support.

Accessories

Lenovo also makes a range of accessories meant to complement and enhance the experience of using a ThinkPad device. These include:

ThinkPad Stack (2015–current)

The ThinkPad Stack line of products includes accessories designed for portability and interoperability, including external hard drives, a wireless router, a power bank, and a Bluetooth 4.0 speaker. Each Stack device includes rubber feet, magnets, and pogo-pin power connections that allow the use of a single cable. The combined weight of all the Stack devices is slightly less than two pounds. The Stack series was announced in January 2015 at the International CES, and was expanded at the 2016 International CES to include a 720p projector with 150 lumens of brightness and a wireless charging station. The Stack has a "blocky, black, and rectangular" look with the ThinkPad logo, sharing a common design language with ThinkPad laptop computers.

Docking stations (1993–current)

Current docking stations (or docks) add much of the functionality of a desktop computer, including multiple display outputs, additional USB ports, and occasionally other features, allowing ThinkPads to be connected to and disconnected from various peripherals quickly and easily. Recent docks connect via a proprietary connector located on the underside of the laptop, or via USB-C.

UltraBay (1995–2014)

An internal replaceable (hot-swappable) drive bay that supports a list of optional components, such as CD/DVD/Blu-ray drives, hard drive caddies, additional batteries, or device cradles.

Slice batteries (2000–2012)

Some classic models (IBM and early Lenovo T and X series) can support an additional slice battery instead of the UltraBay battery.

UltraPort (2000–2002)

ThinkPad USB 3.0 Secure Hard Drive

An external USB 3.0/2.0 hard drive designed by Lenovo in 2009. It requires the input of a user-settable 4-digit PIN to access data. These drives are manufactured for Lenovo by Apricorn, Inc.
ThinkPad keyboards (external)

IBM and Lenovo have made several USB and Bluetooth keyboards with integrated UltraNav touchpads and TrackPoints. Notable models include:
SK-8845
SK-8835
SK-8855
ThinkPad Compact USB Keyboard (current model)
ThinkPad Compact Bluetooth Keyboard (current model)
ThinkPad TrackPoint Keyboard II (current model)

ThinkPad mice

ThinkPad mice come in several varieties, ranging from Bluetooth to wired models, including some with a built-in TrackPoint labelled as a ScrollPoint.

ThinkPad stands

ThinkPlus laptop stands (Asian markets only).

ThinkPlus charger

A GaN charger with a USB-C output. These are mostly sold under the "thinkplus" branding in Asia (notably South-East Asia) and are popular there.

Historical models

ThinkPad 235

The Japan-only ThinkPad 235 (or Type 2607) was the progeny of the IBM/Ricoh RIOS project. Also known as Clavius or Chandra2, it contains unusual features like the presence of three PCMCIA slots and the use of dual camcorder batteries as a source of power. It features an Intel Pentium MMX 233 MHz CPU, support for up to 160 MB of EDO memory, and a built-in hard drive with UDMA support. Hitachi marketed the Chandra2 as the Prius Note 210.

ThinkPad 240

The ultraportable ThinkPad 240 (X, Z) started with an Intel Celeron processor and went up to the 600 MHz Intel Pentium III. In models using the Intel 440BX chipset, the RAM was expandable to a maximum of 320 MB with a BIOS update. Models had a compact screen and a smaller-than-standard key pitch. They were also one of the first ThinkPad series to contain a built-in Mini PCI card slot (form factor 3b). The 240s have no optical disc drive and use an external floppy drive. An optional extended battery sticks out of the bottom like a bar and props up the back of the laptop. These were the smallest and lightest ThinkPads ever made.

300 Series

The 300-series (300, 310, 340, 345, 350, 360, 365, 370, 380, 385, 390, all with various sub-series) was a long-running value series, ranging from the 386SL/25 processor all the way to the Pentium III 450. The 300 series was offered as a slightly lower-priced alternative to the 700 series, with a few exceptions. The ThinkPad 360P and 360PE were low-end versions of the ThinkPad 750P, and were unique in the 300 series in that they could be used as a regular laptop or transformed into a tablet by flipping the monitor on top of itself. Retailing for $3,699 in 1995, the 360PE featured a touch-sensitive monitor operated with a stylus; the machine could run operating systems that supported the touch screen, such as PenDOS 2.2.

500 Series

The 500-series (500, 510, 560 (E, X, Z), 570 (E)) was the main line of ultraportable ThinkPads. Ranging from the 486SLC2-50 Blue Lightning to the Pentium III 500, these machines had only a hard disk on board; any other drives were external (or, in the 570's case, in the UltraBase).

600 Series

The 600-series (600, 600E, and 600X) are the direct predecessors of the T series. The 600-series packed an SVGA or XGA TFT LCD, a Pentium MMX, Pentium II or III processor, a full-sized keyboard, and an optical bay into a compact, lightweight package. IBM was able to create this light, fully featured machine by using lightweight but strong carbon fiber composite plastics. The battery shipped with some 600-series models had a manufacturing defect that left it vulnerable to memory effect and resulted in poor battery life, but this problem can be avoided by the use of a third-party battery.
700 Series

The 700-series was the high-end ThinkPad line. The released models (the 700T, 710T and 730T tablets, and the 700, 701, 720, 730, 750, 755, 760, 765 and 770 laptops, with various sub-models) could be configured with the best screens, largest hard drives and fastest processors available in the ThinkPad range, and some features could be found only on 700 series models. The first successful ThinkPad, introduced in 1992, was the 700T, a tablet PC without a keyboard or mouse.

800 Series

The ThinkPad 800-series (800/820/821/822/823/850/851/860) was unique in being based on the PowerPC architecture rather than the Intel x86 architecture. Most of the 800 Series laptops used the PowerPC 603e CPU at 100 MHz, or at 166 MHz in the 860 model. The PowerPC ThinkPad line was considerably more expensive than the standard x86 ThinkPads — even a modestly configured 850 cost upwards of $12,000. All of the PowerPC ThinkPads could run Windows NT 3.51 and 4.0, AIX 4.1.x, and Solaris Desktop 2.5.1 PowerPC Edition.

WorkPad

Based on the ThinkPad design although branded WorkPad, the IBM WorkPad z50 was a Handheld PC running Windows CE, released in 1999.

i Series (1998–2002)

The ThinkPad i Series was introduced by IBM in 1999 and was geared towards a multimedia focus, with many models featuring independent integrated CD players and multimedia access buttons. The 1400 and 1500 models were designed by Acer for IBM under contract (and are thus nicknamed the "AcerPad") and featured hardware similar to that found in Acer laptops (including ALi chipsets, three-way audio jacks and internal plastics painted with a copper paint). Some of the i Series ThinkPads, particularly the Acer-developed models, are prone to broken hinges and stress damage on the chassis.

Notable ThinkPads in the i Series lineup are the S3x (S30/S31) models, featuring a unique keyboard and lid design that allowed a standard-sized keyboard to fit in a chassis that otherwise could not support the protruding keyboard. These models were largely available only in Asia Pacific. IBM offered an optional piano-black lid on these models (designed by the Yamato Design lab). This is the only ThinkPad since the 701C to feature a special design accommodating a keyboard physically larger than the laptop, and also the only ThinkPad (aside from the Z61) to deviate from the standard matte lid.

A Series (2000–2004)

The A-series was developed as an all-around productivity machine, equipped with hardware powerful enough to make it a desktop replacement. Hence it was the biggest and heaviest ThinkPad series of its time, but it also had features not found even in T-series machines of the same age. The A-series was dropped in favor of the G-series and R-series.

The A31 was released in 2002 as a desktop replacement system equipped with: a Pentium 4-M processor clocked at 1.6, 1.8, 1.9, or 2.0 GHz (the maximum supported is 2.6 GHz); an ATI Mobility Radeon 7500; 128 or 256 MB of PC2100 RAM (officially upgradable to 1 GB, unofficially to 2 GB); IBM High Rate Wireless (PRISM 2.5-based, modifiable to support WPA-TKIP); and a 20, 30, or 40 GB hard disk drive.

R Series (2001–2010, 2018–2019)

The R Series was a budget line, beginning with the R30 in 2001 and ending with the R400 and R500 presented in 2008. The successors of the R400 and R500 are the ThinkPad L series L412 and L512 models. A notable model is the R50p, with an optional 15" IPS LCD screen (introduced in 2003).
The R series was reintroduced in 2018 (for the Chinese market only) with the same hardware as E series models, but with an aluminum display cover, a discrete GPU, a TPM chip and a fingerprint reader.

G Series (2003–2006)

The G-series consisted of only three models, the G40, G41 and G50. Being large and heavy machines equipped with powerful desktop processors, this line of ThinkPads consequently served mainly as replacements for desktop computers.

Z Series (2005–2007)

The Z series was released as a high-end multimedia laptop; as a result, this was the first ThinkPad to feature a widescreen (16:10 aspect ratio) display. The Z-series was also unique in that certain models featured an optional titanium lid. Integrated WWAN and a webcam were also found on some configurations. The series only ever included the Z60 (Z60m and Z60t) and Z61 (Z61m, Z61t and Z61p), the latter of which was the first Z-series ThinkPad with Intel "Yonah" dual-core technology. The processor supports Intel VT-x; this is disabled in the BIOS but can be turned on with a BIOS update, making it possible to run fully virtualised operating systems via Xen or VMware. Despite the Z61 carrying the same number as the T61, the hardware of the Z61 is closer to the T60 (and likewise the Z60 is closer to the T43).

ThinkPad Reserve Edition (2007)

The "15-year anniversary" ThinkPad model (based on an X60s laptop). This model was initially known inside Lenovo as the "Scout", after the horse ridden by Tonto, the sidekick from the 1950s television series The Lone Ranger. Lenovo envisioned the Scout as a very high-end ThinkPad that would be analogous to a luxury car. Each unit was covered in fine leather embossed with its owner's initials. Extensive market research was conducted on how consumers would perceive this form factor; it was determined that they appreciated that it emphasised warmth, nature, and human relations over technology. The Scout was soon renamed the ThinkPad Reserve Edition. It came bundled with premium services, including a dedicated 24-hour technical support hotline that would be answered immediately. It was released in 2007 and sold for $5,000 in the United States.

SL Series (2008–2010)

The SL Series was launched in 2008 as a low-end ThinkPad geared mainly toward small businesses. These machines lacked several traditional ThinkPad features, such as the ThinkLight, magnesium alloy roll cage, UltraBay, and lid latch, and used a 6-row keyboard with a different layout than the traditional 7-row ThinkPad keyboard; SL-series models also had IdeaPad-based firmware. Models offered included 13.3" (SL300), 14" (SL400 and SL410) and 15.6" (SL500 and SL510) sizes.

W Series (2008–2015)

The W-series laptops were introduced by Lenovo as workstation-class laptops with their own letter designation, descendants of prior ThinkPad T series models suffixed with 'p' (e.g. the T61p), and were geared towards CAD users, photographers, power users, and others who need a high-performance system for demanding tasks. The W-series laptops were launched in 2008, at the same time as the Intel Centrino 2, marking an overhaul of Lenovo's product lineup. The first two W-series laptops introduced were the W500 and the W700. Previously available were the W7xx series (17" widescreen models), the W500 (15.4" 16:10 model), the W510 (15.6" 16:9 model), and the W520 (15.6" 16:9 model). The W700DS and the W701DS both had two displays: a 17" main LCD and a 10" slide-out secondary LCD.
The W7xx series was also available with a Wacom digitizer built into the palm rest. These high-performance workstation models offered more high-end components, such as quad-core CPUs and higher-end workstation graphics, than the T-series, and were the most powerful ThinkPad laptops available. Until the W540, they retained the ThinkLight, UltraBay, roll cage, and lid latch found on the T-series; the W540 release marked the end of the lid latch, ThinkLight, and hot-swappable UltraBay found in earlier models.

The ThinkPad W-series laptops are described by the manufacturer as "mobile workstations", and suit that description by being physically on the larger side of the laptop spectrum, with screens ranging from 15" to 17" in size. Most W-series laptops offer high-end quad-core Intel processors with an integrated GPU as well as an Nvidia Quadro discrete GPU, utilizing Nvidia Optimus to switch between the two GPUs as required. Notable exceptions are the W500, which has ATI FireGL integrated workstation-class graphics, and the W550s, which is an Ultrabook-specification laptop with only a dual-core processor. The W-series laptops offer ISV certifications from various vendors such as Adobe Systems and Autodesk for CAD and 3D modeling software. The ThinkPad W series has been discontinued and replaced by the P series mobile workstations.

Edge Series (2010)

The Edge Series was released early in 2010 as a line of small-business and consumer-end machines. The design was a radical departure from the traditional boxy black ThinkPad design, with glossy surfaces (an optional matte finish on later models), rounded corners, and silver trim. They were also offered in red, a first for the traditionally black ThinkPads. Like the SL, this series was targeted towards small businesses and consumers, and lacked the roll cage, UltraBay, lid latch, and ThinkLight of traditional ThinkPads (though the 2011 E220s and E420s had ThinkLights). The series also introduced an island-style keyboard with a significantly different layout. Models included 13.3" (Edge 13), 14" (Edge 14), and 15.6" (Edge 15) sizes. An 11.6" (Edge 11) model was offered, but not available in the United States. The latest models of the E series may be offered with Edge branding, but this naming is optional and uncommon.

S Series (2012–2014)

The S Series was positioned as a mid-range ThinkPad offering, containing ultrabooks derived from the Edge Series. As of August 2013, the S Series included the S531 and S440 models; their cases were made of aluminum and magnesium alloy, available in silver and gunmetal colors.

ThinkPad Twist (2012)

The Lenovo ThinkPad Twist (S230u) is a laptop/tablet computer hybrid aimed at high-end users. The Twist gets its name from its screen's ability to twist in a manner that converts the device into a tablet. The Twist has a 12.5" screen and makes use of Intel's Core i7 processor and SSD technology in lieu of a hard drive. In a review for Engadget, Dana Wollman wrote, "Lately, we feel like all of our reviews of Windows 8 convertibles end the same way. The ThinkPad Twist has plenty going for it: a bright IPS display, a good port selection, an affordable price and an unrivaled typing experience. Like ThinkPads past, it also offers some useful software features for businesses lacking dedicated IT departments. All good things, but what's a road warrior to do when the battery barely lasts four hours?
Something tells us the Twist will still appeal to Lenovo loyalists, folks who trust ThinkPad's build quality and wouldn't be caught dead using any other keyboard. If you're more brand-agnostic, though, there are other Windows 8 convertibles with comfortable keyboards – not to mention, sharper screens, faster performance and longer battery life."

ThinkPad Helix (2013–2015)

The Helix is a convertible laptop satisfying both tablet and conventional notebook users. It uses a "rip and flip" design that allows the user to detach the display and then replace it facing in a different direction. It sports an 11.6" Full HD (1920 × 1080) display, with support for Windows 8 multi-touch. As all essential processing hardware is contained in the display assembly and it has multitouch capability, the detached monitor can be used as a standalone tablet computer. The Helix's high-end hardware and build quality, including Gorilla Glass, stylus-based input, and Intel vPro hardware-based security features, are designed to appeal to business users.

In a review published in Forbes, Jason Evangelho wrote, "The first laptop I owned was a ThinkPad T20, and the next one may very likely be the ThinkPad Helix which Lenovo unveiled at CES 2013. In a sea of touch-inspired Windows 8 hardware, it's the first ultrabook convertible with a form factor that gets everything right. The first batch of Windows 8 ultrabooks get high marks for their inspired designs, but aren't quite flexible enough to truly be BYOD (Bring Your Own Device) solutions. Lenovo's own IdeaPad Yoga came close, but the sensation of feeling the keyboard underneath your fingers when transformed into tablet mode was slightly jarring. Dell's XPS 12 solved that problem with its clever rotating hinge design, but I wanted the ability to remove the tablet display entirely from both of those products."

ThinkPad 13 (2016–2017)

The ThinkPad 13 (also known as the ThinkPad S2 in mainland China) is a "budget" model with a 13-inch screen. Versions running Windows 10 and Google's Chrome OS were offered. The most powerful configuration had a Skylake Core i7 processor and a 512 GB SSD. Connectivity includes HDMI, USB 3.0, OneLink+, USB Type-C, etc. In 2017, a second-generation Ultrabook model was released with up to a Kaby Lake Core i7 processor and an FHD touchscreen available in certain countries. This lineup was merged into the L Series in 2018, with the L380 being the successor to the 13 Second Generation.

25th anniversary Retro ThinkPad (2017)

Lenovo released the 25th anniversary Retro ThinkPad 25 in October 2017. The model is based on the T470, the differences being a 7-row "Classic" keyboard with the layout found on the xx20 series, and a logo given a splash of colour reminiscent of the IBM era. The last ThinkPad models with the 7-row keyboard were introduced in 2011.

A Series (2017–2018)

In September 2017, Lenovo announced two ThinkPad models featuring AMD's PRO chipset technology: the A275 and A475. This revived the A Series nameplate, not seen since the early 2000s when ThinkPads were under IBM's ownership; however, the "A" moniker likely emphasised the use of AMD technology rather than the comparative product segment (workstation class) of the previous line. While this was not the first time Lenovo had offered an AMD-derived ThinkPad, it was the first released as an alternative premium offering to the established T Series and X Series ThinkPads, which use Intel chipsets instead.
A275 and A475

The A275 is a 12.5" ultraportable based on the Intel-derived X270 model. Weighing in at 2.9 pounds (1.31 kg), this model features AMD Carrizo or Bristol Ridge APUs, AMD Radeon R7 graphics and AMD DASH (Desktop and mobile Architecture for System Hardware) for enterprise computing.

The A475 is a 14" mainstream portable computer based on the Intel-derived T470 model. Weighing in at 3.48 pounds (1.57 kg), like the A275 it features AMD Carrizo or Bristol Ridge APUs, AMD Radeon R7 graphics and AMD DASH for enterprise computing.

A285 and A485

The A285 is a 12.5" laptop which is an upgraded version of the A275. This model utilizes an AMD Raven Ridge APU with integrated Vega graphics, specifically the Ryzen 5 Pro 2500U. The laptop also contains a discrete Trusted Platform Module (dTPM) for data encryption and password protection, supporting TPM 2.0. Optional security features include a fingerprint scanner and a smart card reader. The display's native resolution depends on the configuration.

The A485 is a 14" laptop which is an upgraded version of the A475. This model utilizes AMD's Raven Ridge APUs with integrated Vega graphics and, unlike the A285, can use multiple models of Raven Ridge APU. The laptop also contains a discrete Trusted Platform Module (dTPM) for data encryption and password protection, supporting TPM 2.0. Optional security features include a fingerprint scanner and a smart card reader. The display's native resolution depends on the configuration.

Rivals of ThinkPad

Many companies produce laptops similar to the Lenovo/IBM ThinkPads, targeting the same market audience. These laptops often offer features similar to those of ThinkPad computers, such as a pointing stick or active hard drive protection. The ThinkPad series' main rivals have long been the Dell Latitude and HP EliteBook laptops.
Dell:
Dell Latitude 7xxx: Rivals ThinkPad T and X series
Dell Latitude 5xxx: Rivals ThinkPad L and E series
Dell Latitude 3xxx: Rivals ThinkPad E series
Dell Vostro 3xxx: Rivals ThinkPad E series
Dell XPS 9xxx: Indirectly rivals ThinkPad X1 series
Dell Precision 7xxx: Rivals ThinkPad P1 series

HP:
HP EliteBook 6xx: Rivals ThinkPad L series
HP EliteBook 8xx: Rivals ThinkPad T and X series
HP EliteBook 1040: Rivals ThinkPad X1 Carbon
HP Elite Dragonfly: Rivals ThinkPad X1 Nano
HP ZBook Firefly: Rivals ThinkPad P14, P15 and T15p
HP ZBook Power: Indirectly rivals ThinkPad P15 and P1
HP ZBook Studio: Rivals ThinkPad P1
HP ZBook Fury: Rivals ThinkPad P15
HP ProBook 4xx and 6xx: Rivals ThinkPad L and E series

Acer:
Acer TravelMate P6: Rivals ThinkPad X1 Carbon
Acer TravelMate P4: Rivals ThinkPad T14 and T14s
Acer TravelMate P2: Rivals ThinkPad L series
Acer TravelMate Spin B3: Rivals ThinkPad 11e Yoga
Acer Swift 7: Indirectly rivals ThinkPad X1 Nano

Fujitsu:
Fujitsu LifeBook U9xxx: Rivals ThinkPad X1 series
Fujitsu LifeBook U7xxx: Rivals ThinkPad T and X series
Fujitsu LifeBook U5xxx: Rivals ThinkPad L series
Fujitsu LifeBook U3xxx: Rivals ThinkPad E series

Dynabook (formerly Toshiba):
Dynabook Portégé: Rivals ThinkPad X series
Dynabook Tecra Xxx: Rivals ThinkPad T series

VAIO (formerly made by Sony):
VAIO Z: Rivals ThinkPad X1 Carbon and T14s
VAIO SX: Indirectly rivals ThinkPad L13

Apple:
Apple MacBook Pro: Indirectly rivals ThinkPad X1 Extreme, X1 Carbon, Z series and T14s
Apple MacBook Air: Indirectly rivals ThinkPad Z series, X1 Nano and X1 Carbon

Asus:
ASUSPRO Pxxx: Rivals ThinkPad T series
Asus Zenbook: Rivals ThinkPad X1 series
Asus ExpertBook: Rivals ThinkPad L and E series

Microsoft:
Microsoft Surface Pro: Rivals ThinkPad X1 Tablet
Microsoft Surface Laptop: Indirectly rivals ThinkPad X1 Carbon and X1 Nano

Huawei:
Huawei MateBook X Pro: Rivals ThinkPad X1 Carbon
Huawei MateBook X: Indirectly rivals ThinkPad X1 Nano

See also
ThinkBook
IBM/Lenovo ThinkCentre and ThinkStation desktops
List of IBM products
HP EliteBook
Dell Latitude and Precision
Fujitsu Lifebook and Celsius
Acer TravelMate

External links
ThinkPad models on ThinkWiki
Withdrawn models
Specs
BitTorrent
BitTorrent is a communication protocol for peer-to-peer file sharing (P2P), which enables users to distribute data and electronic files over the Internet in a decentralized manner. To send or receive files, a person uses a BitTorrent client on their Internet-connected computer. A BitTorrent client is a computer program that implements the BitTorrent protocol. BitTorrent clients are available for a variety of computing platforms and operating systems, including an official client released by BitTorrent, Inc. Popular clients include μTorrent, Xunlei Thunder, Transmission, qBittorrent, Vuze, Deluge, BitComet and Tixati. BitTorrent trackers provide a list of files available for transfer and allow the client to find peer users, known as "seeds", who may transfer the files.

Programmer Bram Cohen designed the protocol in April 2001, and released the first available version on 2 July 2001. On 15 May 2017, BitTorrent, Inc. (later renamed Rainberry, Inc.) released the BitTorrent v2 protocol specification. libtorrent was updated to support the new version on 6 September 2020.

BitTorrent is one of the most common protocols for transferring large files, such as digital video files containing TV shows and video clips, or digital audio files containing songs. At its peak, BitTorrent was responsible for 3.35% of all worldwide bandwidth—more than half of the 6% of total bandwidth dedicated to file sharing. In 2019, BitTorrent was a dominant file-sharing protocol and generated a substantial amount of Internet traffic, with 2.46% of downstream and 27.58% of upstream traffic. BitTorrent has been measured at 15–27 million concurrent users at any time, and has been utilized by 150 million active users; based on this figure, the total number of monthly users may be estimated at more than a quarter of a billion (≈ 250 million).

The use of BitTorrent may sometimes be limited by Internet Service Providers (ISPs), on legal or copyright grounds. Users may choose to run seedboxes or Virtual Private Networks (VPNs) to circumvent these restrictions.

History

Programmer Bram Cohen, a University at Buffalo alumnus, designed the protocol in April 2001, and released the first available version on 2 July 2001. The first release of the BitTorrent client had no search engine and no peer exchange. Up until 2005, the only way to share files was by creating a small text file called a "torrent", which the uploader would post to a torrent index site. The first uploader acted as a seed, and downloaders would initially connect as peers. Those who wished to download the file would download the torrent, which their client would use to connect to a tracker that had a list of the IP addresses of other seeds and peers in the swarm. Once a peer completed a download of the complete file, it could in turn function as a seed. Torrent files contain metadata about the files to be shared and the trackers which keep track of the other seeds and peers.

In 2005, first Vuze and then the BitTorrent client introduced distributed tracking using distributed hash tables, which allowed clients to exchange data on swarms directly without the need for a torrent file. In 2006, peer exchange functionality was added, allowing clients to add peers based on the data found on connected nodes.

BitTorrent v2 is intended to work seamlessly with previous versions of the BitTorrent protocol. The main reason for the update was that the old cryptographic hash function, SHA-1, is no longer considered safe from malicious attacks by the developers, and as such, v2 uses SHA-256.
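As a rough illustration of what that change of hash function means in practice, the sketch below (Python; illustrative only, not taken from the specification) bencodes a toy single-file info dictionary and derives a v1-style SHA-1 infohash and a v2-style SHA-256 hash from the same bytes. A real v2 info dictionary is structured differently (it carries a file tree with Merkle roots), so all field values here are placeholders.

import hashlib

def bencode(value):
    # Minimal bencoder for the four types used in torrent metadata.
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(value.items())  # keys are byte strings, sorted in raw byte order
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError("unsupported type")

info = {
    b"name": b"example.iso",   # placeholder file name
    b"length": 1048576,        # placeholder file size
    b"piece length": 262144,
    b"pieces": b"\x00" * 80,   # placeholder: 4 concatenated 20-byte piece hashes
}

encoded = bencode(info)
v1_infohash = hashlib.sha1(encoded).hexdigest()    # what v1 swarms key on
v2_infohash = hashlib.sha256(encoded).hexdigest()  # v2 moved to SHA-256
print(v1_infohash, v2_infohash)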
To ensure backwards compatibility, the v2 .torrent file format supports a hybrid mode where the torrents are hashed through both the new method and the old method, with the intent that the files will be shared with peers on both v1 and v2 swarms. Another update to the specification is the addition of a hash tree to speed up the time from adding a torrent to downloading files, and to allow more granular checks for file corruption. In addition, each file is now hashed individually, enabling files in the swarm to be deduplicated, so that if multiple torrents include the same files, but seeders are only seeding the file from some, downloaders of the other torrents can still download the file. Magnet links for v2 also support a hybrid mode to ensure support for legacy clients.

Design

The BitTorrent protocol can be used to reduce the server and network impact of distributing large files. Rather than downloading a file from a single source server, the BitTorrent protocol allows users to join a "swarm" of hosts to upload and download from each other simultaneously. The protocol is an alternative to the older single-source, multiple-mirror technique for distributing data, and can work effectively over networks with lower bandwidth. Using the BitTorrent protocol, several basic computers, such as home computers, can replace large servers while efficiently distributing files to many recipients. This lower bandwidth usage also helps prevent large spikes in internet traffic in a given area, keeping internet speeds higher for all users in general, regardless of whether or not they use the BitTorrent protocol.

The file being distributed is divided into segments called pieces. As each peer receives a new piece of the file, it becomes a source (of that piece) for other peers, relieving the original seed from having to send that piece to every computer or user wishing a copy. With BitTorrent, the task of distributing the file is shared by those who want it; it is entirely possible for the seed to send only a single copy of the file itself, with the file eventually distributed to an unlimited number of peers.

Each piece is protected by a cryptographic hash contained in the torrent descriptor. This ensures that any modification of the piece can be reliably detected, and thus prevents both accidental and malicious modifications of any of the pieces received at other nodes. If a node starts with an authentic copy of the torrent descriptor, it can verify the authenticity of the entire file it receives.

Pieces are typically downloaded non-sequentially, and are rearranged into the correct order by the BitTorrent client, which monitors which pieces it needs, and which pieces it has and can upload to other peers. Pieces are of the same size throughout a single download (for example, a 10 MB file may be transmitted as ten 1 MB pieces or as forty 256 KB pieces). Due to the nature of this approach, the download of any file can be halted at any time and resumed at a later date without the loss of previously downloaded information, which in turn makes BitTorrent particularly useful in the transfer of larger files. This also enables the client to seek out readily available pieces and download them immediately, rather than halting the download and waiting for the next (and possibly unavailable) piece in line, which typically reduces the overall duration of the download. The eventual transition of peers from downloaders to seeders determines the overall "health" of the file (as determined by the number of times a file is available in its complete form).
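A minimal sketch of the piece mechanism just described, assuming a v1-style torrent whose descriptor stores one SHA-1 hash per fixed-size piece (the piece size and data below are invented for illustration):

import hashlib

PIECE_LENGTH = 256 * 1024  # 256 KB pieces, one common choice

def piece_hashes(data, piece_length=PIECE_LENGTH):
    # The per-piece hashes that would be recorded in the torrent descriptor.
    return [hashlib.sha1(data[i:i + piece_length]).digest()
            for i in range(0, len(data), piece_length)]

def verify_piece(index, received, hashes):
    # A downloader accepts a piece only if it re-hashes to the recorded value.
    return hashlib.sha1(received).digest() == hashes[index]

original = b"x" * (PIECE_LENGTH * 3 + 1000)  # a fake file, not a whole number of pieces
hashes = piece_hashes(original)              # these would live in the .torrent

good = original[PIECE_LENGTH:2 * PIECE_LENGTH]  # piece 1 as sent by an honest peer
bad = b"y" + good[1:]                           # the same piece, corrupted in transit
assert verify_piece(1, good, hashes)
assert not verify_piece(1, bad, hashes)

Because each piece is checked independently, a corrupted piece is simply discarded and re-requested; it never poisons the rest of the download.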
The distributed nature of BitTorrent can lead to a flood-like spreading of a file throughout many peer computer nodes. As more peers join the swarm, the likelihood of a successful download by any particular node increases. Relative to traditional Internet distribution schemes, this permits a significant reduction in the original distributor's hardware and bandwidth resource costs. Distributed downloading protocols in general provide redundancy against system problems and reduce dependence on the original distributor; because sources for the file are generally transient, there is no single point of failure as in one-way server–client transfers.

Though both ultimately transfer files over a network, a BitTorrent download differs from a one-way server–client download (as is typical with an HTTP or FTP request, for example) in several fundamental ways:
BitTorrent makes many small data requests over different IP connections to different machines, while server–client downloading is typically made via a single TCP connection to a single machine.
BitTorrent downloads in a random or "rarest-first" order that ensures high availability, while classic downloads are sequential.

Taken together, these differences allow BitTorrent to achieve much lower cost to the content provider, much higher redundancy, and much greater resistance to abuse or to "flash crowds" than regular server software. However, this protection, theoretically, comes at a cost: downloads can take time to rise to full speed because it may take time for enough peer connections to be established, and it may take time for a node to receive sufficient data to become an effective uploader. This contrasts with regular downloads (such as from an HTTP server, for example) that, while more vulnerable to overload and abuse, rise to full speed very quickly and maintain this speed throughout.

In the beginning, BitTorrent's non-contiguous download methods made it harder to support "streaming playback". In 2014, the client Popcorn Time allowed for streaming of BitTorrent video files. Since then, more and more clients have offered streaming options.

Searching

The BitTorrent protocol provides no way to index torrent files. As a result, a comparatively small number of websites have hosted a large majority of torrents, many linking to copyrighted works without the authorization of copyright holders, rendering those sites especially vulnerable to lawsuits. A BitTorrent index is a "list of .torrent files, which typically includes descriptions" and information about the torrent's content. Several types of websites support the discovery and distribution of data on the BitTorrent network.

Public torrent-hosting sites such as The Pirate Bay allow users to search and download from their collection of torrent files. Users can typically also upload torrent files for content they wish to distribute. Often, these sites also run BitTorrent trackers for their hosted torrent files, but these two functions are not mutually dependent: a torrent file could be hosted on one site and tracked by another, unrelated site. Private host/tracker sites operate like public ones, except that they may restrict access to registered users and may also keep track of the amount of data each user uploads and downloads, in an attempt to reduce "leeching".

Web search engines allow the discovery of torrent files that are hosted and tracked on other sites; examples include The Pirate Bay and BTDigg.
These sites allow the user to ask for content meeting specific criteria (such as containing a given word or phrase) and retrieve a list of links to torrent files matching those criteria. This list can often be sorted with respect to several criteria, relevance (the seeder-to-leecher ratio) being one of the most popular and useful; due to the way the protocol behaves, the download bandwidth achievable is very sensitive to this value. Metasearch engines allow one to search several BitTorrent indices and search engines at once.

The Tribler BitTorrent client was among the first to incorporate built-in search capabilities. With Tribler, users can find .torrent files held by random peers and taste buddies. It adds this ability to the BitTorrent protocol using a gossip protocol, somewhat similar to the eXeem network, which was shut down in 2005. The software also includes the ability to recommend content: after a dozen downloads, the Tribler software can roughly estimate the download taste of the user and recommend additional content.

In May 2007, researchers at Cornell University published a paper proposing a new approach to searching a peer-to-peer network for inexact strings, which could replace the functionality of a central indexing site. A year later, the same team implemented the system as a plugin for Vuze called Cubit, and published a follow-up paper reporting its success.

A somewhat similar facility, with a slightly different approach, is provided by the BitComet client through its "Torrent Exchange" feature. Whenever two peers using BitComet (with Torrent Exchange enabled) connect to each other, they exchange lists of all the torrents (name and info-hash) they have in their Torrent Share storage (torrent files which were previously downloaded and for which the user chose to enable sharing by Torrent Exchange). Each client thus builds up a list of all the torrents shared by the peers it connected to in the current session (or it can even maintain the list between sessions if instructed). At any time the user can search that Torrent Collection list for a certain torrent and sort the list by categories. When the user chooses to download a torrent from that list, the .torrent file is automatically searched for (by info-hash value) in the DHT network and, when found, downloaded by the querying client, which can then create and initiate a downloading task.

Downloading and sharing

Users find a torrent of interest on a torrent index site, or by using a search engine built into the client, download it, and open it with a BitTorrent client. The client connects to the tracker(s) or seeds specified in the torrent file, from which it receives a list of seeds and peers currently transferring pieces of the file(s). The client connects to those peers to obtain the various pieces. If the swarm contains only the initial seeder, the client connects directly to it and begins to request pieces. Clients incorporate mechanisms to optimize their download and upload rates.

The effectiveness of this data exchange depends largely on the policies that clients use to determine to whom to send data. Clients may prefer to send data to peers that send data back to them (a "tit for tat" exchange scheme), which encourages fair trading.
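A toy sketch of such a reciprocation policy (not any real client's algorithm; the peer names, rates and number of upload slots are invented): each choking round, the client unchokes the few peers that have recently uploaded to it fastest.

UPLOAD_SLOTS = 4

def choose_unchoked(download_rate_from):
    # Rank peers by how fast they sent us data in the last round.
    ranked = sorted(download_rate_from, key=download_rate_from.get, reverse=True)
    return set(ranked[:UPLOAD_SLOTS])

rates = {"peer_a": 120.0, "peer_b": 45.5, "peer_c": 0.0,
         "peer_d": 310.2, "peer_e": 87.1}
print(choose_unchoked(rates))  # the best reciprocators get our bandwidth

Note that under such a strict policy, a newcomer like peer_c, which has no pieces to offer yet, would never be unchoked at all.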
But strict policies often result in suboptimal situations, such as when newly joined peers are unable to receive any data (because they don't have any pieces yet to trade themselves), or when two peers with a good connection between them do not exchange data simply because neither of them takes the initiative. To counter these effects, the official BitTorrent client program uses a mechanism called "optimistic unchoking", whereby the client reserves a portion of its available bandwidth for sending pieces to random peers (not necessarily known good partners, so-called preferred peers), in hopes of discovering even better partners and to ensure that newcomers get a chance to join the swarm.

Although "swarming" scales well to tolerate "flash crowds" for popular content, it is less useful for unpopular or niche-market content. Peers arriving after the initial rush might find the content unavailable and need to wait for the arrival of a "seed" in order to complete their downloads. The seed arrival, in turn, may take a long time to happen (this is termed the "seeder promotion problem"). Since maintaining seeds for unpopular content entails high bandwidth and administrative costs, this runs counter to the goals of publishers that value BitTorrent as a cheap alternative to a client-server approach. This occurs on a huge scale; measurements have shown that 38% of all new torrents become unavailable within the first month. A strategy adopted by many publishers which significantly increases availability of unpopular content consists of bundling multiple files in a single swarm. More sophisticated solutions have also been proposed; generally, these use cross-torrent mechanisms through which multiple torrents can cooperate to better share content.

Creating and publishing

The peer distributing a data file treats the file as a number of identically sized pieces, usually with byte sizes of a power of 2, and typically between 32 kB and 16 MB each. The peer creates a hash for each piece, using the SHA-1 hash function, and records it in the torrent file. Pieces with sizes greater than 512 kB will reduce the size of a torrent file for a very large payload, but are claimed to reduce the efficiency of the protocol. When another peer later receives a particular piece, the hash of the piece is compared to the recorded hash to test that the piece is error-free. Peers that provide a complete file are called seeders, and the peer providing the initial copy is called the initial seeder. The exact information contained in the torrent file depends on the version of the BitTorrent protocol.

By convention, the name of a torrent file has the suffix .torrent. Torrent files have an "announce" section, which specifies the URL of the tracker, and an "info" section, containing (suggested) names for the files, their lengths, the piece length used, and a SHA-1 hash code for each piece, all of which are used by clients to verify the integrity of the data they receive. Though SHA-1 has shown signs of cryptographic weakness, Bram Cohen did not initially consider the risk big enough for a backward-incompatible change to, for example, SHA-3. As of BitTorrent v2, the hash function has been updated to SHA-256.

In the early days, torrent files were typically published to torrent index websites, and registered with at least one tracker. The tracker maintained lists of the clients currently connected to the swarm. Alternatively, in a trackerless system (decentralized tracking) every peer acts as a tracker.
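As a hedged sketch of the kind of metadata a publisher assembles (the tracker URL, filename and piece size are placeholders, and the minimal bencoder mirrors the one sketched earlier), the function below produces the bytes of a single-file v1 .torrent with exactly the "announce" and "info" sections described above:

import hashlib

def bencode(v):
    if isinstance(v, int):
        return b"i%de" % v
    if isinstance(v, bytes):
        return b"%d:%s" % (len(v), v)
    if isinstance(v, list):
        return b"l" + b"".join(bencode(x) for x in v) + b"e"
    return b"d" + b"".join(bencode(k) + bencode(val)
                           for k, val in sorted(v.items())) + b"e"

def make_torrent(path, announce, piece_length=262144):
    with open(path, "rb") as f:
        data = f.read()
    pieces = b"".join(hashlib.sha1(data[i:i + piece_length]).digest()
                      for i in range(0, len(data), piece_length))
    meta = {
        b"announce": announce.encode(),  # tracker URL
        b"info": {
            b"name": path.encode(),      # suggested file name
            b"length": len(data),
            b"piece length": piece_length,
            b"pieces": pieces,           # concatenated 20-byte SHA-1 piece hashes
        },
    }
    return bencode(meta)

# e.g. open("example.torrent", "wb").write(
#          make_torrent("example.iso", "http://tracker.example/announce"))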
Azureus was the first BitTorrent client to implement such a trackerless system, through the distributed hash table (DHT) method. An alternative and incompatible DHT system, known as Mainline DHT, was released in the Mainline BitTorrent client three weeks later (though it had been in development since 2002) and subsequently adopted by the μTorrent, Transmission, rTorrent, KTorrent, BitComet, and Deluge clients.

After the DHT was adopted, a "private" flag – analogous to the broadcast flag – was unofficially introduced, telling clients to restrict the use of decentralized tracking regardless of the user's desires. The flag is intentionally placed in the info section of the torrent so that it cannot be disabled or removed without changing the identity of the torrent. The purpose of the flag is to prevent torrents from being shared with clients that do not have access to the tracker. The flag was requested for inclusion in the official specification in August 2008, but has not been accepted yet. Clients that ignored the private flag were banned by many trackers, discouraging the practice.

Anonymity

BitTorrent does not, on its own, offer its users anonymity. One can usually see the IP addresses of all peers in a swarm in one's own client or firewall program. This may expose users with insecure systems to attacks. In some countries, copyright organizations scrape lists of peers and send takedown notices to the internet service providers of users participating in the swarms of files that are under copyright. In some jurisdictions, copyright holders may launch lawsuits against uploaders or downloaders for infringement, and police may arrest suspects in such cases.

Various means have been used to promote anonymity. For example, the BitTorrent client Tribler makes available a Tor-like onion network, optionally routing transfers through other peers to obscure which client has requested the data. The exit node would be visible to peers in a swarm, but the Tribler organization provides exit nodes. One advantage of Tribler is that clearnet torrents can be downloaded with only a small decrease in download speed from one "hop" of routing. i2p provides a similar anonymity layer, although in that case one can only download torrents that have been uploaded to the i2p network. The BitTorrent client Vuze allows users who are not concerned about anonymity to take clearnet torrents and make them available on the i2p network. Most BitTorrent clients are not designed to provide anonymity when used over Tor, and there is some debate as to whether torrenting over Tor acts as a drag on the network.

Private torrent trackers are usually invitation-only, and require members to participate in uploading, but have the downside of a single centralized point of failure. Oink's Pink Palace and What.cd are examples of private trackers which have been shut down. Seedbox services download the torrent files first to the company's servers, allowing the user to download the file directly from there. One's IP address would be visible to the seedbox provider, but not to third parties. Virtual private networks encrypt transfers and substitute a different IP address for the user's, so that anyone monitoring a torrent swarm will see only that address.

Associated technologies

Distributed trackers

On 2 May 2005, Azureus 2.3.0.0 (now known as Vuze) was released, introducing support for "trackerless" torrents through a system called the "distributed database."
This system is a distributed hash table (DHT) implementation which allows the client to use torrents that do not have a working BitTorrent tracker. Instead, just a bootstrapping server is used (router.bittorrent.com, dht.transmissionbt.com or router.utorrent.com). The following month, BitTorrent, Inc. released version 4.2.0 of the Mainline BitTorrent client, which supported an alternative DHT implementation (popularly known as "Mainline DHT", outlined in a draft on their website) that is incompatible with that of Azureus. In 2014, measurements showed concurrent users of Mainline DHT to be from 10 million to 25 million, with a daily churn of at least 10 million. Current versions of the official BitTorrent client, μTorrent, BitComet, Transmission and BitSpirit all share compatibility with Mainline DHT. Both DHT implementations are based on Kademlia. As of version 3.0.5.0, Azureus also supports Mainline DHT in addition to its own distributed database through use of an optional application plugin. This potentially allows the Azureus/Vuze client to reach a bigger swarm. Another idea that has surfaced in Vuze is that of virtual torrents. This idea is based on the distributed tracker approach and is used to describe a web resource. Currently, it is used for instant messaging. It is implemented using a special messaging protocol and requires an appropriate plugin. Anatomic P2P is another approach, which uses a decentralized network of nodes that route traffic to dynamic trackers. Most BitTorrent clients also use peer exchange (PEX) to gather peers in addition to trackers and DHT. Peer exchange checks with known peers to see if they know of any other peers. With the 3.0.5.0 release of Vuze, all major BitTorrent clients now have compatible peer exchange. Web seeding Web "seeding" was implemented in 2006 as the ability of BitTorrent clients to download torrent pieces from an HTTP source in addition to the "swarm". The advantage of this feature is that a website may distribute a torrent for a particular file or batch of files and make those files available for download from that same web server; this can simplify long-term seeding and load balancing through the use of existing, cheap web hosting setups. In theory, this would make using BitTorrent almost as easy for a web publisher as creating a direct HTTP download. In addition, it would allow the "web seed" to be disabled if the swarm becomes too popular while still allowing the file to be readily available. This feature has two distinct specifications, both of which are supported by Libtorrent and the 26+ clients that use it. The first was created by John "TheSHAD0W" Hoffman, the author of BitTornado. This first specification requires running a web service that serves content by info-hash and piece number, rather than filename. The other specification was created by the authors of GetRight and can rely on a basic HTTP download space (using byte serving). In September 2010, a new service named Burnbit was launched which generates a torrent from any URL using webseeding. There are server-side solutions that provide initial seeding of the file from the web server via the standard BitTorrent protocol and, when the number of external seeders reaches a limit, stop serving the file from the original source.
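The GetRight-style specification amounts to ordinary HTTP byte serving. The sketch below, using only the Python standard library, fetches a single piece from a web seed; the URL and numbers are placeholders.

import urllib.request

def fetch_piece(url, piece_index, piece_length):
    # Request exactly one piece's worth of bytes; a byte-serving web
    # server answers with 206 Partial Content.
    start = piece_index * piece_length
    headers = {"Range": "bytes=%d-%d" % (start, start + piece_length - 1)}
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# For example: fetch_piece("http://example.com/file.iso", 7, 262144)

The fetched bytes are then verified against the piece hash from the torrent file exactly as if they had arrived from the swarm.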
RSS feeds A technique called broadcatching combines RSS feeds with the BitTorrent protocol to create a content delivery system, further simplifying and automating content distribution. Steve Gillmor explained the concept in a column for Ziff-Davis in December 2003. The discussion spread quickly among bloggers (Ernest Miller, Chris Pirillo, etc.). In an article entitled Broadcatching with BitTorrent, Scott Raymond explained: The RSS feed will track the content, while BitTorrent ensures content integrity with cryptographic hashing of all data, so feed subscribers will receive uncorrupted content. One of the first and most popular free and open source software clients for broadcatching is Miro. Other free software clients such as PenguinTV and KatchTV also now support broadcatching. The BitTorrent web-service MoveDigital added the ability to make torrents available to any web application capable of parsing XML through its standard REST-based interface in 2006, though this has since been discontinued. Additionally, Torrenthut is developing a similar torrent API that will provide the same features, and help bring the torrent community to Web 2.0 standards. Alongside this release is the first PHP application built using the API, called PEP, which will parse any Really Simple Syndication (RSS 2.0) feed and automatically create and seed a torrent for each enclosure found in that feed.
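Mechanically, broadcatching reduces to scanning a feed for torrent enclosures on a schedule and handing each new one to a BitTorrent client. A standard-library Python sketch; the matching rules are illustrative, and real broadcatching clients also track which enclosures they have already fetched.

import urllib.request
import xml.etree.ElementTree as ET

def torrent_enclosures(feed_url):
    # Collect the URLs of torrent enclosures from an RSS 2.0 feed.
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    found = []
    for item in root.iter("item"):
        enc = item.find("enclosure")
        if enc is None:
            continue
        url = enc.get("url", "")
        if url.endswith(".torrent") or enc.get("type") == "application/x-bittorrent":
            found.append(url)
    return found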
Throttling and encryption Since BitTorrent makes up a large proportion of total traffic, some ISPs have chosen to "throttle" (slow down) BitTorrent transfers. For this reason, methods have been developed to disguise BitTorrent traffic in an attempt to thwart these efforts. Protocol header encryption (PHE) and Message stream encryption/Protocol encryption (MSE/PE) are features of some BitTorrent clients that attempt to make BitTorrent hard to detect and throttle. As of November 2015, Vuze, Bitcomet, KTorrent, Transmission, Deluge, μTorrent, MooPolice, Halite, qBittorrent, rTorrent, and the latest official BitTorrent client (v6) support MSE/PE encryption. In August 2007, Comcast was preventing BitTorrent seeding by monitoring and interfering with the communication between peers. Protection against these efforts is provided by proxying the client-tracker traffic via an encrypted tunnel to a point outside of the Comcast network. In 2008, Comcast called a "truce" with BitTorrent, Inc. with the intention of shaping traffic in a protocol-agnostic manner. Questions about the ethics and legality of Comcast's behavior have led to renewed debate about net neutrality in the United States. In general, although encryption can make it difficult to determine what is being shared, BitTorrent is vulnerable to traffic analysis. Thus, even with MSE/PE, it may be possible for an ISP to recognize BitTorrent and also to determine that a system is no longer downloading but only uploading data, and terminate its connection by injecting TCP RST (reset flag) packets. Multitrackers Another unofficial feature is an extension to the BitTorrent metadata format proposed by John Hoffman and implemented by several indexing websites. It allows the use of multiple trackers per file, so if one tracker fails, others can continue to support file transfer. It is implemented in several clients, such as BitComet, BitTornado, BitTorrent, KTorrent, Transmission, Deluge, μTorrent, rtorrent, Vuze, and Frostwire. Trackers are placed in groups, or tiers, with a tracker randomly chosen from the top tier and tried, moving to the next tier if all the trackers in the top tier fail. Torrents with multiple trackers can decrease the time it takes to download a file, but also have a few consequences: Poorly implemented clients may contact multiple trackers, leading to more overhead traffic. Torrents from closed trackers suddenly become downloadable by non-members, as they can connect to a seed via an open tracker. Peer selection BitTorrent, Inc. was working with Oversi on new Policy Discover Protocols that query the ISP for capabilities and network architecture information. Oversi's ISP-hosted NetEnhancer box is designed to "improve peer selection" by helping peers find local nodes, improving download speeds while reducing the loads into and out of the ISP's network. Implementations The BitTorrent specification is free to use and many clients are open source, so BitTorrent clients have been created for all common operating systems using a variety of programming languages. The official BitTorrent client, μTorrent, qBittorrent, Transmission, Vuze, and BitComet are some of the most popular clients. Some BitTorrent implementations such as MLDonkey and Torrentflux are designed to run as servers. For example, this can be used to centralize file sharing on a single dedicated server which users share access to on the network. Server-oriented BitTorrent implementations can also be hosted by hosting providers at co-located facilities with high-bandwidth Internet connectivity (e.g., a datacenter), which can provide dramatic speed benefits over using BitTorrent from a regular home broadband connection. Services such as ImageShack can download files on BitTorrent for the user, allowing them to download the entire file by HTTP once it is finished. The Opera web browser supports BitTorrent natively. Brave web browser ships with an extension which supports WebTorrent, a BitTorrent-like protocol based on WebRTC instead of UDP and TCP. BitLet allowed users to download torrents directly from their browser using a Java applet (until browsers removed support for Java applets). An increasing number of hardware devices are being made to support BitTorrent. These include routers and NAS devices containing BitTorrent-capable firmware like OpenWrt. Proprietary versions of the protocol which implement DRM, encryption, and authentication are found within managed clients such as Pando. Adoption A growing number of individuals and organizations are using BitTorrent to distribute their own or licensed works (e.g. indie bands distributing digital files of their new songs). Independent adopters report that BitTorrent technology reduces demands on private networking hardware and bandwidth, which is essential for non-profit groups with large amounts of internet traffic. Some uses of BitTorrent for file sharing may violate laws in some jurisdictions (see legislation section). Film, video, and music BitTorrent Inc. has obtained a number of licenses from Hollywood studios for distributing popular content from their websites. Sub Pop Records releases tracks and videos via BitTorrent Inc. to distribute its 1000+ albums. Babyshambles and The Libertines (both bands associated with Pete Doherty) have extensively used torrents to distribute hundreds of demos and live videos. US industrial rock band Nine Inch Nails frequently distributes albums via BitTorrent. Podcasting software is starting to integrate BitTorrent to help podcasters deal with the download demands of their MP3 "radio" programs. Specifically, Juice and Miro (formerly known as Democracy Player) support automatic processing of .torrent files from RSS feeds. Similarly, some BitTorrent clients, such as μTorrent, are able to process web feeds and automatically download content found within them. DGM Live purchases are provided via BitTorrent.
VODO is a service which distributes "free-to-share" movies and TV shows via BitTorrent. Broadcasters In 2008, the CBC became the first public broadcaster in North America to make a full show (Canada's Next Great Prime Minister) available for download using BitTorrent. The Norwegian Broadcasting Corporation (NRK) has since March 2008 experimented with BitTorrent distribution, available online. Only selected works in which NRK owns all royalties are published. Responses have been very positive, and NRK is planning to offer more content. The Dutch VPRO broadcasting organization released four documentaries in 2009 and 2010 under a Creative Commons license using the content distribution feature of the Mininova tracker. Cloud Service Providers Amazon's Simple Storage Service (S3) supported sharing of bucket objects via the BitTorrent protocol until April 29, 2021. As of June 13, 2020, the feature was only available in service regions launched before May 30, 2016. For existing customers, the feature was extended for an additional 12 months following the deprecation. After April 29, 2022, BitTorrent clients will no longer connect to Amazon S3. Software Blizzard Entertainment uses BitTorrent (via a proprietary client called the "Blizzard Downloader", associated with the Blizzard "BattleNet" network) to distribute content and patches for Diablo III, StarCraft II and World of Warcraft, including the games themselves. Wargaming uses BitTorrent in their popular titles World of Tanks, World of Warships and World of Warplanes to distribute game updates. CCP Games, maker of the space simulation MMORPG Eve Online, has announced that a new launcher will be released that is based on BitTorrent. Many software games, especially those whose large size makes them difficult to host due to bandwidth limits, extremely frequent downloads, and unpredictable changes in network traffic, instead distribute a specialized, stripped-down BitTorrent client with enough functionality to download the game from the other running clients and the primary server (which is maintained in case not enough peers are available). Many major open source and free software projects encourage BitTorrent as well as conventional downloads of their products (via HTTP, FTP etc.) to increase availability and to reduce load on their own servers, especially when dealing with larger files. Resilio Sync is a BitTorrent-based folder-syncing tool which can act as an alternative to server-based synchronisation services such as Dropbox. Government The British government used BitTorrent to distribute details about how the tax money of British citizens was spent. Education Florida State University uses BitTorrent to distribute large scientific data sets to its researchers. Many universities that have BOINC distributed computing projects have used the BitTorrent functionality of the client-server system to reduce the bandwidth costs of distributing the client-side applications used to process the scientific data. If a BOINC distributed computing application needs to be updated (or merely sent to a user), it can do so with little impact on the BOINC server. The developing Human Connectome Project uses BitTorrent to share its open dataset. Academic Torrents is a BitTorrent tracker for use by researchers in fields that need to share large datasets. Others Facebook uses BitTorrent to distribute updates to Facebook servers. Twitter uses BitTorrent to distribute updates to Twitter servers.
The Internet Archive added BitTorrent to its file download options for over 1.3 million existing files, and all newly uploaded files, in August 2012. This method is the fastest means of downloading media from the Archive. By early 2015, AT&T estimated that BitTorrent accounted for 20% of all broadband traffic. Routers that use network address translation (NAT) must maintain tables of source and destination IP addresses and ports. Because BitTorrent frequently contacts 20–30 servers per second, the NAT tables of some consumer-grade routers are rapidly filled. This is a known cause of some home routers ceasing to work correctly. Legislation Although the protocol itself is legal, problems stem from using the protocol to traffic copyright-infringing works, since BitTorrent is often used to download otherwise paid content, such as movies and video games. There has been much controversy over the use of BitTorrent trackers. BitTorrent metafiles themselves do not store file contents. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted works without the authorization of copyright holders is controversial. Various jurisdictions have pursued legal action against websites that host BitTorrent trackers. High-profile examples include the closing of Suprnova.org, TorrentSpy, LokiTorrent, BTJunkie, Mininova, Oink's Pink Palace and What.cd. The BitTorrent search engine The Pirate Bay, formed by a Swedish group, is noted for the "legal" section of its website in which letters and replies on the subject of alleged copyright infringements are publicly displayed. On 31 May 2006, The Pirate Bay's servers in Sweden were raided by Swedish police on allegations by the MPAA of copyright infringement; however, the tracker was up and running again three days later. In the study used to value NBC Universal in its merger with Comcast, Envisional examined the 10,000 torrent swarms managed by PublicBT which had the most active downloaders. After excluding pornographic and unidentifiable content, it was found that only one swarm offered legitimate content. In the United States, more than 200,000 lawsuits have been filed for copyright infringement on BitTorrent since 2010. In the United Kingdom, on 30 April 2012, the High Court of Justice ordered five ISPs to block The Pirate Bay. Security One concern is the UDP flood attack. BitTorrent implementations often use μTP for their communication. To achieve high bandwidths, the underlying protocol used is UDP, which allows spoofing of source addresses of internet traffic. It has been possible to carry out denial-of-service attacks in a P2P lab environment, where users running BitTorrent clients act as amplifiers for an attack at another service. However, this is not always an effective attack because ISPs can check if the source address is correct. Several studies of BitTorrent have found files containing malware available for download. In particular, one small sample indicated that 18% of all executable programs available for download contained malware. Another study claims that as much as 14.5% of BitTorrent downloads contain zero-day malware, and that BitTorrent was used as the distribution mechanism for 47% of all zero-day malware they have found.
See also Anonymous P2P Anti-Counterfeiting Trade Agreement Bencode Cache Discovery Protocol Comparison of BitTorrent clients Comparison of BitTorrent sites Comparison of BitTorrent tracker software Glossary of BitTorrent terms Magnet URI scheme Simple file verification Super-seeding Torrent poisoning References Further reading External links Specification Unofficial BitTorrent Protocol Specification v1.0 at wiki.theory.org Unofficial BitTorrent Location-aware Protocol 1.0 Specification at wiki.theory.org Application layer protocols Computer-related introductions in 2001 File sharing Web 2.0
239365
https://en.wikipedia.org/wiki/David%20Chaum
David Chaum
David Lee Chaum (born 1955) is an American computer scientist and cryptographer. He is known as a pioneer in cryptography and privacy-preserving technologies, and widely recognized as the inventor of digital cash. His 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups" is the first known proposal for a blockchain protocol. Complete with the code to implement the protocol, Chaum's dissertation proposed all but one element of the blockchain later detailed in the Bitcoin whitepaper. He has been referred to as "the father of online anonymity". He is also known for developing ecash, an electronic cash application that aims to preserve a user's anonymity, and inventing many cryptographic protocols like the blind signature, mix networks and the dining cryptographers protocol. In 1995, his company DigiCash created the first digital currency with eCash. His 1981 paper, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms", laid the groundwork for the field of anonymous communications research. Life and career Chaum is Jewish and was born to a Jewish family in Los Angeles. He gained a doctorate in computer science from the University of California, Berkeley in 1982. Also that year, he founded the International Association for Cryptologic Research (IACR), which currently organizes academic conferences in cryptography research. Subsequently, he taught at the New York University Graduate School of Business Administration and at the University of California, Santa Barbara (UCSB). He also formed a cryptography research group at CWI, the Dutch National Research Institute for Mathematics and Computer Science in Amsterdam. He founded DigiCash, an electronic cash company, in 1990. Chaum received the Information Technology European Award for 1995. In 2004, he was named an IACR Fellow. In 2010, he received the RSA Award for Excellence in Mathematics during the RSA Conference. In 2019, he was awarded the honorary title of Dijkstra Fellow by CWI. He received an honorary doctorate from the University of Lugano in 2021. Chaum resides in Sherman Oaks, Los Angeles. Notable research contributions Vault systems As credited in Alan Sherman's "On the Origins and Variations of Blockchain Technologies", Chaum's 1982 Berkeley dissertation proposed every element of the blockchain found in Bitcoin except proof of work. The proposed vault system lays out a plan for achieving consensus state between nodes, chaining the history of consensus in blocks, and immutably time-stamping the chained data. The paper also lays out the specific code to implement such a protocol. Digital cash Chaum is credited as the inventor of secure digital cash for his 1983 paper, which also introduced the cryptographic primitive of a blind signature. These ideas have been described as the technical roots of the vision of the Cypherpunk movement that began in the late 1980s. Chaum's proposal allowed users to obtain digital currency from a bank and spend it in a manner that is untraceable by the bank or any other party. In 1988, he extended this idea (with Amos Fiat and Moni Naor) to allow offline transactions that enable detection of double-spending. In 1990, he founded DigiCash, an electronic cash company, in Amsterdam to commercialize the ideas in his research. The first electronic payment was sent in 1994. In 1999, Chaum left the company. New types of digital signatures In the same 1982 paper that proposed digital cash, Chaum introduced blind signatures. This form of digital signature blinds the content of a message before it is signed, so that the signer cannot determine the content. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature.
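Chaum's RSA-based blinding can be shown in a few lines. The following is a textbook sketch with toy parameters and no padding, for illustration only (Python 3.8+ for the modular inverses); production schemes hash the message and use full-size keys.

import math

p, q, e = 61, 53, 17                   # toy key; real moduli are far larger
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # signer's private exponent

m = 42                                 # the message to be signed blindly
r = 99                                 # user's random blinding factor
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n       # the signer sees only this value
blind_sig = pow(blinded, d, n)         # signer signs without learning m
sig = (blind_sig * pow(r, -1, n)) % n  # user removes the blinding

assert sig == pow(m, d, n)             # identical to an ordinary signature on m
assert pow(sig, e, n) == m             # and it verifies like one

The unblinding works because (m · r^e)^d = m^d · r (mod n), so multiplying by the inverse of r leaves the plain signature m^d.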
In 1989, Chaum (with Hans van Antwerpen) introduced undeniable signatures. This form of digital signature uses a verification process that is interactive, so that the signatory can limit who can verify the signature. Since signers may refuse to participate in the verification process, signatures are considered valid unless a signer specifically uses a disavowal protocol to prove that a given signature was not authentic. In 1991, he (with Eugene van Heyst) introduced group signatures, which allow a member of a group to anonymously sign a message on behalf of the entire group. However, an appointed group manager holds the power to revoke the anonymity of any signer in the case of disputes. Anonymous communication In 1981, Chaum proposed the idea of an anonymous communication network in a paper. His proposal, called mix networks, allows a group of senders to submit an encryption of a message and its recipient to a server. Once the server has a batch of messages, it will reorder and obfuscate the messages so that only this server knows which message came from which sender. The batch is then forwarded to another server that does the same. Eventually, the messages reach the final server where they are fully decrypted and delivered to the recipient. A mechanism to allow return messages is also proposed. Mix networks are the basis of some remailers and are the conceptual ancestor to modern anonymous web browsing tools like Tor (based on onion routing). Chaum has advocated that every router be made, effectively, a Tor node. In 1988, Chaum introduced a different type of anonymous communication system called a DC-Net, which is a solution to his proposed Dining Cryptographers Problem. DC-Nets are the basis of the software tool Dissent.
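The layering at the heart of a mix network can be sketched briefly. Real mixes encrypt each layer to a different server's public key and batch and reorder messages; the simplification below uses shared symmetric keys from the third-party cryptography package purely to show how layers are added and peeled.

from cryptography.fernet import Fernet  # pip install cryptography

mix_keys = [Fernet.generate_key() for _ in range(3)]  # one key per mix server

def wrap(message, keys):
    # Encrypt for the last mix first: the first mix can peel only the
    # outermost layer, learning the next hop but not the payload.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def traverse(onion, keys):
    # Each mix strips one layer before forwarding the batch onward.
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

assert traverse(wrap(b"hello", mix_keys), mix_keys) == b"hello"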
Trustworthy voting systems Chaum has made numerous contributions to secure voting systems, including the first proposal of a system that is end-to-end verifiable. This proposal, made in 1981, was given as an application of mix networks. In this system, the individual ballots of voters were kept private, while anyone could verify that the tally was counted correctly. This, and other early cryptographic voting systems, assumed that voters could reliably compute values with their personal computers. In 1991, Chaum introduced SureVote, which allowed voters to cast a ballot from an untrustworthy voting system, proposing a process now called "code voting" and used in remote voting systems like Remotegrity and DEMOS. In 1994, Chaum introduced the first in-person voting system in which voters cast ballots electronically at a polling station and cryptographically verify that the DRE did not modify their vote (or even learn what it was). In the following years, Chaum proposed (often with others) a series of cryptographically verifiable voting systems that use conventional paper ballots: Pret a Voter, Punchscan, and Scantegrity. The city of Takoma Park, Maryland used Scantegrity for its November 2009 election. This was the first time a public sector election was run using any cryptographically verifiable voting system. In 2011, Chaum proposed Random Sample Elections. This electoral system allows a verifiably random selection of voters, who can maintain their anonymity, to cast votes on behalf of the entire electorate. Near Eye Display A near-eye display patent application authored by David Chaum has been updated. "PERSPECTIVA - All styles of eyeglasses can be upgraded to overlay, anywhere you can see through them, digital imagery that is of unbeatable quality." "Invented then founded and led an effort that has demonstrated feasibility of a new paradigm for delivering light that digitally deconstructs images so that they can be reconstructed on the retina with dynamic focus and exquisite clarity." Other contributions In 1979, Chaum proposed a mechanism for splitting a key into partial keys, a predecessor to secret sharing. In 1985, Chaum proposed the original anonymous credential system, which is sometimes also referred to as a pseudonym system. This stems from the fact that the credentials of such a system are obtained from and shown to organizations using different pseudonyms which cannot be linked. In 1988, Chaum with Gilles Brassard and Claude Crepeau published a paper that introduced zero-knowledge arguments, as well as a security model using information-theoretic private channels, and also first formalized the concept of a commitment scheme. In 1991, with Torben Pedersen, he demonstrated a well-cited zero-knowledge proof of a DDH tuple. This proof is particularly useful as it can prove proper reencryption of an ElGamal ciphertext. Chaum contributed to an important commitment scheme which is often attributed to Pedersen. In fact, Pedersen, in his 1991 paper, cites a rump session talk on an unpublished paper by Jurjen Bos and Chaum for the scheme. It appeared even earlier in a paper by Chaum, Damgard, and Jeroen van de Graaf. In 1993, with Stefan Brands, Chaum introduced the concept of a distance-bounding protocol.
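The commitment scheme mentioned above fits in a few lines. A toy sketch over a small prime-order subgroup; real deployments use large groups or elliptic curves, and the scheme assumes no party knows the discrete log of h with respect to g.

p, q = 1019, 509        # q is a prime divisor of p - 1
g, h = 4, 9             # two generators of the order-q subgroup

def commit(m, r):
    # C = g^m * h^r mod p: binding under the discrete-log assumption,
    # hiding because the random value r masks the message m.
    return (pow(g, m, p) * pow(h, r, p)) % p

c1 = commit(123, 77)
c2 = commit(123, 500)   # same message, fresh randomness
assert c1 != c2         # the two commitments cannot be linked

Opening a commitment later means revealing (m, r) so that anyone can recompute C and check it.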
Bibliography Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms, 1981 Advances in Cryptology: Proceedings of Crypto 82, 1983 Advances in Cryptology: Proceedings of Crypto 83, 1984 David Chaum, Amos Fiat and Moni Naor, Untraceable Electronic Cash David Lee Chaum, Computer Systems Established, Maintained and Trusted by Mutually Suspicious Groups, University of California, Berkeley, 1982 David Chaum, Towards Trustworthy Elections, Springer-Verlag Berlin and Heidelberg GmbH & Co. K, 2010 How to issue a central bank digital currency (working paper), 2021 See also ecash Blind signature Group signature Undeniable signature Mix network Dining cryptographers protocol Anonymous remailer End-to-end auditable voting systems Punchscan Scantegrity Digital credential References Further reading Chaum, D. (1992). "Achieving Electronic Privacy," Scientific American, August 1992, p. 96-101. External links Home page Punchscan Homepage David Chaum research papers Living people Modern cryptographers American computer scientists Financial cryptography Election people Haas School of Business alumni 1955 births International Association for Cryptologic Research fellows Jewish American scientists 21st-century American Jews People associated with cryptocurrency
239668
https://en.wikipedia.org/wiki/Substitution
Substitution
Substitution may refer to: Arts and media Chord substitution, in music, swapping one chord for a related one within a chord progression Substitution (poetry), a variation in poetic scansion "Substitution" (song), a 2009 song by Silversun Pickups Substitution (theatre), an acting methodology Tritone substitution, in music, reinterpreting a chord via a new root note located an augmented fourth or diminished fifth distant from the root of the original interpretation Science and mathematics Biology and chemistry Base-pair substitution or point mutation, a type of mutation Substitution reaction, where a functional group in a chemical compound is replaced by another group Substitution, a process in which an allele arises and undergoes fixation Mathematics and computing Substitution (algebra), replacing occurrences of some symbol by a given value Substitution (logic), a syntactic transformation on strings of symbols of a formal language String substitution, a mapping of letters in an alphabet to languages Substitution cipher, a method of encryption Integration by substitution, a method for finding antiderivatives and integrals Other uses in science Substitution (economics), switching between alternative consumable goods as their relative prices change Attribute substitution, a psychological process thought to underlie a number of cognitive biases and perceptual illusions Substitution method, a method of measuring the transmission loss of an optical fiber Other uses Substitution (law), the replacement of a judge Substitution (sport), where a sports team is able to change one player for another during a match Within Wikipedia Help:Substitution, help performing substitution on Wikipedia pages Special:ExpandTemplates, page that shows what will result from substitution Wikipedia:Substitution, where, when, how, and what about using substitution on Wikipedia See also Import substitution industrialization, a trade and economic policy Penal substitution, a theory of the atonement within Christian theology Simultaneous substitution, a practice requiring Canadian television distribution companies to substitute a non-local station signal with the local signal Substituent, an atom or group of atoms Substitute (disambiguation) Substitution therapy or opiate replacement therapy
239970
https://en.wikipedia.org/wiki/WASTE
WASTE
WASTE is a peer-to-peer and friend-to-friend protocol and software application developed by Justin Frankel at Nullsoft in 2003 that features instant messaging, chat rooms, and file browsing/sharing capabilities. The name WASTE is a reference to Thomas Pynchon's novel The Crying of Lot 49. In the novel, W.A.S.T.E. is (among other things) an underground postal service. In 2003, less than 24 hours after its release, WASTE was removed from distribution by AOL, Nullsoft's parent company. The original page was replaced with a statement claiming that the posting of the software was unauthorized and that no lawful rights to it were held by anyone who had downloaded it, in spite of the original claim that the software was released under the terms of the GNU General Public License. Several developers have modified and upgraded the WASTE client and protocol. The SourceForge edition is considered by many to be the official development branch, but there are several forks. Description WASTE is a decentralized chat, instant messaging and file sharing program and protocol. It behaves similarly to a virtual private network by connecting to a group of trusted computers, as determined by the users. This kind of network is commonly referred to as a darknet. It uses strong encryption to ensure that third parties cannot decipher the messages being transferred. The same encryption is used to transmit and receive instant messages, chat, and files, maintain the connection, and browse and search. WASTE networks WASTE networks are decentralized (see social networks), meaning there is no central hub or server that everyone connects to. Peers must connect to each other individually. Normally, this is accomplished by having individuals share their RSA public keys, ensuring that their computers are accessible via the appropriate ports (one or more parties must have an IP address and port that can be reached by the other), and entering the IP address and port of someone on the network to connect to. Once connected to the network, public keys are automatically exchanged amongst members (provided enough of the members are set to forward and accept public keys), and nodes will then attempt to connect to each other, strengthening the network (decreasing the odds that any one node going down will collapse or shut out any part of the network), as well as increasing the number of possible routes from any given point to any other point, decreasing latency and bandwidth required for communication and file transfer. Since WASTE connects small, private groups rather than large, public ones, the network search feature is one of the fastest of all the decentralized P2P applications. Its instant messaging and file sharing capabilities are much closer to those of AOL Instant Messenger than more typical file sharing programs. Members of the network can create private and public chat rooms, instant message each other, browse each other's files, and trade files, including the pushing or active sending of files by hosts, as well as the more common downloading by users. Simple drag-and-drop to chat boxes will send files to their intended destinations. The suggested size for a WASTE network (referred to as a mesh by users) is 10–50 nodes, though it has been suggested that the size of the network is less critical than the ratio of nodes willing to route traffic to those that are not. With original Nullsoft-client groups now exceeding ten years of age, it is not uncommon for stable meshes to host multiple terabytes of secure content.
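The bootstrap described above starts with each user generating an RSA keypair and trading the public halves out of band. A sketch using the third-party pycryptodome package; the file name is arbitrary, and WASTE's own key format differs.

from Crypto.PublicKey import RSA  # pip install pycryptodome

key = RSA.generate(2048)                   # this node's identity keypair
public_pem = key.publickey().export_key()  # the half shared with trusted peers

with open("waste_node_private.pem", "wb") as f:
    f.write(key.export_key())              # the private half stays local

peer_key = RSA.import_key(public_pem)      # what a peer does with your key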
By default, WASTE listens for incoming connections on port 1337. This was probably chosen because of 1337's leet connotations. Since there is no central hub, WASTE networks typically employ a password or passphrase, also called a network name, to prevent collision, that is, a member of one network connecting to a member of another network and thus bridging the two networks. Assigning a unique identifier (passphrase) to a network reduces the risk of collisions, particularly with the original clients. Nullnets Nullnets are networks without a passphrase. It is impossible to know how many nullnets exist, but there is one primary nullnet. The best way to access the nullnet is to post your credentials to the WASTE Key Exchange. The nullnet can easily merge with other nullnets because there is no passphrase, which makes it a great place for public discussion and file sharing. Strengths Secured through the trade of RSA public keys, allowing for safe and secure communication and data transfer with trusted hosts. The distributed nature means that the network isn't dependent on anyone setting up a server to act as a hub. Contrast this with other P2P and chat protocols that require you to connect to a server. This means there is no single point of vulnerability for the network. Similarly, there is no single group leader; everyone on the network is equal in what they can or cannot do, including inviting other members into the group, nor can any member kick another from the group, exclude them from public chats, etc. WASTE can obfuscate its protocol, making it difficult to detect that WASTE is being used. WASTE has a Saturate feature which adds random traffic, making traffic analysis more difficult. The nodes (each a trusted connection) automatically determine the lowest-latency route for traffic and, in doing so, load balance. This also improves privacy, because packets often take different routes. Shortcomings Trading public keys, enabling port forwarding on your firewall (if necessary), and connecting to each other can be a difficult or tedious process, especially for those who aren't very technically proficient. Due to the network's distributed nature, it is impossible to kick someone from the network once they've gained access. Since every member of the network will have that member's public key, all that member needs to do to regain access is to connect to another member. Coordinating the change of the network name is exceedingly difficult, so the best course of action is to create another network and migrate everyone over to the new network. This could, of course, also be seen as a strength. Since there is no central server, once someone disconnects from the network, they must know at least one network IP address to reconnect. It is possible that the network will drift from all the IP addresses used before so that none are known, and it becomes necessary to contact a network member and ask for address information to be able to reconnect. Indeed, it is possible that a network could unknowingly split into two this way. It takes at least some coordination to keep a WASTE network intact; this can be as simple as one or more volunteers with a static IP address or a fixed dynamic DNS (DDNS) address (available free of charge from a number of providers) keeping their node up to allow people to reconnect to the network. While encryption is performed using the Blowfish algorithm, which is thought to be strong, the PCBC mode used has several known security flaws.
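PCBC chains both the plaintext and the ciphertext of each block into the next, which is also the source of its best-known flaw: swapping two adjacent ciphertext blocks leaves the decryption of all subsequent blocks unchanged. A minimal sketch of the mode built on Blowfish's raw block operation via the third-party pycryptodome package; this illustrates the mode only and is not WASTE's actual wire format.

from Crypto.Cipher import Blowfish  # pip install pycryptodome

BS = Blowfish.block_size  # 8-byte blocks

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pcbc_encrypt(key, iv, plaintext):
    # PCBC: C_i = E(P_i xor P_{i-1} xor C_{i-1}), seeded by the IV.
    # The plaintext must already be padded to a multiple of the block size.
    ecb = Blowfish.new(key, Blowfish.MODE_ECB)  # raw per-block encryption
    feed, out = iv, b""
    for i in range(0, len(plaintext), BS):
        p = plaintext[i:i + BS]
        c = ecb.encrypt(xor(p, feed))
        out += c
        feed = xor(p, c)  # both plaintext and ciphertext propagate forward
    return out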
Nicknames are not registered, which allows eavesdropping and spoofing. WASTE version 1.6 reduces the chances of eavesdropping by using public keys for communication, but as network members may choose any nickname, a user must know and recognize the hash of the person they wish to communicate with to be sure of their identity. To connect from behind a firewall, one party must have the proper port forwarded to their computer; as WASTE networks do not depend on a central server, there is no way around this. However, as long as one node accepts incoming connections it can act as a server, connecting nodes that cannot themselves accept incoming connections. Indeed, the long-term stability of a WASTE network depends on these hubs. Versions As of version 1.7, WASTE comes in an experimental and a stable release. The experimental branch implements a new 16k packet size, which improves overhead and transfer speeds, but is not compatible with previous versions, which support a 4k packet size. WASTE 1.7.4 for Windows was released on 24 December 2008. This is a new branch on SourceForge created because of inactivity on the main WASTE development branch. This is the most fully featured version to date. A cross-platform (including Linux, OS X, and Microsoft Windows) beta version of WASTE called Waste 1.5 beta 4, a.k.a. wxWaste, built with the wxWidgets toolkit, is available. VIA Technologies released a fork of WASTE under the name PadlockSL, but removed the product's website after a few weeks. The user interface was written in Qt and the client was available for Linux and Windows. BlackBelt WASTE is a released fork of WASTE. Its build is labelled 1.8 to mark its significant improvements across its various areas of functionality. It supports the Tor and I2P networks as well as the clearnet. Its routing has been updated to provide even more obfuscated metadata internally. It has UPnP support to automatically handle port forwarding. It also has automatic anti-spoofing technology to encourage unique users. Under development since 2010, it currently (July 2021) has regular releases and improvements. See also Peer-to-peer (P2P) Anonymous P2P File sharing Friend-to-friend (F2F) References External links WASTE again - a fork Original WASTE SourceForge site - now defunct BlackBelt WASTE - Fork of WASTE with support for i2p and Tor as well as clearnet The Invisible Inner Circle Anonymous Communication With Waste 'Secure File Transfer With WASTE - Introductory video' by Russell Sayers at showmedo The Zer0Share Project - Jack Spratts' Darknet 2003 software File sharing networks Anonymous file sharing networks Windows instant messaging clients MacOS instant messaging clients Instant messaging clients for Linux Free instant messaging clients Free file sharing software Free software programmed in C++ Cross-platform free software Discontinued software
240358
https://en.wikipedia.org/wiki/NSAKEY
NSAKEY
_NSAKEY was a variable name discovered in Windows NT 4 SP5 in 1999 by Andrew D. Fernandes of Cryptonym Corporation. The variable contained a 1024-bit public key; such keys are used in public-key cryptography for encryption and authentication. Because of the name, however, it was speculated that the key would allow the United States National Security Agency (NSA) to subvert any Windows user's security. Microsoft denied the speculation and said that the key's name came from the fact that NSA was the technical review authority for U.S. cryptography export controls. Overview Microsoft requires all cryptography suites that interoperate with Microsoft Windows to have a digital signature. Since only Microsoft-approved cryptography suites can be shipped with Windows, it is possible to keep export copies of this operating system in compliance with the Export Administration Regulations (EAR), which are enforced by the Bureau of Industry and Security (BIS). It was already known that Microsoft used two keys, a primary and a spare, either of which can create valid signatures. Upon releasing the Service Pack 5 for Windows NT 4, Microsoft had neglected to remove the debugging symbols in ADVAPI32.DLL, a library used for advanced Windows features such as Registry and Security. Andrew Fernandes, chief scientist with Cryptonym, found the primary key stored in the variable _KEY and the second key was labeled _NSAKEY. Fernandes published his discovery, touching off a flurry of speculation and conspiracy theories, including the possibility that the second key was owned by the United States National Security Agency (the NSA) and allowed the intelligence agency to subvert any Windows user's security. During a presentation at the Computers, Freedom and Privacy 2000 (CFP2000) conference, Duncan Campbell, senior research fellow at the Electronic Privacy Information Center (EPIC), mentioned the _NSAKEY controversy as an example of an outstanding issue related to security and surveillance. In addition, Dr. Nicko van Someren found a third key in Windows 2000, which he doubted had a legitimate purpose, and declared that "It looks more fishy". Microsoft's reaction Microsoft denied the backdoor speculations on _NSAKEY and said "This speculation is ironic since Microsoft has consistently opposed the various key escrow proposals suggested by the government." According to Microsoft, the key's symbol was "_NSAKEY" because the NSA was the technical review authority for U.S. cryptography export controls, and the key ensured compliance with U.S. export laws. Richard Purcell, Microsoft's Director of Corporate Privacy, approached Campbell after his presentation and expressed a wish to clear up the confusion and doubts about _NSAKEY. Immediately after the conference, Scott Culp, of the Microsoft Security Response Center, contacted Campbell and offered to answer his questions. Their correspondence began cordially but soon became strained; Campbell apparently felt Culp was being evasive and Culp apparently felt that Campbell was hostilely repeating questions that he had already answered. On 28 April 2000, Culp stated that "we have definitely reached the end of this discussion ... [which] is rapidly spiraling into the realm of conspiracy theory". Microsoft claimed the third key was only in beta builds of Windows 2000 and that its purpose was for signing Cryptographic Service Providers. 
The Mozilla page on common questions on cryptography mentions: It is in fact possible under certain circumstances to obtain an export license for software invoking cryptographic functions through an API. For example, Microsoft's implementation of the Microsoft Cryptographic API (CryptoAPI) specification was approved for export from the US, even though it implements an API by which third parties, including third parties outside the US, can add separate modules ("Cryptographic Service Providers" or CSPs) implementing cryptographic functionality. This export approval was presumably made possible because a) the CryptoAPI implementation requires third party CSPs to be digitally signed by Microsoft and rejects attempts to call CSPs not so signed; b) through this signing process Microsoft can ensure compliance with the relevant US export control regulations (e.g., they presumably would not sign a CSP developed outside the US that implements strong cryptography); and c) Microsoft's CryptoAPI implementation is available only in executable form, and thus is presumed to be reasonably resistant to user tampering to disable the CSP digital signature check. Microsoft stated that the second key is present as a backup to guard against the possibility of losing the primary secret key. Fernandes doubts this explanation, pointing out that the generally accepted way to guard against loss of a secret key is secret splitting, which would divide the key into several different parts, which would then be distributed throughout senior management. He stated that this would be far more robust than using two keys; if the second key is also lost, Microsoft would need to patch or upgrade every copy of Windows in the world, as well as every cryptographic module it had ever signed. On the other hand, if Microsoft failed to think about the consequences of key loss and created a first key without using secret splitting (and did so in secure hardware which doesn't allow protection to be weakened after key generation), and the NSA pointed out this problem as part of the review process, it might explain why Microsoft weakened their scheme with a second key and why the new one was called _NSAKEY. (The second key might be backed up using secret splitting, so losing both keys should not be a problem.) Another possibility is that Microsoft included a second key to be able to sign cryptographic modules outside the United States, while still complying with the BIS's EAR. If cryptographic modules were to be signed in multiple locations, using multiple keys is a reasonable approach. However, no cryptographic module has ever been found to be signed by _NSAKEY, and Microsoft denies that any other certification authority exists. It was possible to remove the second _NSAKEY. There is good news among the bad, however. It turns out that there is a flaw in the way the "crypto_verify" function is implemented. Because of the way the crypto verification occurs, users can easily eliminate or replace the NSA key from the operating system without modifying any of Microsoft's original components. Since the NSA key is easily replaced, it means that non-US companies are free to install "strong" crypto services into Windows, without Microsoft's or the NSA's approval. Thus the NSA has effectively removed export control of "strong" crypto from Windows. A demonstration program that replaces the NSA key can be found on Cryptonym's website. 
PGP keys In September 1999, an anonymous researcher reverse-engineered both the primary key and the _NSAKEY into PGP-compatible format and published them to key servers.
Primary key (_KEY)
Type Bits/KeyID Date User ID
pub 1024/346B5095 1999/09/06 Microsoft's CAPI key <postmaster@microsoft.com>
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.6.3i

mQCPAzfTc8YAAAEEALJz4nepw3XHC7dJPlKws2li6XZiatYJujG+asysEvHz2mwY
2WlRggxFfHtMSJO9FJ3ieaOfbskm01RNs0kfoumvG/gmCzsPut1py9d7KAEpJXEb
F8C4d+r32p0C3V+FcoVOXJDpsQz7rq+Lj+HfUEe8GIKaUxSZu/SegCE0a1CVABEB
AAG0L01pY3Jvc29mdCdzIENBUEkga2V5IDxwb3N0bWFzdGVyQG1pY3Jvc29mdC5j
b20+iQEVAwUQN9Nz5j57yqgoskVRAQFr/gf8DGm1hAxWBmx/0bl4m0metM+IM39J
yI5mub0ie1HRLExP7lVJezBTyRryV3tDv6U3OIP+KZDthdXb0fmGU5z+wHt34Uzu
xl6Q7m7oB76SKfNaWgosZxqkE5YQrXXGsn3oVZhV6yBALekWtsdVaSmG8+IJNx+n
NvMTYRUz+MdrRFcEFDhFntblI8NlQenlX6CcnnfOkdR7ZKyPbVoSXW/Z6q7U9REJ
TSjBT0swYbHX+3EVt8n2nwxWb2ouNmnm9H2gYfXHikhXrwtjK2aG/3J7k6EVxS+m
Rp+crFOB32sTO1ib2sr7GY7CZUwOpDqRxo8KmQZyhaZqz1x6myurXyw3Tg==
=ms8C
-----END PGP PUBLIC KEY BLOCK-----
Secondary key (_NSAKEY and _KEY2)
Type Bits/KeyID Date User ID
pub 1024/51682D1F 1999/09/06 NSA's Microsoft CAPI key <postmaster@nsa.gov>
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.6.3i

mQCPAzfTdH0AAAEEALqOFf7jzRYPtHz5PitNhCYVryPwZZJk2B7cNaJ9OqRQiQoi
e1YdpAH/OQh3HSQ/butPnjUZdukPB/0izQmczXHoW5f1Q5rbFy0y1xy2bCbFsYij
4ReQ7QHrMb8nvGZ7OW/YKDCX2LOGnMdRGjSW6CmjK7rW0veqfoypgF1RaC0fABEB
AAG0LU5TQSdzIE1pY3Jvc29mdCBDQVBJIGtleSA8cG9zdG1hc3RlckBuc2EuZ292
PokBFQMFEDfTdJE+e8qoKLJFUQEBHnsH/ihUe7oq6DhU1dJjvXWcYw6p1iW+0euR
YfZjwpzPotQ8m5rC7FrJDUbgqQjoFDr++zN9kD9bjNPVUx/ZjCvSFTNu/5X1qn1r
it7IHU/6Aem1h4Bs6KE5MPpjKRxRkqQjbW4f0cgXg6+LV+V9cNMylZHRef3PZCQa
5DOI5crQ0IWyjQCt9br07BL9C3X5WHNNRsRIr9WiVfPK8eyxhNYl/NiH2GzXYbNe
UWjaS2KuJNVvozjxGymcnNTwJltZK4RLZxo05FW2InJbtEfMc+m823vVltm9l/f+
n2iYBAaDs6I/0v2AcVKNy19Cjncc3wQZkaiIYqfPZL19kT8vDNGi9uE=
=PhHT
-----END PGP PUBLIC KEY BLOCK-----
See also Lotus Notes – openly used an NSA key in order to comply with cryptography export regulations Clipper chip References Microsoft criticisms and controversies History of cryptography Conspiracy theories National Security Agency Microsoft Windows security technology Articles with underscores in the title
240991
https://en.wikipedia.org/wiki/S-box
S-box
In cryptography, an S-box (substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext, thus ensuring Shannon's property of confusion. Mathematically, an S-box is a vectorial Boolean function. In general, an S-box takes some number of input bits, m, and transforms them into some number of output bits, n, where n is not necessarily equal to m. An m×n S-box can be implemented as a lookup table with 2^m words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key (e.g. the Blowfish and the Twofish encryption algorithms). Example One good example of a fixed table is the S-box from DES (S5), mapping a 6-bit input into a 4-bit output. Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101"; the corresponding output would be "1001".
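The lookup can be made concrete with a short sketch; the table below is the standard published DES S5 (the rendering of the table itself did not survive in the text above), and the worked example just given serves as a check.

S5 = [
    [2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9],
    [14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6],
    [4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14],
    [11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
]

def s5(bits):
    # bits is a 6-character string such as "011011".
    row = int(bits[0] + bits[5], 2)     # outer two bits select the row
    col = int(bits[1:5], 2)             # inner four bits select the column
    return format(S5[row][col], "04b")  # 4-bit output

assert s5("011011") == "1001"           # the worked example above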
The eight S-boxes of DES were the subject of intense study for many years out of a concern that a backdoor (a vulnerability known only to its designers) might have been planted in the cipher. The S-box design criteria were eventually published after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack. Biham and Shamir found that even small modifications to an S-box could significantly weaken DES. Analysis and properties There has been a great deal of research into the design of good S-boxes, and much more is understood about their use in block ciphers than when DES was released. Any S-box where any linear combination of output bits is produced by a bent function of the input bits is termed a perfect S-box. S-boxes can be analyzed using linear cryptanalysis and differential cryptanalysis in the form of a Linear Approximation Table (LAT) or Walsh transform and Difference Distribution Table (DDT) or autocorrelation table and spectrum. The strength of an S-box may be summarized by its nonlinearity (bent, almost bent) and differential uniformity (perfectly nonlinear, almost perfectly nonlinear). See also Bijection, injection and surjection Boolean function Nothing-up-my-sleeve number Permutation box (P-box) Permutation cipher Rijndael S-box Substitution cipher References Further reading External links A literature survey on S-box design John Savard's "Questions of S-box Design" "Substitution Box Design based on Gaussian Distribution" Cryptographic algorithms
242669
https://en.wikipedia.org/wiki/Traffic%20shaping
Traffic shaping
Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimize or guarantee performance, improve latency, or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking. The most common type of traffic shaping is application-based traffic shaping. In application-based traffic shaping, fingerprinting tools are first used to identify applications of interest, which are then subject to shaping policies. Some controversial cases of application-based traffic shaping include bandwidth throttling of peer-to-peer file sharing traffic. Many application protocols use encryption to circumvent application-based traffic shaping. Another type of traffic shaping is route-based traffic shaping, which is conducted based on previous-hop or next-hop information. Functionality If a link becomes utilized to the point where there is a significant level of congestion, latency can rise substantially. Traffic shaping can be used to prevent this from occurring and keep latency in check. Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or according to more complex criteria such as the generic cell rate algorithm. This control can be accomplished in many ways and for many reasons; however, traffic shaping is always achieved by delaying packets. Traffic shaping is commonly applied at the network edges to control traffic entering the network, but can also be applied by the traffic source (for example, computer or network card) or by an element in the network. Uses Traffic shaping is sometimes applied by traffic sources to ensure the traffic they send complies with a contract which may be enforced in the network by traffic policing. Shaping is widely used for teletraffic engineering, and appears in domestic ISPs' networks as one of several Internet Traffic Management Practices (ITMPs). Some ISPs may use traffic shaping to limit resources consumed by peer-to-peer file-sharing networks, such as BitTorrent. Data centers use traffic shaping to maintain service level agreements for the variety of applications and the many tenants hosted as they all share the same physical network. Audio Video Bridging includes an integral traffic-shaping provision defined in IEEE 802.1Qav. Nodes in an IP network which buffer packets before sending them on a link which is at capacity produce an unintended traffic shaping effect. This can appear across, for example, a low-bandwidth link, a particularly expensive WAN link, or a satellite hop. Implementation A traffic shaper works by delaying metered traffic such that each packet complies with the relevant traffic contract. Metering may be implemented with, for example, the leaky bucket or token bucket algorithms (the former typically in ATM and the latter in IP networks). Metered packets or cells are then stored in a FIFO buffer, one for each separately shaped class, until they can be transmitted in compliance with the associated traffic contract. Transmission may occur immediately (if the traffic arriving at the shaper is already compliant), after some delay (waiting in the buffer until its scheduled release time) or never (in case of packet loss).
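As a concrete sketch of the metering step, here is a token-bucket meter in Python; the names and units are illustrative. A compliant packet is released immediately, while a non-compliant one is held in the FIFO for the returned delay, which is precisely what distinguishes shaping from policing.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate              # token refill rate, bytes per second
        self.capacity = burst         # bucket depth, i.e. the permitted burst
        self.tokens = burst
        self.stamp = time.monotonic()

    def delay_for(self, packet_len):
        # Refill tokens for the elapsed time, then spend them on this packet.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        self.tokens -= packet_len
        if self.tokens >= 0:
            return 0.0                   # compliant: transmit immediately
        return -self.tokens / self.rate  # hold in the buffer this long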
Overflow condition All traffic shaper implementations have a finite buffer, and must cope with the case where the buffer is full. A simple and common approach is to drop traffic arriving while the buffer is full, a strategy known as tail drop, which results in traffic policing as well as shaping. A more sophisticated implementation could apply a dropping algorithm such as random early detection. Traffic classification Simple traffic shaping schemes shape all traffic uniformly. More sophisticated shapers first classify traffic. Traffic classification categorises traffic (for example, based on port number or protocol). Different classes can then be shaped separately to achieve a desired effect. Self-limiting sources A self-limiting source produces traffic which never exceeds some upper bound, for example media sources which cannot transmit faster than their encoded rate allows. Self-limiting sources shape the traffic they generate to a greater or lesser degree. Congestion control mechanisms can also effect a form of traffic shaping; for example, TCP's window mechanism implements a variable rate constraint related to the bandwidth-delay product. TCP Nice, a modified version of TCP developed by researchers at the University of Texas at Austin, allows applications to request that certain TCP connections be managed by the operating system as near zero-cost background transfers, or nice flows. Such flows interfere only minimally with foreground (non-nice) flows, while reaping a large fraction of spare network bandwidth. Relationship to bandwidth management Traffic shaping is a specific technique and one of several which combined constitute bandwidth management. ISPs and traffic management Traffic shaping is of interest especially to internet service providers (ISPs). Their high-cost, high-traffic networks are their major assets, and as such, are the focus of their attentions. They sometimes use traffic shaping to optimize the use of their network, sometimes by shaping traffic according to their assessment of importance and thus discouraging use of certain applications. Enterprises Most companies with remote offices are now connected via a wide area network (WAN). Applications tend to be centrally hosted at the head office, and remote offices are expected to pull data from central databases and server farms. As applications grow hungrier for bandwidth and the prices of dedicated circuits remain relatively high in most areas of the world, companies feel the need to properly manage their circuits, rather than increase the size of their WAN circuits, to make sure business-oriented traffic gets priority over other traffic. Traffic shaping is thus a good means for companies to avoid purchasing additional bandwidth while properly managing these resources. Alternatives to traffic shaping in this regard are application acceleration and WAN optimization and compression, which are fundamentally different from traffic shaping. Traffic shaping defines bandwidth rules, whereas application acceleration uses multiple techniques such as a TCP performance-enhancing proxy. WAN optimization, on the other hand, compresses data streams or sends only differences in file updates. The latter is quite effective for chatty protocols like CIFS. Traffic shaping detection There are several methods to detect and measure traffic shaping. Tools have been developed to assist with detection.
See also

Network congestion avoidance
Quality of service
Multilayer switch
TCP pacing
Broadband networks
Net neutrality
Tc (Linux), a command used to manage traffic shaping

External links

BBC News - Traffic Shaping and BitTorrent
IT-world.com, Traffic Shaping article comparing traffic management techniques circa 2001
Network World, 03/05/01: Where should traffic shaping occur?
Network World, 03/07/01: WAN-side traffic shaping
Linux Kernel: Traffic Control, Shaping and QoS
A Practical Guide to Linux Traffic Control
Web based traffic shaping bridge/router
Dynamisches Bandbreitenmanagement im Chemnitzer StudentenNetz (German documentation for the "DynShaper" software used at CSN, the student network at Chemnitz University of Technology)
https://en.wikipedia.org/wiki/Gtk-gnutella
Gtk-gnutella
gtk-gnutella is a peer-to-peer file sharing application which runs on the gnutella network. gtk-gnutella uses the GTK+ toolkit for its graphical user interface. Released under the GNU General Public License, gtk-gnutella is free software.

History

Initially gtk-gnutella was written to look like the original Nullsoft Gnutella client. The original author, Yann Grossel, stopped working on the client in early 2001. After a while Raphael Manfredi took over as the main software architect, and the client has been in active development ever since. Versions released after July 2002 do not look like the original Nullsoft client.

Features

gtk-gnutella is programmed in C with an emphasis on efficiency and portability, without being minimalistic: it supports most of the modern features of the gnutella network. It therefore requires fewer resources (such as CPU and RAM) than the major gnutella clients. It can also be used as a headless gnutella client, not requiring GTK+ at all. gtk-gnutella has a filtering engine that can reduce the amount of spam and other irrelevant results. gtk-gnutella supports a large range of the features of modern gnutella clients. It was the first gnutella client to support IPv6 and encryption using TLS. It can handle and export magnet links. It has strong internationalization features, supporting English, German, Greek, French, Hungarian, Spanish, Japanese, Norwegian, Dutch and Chinese. gtk-gnutella also has support to prevent spamming and other hostile peer activity. Several software distributions provide pre-compiled packages, but they are usually outdated, as many distributions freeze old stable releases. The gnutella network benefits from running the latest version obtainable, since peer and hostile IP address lists change rapidly, making building the latest SVN snapshot the best option. There are also pre-compiled packages for many Linux distributions available online. Users concerned about security might wish to compile their own. The gtk-gnutella sources use dist as the build and configuration system instead of Autoconf; most users are only familiar with the configure scripts generated by the latter. Another hazard for novices is configuring NAT devices to enable full network connectivity for gtk-gnutella. gtk-gnutella, like any gnutella client, is still usable behind a firewall or a router, but with some reduced functionality, if it cannot receive incoming TCP connections or UDP packets. In an attempt to mitigate the issue for newcomers, gtk-gnutella implements the UPnP and NAT-PMP client protocols. gtk-gnutella supports features for downloading larger files (videos, programs, and disk images). Version 0.96.4 supports Tiger tree hash serving, and versions after 0.96.5 support Tiger tree hashes for uploads and downloads. Tiger tree hashing and other gtk-gnutella features make file transfers as efficient as BitTorrent. Specifically, gtk-gnutella supports partial file sharing, remote queueing and files larger than 4 GiB. Prior to version 0.96.4, overlap checking was the only mechanism to guard against bad data; overlap checking does not guard against malicious corruption the way Tiger tree hashing does. Version 0.96.6 introduced preliminary support for a Kademlia DHT, which was completed in version 0.96.7. The DHT replaces search by SHA-1 when locating alternate sources for a known file or looking for push-proxies. In version 0.96.7, the DHT is enabled by default. LimeWire first developed the DHT and named it Mojito DHT.
Version 0.96.9 introduced full native support for UPnP and NAT-PMP, making usage behind a compatible router much easier, since there is no longer any need to manually forward ports on the firewall. In this version the code was also ported to Microsoft Windows; however, the Windows port is still considered beta due to the lack of wide testing so far. Version 0.96.9 also introduced important DHT protection against Sybil attacks, using algorithms based on statistical properties. Version 0.97 was a major release, introducing client-side support for HTTP pipelining, "What's New?" queries, MIME type query filtering, GUESS support (Gnutella UDP Extension for Scalable Searches) and partial file querying. Although many Gnutella vendors already supported server-side GUESS, gtk-gnutella introduced the client side as well, also enhancing the original specification of the protocol to make it truly usable. Version 0.98.2 is a minor patch release correcting malloc memory-allocation and multithreading issues, mainly on Ubuntu 11.10. This 2011 gtk-gnutella version was also dedicated to the memory of Dennis Ritchie, 1941–2011. Version 0.98.4 added RUDP (reliable UDP) and improved partial file transfers. Version 1.1 is a major release which added G2 support: gtk-gnutella will now connect to the G2 network in leaf mode. This allows searches from G2 nodes and lets local queries be propagated to the G2 network as well. File exchanges with G2 hosts are fully interoperable and are permitted without restriction.

Popularity

gtk-gnutella does not rank as one of the most popular clients in GnutellaNet crawls, but gtk-gnutella developers' proposals have been incorporated into many gnutella clients. As of 2011, gtk-gnutella vendor extensions were the third most prolific on the GDF (Gnutella Developer Forum), following LimeWire and BearShare. Salon listed gtk-gnutella as one of the five most popular gnutella applications in 2002. XoloX and Toadnode, also in the list, are no longer actively developed.

External links

gtk-gnutella homepage
https://en.wikipedia.org/wiki/X86-64
X86-64
x86-64 (also known as x64, x86_64, AMD64, and Intel 64) is a 64-bit version of the x86 instruction set, first announced in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode. With 64-bit mode and the new paging mode, it supports vastly larger amounts of virtual memory and physical memory than was possible on its 32-bit predecessors, allowing programs to store larger amounts of data in memory. x86-64 also expands general-purpose registers to 64 bits, expands their number from 8 (some of which had limited or fixed functionality, e.g. for stack management) to 16 (fully general), and provides numerous other enhancements. Floating-point arithmetic is supported via mandatory SSE2-like instructions, and x87/MMX style registers are generally not used (but still available even in 64-bit mode); instead, a set of 16 vector registers, 128 bits each, is used. (Each register can store one or two double-precision numbers, one to four single-precision numbers, or various integer formats.) In 64-bit mode, instructions are modified to support 64-bit operands and 64-bit addressing. The compatibility mode defined in the architecture allows 16- and 32-bit user applications to run unmodified, coexisting with 64-bit applications if the 64-bit operating system supports them. As the full x86 16-bit and 32-bit instruction sets remain implemented in hardware without any intervening emulation, these older executables can run with little or no performance penalty, while newer or modified applications can take advantage of new features of the processor design to achieve performance improvements. Also, a processor supporting x86-64 still powers on in real mode for full backward compatibility with the 8086, as x86 processors supporting protected mode have done since the 80286. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 microarchitecture, in the Opteron and Athlon 64 processors, was the first to implement it. This was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was software-compatible with AMD's specification. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano. The x86-64 architecture is distinct from the Intel Itanium architecture (formerly IA-64). The architectures are not compatible on the native instruction set level, and operating systems and applications compiled for one cannot be run on the other.

AMD64

History

AMD64 (also variously referred to by AMD in their literature and documentation as "AMD 64-bit Technology" and "AMD x86-64 Architecture") was created as an alternative to the radically different IA-64 architecture designed by Intel and Hewlett-Packard, which was backward-incompatible with IA-32, the 32-bit version of the x86 architecture. Originally announced in 1999 with a full specification available in August 2000, the AMD64 architecture was positioned by AMD from the beginning as an evolutionary way to add 64-bit computing capabilities to the existing x86 architecture while supporting legacy 32-bit x86 code, as opposed to Intel's approach of creating an entirely new 64-bit architecture with IA-64. The first AMD64-based processor, the Opteron, was released in April 2003.
Implementations

AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II (followed by "X2", "X3", or "X4" to indicate the number of cores, and XLT models), Turion 64, Turion 64 X2, Sempron ("Palermo" E6 stepping and all "Manila" models), Phenom (followed by "X3" or "X4" to indicate the number of cores), Phenom II (followed by "X2", "X3", "X4" or "X6" to indicate the number of cores), FX, Fusion/APU and Ryzen/Epyc.

Architectural features

The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers (for example, RAX), 64-bit integer arithmetic and logical operations, and 64-bit virtual addresses. The designers took the opportunity to make other improvements as well. Notable changes in the 64-bit extensions include:

64-bit integer capability

All general-purpose registers (GPRs) are expanded from 32 bits to 64 bits, and all arithmetic and logical operations, memory-to-register and register-to-memory operations, etc., can operate directly on 64-bit integers. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide.

Additional registers

In addition to increasing the size of the general-purpose registers, the number of named general-purpose registers is increased from eight (EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI) in x86 to 16 (RAX, RCX, RDX, RBX, RSP, RBP, RSI, RDI, R8, R9, R10, R11, R12, R13, R14, R15). It is therefore possible to keep more local variables in registers rather than on the stack, and to let registers hold frequently accessed constants; arguments for small and fast subroutines may also be passed in registers to a greater extent. AMD64 still has fewer registers than many RISC instruction sets (e.g. PA-RISC, Power ISA, and MIPS have 32 GPRs; Alpha, 64-bit ARM, and SPARC have 31) or VLIW-like machines such as the IA-64 (which has 128 registers). However, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set (see register renaming). (For example, AMD Zen cores have 168 64-bit integer and 160 128-bit vector floating-point physical internal registers.)

Additional XMM (SSE) registers

Similarly, the number of 128-bit XMM registers (used for Streaming SIMD instructions) is also increased from 8 to 16. The traditional x87 FPU register stack is not included in the register file size extension in 64-bit mode, unlike the XMM registers used by SSE2, which did get extended. The x87 register stack is not a simple register file, although it does allow direct access to individual registers by low-cost exchange operations.

Larger virtual address space

The AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations. This allows up to 256 TiB (2^48 bytes) of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits, extending the virtual address space to 16 EiB (2^64 bytes). This compares to just 4 GiB (2^32 bytes) for the x86. This means that very large files can be operated on by mapping the entire file into the process's address space (which is often much faster than working with file read/write calls), rather than having to map regions of the file into and out of the address space.

Larger physical address space

The original implementation of the AMD64 architecture implemented 40-bit physical addresses and so could address up to 1 TiB (2^40 bytes) of RAM.
Current implementations of the AMD64 architecture (starting with the AMD 10h microarchitecture) extend this to 48-bit physical addresses and can therefore address up to 256 TiB (2^48 bytes) of RAM. The architecture permits extending this to 52 bits in the future (limited by the page table entry format); this would allow addressing of up to 4 PiB of RAM. For comparison, 32-bit x86 processors are limited to 64 GiB of RAM in Physical Address Extension (PAE) mode, or 4 GiB of RAM without PAE mode.

Larger physical address space in legacy mode

When operating in legacy mode the AMD64 architecture supports Physical Address Extension (PAE) mode, as do most current x86 processors, but AMD64 extends PAE from 36 bits to an architectural limit of 52 bits of physical address. Any implementation therefore allows the same physical address limit as under long mode.

Instruction pointer relative data access

Instructions can now reference data relative to the instruction pointer (RIP register). This makes position-independent code, as is often used in shared libraries and code loaded at run time, more efficient.

SSE instructions

The original AMD64 architecture adopted Intel's SSE and SSE2 as core instructions. These instruction sets provide a vector supplement to the scalar x87 FPU, for the single-precision and double-precision data types. SSE2 also offers integer vector operations, for data types ranging from 8-bit to 64-bit precision. This makes the vector capabilities of the architecture on par with those of the most advanced x86 processors of its time. These instructions can also be used in 32-bit mode. The proliferation of 64-bit processors has made these vector capabilities ubiquitous in home computers, allowing the improvement of the standards of 32-bit applications. The 32-bit edition of Windows 8, for example, requires the presence of SSE2 instructions. SSE3 instructions and later Streaming SIMD Extensions instruction sets are not standard features of the architecture.

No-Execute bit

The No-Execute bit or NX bit (bit 63 of the page table entry) allows the operating system to specify which pages of virtual address space can contain executable code and which cannot. An attempt to execute code from a page tagged "no execute" will result in a memory access violation, similar to an attempt to write to a read-only page. This should make it more difficult for malicious code to take control of the system via "buffer overrun" or "unchecked buffer" attacks. A similar feature has been available on x86 processors since the 80286 as an attribute of segment descriptors; however, this works only on an entire segment at a time. Segmented addressing has long been considered an obsolete mode of operation, and all current PC operating systems in effect bypass it, setting all segments to a base address of zero and (in their 32-bit implementation) a size of 4 GiB. AMD was the first x86-family vendor to implement no-execute in linear addressing mode. The feature is also available in legacy mode on AMD64 processors, and on recent Intel x86 processors, when PAE is used.

Removal of older features

A few "system programming" features of the x86 architecture were either unused or underused in modern operating systems and are either not available on AMD64 in long (64-bit and compatibility) mode, or exist only in limited form.
These include segmented addressing (although the FS and GS segments are retained in vestigial form for use as extra base pointers to operating system structures), the task state switch mechanism, and virtual 8086 mode. These features remain fully implemented in "legacy mode", allowing these processors to run 32-bit and 16-bit operating systems without modification. Some instructions that proved to be rarely useful are not supported in 64-bit mode, including saving/restoring of segment registers on the stack, saving/restoring of all registers (PUSHA/POPA), decimal arithmetic, BOUND and INTO instructions, and "far" jumps and calls with immediate operands.

Virtual address space details

Canonical form addresses

Although virtual addresses are 64 bits wide in 64-bit mode, current implementations (and all chips that are known to be in the planning stages) do not allow the entire virtual address space of 2^64 bytes (16 EiB) to be used. This would be approximately four billion times the size of the virtual address space on 32-bit machines. Most operating systems and applications will not need such a large address space for the foreseeable future, so implementing such wide virtual addresses would simply increase the complexity and cost of address translation with no real benefit. AMD therefore decided that, in the first implementations of the architecture, only the least significant 48 bits of a virtual address would actually be used in address translation (page table lookup). In addition, the AMD specification requires that the most significant 16 bits of any virtual address, bits 48 through 63, must be copies of bit 47 (in a manner akin to sign extension). If this requirement is not met, the processor will raise an exception. Addresses complying with this rule are referred to as "canonical form". Canonical form addresses run from 0 through 00007FFF'FFFFFFFF, and from FFFF8000'00000000 through FFFFFFFF'FFFFFFFF, for a total of 256 TiB of usable virtual address space. This is still 65,536 times larger than the virtual 4 GiB address space of 32-bit machines. This feature eases later scalability to true 64-bit addressing. Many operating systems (including, but not limited to, the Windows NT family) take the higher-addressed half of the address space (named kernel space) for themselves and leave the lower-addressed half (user space) for application code, user-mode stacks, heaps, and other data regions. The "canonical address" design ensures that every AMD64-compliant implementation has, in effect, two memory halves: the lower half starts at 00000000'00000000 and "grows upwards" as more virtual address bits become available, while the higher half is "docked" to the top of the address space and grows downwards. Also, enforcing the "canonical form" of addresses by checking the unused address bits prevents their use by the operating system in tagged pointers as flags, privilege markers, etc., as such use could become problematic when the architecture is extended to implement more virtual address bits. The first versions of Windows for x64 did not even use the full 256 TiB; they were restricted to just 8 TiB of user space and 8 TiB of kernel space. Windows did not support the entire 48-bit address space until Windows 8.1, which was released in October 2013.
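The canonical-form rule described above amounts to a sign-extension check: with 48 implemented bits, bits 48 through 63 must all equal bit 47. The following sketch is purely illustrative and not drawn from any AMD document.

def is_canonical(addr, implemented_bits=48):
    """True if the 64-bit address is in canonical form.

    Bits [implemented_bits-1 .. 63] must all be copies of the top
    implemented bit, i.e. the address sign-extends from that bit.
    """
    mask = (1 << 64) - (1 << (implemented_bits - 1))
    top = addr & mask
    return top == 0 or top == mask

assert is_canonical(0x00007FFFFFFFFFFF)      # top of the lower half
assert is_canonical(0xFFFF800000000000)      # bottom of the upper half
assert not is_canonical(0x0000800000000000)  # inside the non-canonical gap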
Page table structure

The 64-bit addressing mode ("long mode") is a superset of Physical Address Extensions (PAE); because of this, page sizes may be 4 KiB (2^12 bytes) or 2 MiB (2^21 bytes). Long mode also supports page sizes of 1 GiB (2^30 bytes). Rather than the three-level page table system used by systems in PAE mode, systems running in long mode use four levels of page table: PAE's Page-Directory Pointer Table is extended from four entries to 512, and an additional Page-Map Level 4 (PML4) Table is added, containing 512 entries in 48-bit implementations. A full mapping hierarchy of 4 KiB pages for the whole 48-bit space would take a bit more than 512 GiB of memory (about 0.195% of the 256 TiB virtual space). Intel has implemented a scheme with a 5-level page table, which allows Intel 64 processors to support a 57-bit virtual address space. Further extensions may allow a full 64-bit virtual address space and physical memory by expanding the page table entry size to 128 bits, and reduce page walks in the 5-level hierarchy by using a larger 64 KiB page allocation size that still supports 4 KiB page operations for backward compatibility.
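The four-level translation just described splits a 48-bit virtual address into four 9-bit table indices plus a 12-bit page offset (9+9+9+9+12 = 48). A small sketch of the index extraction, for illustration only:

def split_va_48(va):
    """Split a canonical 48-bit virtual address into 4 KiB page-walk indices."""
    offset = va & 0xFFF          # bits 0-11:  offset within the 4 KiB page
    pt     = (va >> 12) & 0x1FF  # bits 12-20: page table index
    pd     = (va >> 21) & 0x1FF  # bits 21-29: page directory index
    pdpt   = (va >> 30) & 0x1FF  # bits 30-38: page-directory-pointer index
    pml4   = (va >> 39) & 0x1FF  # bits 39-47: PML4 index
    return pml4, pdpt, pd, pt, offset

# Each level holds 512 (2**9) 8-byte entries, so one table fills exactly
# one 4 KiB page; mapping all 2**36 pages of the 48-bit space needs
# 2**36 * 8 bytes = 512 GiB of bottom-level page tables alone, which is
# where the "a bit more than 512 GiB" figure above comes from.
assert split_va_48(0x00007F1234567ABC) == (254, 72, 418, 359, 0xABC)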
Operating system limits

The operating system can also limit the virtual address space. Details, where applicable, are given in the "Operating system compatibility and characteristics" section.

Physical address space details

Current AMD64 processors support a physical address space of up to 2^48 bytes of RAM, or 256 TiB. However, there were no known x86-64 motherboards that support 256 TiB of RAM. The operating system may place additional limits on the amount of RAM that is usable or supported. Details on this point are given in the "Operating system compatibility and characteristics" section of this article.

Operating modes

The architecture has two primary modes of operation: long mode and legacy mode.

Long mode

Long mode is the architecture's intended primary mode of operation; it is a combination of the processor's native 64-bit mode and a combined 32-bit and 16-bit compatibility mode. It is used by 64-bit operating systems. Under a 64-bit operating system, 64-bit programs run under 64-bit mode, and 32-bit and 16-bit protected mode applications (that do not need to use either real mode or virtual 8086 mode in order to execute at any time) run under compatibility mode. Real-mode programs and programs that use virtual 8086 mode at any time cannot be run in long mode unless those modes are emulated in software. However, such programs may be started from an operating system running in long mode on processors supporting VT-x or AMD-V by creating a virtual processor running in the desired mode. Since the basic instruction set is the same, there is almost no performance penalty for executing protected mode x86 code. This is unlike Intel's IA-64, where differences in the underlying instruction set mean that running 32-bit code must be done either in emulation of x86 (making the process slower) or with a dedicated x86 coprocessor. However, on the x86-64 platform, many x86 applications could benefit from a 64-bit recompile, due to the additional registers in 64-bit code and guaranteed SSE2-based FPU support, which a compiler can use for optimization. However, applications that regularly handle integers wider than 32 bits, such as cryptographic algorithms, will need a rewrite of the code handling the huge integers in order to take advantage of the 64-bit registers.

Legacy mode

Legacy mode is the mode that the processor is in when it is not in long mode. In this mode, the processor acts like an older x86 processor, and only 16-bit and 32-bit code can be executed. Legacy mode allows for a maximum of 32-bit virtual addressing, which limits the virtual address space to 4 GiB. 64-bit programs cannot be run from legacy mode.

Protected mode

Protected mode is made into a submode of legacy mode. It is the submode that 32-bit operating systems and 16-bit protected mode operating systems operate in when running on an x86-64 CPU.

Real mode

Real mode is the initial mode of operation when the processor is initialized, and is a submode of legacy mode. It is backward compatible with the original Intel 8086 and Intel 8088 processors. Real mode is primarily used today by operating system bootloaders, which are required by the architecture to configure virtual memory details before transitioning to higher modes. This mode is also used by any operating system that needs to communicate with the system firmware through a traditional BIOS-style interface.

Intel 64

Intel 64 is Intel's implementation of x86-64, used and implemented in various processors made by Intel.

History

Historically, AMD has developed and produced processors with instruction sets patterned after Intel's original designs, but with x86-64, roles were reversed: Intel found itself in the position of adopting the ISA that AMD created as an extension to Intel's own x86 processor line. Intel's project was originally codenamed Yamhill (after the Yamhill River in Oregon's Willamette Valley). After several years of denying its existence, Intel announced at the February 2004 IDF that the project was indeed underway. Intel's chairman at the time, Craig Barrett, admitted that this was one of their worst-kept secrets. Intel's name for this instruction set has changed several times. The name used at the IDF was CT (presumably for Clackamas Technology, another codename from an Oregon river); within weeks they began referring to it as IA-32e (for IA-32 extensions) and in March 2004 unveiled the "official" name EM64T (Extended Memory 64 Technology). In late 2006 Intel began instead using the name Intel 64 for its implementation, paralleling AMD's use of the name AMD64. The first processor to implement Intel 64 was the multi-socket processor Xeon code-named Nocona in June 2004. In contrast, the initial Prescott chips (February 2004) did not enable this feature. Intel subsequently began selling Intel 64-enabled Pentium 4s using the E0 revision of the Prescott core, sold on the OEM market as the Pentium 4, model F. The E0 revision also adds eXecute Disable (XD) (Intel's name for the NX bit) to Intel 64, and was included in the then-current Xeon code-named Irwindale. Intel's official launch of Intel 64 (under the name EM64T at that time) in mainstream desktop processors was the N0 stepping Prescott-2M. The first Intel mobile processor implementing Intel 64 is the Merom version of the Core 2 processor, which was released on July 27, 2006. None of Intel's earlier notebook CPUs (Core Duo, Pentium M, Celeron M, Mobile Pentium 4) implement Intel 64.

Implementations

Intel's processors implementing the Intel 64 architecture include the Pentium 4 F-series/5x1 series, 506, and 516, Celeron D models 3x1, 3x6, 355, 347, 352, 360, and 365 and all later Celerons, all models of Xeon since "Nocona", all models of Pentium Dual-Core processors since "Merom-2M", the Atom 230, 330, D410, D425, D510, D525, N450, N455, N470, N475, N550, N570, N2600 and N2800, all versions of the Pentium D, Pentium Extreme Edition, Core 2, Core i9, Core i7, Core i5, and Core i3 processors, and the Xeon Phi 7200 series processors.
VIA's x86-64 implementation

VIA Technologies introduced their first implementation of the x86-64 architecture in 2008 after five years of development by its CPU division, Centaur Technology. Codenamed "Isaiah", the 64-bit architecture was unveiled on January 24, 2008, and launched on May 29 under the VIA Nano brand name. The processor supports a number of VIA-specific x86 extensions designed to boost efficiency in low-power appliances. It was expected that the Isaiah architecture would be twice as fast in integer performance and four times as fast in floating-point performance as the previous-generation VIA Esther at an equivalent clock speed. Power consumption was also expected to be on par with the previous-generation VIA CPUs, with thermal design power ranging from 5 W to 25 W. Being a completely new design, the Isaiah architecture was built with support for features like the x86-64 instruction set and x86 virtualization which were unavailable on its predecessors, the VIA C7 line, while retaining their encryption extensions.

Microarchitecture levels

In 2020, through a cross-vendor collaboration, a few microarchitecture levels were defined: x86-64-v2, x86-64-v3 and x86-64-v4. These levels define specific features that can be targeted by programmers to provide compile-time optimizations. Each level includes the features of the previous levels; broadly, x86-64-v2 requires the SSE extensions up to SSE4.2 together with POPCNT, CMPXCHG16B and LAHF/SAHF, x86-64-v3 adds AVX, AVX2, BMI1, BMI2, FMA and related extensions, and x86-64-v4 adds a core subset of AVX-512. Instruction set extensions not concerned with general-purpose computation, including AES-NI and RDRAND, are excluded from the level requirements.

Differences between AMD64 and Intel 64

Although nearly identical, there are some differences between the two instruction sets in the semantics of a few seldom-used machine instructions (or situations), which are mainly used for system programming. Compilers generally produce executables (i.e. machine code) that avoid any differences, at least for ordinary application programs. This is therefore of interest mainly to developers of compilers, operating systems and similar, which must deal with individual and special system instructions.

Recent implementations

Intel 64's BSF and BSR instructions act differently than AMD64's when the source is zero and the operand size is 32 bits. The processor sets the zero flag and leaves the upper 32 bits of the destination undefined. Intel documents that the destination register has an undefined value in this case, but in practice the silicon implements the same behaviour as AMD (destination unmodified). The separate claim about possibly not preserving bits in the upper 32 has not been verified, but has only been ruled out for Core 2 and Skylake, not for all Intel microarchitectures such as the 64-bit Pentium 4 or low-power Atom. AMD64 requires a different microcode update format and control MSRs (model-specific registers) while Intel 64 implements microcode update unchanged from their 32-bit only processors. Intel 64 lacks some MSRs that are considered architectural in AMD64. These include SYSCFG, TOP_MEM, and TOP_MEM2. Intel 64 allows SYSCALL/SYSRET only in 64-bit mode (not in compatibility mode), and allows SYSENTER/SYSEXIT in both modes. AMD64 lacks SYSENTER/SYSEXIT in both sub-modes of long mode. In 64-bit mode, near branches with the 66H (operand size override) prefix behave differently. Intel 64 ignores this prefix: the instruction has a 32-bit sign-extended offset, and the instruction pointer is not truncated. AMD64 uses a 16-bit offset field in the instruction, and clears the top 48 bits of the instruction pointer.
AMD processors raise a floating-point Invalid Exception when performing an FLD or FSTP of an 80-bit signalling NaN, while Intel processors do not. Intel 64 lacks the ability to save and restore a reduced (and thus faster) version of the floating-point state (involving the FXSAVE and FXRSTOR instructions). AMD processors, ever since Opteron Rev. E and Athlon 64 Rev. D, have reintroduced limited support for segmentation via the Long Mode Segment Limit Enable (LMSLE) bit, to ease virtualization of 64-bit guests. When returning to a non-canonical address using SYSRET, AMD64 processors execute the general protection fault handler in privilege level 3, while on Intel 64 processors it is executed in privilege level 0.

Older implementations

Early AMD64 processors (typically on Socket 939 and 940) lacked the CMPXCHG16B instruction, which is an extension of the CMPXCHG8B instruction present on most post-80486 processors. Similar to CMPXCHG8B, CMPXCHG16B allows for atomic operations on octa-words (128-bit values). This is useful for parallel algorithms that use compare and swap on data larger than the size of a pointer, common in lock-free and wait-free algorithms. Without CMPXCHG16B one must use workarounds, such as a critical section or alternative lock-free approaches. Its absence also prevents 64-bit Windows prior to Windows 8.1 from having a user-mode address space larger than 8 TiB. The 64-bit version of Windows 8.1 requires the instruction. Early AMD64 and Intel 64 CPUs lacked the LAHF and SAHF instructions in 64-bit mode. AMD introduced these instructions (also in 64-bit mode) with their Athlon 64, Opteron and Turion 64 revision D processors in March 2005, while Intel introduced the instructions with the Pentium 4 G1 stepping in December 2005. The 64-bit version of Windows 8.1 requires this feature. Early Intel CPUs with Intel 64 also lacked the NX bit of the AMD64 architecture. This feature is required by all versions of Windows 8.x. Early Intel 64 implementations (Prescott and Cedar Mill) only allowed access to 64 GiB of physical memory while original AMD64 implementations allowed access to 1 TiB of physical memory. Recent AMD64 implementations provide 256 TiB of physical address space (and AMD plans an expansion to 4 PiB), while some Intel 64 implementations could address up to 64 TiB. Physical memory capacities of this size are appropriate for large-scale applications (such as large databases) and high-performance computing (centrally oriented applications and scientific computing).

Adoption

In supercomputers tracked by TOP500, the appearance of 64-bit extensions for the x86 architecture enabled 64-bit x86 processors by AMD and Intel to replace most RISC processor architectures previously used in such systems (including PA-RISC, SPARC, Alpha and others), as well as 32-bit x86, even though Intel itself initially tried unsuccessfully to replace x86 with a new incompatible 64-bit architecture in the Itanium processor. In 2020, a Fujitsu A64FX-based supercomputer called Fugaku took the number one spot. The first ARM-based supercomputer appeared on the list in 2018 and, in recent years, non-CPU architecture co-processors (GPGPU) have also played a big role in performance. Intel's Xeon Phi "Knights Corner" coprocessors, which implement a subset of x86-64 with some vector extensions, are also used, along with x86-64 processors, in the Tianhe-2 supercomputer.

Operating system compatibility and characteristics

The following operating systems and releases support the x86-64 architecture in long mode.
BSD

DragonFly BSD

Preliminary infrastructure work for an x86-64 port was started in February 2004. This development later stalled. Development started again during July 2007 and continued during Google Summer of Code 2008 and SoC 2009. The first official release to contain x86-64 support was version 2.4.

FreeBSD

FreeBSD first added x86-64 support under the name "amd64" as an experimental architecture in 5.1-RELEASE in June 2003. It was included as a standard distribution architecture as of 5.2-RELEASE in January 2004. Since then, FreeBSD has designated it as a Tier 1 platform. The 6.0-RELEASE version cleaned up some quirks with running x86 executables under amd64, and most drivers work just as they do on the x86 architecture. Work is currently being done to integrate more fully the x86 application binary interface (ABI), in the same manner as the Linux 32-bit ABI compatibility currently works.

NetBSD

x86-64 architecture support was first committed to the NetBSD source tree on June 19, 2001. As of NetBSD 2.0, released on December 9, 2004, NetBSD/amd64 is a fully integrated and supported port. 32-bit code is still supported in 64-bit mode, with a netbsd-32 kernel compatibility layer for 32-bit syscalls. The NX bit is used to provide non-executable stack and heap with per-page granularity (segment granularity being used on 32-bit x86).

OpenBSD

OpenBSD has supported AMD64 since OpenBSD 3.5, released on May 1, 2004. Complete in-tree implementation of AMD64 support was achieved prior to the hardware's initial release because AMD had loaned several machines for the project's hackathon that year. OpenBSD developers have taken to the platform because of its support for the NX bit, which allowed for an easy implementation of the W^X feature. The code for the AMD64 port of OpenBSD also runs on Intel 64 processors, which contain a cloned implementation of the AMD64 extensions, but since Intel left out the page table NX bit in early Intel 64 processors, there is no W^X capability on those Intel CPUs; later Intel 64 processors added the NX bit under the name "XD bit". Symmetric multiprocessing (SMP) works on OpenBSD's AMD64 port, starting with release 3.6 on November 1, 2004.

DOS

It is possible to enter long mode under DOS without a DOS extender, but the user must return to real mode in order to call BIOS or DOS interrupts. It may also be possible to enter long mode with a DOS extender similar to DOS/4GW, but this is more complex since x86-64 lacks virtual 8086 mode. DOS itself is not aware of that, and no benefits should be expected unless DOS is run in an emulation with an adequate virtualization driver backend, for example for the mass storage interface.

Linux

Linux was the first operating system kernel to run the x86-64 architecture in long mode, starting with the 2.4 version in 2001 (preceding the hardware's availability). Linux also provides backward compatibility for running 32-bit executables. This permits programs to be recompiled into long mode while retaining the use of 32-bit programs. Several Linux distributions currently ship with x86-64-native kernels and userlands. Some, such as Arch Linux, SUSE, Mandriva, and Debian, allow users to install a set of 32-bit components and libraries when installing off a 64-bit DVD, thus allowing most existing 32-bit applications to run alongside the 64-bit OS. Other distributions, such as Fedora, Slackware and Ubuntu, are available in one version compiled for a 32-bit architecture and another compiled for a 64-bit architecture.
Fedora and Red Hat Enterprise Linux allow concurrent installation of all userland components in both 32- and 64-bit versions on a 64-bit system. The x32 ABI (Application Binary Interface), introduced in Linux 3.4, allows programs compiled for the x32 ABI to run in the 64-bit mode of x86-64 while only using 32-bit pointers and data fields. Though this limits the program to a virtual address space of 4 GiB, it also decreases the memory footprint of the program and in some cases can allow it to run faster. 64-bit Linux allows up to 128 TiB of virtual address space for individual processes, and can address approximately 64 TiB of physical memory, subject to processor and system limitations.

macOS

Mac OS X 10.4.7 and higher versions of Mac OS X 10.4 run 64-bit command-line tools using the POSIX and math libraries on 64-bit Intel-based machines, just as all versions of Mac OS X 10.4 and 10.5 run them on 64-bit PowerPC machines. No other libraries or frameworks work with 64-bit applications in Mac OS X 10.4. The kernel, and all kernel extensions, are 32-bit only. Mac OS X 10.5 supports 64-bit GUI applications using Cocoa, Quartz, OpenGL, and X11 on 64-bit Intel-based machines, as well as on 64-bit PowerPC machines. All non-GUI libraries and frameworks also support 64-bit applications on those platforms. The kernel, and all kernel extensions, are 32-bit only. Mac OS X 10.6 is the first version of macOS that supports a 64-bit kernel. However, not all 64-bit computers can run the 64-bit kernel, and not all 64-bit computers that can run the 64-bit kernel will do so by default. The 64-bit kernel, like the 32-bit kernel, supports 32-bit applications; both kernels also support 64-bit applications. 32-bit applications have a virtual address space limit of 4 GiB under either kernel. The 64-bit kernel does not support 32-bit kernel extensions, and the 32-bit kernel does not support 64-bit kernel extensions. OS X 10.8 includes only the 64-bit kernel, but continues to support 32-bit applications; it does not support 32-bit kernel extensions, however. macOS 10.15 includes only the 64-bit kernel and no longer supports 32-bit applications. This removal of support has presented a problem for WineHQ (and the commercial version CrossOver), as it still needs to be able to run 32-bit Windows applications. The solution, termed wine32on64, was to add thunks that bring the CPU in and out of 32-bit compatibility mode in the nominally 64-bit application. macOS uses the universal binary format to package 32- and 64-bit versions of application and library code into a single file; the most appropriate version is automatically selected at load time. In Mac OS X 10.6, the universal binary format is also used for the kernel and for those kernel extensions that support both 32-bit and 64-bit kernels.

Solaris

Solaris 10 and later releases support the x86-64 architecture. For Solaris 10, just as with the SPARC architecture, there is only one operating system image, which contains a 32-bit kernel and a 64-bit kernel; this is labeled as the "x64/x86" DVD-ROM image. The default behavior is to boot a 64-bit kernel, allowing both 64-bit and existing or new 32-bit executables to be run. A 32-bit kernel can also be manually selected, in which case only 32-bit executables will run. The isainfo command can be used to determine if a system is running a 64-bit kernel. For Solaris 11, only the 64-bit kernel is provided. However, the 64-bit kernel supports both 32- and 64-bit executables, libraries, and system calls.
Windows

x64 editions of Microsoft Windows client and server (Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition) were released in March 2005. Internally they are actually the same build (5.2.3790.1830 SP1), as they share the same source base and operating system binaries, so even system updates are released in unified packages, much in the same manner as Windows 2000 Professional and Server editions for x86. Windows Vista, which also has many different editions, was released in January 2007. Windows 7 was released in July 2009. Windows Server 2008 R2 was sold in only x64 and Itanium editions; later versions of Windows Server only offer an x64 edition. Versions of Windows for x64 prior to Windows 8.1 and Windows Server 2012 R2 offer the following:

8 TiB of virtual address space per process, accessible from both user mode and kernel mode, referred to as the user-mode address space. An x64 program can use all of this, subject to backing-store limits on the system, provided it is linked with the "large address aware" option. This is a 4096-fold increase over the default 2 GiB user-mode virtual address space offered by 32-bit Windows.

8 TiB of kernel-mode virtual address space for the operating system. As with the user-mode address space, this is a 4096-fold increase over 32-bit Windows versions. The increased space primarily benefits the file system cache and kernel-mode "heaps" (non-paged pool and paged pool). Windows only uses a total of 16 TiB out of the 256 TiB implemented by the processors because early AMD64 processors lacked the CMPXCHG16B instruction.

Under Windows 8.1 and Windows Server 2012 R2, both user-mode and kernel-mode virtual address spaces have been extended to 128 TiB. These versions of Windows will not install on processors that lack the CMPXCHG16B instruction. The following additional characteristics apply to all x64 versions of Windows:

Ability to run existing 32-bit applications (.exe programs) and dynamic link libraries (.dlls) using WoW64, if WoW64 is supported on that version. Furthermore, a 32-bit program, if it was linked with the "large address aware" option, can use up to 4 GiB of virtual address space in 64-bit Windows, instead of the default 2 GiB (optionally 3 GiB with the /3GB boot option and "large address aware" link option) offered by 32-bit Windows. Unlike the use of the /3GB boot option on x86, this does not reduce the kernel-mode virtual address space available to the operating system. 32-bit applications can therefore benefit from running on x64 Windows even if they are not recompiled for x86-64. Both 32- and 64-bit applications, if not linked with "large address aware", are limited to 2 GiB of virtual address space.

Ability to use up to 128 GiB (Windows XP/Vista), 192 GiB (Windows 7), 512 GiB (Windows 8), 1 TiB (Windows Server 2003), 2 TiB (Windows Server 2008/Windows 10), 4 TiB (Windows Server 2012), or 24 TiB (Windows Server 2016/2019) of physical random access memory (RAM).

LLP64 data model: "int" and "long" types are 32 bits wide, "long long" is 64 bits, while pointers and types derived from pointers are 64 bits wide.

Kernel-mode device drivers must be 64-bit versions; there is no way to run 32-bit kernel-mode executables within the 64-bit operating system. User-mode device drivers can be either 32-bit or 64-bit.

16-bit Windows (Win16) and DOS applications will not run on x86-64 versions of Windows, due to the removal of the virtual DOS machine subsystem (NTVDM), which relied upon the ability to use virtual 8086 mode.
Virtual 8086 mode cannot be entered while running in long mode.

Full implementation of the NX (No Execute) page protection feature. This is also implemented on recent 32-bit versions of Windows when they are started in PAE mode.

Instead of the FS segment descriptor used on x86 versions of the Windows NT family, the GS segment descriptor is used to point to two operating system defined structures: the Thread Information Block (NT_TIB) in user mode and the Processor Control Region (KPCR) in kernel mode. Thus, for example, in user mode GS:0 is the address of the first member of the Thread Information Block. Maintaining this convention made the x86-64 port easier, but required AMD to retain the function of the FS and GS segments in long mode, even though segmented addressing per se is not really used by any modern operating system.

Early reports claimed that the operating system scheduler would not save and restore the x87 FPU machine state across thread context switches. Observed behavior shows that this is not the case: the x87 state is saved and restored, except for kernel mode-only threads (a limitation that exists in the 32-bit version as well). The most recent documentation available from Microsoft states that the x87/MMX/3DNow! instructions may be used in long mode, but that they are deprecated and may cause compatibility problems in the future.

Some components, like the Jet Database Engine and Data Access Objects, will not be ported to 64-bit architectures such as x86-64 and IA-64.

Microsoft Visual Studio can compile native applications to target either the x86-64 architecture, which can run only on 64-bit Microsoft Windows, or the IA-32 architecture, which can run as a 32-bit application on 32-bit Microsoft Windows or on 64-bit Microsoft Windows in WoW64 emulation mode. Managed applications can be compiled in IA-32, x86-64 or AnyCPU modes. Software created in the first two modes behaves like its IA-32 or x86-64 native-code counterpart respectively; when using the AnyCPU mode, however, applications run as 32-bit applications in 32-bit versions of Microsoft Windows and as 64-bit applications in 64-bit editions of Microsoft Windows.

Video game consoles

Both the PlayStation 4 and the Xbox One and their variants incorporate AMD x86-64 processors based on the Jaguar microarchitecture. Firmware and games are written in x86-64 code; no legacy x86 code is involved. Their next generations, the PlayStation 5 and the Xbox Series X and Series S respectively, also incorporate AMD x86-64 processors, based on the Zen 2 microarchitecture.

Industry naming conventions

Since AMD64 and Intel 64 are substantially similar, many software and hardware products use one vendor-neutral term to indicate their compatibility with both implementations. AMD's original designation for this processor architecture, "x86-64", is still sometimes used for this purpose, as is the variant "x86_64". Other companies, such as Microsoft and Sun Microsystems/Oracle Corporation, use the contraction "x64" in marketing material. The term IA-64 refers to the Itanium processor, and should not be confused with x86-64, as it is a completely different instruction set. Many operating systems and products, especially those that introduced x86-64 support prior to Intel's entry into the market, use the term "AMD64" or "amd64" to refer to both AMD64 and Intel 64.

amd64

Most BSD systems, such as FreeBSD, MidnightBSD, NetBSD and OpenBSD, refer to both AMD64 and Intel 64 under the architecture name "amd64".
Some Linux distributions, such as Debian, Ubuntu, and Gentoo Linux, refer to both AMD64 and Intel 64 under the architecture name "amd64". The x64 versions of Microsoft Windows use the AMD64 moniker internally to designate various components which use or are compatible with this architecture. For example, the environment variable PROCESSOR_ARCHITECTURE is assigned the value "AMD64", as opposed to "x86" in 32-bit versions, and the system directory on a Windows x64 Edition installation CD-ROM is named "AMD64", in contrast to "i386" in 32-bit versions. Sun's Solaris isalist command identifies both AMD64- and Intel 64-based systems as "amd64". In the Java Development Kit (JDK), the name "amd64" is used in directory names containing x86-64 files.

x86_64

The Linux kernel and the GNU Compiler Collection refer to the 64-bit architecture as "x86_64". Some Linux distributions, such as Fedora, openSUSE, Arch Linux and Gentoo Linux, refer to this 64-bit architecture as "x86_64". Apple macOS refers to the 64-bit architecture as "x86-64" or "x86_64", as seen in the Terminal command arch and in their developer documentation. Breaking with most other BSD systems, DragonFly BSD refers to the 64-bit architecture as "x86_64". Haiku refers to the 64-bit architecture as "x86_64".

Licensing

x86-64/AMD64 was solely developed by AMD. AMD holds patents on techniques used in AMD64; those patents must be licensed from AMD in order to implement AMD64. Intel entered into a cross-licensing agreement with AMD, licensing to AMD their patents on existing x86 techniques, and licensing from AMD their patents on techniques used in x86-64. In 2009, AMD and Intel settled several lawsuits and cross-licensing disagreements, extending their cross-licensing agreements.

See also

AMD Generic Encapsulated Software Architecture (AGESA)
IA-32
x86

External links

AMD Developer Guides, Manuals & ISA Documents
x86-64: Extending the x86 architecture to 64-bits – technical talk by the architect of AMD64 (video archive), and second talk by the same speaker (video archive)
AMD's "Enhanced Virus Protection"
Intel tweaks EM64T for full AMD64 compatibility
Analyst: Intel Reverse-Engineered AMD64
Early report of differences between Intel IA32e and AMD64
Porting to 64-bit GNU/Linux Systems, by Andreas Jaeger from GCC Summit 2003. An excellent paper explaining almost all practical aspects for a transition from 32-bit to 64-bit.
Intel 64 Architecture
Intel Software Network: "64 bits"
TurboIRC.COM tutorials, including examples of how to enter protected and long mode the raw way from DOS
Seven Steps of Migrating a Program to a 64-bit System
Memory Limits for Windows Releases
https://en.wikipedia.org/wiki/SOCKS
SOCKS
SOCKS is an Internet protocol that exchanges network packets between a client and server through a proxy server. SOCKS5 optionally provides authentication so only authorized users may access a server. Practically, a SOCKS server proxies TCP connections to an arbitrary IP address, and provides a means for UDP packets to be forwarded. SOCKS performs at Layer 5 of the OSI model (the session layer, an intermediate layer between the presentation layer and the transport layer). A SOCKS server accepts incoming client connections on TCP port 1080, as defined in RFC 1928.

History

The protocol was originally developed/designed by David Koblas, a system administrator of MIPS Computer Systems. After MIPS was taken over by Silicon Graphics in 1992, Koblas presented a paper on SOCKS at that year's Usenix Security Symposium, making SOCKS publicly available. The protocol was extended to version 4 by Ying-Da Lee of NEC. The SOCKS reference architecture and client are owned by Permeo Technologies, a spin-off from NEC. (Blue Coat Systems bought out Permeo Technologies, and were in turn acquired by Symantec.) The SOCKS5 protocol was originally a security protocol that made firewalls and other security products easier to administer. It was approved by the IETF in 1996 as RFC 1928 (authored by M. Leech, M. Ganis, Y. Lee, R. Kuris, D. Koblas, and L. Jones). The protocol was developed in collaboration with Aventail Corporation, which markets the technology outside of Asia.

Usage

SOCKS is a de facto standard for circuit-level gateways (level 5 gateways). The circuit/session-level nature of SOCKS makes it a versatile tool for forwarding any TCP (or, since SOCKS5, UDP) traffic, creating a good interface for all types of routing tools. It can be used as:

A circumvention tool, allowing traffic to bypass Internet filtering to access content otherwise blocked, e.g., by governments, workplaces, schools, and country-specific web services. Since SOCKS is very detectable, a common approach is to present a SOCKS interface for more sophisticated protocols: the Tor onion proxy software presents a SOCKS interface to its clients.

A facility providing similar functionality to a virtual private network, allowing connections to be forwarded to a server's "local" network: some SSH suites, such as OpenSSH, support dynamic port forwarding that allows the user to create a local SOCKS proxy. This can free the user from the limitations of connecting only to a predefined remote port and server.

Protocol

SOCKS4

A typical SOCKS4 connection request looks like this:

VER: SOCKS version number, 0x04 for this version
CMD: command code: 0x01 = establish a TCP/IP stream connection; 0x02 = establish a TCP/IP port binding
DSTPORT: 2-byte port number (in network byte order)
DSTIP: IPv4 address, 4 bytes (in network byte order)
ID: the user ID string, variable length, null-terminated
The server's reply:

VN: reply version, null byte
REP: reply code:
0x5A = request granted
0x5B = request rejected or failed
0x5C = request failed because client is not running identd (or not reachable from server)
0x5D = request failed because client's identd could not confirm the user ID in the request
DSTPORT: destination port, meaningful if granted in BIND, otherwise ignore
DSTIP: destination IP, as above (the ip:port the client should bind to)

For example, this is a SOCKS4 request to connect Fred to 66.102.7.99:80; the server replies with an "OK":

Client: 0x04 | 0x01 | 0x00 0x50 | 0x42 0x66 0x07 0x63 | 0x46 0x72 0x65 0x64 0x00
The last field is "Fred" in ASCII, followed by a null byte.

Server: 0x00 | 0x5A | 0xXX 0xXX | 0xXX 0xXX 0xXX 0xXX
0xXX can be any byte value. The SOCKS4 protocol specifies that the values of these bytes should be ignored.

From this point onwards, any data sent from the SOCKS client to the SOCKS server is relayed to 66.102.7.99, and vice versa. The command field may be 0x01 for "connect" or 0x02 for "bind"; the "bind" command allows incoming connections for protocols such as active FTP.

SOCKS4a

SOCKS4a extends the SOCKS4 protocol to allow a client to specify a destination domain name rather than an IP address; this is useful when the client itself cannot resolve the destination host's domain name to an IP address. It was proposed by Ying-Da Lee, the author of SOCKS4. The client should set the first three bytes of DSTIP to NULL and the last byte to a non-zero value. (This corresponds to IP address 0.0.0.x, with x nonzero, an inadmissible destination address which should never occur if the client can resolve the domain name.) Following the NULL byte terminating USERID, the client must send the destination domain name and terminate it with another NULL byte. This is used for both "connect" and "bind" requests.

Client to SOCKS server:

SOCKS4_C: SOCKS4 client handshake packet (above)
DOMAIN: the domain name of the host to contact, null (0x00) terminated

Server to SOCKS client: (same as SOCKS4)

A server using protocol SOCKS4a must check the DSTIP in the request packet. If it represents address 0.0.0.x with nonzero x, the server must read in the domain name that the client sends in the packet. The server should resolve the domain name and make a connection to the destination host if it can.
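The request layouts above translate directly into code. The following sketch rebuilds the "Fred" example and the SOCKS4a domain variant; it only constructs the request bytes, and the host, port and user values are the illustrative ones from the text.

import socket
import struct

def socks4_connect_request(user, dst_ip, dst_port):
    # VER=0x04, CMD=0x01 (connect), DSTPORT, DSTIP, USERID, NUL
    return (struct.pack("!BBH4s", 0x04, 0x01, dst_port,
                        socket.inet_aton(dst_ip))
            + user.encode("ascii") + b"\x00")

def socks4a_connect_request(user, domain, dst_port):
    # SOCKS4a: DSTIP = 0.0.0.x (x nonzero); domain appended after USERID
    return (struct.pack("!BBH4s", 0x04, 0x01, dst_port, b"\x00\x00\x00\x01")
            + user.encode("ascii") + b"\x00"
            + domain.encode("ascii") + b"\x00")

# Reproduces the example above: connect "Fred" to 66.102.7.99:80
assert socks4_connect_request("Fred", "66.102.7.99", 80) == \
    b"\x04\x01\x00\x50\x42\x66\x07\x63Fred\x00"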
The initial greeting from the client is:

VER: SOCKS version (0x05)
NAUTH: number of authentication methods supported, uint8
AUTH: authentication methods, 1 byte per method supported

The authentication methods supported are numbered as follows:

0x00: No authentication
0x01: GSSAPI (RFC 1961)
0x02: Username/password (RFC 1929)
0x03–0x7F: methods assigned by IANA:
0x03: Challenge-Handshake Authentication Protocol
0x04: Unassigned
0x05: Challenge-Response Authentication Method
0x06: Secure Sockets Layer
0x07: NDS Authentication
0x08: Multi-Authentication Framework
0x09: JSON Parameter Block
0x0A–0x7F: Unassigned
0x80–0xFE: methods reserved for private use

The server's choice is communicated as:

VER: SOCKS version (0x05)
CAUTH: chosen authentication method, or 0xFF if no acceptable methods were offered

The subsequent authentication is method-dependent. Username and password authentication (method 0x02) is described in RFC 1929:

VER: 0x01 for the current version of username/password authentication
IDLEN, ID: username length, uint8; username as bytestring
PWLEN, PW: password length, uint8; password as bytestring

The server's response is:

VER: 0x01 for the current version of username/password authentication
STATUS: 0x00 for success, otherwise failure; the connection must be closed

After authentication the connection can proceed. We first define an address datatype as:

TYPE: type of the address. One of:
0x01: IPv4 address
0x03: Domain name
0x04: IPv6 address
ADDR: the address data that follows. Depending on type:
4 bytes for an IPv4 address
1 byte of name length followed by 1–255 bytes for the domain name
16 bytes for an IPv6 address

The client's connection request is:

VER: SOCKS version (0x05)
CMD: command code: 0x01 = establish a TCP/IP stream connection, 0x02 = establish a TCP/IP port binding, 0x03 = associate a UDP port
RSV: reserved, must be 0x00
DSTADDR: destination address, see the address structure above
DSTPORT: port number in network byte order

The server's response is:

VER: SOCKS version (0x05)
STATUS: status code:
0x00: request granted
0x01: general failure
0x02: connection not allowed by ruleset
0x03: network unreachable
0x04: host unreachable
0x05: connection refused by destination host
0x06: TTL expired
0x07: command not supported / protocol error
0x08: address type not supported
RSV: reserved, must be 0x00
BNDADDR: server-bound address (defined in RFC 1928) in the "SOCKS5 address" format specified above
BNDPORT: server-bound port number in network byte order

Since clients are allowed to use either resolved addresses or domain names, a convention from cURL exists to label the domain name variant of SOCKS5 "socks5h", and the other simply "socks5". A similar convention exists between SOCKS4a and SOCKS4.

Software

Servers

SOCKS proxy server implementations:

Sun Java System Web Proxy Server is a caching proxy server running on Solaris, Linux and Windows servers that supports HTTPS, NSAPI I/O filters, dynamic reconfiguration, SOCKSv5 and reverse proxying.

WinGate is a multi-protocol proxy server and SOCKS server for Microsoft Windows which supports SOCKS4, SOCKS4a and SOCKS5 (including UDP-ASSOCIATE and GSSAPI authentication). It also supports handing over SOCKS connections to the HTTP proxy, so it can cache and scan HTTP over SOCKS.

SocksGate5 is an application-SOCKS firewall that inspects traffic at Layer 7 of the OSI model, the application layer. Because packets are inspected at OSI Layer 7, the firewall may check for protocol non-compliance and block specified content.

Dante is a circuit-level SOCKS server that can be used to provide convenient and secure network connectivity, requiring only that the host Dante runs on has external network connectivity.
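The socks5/socks5h distinction above is easiest to see in practice. As a brief usage sketch (host names are illustrative): an OpenSSH client can create a local SOCKS proxy with dynamic port forwarding, e.g. ssh -D 1080 user@gateway.example.com, after which cURL can tunnel a request through it with curl -x socks5h://127.0.0.1:1080 http://example.com/. With socks5h the proxy end resolves example.com; with socks5 the client resolves the name locally and sends only the IP address.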
Other programs providing a SOCKS server interface:

OpenSSH allows dynamic creation of tunnels, specified via a subset of the SOCKS protocol, supporting the CONNECT command.
PuTTY is a Win32 SSH client that supports local creation of SOCKS (dynamic) tunnels through remote SSH servers.
ShimmerCat is a web server that uses SOCKS5 to simulate an internal network, allowing web developers to test their local sites without modifying their /etc/hosts file.
Tor is a system intended to enable online anonymity. Tor offers a TCP-only SOCKS server interface to its clients.
Shadowsocks is a censorship-circumvention tool. It provides a SOCKS5 interface.

Clients

Client software must have native SOCKS support in order to connect through SOCKS. There are programs that allow users to circumvent this limitation:

Socksifiers

Socksifiers allow applications to access the network through a proxy without needing native support for any proxy protocol. The most common approach is to set up a virtual network adapter and appropriate routing tables that send traffic through the adapter.

Win2Socks, which enables applications to access the network through SOCKS5, HTTPS or Shadowsocks.
tun2socks, an open-source tool that creates virtual TCP TUN adapters from a SOCKS proxy. It works on Linux and Windows, and has a macOS port and a UDP-capable reimplementation in Go.
proxychains, a Unix program that forces TCP traffic through SOCKS or HTTP proxies on (dynamically linked) programs it launches. It works on various Unix-like systems.

Translating proxies

Polipo, a forwarding and caching HTTP/1.1 proxy server with IPv4 support. It is open source, runs on Linux, OpenWrt, Windows, Mac OS X, and FreeBSD, and almost any Web browser can use it.
Privoxy, a non-caching SOCKS-to-HTTP proxy.

Security

Because the protocol provides no encryption of the requests and packets exchanged between client and proxy, SOCKS is practically vulnerable to man-in-the-middle attacks and to eavesdropping on IP addresses, which in turn clears a way for censorship by governments.

References

External links

RFC 1928: SOCKS Protocol Version 5
RFC 1929: Username/Password Authentication for SOCKS V5
RFC 1961: GSS-API Authentication Method for SOCKS Version 5
RFC 3089: A SOCKS-based IPv6/IPv4 Gateway Mechanism
Draft-ietf-aft-socks-chap, Challenge-Handshake Authentication Protocol for SOCKS V5
SOCKS: A protocol for TCP proxy across firewalls, SOCKS Protocol Version 4 (NEC)

Internet protocols
Internet privacy software
Session layer protocols
246729
https://en.wikipedia.org/wiki/VIA%20C3
VIA C3
The VIA C3 is a family of x86 central processing units for personal computers designed by Centaur Technology and sold by VIA Technologies. The different CPU cores are built following the design methodology of Centaur Technology. In addition to x86 instructions, VIA C3 CPUs contain an undocumented Alternate Instruction Set allowing lower-level access to the CPU and, in some cases, privilege escalation.

Cores

Samuel 2 and Ezra cores

VIA Cyrix III was renamed VIA C3 with the switch to the advanced "Samuel 2" (C5B) core. The addition of an on-die L2 cache improved performance somewhat. As it was not built upon Cyrix technology at all, the new name was a logical step. To improve power consumption and reduce manufacturing costs, Samuel 2 was produced with 150 nm process technology.

The VIA C3 processor continued an emphasis on minimizing power consumption with the next die shrink to a mixed 130/150 nm process. "Ezra" (C5C) and "Ezra-T" (C5N) were only new revisions of the "Samuel 2" core, with some minor modifications to the bus protocol of "Ezra-T" for compatibility with Intel's Pentium III "Tualatin" cores. VIA enjoyed the lowest power usage in the x86 CPU market for several years. Performance, however, fell behind due to the lack of improvements to the design. Uniquely, the retail C3 CPU shipped inside a decorative tin.

Nehemiah cores

The "Nehemiah" (C5XL) was a major core revision. At the time, VIA's marketing efforts did not fully reflect the changes that had taken place. The company addressed numerous design shortcomings of the older cores, including the half-speed FPU. The number of pipeline stages was increased from 12 to 16 to allow for continued increases in clock speed. Additionally, it implemented the cmov instruction, making it a 686-class processor. The Linux kernel refers to this core as the C3-2. It also removed 3DNow! instructions in favour of implementing SSE. However, it was still based upon the aging Socket 370, running the single-data-rate front-side bus at just 133 MHz.

Because the embedded-system marketplace prefers low-power, low-cost CPU designs, traits the C3 fit rather well, VIA began targeting this segment more aggressively. Centaur Technology concentrated on adding features attractive to the embedded marketplace. An example built into the first "Nehemiah" (C5XL) core was the pair of hardware random number generators. (These generators are erroneously called "quantum-based" in VIA's marketing literature. Detailed analysis of the generator makes it clear that the source of randomness is thermal, not quantum.)

The "Nehemiah+" (C5P) (stepping 8) revision brought a few more advancements, including a high-performance AES encryption engine along with a notably small ball grid array chip package the size of a US 1 cent coin. At the time VIA also boosted the FSB to 200 MHz and introduced new chipsets such as the CN400 to support it. The new 200 MHz FSB chips are only available in BGA packages, as they are not compatible with existing Socket 370 motherboards. When this architecture was marketed, it was often referred to as the "VIA C5".

Technical information

Design methodology

While slower than x86 CPUs being sold by AMD and Intel, both in absolute terms and on a clock-for-clock basis, VIA's chips were much smaller, cheaper to manufacture, and lower power. This made them highly attractive in the embedded marketplace.
This also enabled VIA to continue to scale the frequencies of their chips with each manufacturing process die shrink, while competitive products from Intel (such as the P4 Prescott) encountered severe thermal management issues, although the later Intel Core generation of chips ran substantially cooler.

C3

Because memory performance is the limiting factor in many benchmarks, VIA processors implement large primary caches, large TLBs, and aggressive prefetching, among other enhancements. While these features are not unique to VIA, memory access optimization is one area where they have not dropped features to save die space.

Clock frequency is in general terms favored over increasing instructions per cycle. Complex features such as out-of-order instruction execution are deliberately not implemented, because they impact the ability to increase the clock rate, require a lot of extra die space and power, and have little impact on performance in several common application scenarios. The pipeline is arranged to provide one-clock execution of the heavily used register–memory and memory–register forms of x86 instructions. Several frequently used instructions require fewer pipeline clocks than on other x86 processors. Infrequently used x86 instructions are implemented in microcode and emulated. This saves die space and reduces power consumption, while the impact upon the majority of real-world application scenarios is minimized. These design guidelines derive from the original RISC advocates, who stated that a smaller set of instructions, better optimized, would deliver faster overall CPU performance. However, as it makes heavy use of memory operands, both as source and destination, the C3 design itself does not qualify as RISC.

Business

Contracts

VIA's embedded platform products have reportedly (2005) been adopted in Nissan's car series, the Lafesta, Murano, and Presage. These and other high-volume industrial applications began generating significant profits for VIA as the small form factor and low power advantages closed embedded deals.

Legal issues

On the basis of the IDT Centaur acquisition, VIA appears to have come into possession of at least three patents, which cover key aspects of processor technology used by Intel. On the basis of the negotiating leverage these patents offered, in 2003 VIA arrived at an agreement with Intel that allowed for a ten-year patent cross-license, enabling VIA to continue to design and manufacture x86-compatible CPUs. VIA was also granted a three-year period of grace in which it could continue to use Intel socket infrastructure.

See also

List of VIA C3 microprocessors
List of VIA Eden microprocessors
List of VIA microprocessors

References

Further reading

External links

VIA-C3-Nehemiah review
VIA C3 Gold CPU - 1 GHz
VIA's Small & Quiet Eden Platform
VIA C3 1 GHz Processor Review
BlueSmoke - Review : VIA C3 Processor
http://www.cpushack.com/VIA.html
https://web.archive.org/web/20070717014946/http://www.sandpile.org/impl/c5.htm
https://web.archive.org/web/20060615180950/http://www.sandpile.org/impl/c5xl.htm
VIA C3 Kernel for FreeBSD

C3
Embedded microprocessors
246953
https://en.wikipedia.org/wiki/Tom%20Clancy%27s%20Splinter%20Cell%20%28video%20game%29
Tom Clancy's Splinter Cell (video game)
Tom Clancy's Splinter Cell is a 2002 stealth video game developed by Ubi Soft Montreal and published by Ubi Soft. It is the first game in the Splinter Cell series. Endorsed by author Tom Clancy, it follows the activities of NSA black ops agent Sam Fisher (voiced by Michael Ironside). The game was inspired by the Metal Gear series and games created by Looking Glass Studios, and was built using Unreal Engine 2. Originally released as an Xbox exclusive in 2002, the game was ported to Microsoft Windows, PlayStation 2, GameCube and Mac OS X in 2003. A side-scrolling adaptation developed by Gameloft was also released in 2003 for Game Boy Advance, mobile phones and N-Gage (the latter with the subtitle Team Stealth Action). A remastered high-definition version was released on PlayStation 3 in September 2011, and the Xbox version was made available for Xbox One via backward compatibility in June 2019.

Splinter Cell received critical acclaim on release and is considered one of the best video games ever made. The success of the game led to multiple sequels, starting with Pandora Tomorrow in 2004, and a series of novels written under the pseudonym David Michaels. A remake of Splinter Cell is currently in development by Ubisoft Toronto.

Gameplay

The primary focus and hallmark of Splinter Cell's gameplay is stealth, with strong emphasis on light and darkness. The player is encouraged to move through the shadows for concealment whenever possible. The game displays a "light meter" that reflects how visible the player character is to enemies, and provides night vision and thermal vision goggles to help the player navigate in darkness or smoke/fog, respectively. The light meter functions even when the night vision goggles are activated, and it is possible to destroy lights, thus reducing the chances of exposure significantly.

Splinter Cell strongly encourages the use of stealth over brute force. Although Sam Fisher is usually equipped with firearms, he carries limited ammunition and is not frequently provided with access to additional ammo. In addition to Fisher's firearms, the player begins most missions with a limited supply of less-than-lethal weapons: a suppressed FN Five-Seven sidearm is provided for every mission, as well as a suppressed FN F2000 assault rifle during some missions, which includes a telescopic sight and a launcher for some of the less-lethal devices such as ring airfoil projectiles, "sticky shockers" and CS gas grenades. The weapon can even fire a camera that sticks onto surfaces, allowing Fisher to covertly perform surveillance from a safe area.

Flexibility of movement is a focal point of Splinter Cell. Fisher can sneak up on enemies from behind to grab them, allowing interrogation, quiet incapacitation, or use as a human shield. Fisher is acrobatic and physically adept, and has a variety of maneuvers including the ability to mantle onto and climb along ledges, hang from pipes and perform a "split jump" in narrow spaces to mantle up a steep wall.

Plot

In August 2004, former U.S. Navy SEAL officer Sam Fisher joins the National Security Agency as part of its newly formed division "Third Echelon", headed by his old friend Irving Lambert.
Two months later, Fisher, aided by technical expert Anna "Grim" Grimsdóttír and field runner Vernon Wilkes Jr., is sent to Georgia to investigate the disappearance of two CIA officers: Alice Madison, who had been installed in the new government of Georgian president Kombayn Nikoladze, who seized power in a bloodless coup d'état following the assassination of his predecessor earlier in the year; and Robert Blaustein, who was sent in to find her. Fisher discovers both were murdered on Nikoladze's orders by former Spetsnaz member Vyacheslav Grinko. Further investigation soon reveals that the CIA agents had discovered that Nikoladze is waging an ethnic cleansing campaign across Azerbaijan with Georgian commandos. In retaliation, NATO forces enter Azerbaijan, prompting Nikoladze to go underground.

Third Echelon soon discovers a data exchange taking place between a Caspian oil rig and the Georgian presidential palace, and assigns Fisher to recover the data. Narrowly avoiding an airstrike by NATO, Fisher recovers a technician's laptop with files on an item called "The Ark", as well as evidence that there is a mole in the CIA. Shortly after this, North America is hit by a massive cyber-warfare attack directed at military targets, to which Nikoladze claims responsibility before declaring war on the United States and its allies.

Investigating the leak, Fisher discovers that a staff member's back-up of data to an unsecured laptop had been exploited through a Virginia-based network owned by Kalinatek, Inc. After Grim's efforts spook the Georgians, Fisher is sent to the company's Virginia offices to recover an encryption key from Ivan, a technician in the building, as Georgian-hired mafiosos attempt to liquidate all the incriminating evidence. Wilkes is mortally wounded extracting Fisher and dies soon afterwards. With the encryption key, the NSA discovers that Nikoladze has been using a network of unconventional relays to communicate with Georgian military cells.

Tracing the full relay network back to the Chinese embassy in Yangon, Myanmar, Fisher is sent in discreetly to investigate. After rescuing captured U.S. soldiers and high-ranking Chinese diplomats from being executed on a live web broadcast, Fisher learns from them that Nikoladze is working alongside a rogue collective of Chinese soldiers led by General Kong Feirong. After killing Grinko in a firefight when he attempts to kill the Americans and Chinese, Fisher moves to capture Feirong for information on Nikoladze's location. After Fisher prevents him from committing suicide in a drunken stupor, Feirong reveals that Nikoladze has fled back to Georgia in order to activate a device codenamed "The Ark".

Infiltrating the Georgian presidential palace, where Nikoladze and new Georgian president Varlam Cristavi are, Fisher attempts to recover the key to the Ark, which he learns is in fact a nuclear suitcase bomb that has been placed somewhere in the United States. Fisher corners Nikoladze, who bargains to give up the Ark key in exchange for safe passage out of Georgia. After Cristavi's forces arrive and escort Nikoladze to safety, Lambert rescues Fisher from execution by creating a diversion via a power blackout. Discovering that Nikoladze is offering the Ark's location in exchange for protection, Fisher assassinates him. The National Guard eventually locates the bomb and evacuates an apartment complex in Hope's Gate, Maryland under the pretense of a gas leak before secretly recovering the weapon.
Despite the war being averted, Nikoladze's death sparks international backlash due to the suspicious circumstances surrounding it. Watching the U.S. president give a speech on the end of the crisis, Fisher receives a secure phone call from Lambert for another assignment.

Development

The game originally started development as a sci-fi, James Bond-type game called The Drift, which Ubisoft intended to be "a Metal Gear Solid 2 killer". The game's producer Mathieu Ferland said, "Metal Gear Solid was a huge inspiration for Splinter Cell." The game's designer and writer Clint Hocking also said Splinter Cell "owes its existence to" the Metal Gear series, while noting he was also influenced by System Shock, Thief and Deus Ex. Because the development team was aiming for a Teen ESRB rating, the team tried to minimize the level of violence. The soundtrack for the game was composed by English composer Michael Richard Plowman.

Version differences

The PC version of Tom Clancy's Splinter Cell is fairly closely based on the original Xbox version. Both were made by Ubisoft Montreal. The GameCube and PlayStation 2 versions, released later, were developed by Ubisoft Shanghai and are similar to each other, but have many small changes compared to the originals, with the result that they are generally easier. Some doors are moved around, guards are less likely to notice gunshots, etc.

Each version of the game has some exclusive features. The Xbox and Windows versions have three new downloadable missions which involve a Russian nuclear sub. The PlayStation 2 version includes an exclusive level between Kalinatek and the Chinese Embassy, which takes place in a nuclear power plant on the Kola Peninsula, new cinematics, a new intro cinematic with original music by the Prague Orchestra, and many behind-the-scenes interviews and documentaries about both the new intro and the game itself. The GameCube version includes the same cinematics, uses the Game Boy Advance link cable to give players a real-time overhead map, and adds a new sticky-bomb weapon and progressive scan (480p) support. Additionally, both the GameCube and PlayStation 2 versions include new binoculars items.

A PlayStation 3 version was announced as part of the Splinter Cell Trilogy, which was released in September 2011 as part of Sony's Classics HD series. It was revealed on the PlayStation Blog that it would be ported from the PC version, because it had more detail and more content than the PlayStation 2 version. It was released on the European PlayStation Network on August 10, 2011. The PlayStation 3 version does not include the downloadable bonus missions that the Xbox and PC versions had.

Reception

Tom Clancy's Splinter Cell received positive reviews upon the game's release. GameSpot's Greg Kasavin said that Splinter Cell has "hands down the best lighting effects seen in any game to date." GameSpot later named Splinter Cell the second-best Xbox game of November 2002, behind MechAssault. IGN likewise praised the game for its graphics and lighting, while also praising how it evolved Metal Gear Solid's third-person stealth-action gameplay. Both praised the game's audio, noting that Michael Ironside as Sam Fisher's voice suited the role perfectly. Scott Alan Marriott of AllGame gave the Xbox version four-and-a-half stars out of five and called it "one of the few games to elicit a feeling of suspense without resorting to shock techniques found in survival horror titles like Resident Evil." Criticism of the game was also present.
Greg Kasavin said that Splinter Cell is "sometimes reduced to frustrating bouts of trial and error." In addition, Kasavin criticized the game's cutscenes, saying that they are not up to par with the rest of the game's graphics.

Non-video-game publications also gave the game favorable reviews. Entertainment Weekly gave the Xbox version an A and called it "wickedly ingenious". The Village Voice gave the PlayStation 2 version eight out of ten and said, "If this game were any more realistic, you'd have to hold in your farts." The Cincinnati Enquirer gave the Game Boy Advance version all four stars and said that "While it lacks 3-D graphics and an impressive use of lighting and shadows found in its predecessors, the stealthy action game still captures the thrill of modern espionage."

Sales

Tom Clancy's Splinter Cell was a commercial success. Pre-orders reached 1.1 million units, and the game sold 480,000 copies worldwide by the end of 2002, after three weeks on sale. France accounted for 60,000 units in the initial three weeks. By early January 2003, sales in North America had surpassed 1 million units, while Europe accounted for 600,000 units. By March 31, 2003, its sales had risen to 3.6 million copies. Splinter Cell sold 4.5 million copies by June and 5 million by the end of September, and its sales reached 6 million units by the end of March 2004.

By July 2006, the Xbox version of Splinter Cell had sold 2.4 million copies and earned $62 million in the United States alone. Next Generation ranked it as the 10th highest-selling game launched for the PlayStation 2, Xbox or GameCube between January 2000 and July 2006 in that country. It remained the best-selling Splinter Cell game in the United States as of July 2006. The game's PlayStation 2 and Xbox versions each received a "Platinum" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), given to titles that sell at least 300,000 copies in the United Kingdom. Splinter Cell's computer version received a "Silver" sales award from ELSPA, indicating sales of at least 100,000 copies in the United Kingdom.

Awards

E3 2002 Game Critics Awards: Best Action/Adventure Game
3rd Annual Game Developers Choice Awards: Excellence in Writing
6th Annual Interactive Achievement Awards: Console Game of the Year, Outstanding Achievement in Game Play Engineering
IGN Best of 2002: Xbox Game of the Year, Xbox Best Graphics
2003 Spike Video Game Awards: Best Handheld Game

Splinter Cell was a runner-up for Computer Games Magazine's list of the 10 best games of 2003. It won GameSpot's 2002 "Best Graphics (Technical)" and "Best Action Adventure Game" awards among Xbox games, and was nominated in the "Best Sound", "Best Graphics (Artistic)" and overall "Game of the Year on Xbox" categories.

Nominations

3rd Annual Game Developers Choice Awards: Game of the Year, Original Game Character of the Year, Excellence in Game Design, Excellence in Level Design, and Excellence in Programming
6th Annual Interactive Achievement Awards: Innovation in Console Gaming, Outstanding Achievement in Sound Design, Outstanding Achievement in Visual Engineering, and Console Action/Adventure Game of the Year
IGN Best of 2002: Overall Game of the Year

Remake

On December 15, 2021, Ubisoft announced that a remake of the game is under development at Ubisoft Toronto using Snowdrop, the game engine behind Tom Clancy's The Division and Avatar: Frontiers of Pandora. Much of the staff involved in the original game is involved in the remake.
Notes

References

External links

Official website via Internet Archive

2002 video games
Action-adventure games
Game Boy Advance games
Interactive Achievement Award winners
MacOS games
Mobile games
N-Gage games
GameCube games
PlayStation 2 games
PlayStation 3 games
Stealth video games
Tom Clancy games
Ubisoft games
Video games developed in Canada
Video games developed in France
Video games developed in China
Video games set in 2004
Video games set in 2005
Video games set in Azerbaijan
Video games set in Georgia (country)
Video games set in Russia
Video games set in Myanmar
Video games set in Virginia
Windows games
Xbox games
Video games with alternative versions
247137
https://en.wikipedia.org/wiki/Sharp%20Zaurus
Sharp Zaurus
The Sharp Zaurus is the name of a series of personal digital assistants (PDAs) made by Sharp Corporation. The Zaurus was the most popular PDA during the 1990s in Japan and was based on a proprietary operating system. The first Sharp PDA to use the Linux operating system was the SL-5000D, running the Qtopia-based Embedix Plus. The Linux Documentation Project considers the Zaurus series to be "true Linux PDAs" because their manufacturers install Linux-based operating systems on them by default. The name derives from the common suffix applied to the names of dinosaurs.

History

In September 1993, Sharp introduced the PI-3000, the first in the Zaurus line of PDAs, as a follow-on to Sharp's earlier Wizard line of PDAs (the Wizard also influenced Apple's Newton). Featuring a black-and-white LCD screen, handwriting recognition, and optical communication capabilities among its features, the Zaurus soon became one of Sharp's best-selling products. The PI-4000, released in 1994, expanded the Zaurus' features with a built-in modem and facsimile functions. This was succeeded in 1995 by the PI-5000, which had e-mail and mobile phone interfaces, as well as PC linking capability. The Zaurus K-PDA was the first Zaurus to have a built-in keyboard in addition to handwriting recognition; the PI-6000 and PI-7000 brought additional improvements. In 1996 Sharp introduced the Sharp Zaurus ZR-5800. It used the same compact design, ports and pointing device as the previous Zaurus models; the changes were mostly in the ROM. It came with 2 MB RAM and a backlit 320x240 LCD display.

During this time, Sharp was making significant advances in color LCD technology. In May 1996, the first color Zaurus was released; the MI-10 and MI-10DC were equipped with a five-inch (12.7 cm) color thin-film transistor (TFT) LCD screen. This model had the ability to connect to the internet, and had a built-in camera and audio recorder. Later that year, Sharp developed a forty-inch (100 cm) TFT LCD screen, the world's largest at the time. In December, the MI-10/10DC Zaurus was chosen as the year's best product by Information Display Magazine in the United States. Sharp continued to make advancements in display technology; the Zaurus gained additional multimedia capabilities, such as video playback, with the introduction of the MI-E1 in Japan in November 2000. The MI-E1 was also the first Zaurus to support both Secure Digital and Compact Flash memory cards, a feature which would become standard on future models as well.

Although the MI series sold well in Japan, it was never released in either the USA or Europe, and the Japanese user interface was never translated into any other language. The machines released outside Japan were the Linux-based SL series, the first of which was the SL-5000D "developer edition". This was shortly followed by the SL-5500; both used 'Embedix' - an embedded version of the Linux operating system developed by Lineo - combined with Qtopia, the Qt toolkit-based embedded application environment developed by Trolltech. The development of the MI series in Japan continued for a while, but the MI-E25DC was officially declared to be the last MI-series Zaurus.

Sharp continued development of the SL series in Japan, releasing the SL-C700, C750, C760 and C860 models, which all feature 640x480 VGA screen resolution. They are all based on faster 400 MHz Intel XScale technology, although the SL-C700 was flawed and its apparent speed was the same as that of the 206 MHz SL-5500.
All four of the SL-C models are clamshell-type devices with the unusual ability to rotate the screen. This allows the device to be used in landscape mode with the keyboard, much like a miniature notebook PC, or in portrait mode as a PDA.

Sharp introduced a very different device from the clamshells in the form of the SL-6000 in early 2004; the SL-6000L (Wi-Fi only, no Bluetooth) was sold in North America, the last and only device since the 5xxx series to be officially sold outside Japan. It returned to the slider form of the 5xxx, but with a VGA display; a slider with a few key buttons covered a thumbboard. There was a joint project with IBM; the 6000 did not gain mass popularity, and Amazon sold off their remaindered stock.

In October 2004 Sharp announced the SL-C3000 - the world's first PDA with an integrated hard disk drive (preceding the Palm LifeDrive). It featured a similar hardware and software specification to the earlier C860 model; the key differences were that it had only 16 MB of flash memory yet gained an internal 4 GB Hitachi microdrive and a USB host port, and "lost" the serial port (in some cases the components were not fitted to the motherboard or were incapable of driving the regular serial adaptor cables). The keyboard feel and layout changed somewhat, and most owners preferred it over the 760/860.

In March 2005 the C3000 was joined by the SL-C1000, which returned to the traditional 128 MB of flash memory but lost the internal microdrive. The C1000 was cheaper, lighter, and faster in execution due to running from flash memory, but required the user to "waste" the SD or CF card slot on a memory card for mass storage; at the time the largest card supported was 1 GB. The C1000 cannot be upgraded to fit an internal microdrive because vital components were missing, but the space can be used to fit internal Bluetooth and Wi-Fi modules using the USB host facility.

In June 2005, Sharp released the SL-C3100, which had the flash capacity of the C1000 yet also had the microdrive, and proved a very popular model indeed. The 1000, 3000 and 3100 models were overclockable, boosting the device's ability to play back video more smoothly.

In March 2006 the latest model launched, predictably labelled the SL-C3200. It is basically an SL-C3100 but with the newer 6 GB Hitachi microdrive and another tweak to the case colours. The Intel PXA270 CPU is a later variant, which some regarded as inferior because it cannot be overclocked so highly. The kernel gained a vital tweak to the Sharp proprietary SD/MMC module that allowed 4 GB SD cards to be used (a tweak quickly borrowed by 3000 and 3100 owners). The software package gained text-to-speech software from Nuance Communications and an upgraded dictionary.

While the SL series devices have long been sold only in Japan, there are companies in Japan who specialise in exporting them worldwide; sometimes without modifying them at all, sometimes with an English conversion available at extra cost. Not all Zaurus models came from Sharp with universal (100/110/240 V) power supplies (the Zaurus takes a regulated 5 V/1 A supply), so either an additional or an exchanged power adaptor may be needed, and not all exporters provide this by default. When buying directly from an exporter in Japan, the buyer is liable for import duties and taxes, and attempting to avoid them can be a criminal offense.
There are also companies in the US, UK and Germany acting as unofficial resellers; one notable example is Trisoft, who prepare and certify the devices to "CE" standard compliance. Since there is no official export channel from Japan, Sharp offers no warranty or repair service outside Japan, so foreign buyers are dependent on their chosen reseller to handle repairs, usually by sending the unit to the reseller's agent in Japan, who acts as if the device was owned and used in Japan in order to have it repaired by Sharp before sending it back to the owner. Whilst Zauruses are actually quite robust devices, due to their miniaturization they are not easily repairable by casual electronics hobbyists.

In January 2007, it was reported that Sharp would discontinue production of the Zaurus line after February 2007. Later, in March, a European supplier tried to buy a batch of Zauruses as demand was still strong, and noticed that they were all manufactured after Sharp's original cut-off date; however, Sharp was not able to explain its plans. Sharp's later units were the WS003SH and WS004SH which, whilst adding wireless and cellular phone and data features, ran the Windows Mobile 5.0 operating system/application suite.

Models

Personal Information (PI) series

Pi² T, proof-of-concept model presented in April 1992
PI-3000, the first model, introduced to the Japanese market on October 1, 1993
PI-4000/FX, second generation with ink and fax capabilities, on sale in Japan June 1994
PI-5000/FX/DA, first model capable of syncing data to a personal computer, going on sale in November 1994
PI-4500, introduced in January 1995
PI-6000/FX, featuring new handwriting recognition software, on sale in Japan August 1995
PI-6000DA, adding a digital adapter for cellular phones, introduced on December 12, 1995
PI-7000, dubbed AccessZaurus (アクセスザウルス), sports a built-in modem, introduced in February 1996. Note: confusingly, Sharp made another unit called the "PI-7000 ExpertPad", which was a Newton-based device, not a Zaurus.
PI-6500, introduced to the Japanese market with a list price of 55,000 yen on November 22, 1996. Measuring 147x87x17 mm and weighing 195 g including the batteries, it sports a 239x168 dot matrix display and 715 KB of user-addressable memory.
PI-8000, went on sale on January 24, 1997 with a list price of 80,000 yen. It featured a 319x168 dot matrix display and 711 KB of user-addressable memory, measuring 157 x 90 x 17 mm and weighing 215 g including batteries.
PI-6600, the last AccessZaurus, with a 239 x 168 dot matrix display, measuring 147 x 87 x 17 mm and weighing 195 g including batteries. It went on sale in Japan on September 25, 1997.

K-PDA (ZR) series

ZR-3000, 320x240 touch screen, 1 MB RAM
ZR-3500, similar to the ZR-3000, with a new internal 14.4/9.6 kbit/s modem
ZR-5000/FX, a clam-shell model only sold outside Japan, going on sale in January 1995
ZR-5700
ZR-5800, having a touch screen and 2 MB of RAM

MI series

MI-10DC/10, nicknamed ColorZaurus, was the first model to have a color display. The DC model featured a digital camera and was initially priced at 155,000 yen; the MI-10 was listed at 120,000 yen. Both models went on sale on June 25, 1996.
MI-506DC/506/504, PowerZaurus
MI-110M/106M/106, ZaurusPocket
MI-610/610DC, PowerZaurus
MI-310, ZaurusColorPocket
MI-EX1, Zaurus iCRUISE - the first PDA with a 640x480 resolution display
MI-C1-A/S, PowerZaurus
MI-P1-W/A/LA, Zaurus iGeti
MI-P2-B, Zaurus iGeti - more internal software, more Flash
MI-P10-S, Zaurus iGeti - larger RAM and Flash than P1/P2
MI-J1, Internet Dictionary Zaurus
MI-E1, first vertical display model - mini keyboard
MI-L1, stripped-down E1 - lacks display backlight
MI-E21, enhanced version of E1 - double the RAM and ROM size
MI-E25DC, an MI-E21 with an internal 640 x 480 digital camera

Other MI series related devices

BI-L10, Business Zaurus - mono screen, 4 Mb, IRDA, network adapter
MT-200, Communications pal - keyboard input, limited I/O
MT-300, Communications pal - 4 MB flash, restyled
MT-300C, Communications pal - CDMAone version
Browser Board, MT-300 with NTT DoCoMo specific software

Linux-based SL series

SL-5000D, a developer edition of the SL-5500, containing 32 MB of RAM (2001)
SL-5500 (Collie), the first new Zaurus to be sold outside Japan, based on the Intel SA-1110 StrongARM processor running at 206 MHz, with 64 MB of RAM and 16 MB of Flash, a built-in keyboard, CompactFlash (CF) slot, Secure Digital (SD) slot, and infrared port (2002)
SL-A300 (Discovery), an ultra-light PDA with no keyboard, sold only in Japan (2003)
SL-5600 (Poodle), the successor to the SL-5500, with greater processing capability, increased RAM and a built-in speaker and microphone. Based on the Intel XScale 400 MHz processor; however, some units had a cache bug on the PXA-250 processor (easily fixed). Popular ROMs for the SL-5600 include Watapon, Synergy, and OpenZaurus. (2002)
SL-B500, name of the SL-5600 in Japan
SL-C700 (Corgi), a clam-shell model and the first PDA to use Sharp's "System LCD", sold only in Japan (2003)
SL-C750 (Shepherd), an improved version of the SL-C700 with longer battery life, a faster processor and updated software, sold only in Japan (2003)
SL-C760 (Husky), an improved version of the SL-C700 with double the internal flash storage of the SL-C750 and a larger battery, sold only in Japan (2004)
SL-C860 (Boxer), similar to the SL-C760; it contains a software upgrade which allows it to be recognised as a USB storage device and has built-in English-Japanese translation software, sold only in Japan (2004)
SL-6000 (Tosa) (2004), the successor to the SL-5600, available in 3 versions:
SL-6000N, 4" VGA display, Intel XScale PXA255 400 MHz processor, 64 MB flash memory, 64 MB SDRAM, CF and SD slots, and IR port. Built-in microphone, speaker, and USB host port. There appears to be a version called HC-6000N equipped with Microsoft Windows Mobile 2003 Second Edition, and a handheld from Hitachi called FLORA-ie MX1 with the same hardware; both were only available in Japan.
SL-6000L, same as the SL-6000N, also with built-in 802.11b Wi-Fi
SL-6000W, same as the SL-6000N, also with built-in 802.11b Wi-Fi and Bluetooth
SL-C3000 (Spitz), similar to the SL-C860, but with a USB host port to allow the connection of USB devices such as keyboards and mice. It also features an Intel XScale PXA270 416 MHz CPU. While the model features only 16 MB of flash storage, it has a 4 GB Hitachi HDD and was the first PDA to feature a hard disk. Sold only in Japan.
SL-C1000 (Akita), similar to the SL-C3000, but with 128 MB of Flash ROM instead of an HDD
SL-C3100 (Borzoi), similar to the SL-C3000; Flash ROM increased to 128 MB, still with the 4 GB HDD
SL-C3200 (Terrier), the latest clam-shell model, released on March 17, 2006, similar to the SL-C3100. The HDD has been increased to 6 GB, and it comes with an updated dictionary, text-to-speech software from Nuance Communications and a TOEIC (Test of English for International Communication) test.

Operating systems

These are frequently called 'ROMs' in the community because the Zaurus' OS is usually stored in embedded flash memory, and they are installed using a flashing tool. There is also a special "rescue" mode in NOR flash (or P2ROM in newer models) in all Zauruses since the 5xxx series which allows recovery from a corrupted OS.

OpenZaurus, which uses the OPIE or GPE graphical user interfaces and is designed for the power user. OpenZaurus does not include the proprietary software that comes with Sharp's distribution. OpenZaurus development has been dropped in favour of Ångström, which is also based on the OpenEmbedded build environment but supports a larger range of devices, not limited to Zauri.

Ångström distribution, the replacement for OpenZaurus. OpenZaurus is no longer being developed for, as its developers now work on the Ångström distribution. Ångström is an OpenEmbedded-build-system-based Zaurus distribution whose current images are a console ROM and an X11 GPE-based ROM.

pdaXrom, a distribution based on the X graphics system and the matchbox/openbox user interface.

Cacko, an alternative to the original Sharp ROM. It is based on the same Qt graphics system with as many underlying parts of the OS upgraded as possible, yet still maintains full compatibility and allows the proprietary Sharp applications to be run.

In August 2007 a port of Gentoo Linux was started which offered some promise.

Zubuntu, based on the ARM port of Ubuntu, for the clam-shell C3x00 models, the SL-6000 et al., was started in 2008.

Arch Linux ARM was ported in 2015 to the C3x00 models.

For the Sharp and Cacko ROMs, there are third-party and somewhat experimental kernels such as "Tetsu's" (a Japanese Zaurus expert), which offer interesting optimisations and drivers for unusual hardware. It is possible to replace only the Linux kernel, which can give better performance while maintaining compatibility and retaining the installed software that comes with a "stock" ROM. As well as the choice of GUI (Qt/Qtopia, X11 + matchbox, X11 + E17, etc.), one key difference is whether the kernel was built using the ARM standard EABI or not, and whether it uses software or hardware floating point (code using hardware floating-point instructions is actually slower because the hardware doesn't support them, so those instructions cause an exception which then has to be handled by the kernel, with noticeable overhead).

There was a port of OpenBSD for several Zaurus models. The port is available on the SL-C3000, SL-C3100, and SL-C3200, with development having continued in order to expand support to the C860 and C1000. This port of OpenBSD does not, however, replace the original operating system entirely, nor is it made available as a ROM image; instead it uses the original Linux install as a bootloader and installs the same way OpenBSD would on any other platform. There is also a NetBSD port in development, based on the work from OpenBSD. In early September 2016, the OpenBSD Project ceased support for the Zaurus port of their operating system.

Software

With the switch to the Linux operating system, the Zaurus became capable of running variations of a wide variety of proprietary and open-source software, including web and FTP servers, databases, and compilers.
Developers have created several replacement Linux distributions for the Zaurus. Software provided by Sharp includes basic PDA packages such as a datebook, addressbook, and to-do list. These PIM applications are fairly unsophisticated, and a number of individuals and groups have developed alternatives. One popular - and free - alternative that runs on the Sharp ROM and OpenZaurus, as well as Windows and Linux, is the KDE PIM/Platform-independent set of applications. KDE PIM/PI is based on PIM applications from the KDE desktop suite for Linux. KDE PIM/PI includes KOrganizer/Platform-independent (or KOPI), KAddressbook/Platform-independent (or KAPI), K-OpieMail/pi (or OMPI), K-Phone/pi (kppi) and PwM/PI, a password manager with strong encryption. In addition to standard PDA applications, there are many programs available that are more commonly associated with desktop and laptop computers. Among these are a selection of office programs, web browsers, media applications and many others.

References

External links

Official Japan Zaurus Site

Personal digital assistants
Linux-based devices
Embedded Linux
Zaurus
Qt (software)
247158
https://en.wikipedia.org/wiki/Mek
Mek
Mek or Mek may refer to: Mek people, an indigenous tribe of West Papua, Indonesia Mek languages, a family of Papuan languages spoken by the Mek peoples Mek (comics), a comic mini series by Warren Ellis MEK Compound, in Fallujah, Iraq, a compound used by the U.S. military from 2003 to 2009 Master encryption key, a type of key in cryptography Methyl ethyl ketone or butanone, a solvent, used also to weld some plastics Mitogen-activated protein kinase kinase, an important enzyme in biochemical MAPK/ERK pathways Mu Epsilon Kappa Society, an organization of anime clubs Meelo Evaru Koteeswarudu, a Telugu television show Magyar Elektronikus Könyvtár or Hungarian Electronic Library, a digital library Mojahedin-e Khalq or People's Mujahedin of Iran, an exiled Iranian organization Mobile Einsatzkommandos, German police special units Mek, variant of Makk, a royal title used in the Sudan Language and nationality disambiguation pages
247381
https://en.wikipedia.org/wiki/Null%20cipher
Null cipher
A null cipher, also known as a concealment cipher, is an ancient form of encryption where the plaintext is mixed with a large amount of non-cipher material. Today it is regarded as a simple form of steganography, which can be used to hide ciphertext.

Classical cryptography

In classical cryptography, a null is intended to confuse the cryptanalyst. In a null cipher, the plaintext is included within the ciphertext and one needs to discard certain characters in order to decrypt the message. Most characters in such a cryptogram are nulls, only some are significant, and some others can be used as pointers to the significant ones. Here is an example null cipher message, sent by a German spy during World War I:

PRESIDENT'S EMBARGO RULING SHOULD HAVE IMMEDIATE NOTICE. GRAVE SITUATION AFFECTING INTERNATIONAL LAW. STATEMENT FORESHADOWS RUIN OF MANY NEUTRALS. YELLOW JOURNALS UNIFYING NATIONAL EXCITEMENT IMMENSELY.

Taking the first letter of every word reveals the hidden message "Pershing sails from N.Y. June I".

A similar technique is to hide entire words, as in a seemingly innocent message written by a prison inmate but deciphered by the FBI: taking only every fifth word of the message, the FBI reconstructed a hidden text recommending a "hit" on someone.

Other options include positioning the significant letters next to or at certain intervals from punctuation marks or particular characters. Historically, users of concealment ciphers often applied substitution and transposition ciphers to the data prior to concealment. For example, Cardinal Richelieu is said to have used a grille to write secret messages, after which the blank spaces were filled out with extraneous matter to create the impression of a continuous text.

Usage

In general, it is difficult and time-consuming to produce covertexts that seem natural and would not raise suspicion. If no key or actual encryption is involved, the security of the message relies entirely on the secrecy of the concealment method. Null ciphers in modern times are used by prison inmates in an attempt to have their most suspicious messages pass inspection.

See also

Transposition cipher
Substitution ciphers
Acrostic

References

Classical ciphers
Steganography
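As an illustration of the first-letter scheme described in this article, here is a minimal Python sketch; the covertext is the World War I example quoted above, and the only "key" is the decoding rule (take the first letter of each word):

    # A minimal sketch: decoding a first-letter null cipher.
    covertext = ("PRESIDENT'S EMBARGO RULING SHOULD HAVE IMMEDIATE NOTICE. "
                 "GRAVE SITUATION AFFECTING INTERNATIONAL LAW. STATEMENT "
                 "FORESHADOWS RUIN OF MANY NEUTRALS. YELLOW JOURNALS "
                 "UNIFYING NATIONAL EXCITEMENT IMMENSELY.")

    hidden = "".join(word[0] for word in covertext.split())
    print(hidden)  # PERSHINGSAILSFROMNYJUNEI

Note that nothing here is encrypted in the cryptographic sense; anyone who guesses the concealment rule can read the message, which is why the article stresses that security rests entirely on the secrecy of the method.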
248931
https://en.wikipedia.org/wiki/Oe
Oe
Oe or OE may refer to:

Education

Old Edwardian (OE), a former pupil of various schools named after a King Edward or St. Edward
Old Etonian (OE), a former pupil of Eton College, England
Olathe East High School, a high school in Olathe, Kansas, US

Language and writing

Old English, the English language spoken in the Early Middle Ages
Œ or œ, a ligature of o and e used in the modern French and medieval Latin alphabets
Oe (digraph)
Open-mid front rounded vowel or Ö, a character sometimes representing 'oe', appearing in some Germanic, Turkic, and Uralic languages
Ø, a Northern European (Danish, Faroese, Norwegian) or Sami vowel, representing œ, the 'oe' diphthong, etc.
Ө, a letter in the Cyrillic alphabet

People

Kenzaburō Ōe, a major Japanese writer

Places

Oe, Estonia, a village of Estonia
Ōe, Yamagata, a town of Japan
Oe District, Tokushima, a former district of Japan
Ōe, Kyoto, a former town of Japan
Oe (Attica), a town of ancient Attica, Greece
Oe, a shortcut for the Czech city Otrokovice

Science and technology

°Oe, a measurement on the Oechsle scale for the density of grape must
Oersted (Oe), a unit of magnetic field strength
On30 or Oe, a model railway gauge
OpenEmbedded (OE), a Linux-based embedded build system
Opportunistic encryption (OE), a means to combat passive wiretapping
Outlook Express, a former email program of Microsoft
Ophryocystis elektroscirrha, a parasite of monarch and queen butterflies

Other uses

Ōe (surname), a Japanese surname
Ordem dos Engenheiros (OE), a Portuguese order of engineers
Order of Excellence of Guyana, the highest honour of Guyana
Overseas experience (OE), a New Zealand term for extended working holidays
Okean Elzy, a Ukrainian rock band
OE, the aircraft registration prefix for Austrian aircraft
0e (disambiguation), listing uses with the number nought
Odakyū Enoshima Line
249854
https://en.wikipedia.org/wiki/Ampex
Ampex
Ampex is an American electronics company founded in 1944 by Alexander M. Poniatoff as a spin-off of Dalmo-Victor. The name AMPEX is an acronym, created by its founder, which stands for Alexander M. Poniatoff Excellence. Today, Ampex operates as Ampex Data Systems Corporation, a subsidiary of Delta Information Systems, and consists of two business units. The Silicon Valley unit, known internally as Ampex Data Systems (ADS), manufactures digital data storage systems capable of functioning in harsh environments. The Colorado Springs, Colorado unit, referred to as Ampex Intelligent Systems (AIS), serves as a laboratory and hub for the company's line of industrial control systems, cyber security products and services, and its artificial intelligence/machine learning technology.

Ampex's first great success was a line of reel-to-reel tape recorders developed from the German wartime Magnetophon system at the behest of Bing Crosby. Ampex quickly became a leader in audio tape technology, developing many of the analog recording formats for both music and movies that remained in use into the 1990s. Starting in the 1950s, the company began developing video tape recorders, and later introduced the helical scan concept that made home video players possible. They also introduced multi-track recording, slow-motion and instant-playback television, and a host of other advances.

Ampex's tape business was rendered obsolete during the 1990s, and the company turned to digital storage products. Ampex moved into digital storage for DoD Flight Test Instrumentation (FTI) with the introduction of the first true all-digital flight test recorder. Ampex supports numerous major DoD programs with the US Air Force, US Army, US Marines, US Navy and other government entities (NASA, DHS and national labs). Ampex also works with all major DoD primes and integrators, including Boeing, General Atomics, Lockheed, Northrop, Raytheon and many others. Currently, Ampex is attempting to do more with the data stored on its network-attached storage (NAS) devices. This includes adding encryption for secure data storage; algorithms focused on control-system cyber security for infrastructure and aerospace platforms; and artificial intelligence/machine learning for automated entity identification and data analytics.

Origin

Russian-American inventor Alexander Matthew Poniatoff established the company in San Carlos, California, in 1944 as the Ampex Electric and Manufacturing Company. The company name came from his initials plus "ex", to avoid using the name AMP, already in use by Aircraft and Marine Products. During World War II, Ampex was a subcontractor to Dalmo-Victor, manufacturing high-quality electric motors and generators for radars, using alnico 5 magnets from General Electric. Ampex was initially set up in an abandoned loft space above the Dalmo-Victor plant; eventually they would have offices at 1313 Laurel Street, San Carlos, California (at the intersection of Howard Avenue and Laurel Street).

Near the end of the war, while serving in the U.S. Army Signal Corps, Major Jack Mullin was assigned to investigate German radio and electronics experiments. He discovered the Magnetophons with AC biasing on a trip to Radio Frankfurt. The device produced much better fidelity than shellac records. The tape-recording processes and equipment developed by German companies before and during the 1939-45 war were covered by rights which were effectively voided after Germany's 1945 surrender and defeat.
Mullin acquired two Magnetophon recorders and 50 reels of BASF Type L tape, and brought them to America, where he produced modified versions. He demonstrated them to the Institute of Radio Engineers in San Francisco on May 16, 1946.

Bing Crosby, a big star on radio at the time, was receptive to the idea of pre-recording his radio programs. He disliked the regimentation of live broadcasts, and much preferred the relaxed atmosphere of the recording studio. He had already asked the NBC network to let him pre-record his 1944–45 series on transcription discs, but the network refused; so Crosby had withdrawn from live radio for a year and returned (this time to the recently created ABC) for the 1946–47 season, only reluctantly. In June 1947, Mullin, who was pitching the technology to the major Hollywood movie studios, got the chance to demonstrate his modified tape recorders to Crosby. When Crosby heard a demonstration of Mullin's tape recorders, he immediately saw the potential of the new technology and commissioned Mullin to prepare a test recording of his radio show. Ampex was finishing its prototype of the Model 200 tape recorder, and Mullin used the first two models as soon as they were built. After a successful test broadcast, ABC agreed to allow Crosby to pre-record his shows on tape. Crosby immediately appointed Mullin as his chief engineer and placed an order for $50,000 worth of the new recorders so that Ampex (then a small six-man concern) could develop a commercial production model from the prototypes. Crosby Enterprises was Ampex's West Coast representative until 1957.

Early tape recorders

The company's first tape recorder, the Ampex Model 200, was first shipped in April 1948. The first two units, serial numbers 1 and 2, were used to record Bing Crosby's show. The American Broadcasting Company used these recorders, along with 3M Scotch 111 gamma ferric oxide-coated acetate tape, for the first-ever U.S. delayed radio broadcast of Bing Crosby's Philco Radio Time. Ampex tape recorders revolutionized the radio and recording industries because of their superior audio quality and ease of operation compared with audio disk cutting lathes.

During the early 1950s, Ampex began marketing one- and two-track tape machines. The line soon expanded into three- and four-track models. In the early 1950s, Ampex moved to Redwood City, California. Ampex acquired Orradio Industries in 1959, which became the Ampex Magnetic Tape Division, headquartered in Opelika, Alabama. This made Ampex a manufacturer of both recorders and tape. By the end of that decade, Ampex products were much in demand by top recording studios worldwide.

In 1952, movie producer Mike Todd asked Ampex to develop a high-fidelity movie sound system using sound magnetically recorded on the film itself, as contrasted with the technology of the time, which used magnetic tracks on a separate celluloid-base film (later commonly known as mag stock). The result of this development was the CinemaScope/Todd-AO motion picture sound system, which was first used in movies such as The Robe (1953) in 35mm and Oklahoma (1955) in 70mm (and also in 35mm). In 1960, the Academy of Motion Picture Arts and Sciences awarded Ampex an Oscar for technical achievement as a result of this development.

Les Paul, a friend of Crosby and a regular guest on his shows, had already been experimenting with overdubbed recordings on disc. He received an early portable Ampex Model 200A from Crosby.
Using this machine, Les Paul invented "Sound on Sound" recording technology. He placed an additional playback head, located before the conventional erase/record/playback heads. This allowed Paul to play along with a previously recorded track, both of which were mixed together onto a new track. This was a destructive process, because the original recording was recorded over. Professional 8-track recorders Ampex built a handful of multitrack machines during the late 1950s that could record as many as eight tracks on 1 inch tape. The project was overseen by Ross Snyder, Ampex manager of special products. To make the multitrack recorder work, Snyder invented the Sel-Sync process, which used some tracks on the head for playback and other tracks on the same head for recording. This kept the newly recorded material in sync with the existing recorded tracks. The first of these machines cost $10,000 and was installed in Les Paul's home recording studio by David Sarser. In 1967, Ampex responded to demand by stepping up production of their 8-track machines with the production model MM 1000. Like earlier 8-track machines of this era, it used 1 inch tape. 16 and 24-track recorders In 1966, Ampex built their first 16-track recorder, the model AG-1000, at the request of Mirasound Studios in New York City. In 1967, Ampex introduced a 16-track version of the MM 1000, which was the world's first 16-track professional tape recorder put into mass production. Both used a tape transport design adapted from the video recording division. The 16-track MM 1000 quickly became legendary for its tremendous flexibility, reliability and outstanding sound quality. This brought about the "golden age" of large-format analog multitrack recorders, which would last into the mid-1990s. MCI built the first 24-track recorder (using 2 inch tape) in 1968, which was installed at TTG Studios in Los Angeles. Later machines built by Ampex starting in 1969 would have as many as 24 tracks on 2 inch tape. In addition, the introduction of SMPTE time code allowed studios to run multiple machines in perfect synchronization, making the number of available tracks virtually unlimited. By the 1970s, Ampex faced tough competition from the Swiss company Studer and Japanese manufacturers such as Otari and Sony (who also purchased the MCI brand in 1982). In 1979, Ampex introduced their most advanced 24-track recorder, the model ATR-124. The ATR-124 was ruggedly constructed and had audio specifications that nearly rivaled the first digital recording machines. However, sales of the ATR-124 were slow due to the machine's high price tag. Ampex sold only about 50 or 60 ATR-124 machines, and withdrew from the professional audio tape recorder market entirely in 1983. The 1990s By the 1990s, Ampex focused more on video recorders, instrumentation recorders, and data recorders. In 1991, Ampex sold their professional audio recorder line to Sprague Magnetics. The Ampex Recording Media Corporation was spun off in 1995 as Quantegy Inc.; that company has since ceased producing recording tape. Video technology Video Processing While Ampex is well recognised for its contribution to magnetic tape recording, the company also had a huge impact on developments across the whole video signal chain. It did rebadge some specialist low-volume OEM products to complete the package, but its in-house teams developed industry-leading products in the following categories. Digital Optics ADO - Ampex Digital Optics provided comprehensive frame manipulation in two and three dimensions.
Adjusting the aspect, size, and rotation of the image was performed continuously in real time. An optional digital "combiner" was available to perform foreground layering and priority switching, reducing the burden on the vision mixer during multi-channel effects. Video Switching & Effects AVC - The AVC range of vision mixers spanned small, single-bus devices up to the high-end Century Series, with multiple mix/effects buses, infinite re-entry and powerful keying and control software. Editing controllers The product line evolved quickly from manual editing on the VTRs themselves to systems incorporating SMPTE timecode for advanced timeline control. The RA-4000 and EDM-1 were fully functional early products, but soon evolved into the extremely powerful ACE family to compete with CMX and other edit controller brands. Quadruplex Two-Inch tape Starting in the early 1950s, RCA, Bing Crosby and others tried to record analog video on very fast-moving magnetic tape. As early as 1952, Ampex developed prototype video tape recorders that used a spinning head and relatively slow-moving tape. In early 1956, a team produced the first videotape recorder. A 19-year-old engineer, Ray Dolby, was also part of the team. Ampex demonstrated the VR-1000, the first of Ampex's line of 2 inch Quadruplex videotape recorders, on April 14, 1956, at the National Association of Radio and Television Broadcasters in Chicago. The first magnetically recorded time-delayed television network program using the new Ampex Quadruplex recording system was CBS's Douglas Edwards and the News on November 30, 1956. The "Quad" head assembly rotated at 14,400 rpm (NTSC). The four head pieces (hence "quad") were switched successively so that the recorded stripes crossed the video portion of the tape (most of the tape's middle; audio is on one edge, the control track is on the other), giving a head-to-tape write speed well in excess of the physical tape motion. The heads wrote the video vertically across the width of a tape that was 2 inches wide and ran at 15 inches per second. This allowed hour-long television programs to be recorded on one reel of tape. In 1956, one reel of tape cost $300, and Ampex advertised the cost of the recorder as $45,000. A slower-speed version was released later, and this required a new, narrower headwheel. This vertical writing facilitated mechanical editing, once the control track was developed to display the pulse that indicates where a frame ends and the next one begins. Later, Ampex developed electronic editing. The National Academy of Television Arts and Sciences awarded Ampex its first Emmy in 1957 for this development. Ampex received a total of 12 Emmys for its technical video achievements. In 1959, Richard Nixon, then Vice President, and Nikita Khrushchev held discussions at the Moscow Trade Fair, which became known as the "Kitchen Debate" because they were mostly held in the kitchen of a suburban model house. These discussions were recorded on an Ampex color videotape recorder, and during the debate Nixon pointed this out as one of the many American technological advances. In 1967, Ampex introduced the Ampex VR-3000 portable broadcast video recorder, which revolutionized the recording of broadcast-quality television in the field without the need for long cables and large support vehicles. Broadcast-quality images could now be shot anywhere, including from airplanes, helicopters and boats. The Quadruplex format dominated the broadcast industry for a quarter of a century. The format was licensed to RCA for use in their "television tape recorders."
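As a rough illustration of the writing-speed arithmetic described above, the following Python sketch estimates the head-to-tape speed of a Quadruplex machine. The headwheel diameter used here (about 2 inches) is an assumed round figure for illustration, not a specification quoted in this article.

import math

# Quadruplex geometry (NTSC): four heads on a wheel spinning at 14,400 rpm.
headwheel_rpm = 14_400
headwheel_diameter_in = 2.0   # assumption: roughly 2 inches
linear_tape_speed_ips = 15.0  # longitudinal tape speed, inches per second

# Head-tip speed = revolutions per second times wheel circumference.
head_tip_speed_ips = (headwheel_rpm / 60) * math.pi * headwheel_diameter_in

print(f"Head-to-tape writing speed: ~{head_tip_speed_ips:,.0f} in/s")
print(f"Ratio to linear tape speed: ~{head_tip_speed_ips / linear_tape_speed_ips:.0f}x")

Under these assumptions the heads sweep the tape at roughly 1,500 inches per second, about a hundred times the linear tape speed, which is why transverse scanning could capture the wide bandwidth video requires from relatively slow-moving tape.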
Ampex's invention revolutionized the television production industry by eliminating the kinescope process of time-shifting television programs, which required the use of motion picture film. For archival purposes, the kinescope method continued to be used for some years; film was still preferred by archivists. The Ampex broadcast video tape recorder facilitated time-zone broadcast delay so that networks could air programming at the same hour in various time zones. Ampex had trademarked the name "video tape", so competitor RCA called the medium "TV tape" or "television tape". The terms eventually became genericized, and "videotape" is commonly used today. While the quadruplex recording system per se is no longer in use, the principle evolved into the helical scanning technique used in virtually all video tape machines, such as those using the consumer formats of VHS, Sony Betamax and Video 2000. Sony Betacam was successful as a professional format, but operated with a different recording system and a faster tape speed than Betamax. One of the key engineers in the development of the Quadruplex video recorder for Ampex was Ray Dolby, who worked under Charlie Ginsburg and went on to form Dolby Laboratories, a pioneer in audio noise reduction systems. Dolby's contribution to the videotape system was limited to the mathematics behind the reactance tube FM modulator, as videotape then used FM modulation for the video portion. Another contributor designed the FM modulator itself. Dolby left Ampex to pursue a PhD in physics in England, where Dolby Labs was later founded, before moving back to San Francisco. Dolby's brother Dale was also an engineer at Ampex. VR-5000 and VR-8000 In 1961, Ampex introduced its first helical scan video recorders, which recorded video on tape using helical scan technology. Ampex 2 inch helical VTR Ampex 2 inch helical VTRs were manufactured from 1963 to 1970: the Model VR-1500 for home use, and the VR-660 for broadcast television systems, industrial companies, educational institutions, and a few in-flight entertainment installations. The VR-1500 and VR-660 found service especially at educational institutions, due to their relatively low cost compared with quadruplex VTRs. These machines were simple to operate, reliable, small in size and produced, for their time, very good video without the complexity of the much larger 2" Quad machines. HS-100 & HS-200 "slo-mo" disc recorder In March 1967, Ampex introduced the HS-100 video disc recorder. The system was developed by Ampex at the request of the American Broadcasting Company (ABC) for a variety of sports broadcast uses. It was first demonstrated on the air on March 18, 1967, when ABC's Wide World of Sports televised the "World Series of Skiing" from Vail, Colorado. The video was recorded on an analog magnetic disc that rotated at 60 revolutions per second (3,600 rpm; 50 rps in PAL). One NTSC unit could record 30 seconds of video, PAL units 36 seconds. The video could then be played back in slow motion or as stop-action freeze frames. A more deluxe version, the HS-200, was introduced in April 1968 and provided a large control console with variable-speed playback. This made it ideal for instant replay at sports events and for precise timing control in post-production service. CBS-TV was the first to use the technique during live sportscasts, though it was quickly adopted by all American TV networks.
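A back-of-the-envelope check of the HS-100 figures given above, under the assumption (plausible for disc recorders of this type, but an assumption here) that each disc revolution stores one interlaced video field:

# HS-100 capacity check (NTSC figures from the text above).
disc_rps = 60          # disc rotation, revolutions per second
fields_per_frame = 2   # NTSC interlace: two fields per frame
record_seconds = 30    # stated NTSC recording capacity

fields = disc_rps * record_seconds   # one field per revolution (assumption)
frames = fields // fields_per_frame

print(f"{record_seconds} s at {disc_rps} rps -> {fields} fields = {frames} frames")

The result, 900 frames, matches the 900 addressable frames of the NTSC machine described just below.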
The HS-200, an HS-100 combined with a control console, offered more precise frame and timing control, lending itself to post-production applications such as special effects and titles. The HS-200 had a frame-accurate timing computer that enabled frame-accurate cuts and dissolve transitions by way of a two-input video switcher. Slow-motion sequences could likewise be programmed, and could be "triggered" to begin via an external control pulse such as might come from an external VTR editor like the Ampex VR-2000 VTR with Editec. The HS-200 was the first system capable of single-frame video animation recording, using magnetic discs as opposed to videotape. The HS-200 also provided a readout with specific frame numbers showing from the 900 frames available (NTSC version). Sequences could be triggered to start from any of these 900 frames with frame-accurate repeatability, for creative fine-tuning of sequence start and end points. Type A 1 inch type A videotape (designated Type A by the Society of Motion Picture and Television Engineers, SMPTE) was an open-reel helical scan videotape format developed by Ampex in 1965, one of the first standardized open-reel videotape formats in the 1 inch width; most others of that size at that time were proprietary. Type C 1 inch type C videotape (designated Type C by SMPTE) was a professional open-reel videotape format co-developed and introduced by Ampex and Sony in 1976. It became the replacement in the professional video and television broadcast industries for the then-incumbent Quadruplex. D2 D2 is a digital video tape format created by Ampex and other manufacturers (through a standards group of SMPTE) and introduced at the 1988 NAB (National Association of Broadcasters) convention as a lower-cost alternative to the D-1 format. Like D-1, D-2 video is uncompressed; however, it saves bandwidth and other costs by sampling a fully encoded NTSC or PAL composite video signal and storing it directly to magnetic tape, rather than sampling component video. This is known as digital composite. DCT & DST Digital Component Technology (DCT) and Data Storage Technology (DST) are VTR and data storage devices respectively, created by Ampex in 1992. Both were similar to the D1 and D2 VTR formats, using a 19 mm tape width, with the DCT format using DCT (discrete cosine transform) video compression, also its namesake. The DCT and DST formats yielded relatively high capacity and speed for data and video. Double-density DST data storage was introduced in 1996. The final generation of these products was quad density, introduced in 2000, resulting in a large cartridge holding 660GB of data. Milestones In 1948, the first tape-delayed U.S. radio program was broadcast using an Ampex Model 200 tape recorder. In May 1949, the Model 300 introduced improvements in the audio head, tape drive and tape path. In 1950, the Model 400 introduced a lower-cost professional-quality audio recorder, soon to be replaced by the Model 400A, which was the logical precursor of the Model 350. In 1950, Ampex introduced the first "dedicated" instrumentation recorder, the Model 500, built for the U.S. Navy. In April 1953, the Model 350 audio recorder was introduced to replace the Model 400/400A, offering greater simplicity and durability. Ampex also released the 35mm four-track CinemaScope stereo reproduction system. In May 1954, the Model 600, a portable mastering-quality audio recorder, was introduced, along with the Models 3200-3300 high-speed duplicators.
In 1954, in a recording studio equipped with an Ampex reel-to-reel audio tape recording machine, an unknown truck driver named Elvis Presley recorded his historic first single, "That's All Right", at Sun Studios in Memphis. Also that year, Ampex introduced the first multi-track audio recorder, derived from multi-track data recording technology. In 1955, Ampex released the 70mm/35mm six-track/four-track Todd-AO system, and an improved 35mm four-track system. On March 14, 1956, the Ampex VRX-1000 (later renamed the Mark IV) videotape recorder was introduced at the National Association of Radio and Television Broadcasters in Chicago. It was the first practical videotape recorder and was hailed as a major technological breakthrough. CBS went on air with the first videotape-delayed broadcast, Douglas Edwards and The News, on November 30, 1956, from Los Angeles, California, using the Ampex Mark IV. In March 1957, Ampex won an Emmy award for the invention of the Video Tape Recorder (VTR). In 1958, NASA selected Ampex data recorders and magnetic tape; they have been used for virtually all U.S. space missions since. In 1959, the Nixon-Khrushchev Kitchen Debate was recorded on Ampex videotape. The fact that the debate was being videotaped was mentioned by Nixon as an example of American technological development. In 1960, the Academy of Motion Picture Arts and Sciences presented Ampex with an Oscar for technical achievement. In January 1961, helical scan recording was invented by Ampex. The technology behind the worldwide consumer video revolution, it is used in all home video tape recorders today. In 1963, Ampex technology was used to show replays of the shooting of Lee Harvey Oswald, which had been captured on live television. In 1963, Ampex introduced EDITEC, electronic video editing, giving broadcast television editors frame-by-frame recording control, simplifying tape editing and making animation effects possible. This was the basis for all subsequent editing systems. On December 7, 1963, instant replay was used for the first time during the live transmission of the Army-Navy Game by its inventor, director Tony Verna. In April 1964, Ampex introduced the VR-2000 high-band videotape recorder, the first ever capable of the color fidelity required for high-quality color broadcasting. In February 1965, the VR-303/VR-7000 closed-circuit video tape recorder was introduced. In May 1965, the AG-350, the first all-transistorized audio recorder, was introduced. In July 1965, the VR-660B VTR, an advanced version of the VR-660, was introduced, replacing the VR-660/1500. In November 1965, the VR-7000 compact portable closed-circuit video tape recorder was introduced. During 1966–1967, Ampex FR-900 data drives were used to record the first images of the Earth from the Moon, as part of the Lunar Orbiter program. Two drives were refurbished to recover the images as part of the Lunar Orbiter Image Recovery Project (LOIRP). In 1967, ABC used the Ampex HS-100 disk recorder for slow-motion playback of downhill skiing on the program World Series of Skiing in Vail, Colorado. This was the first use of slow-motion instant replay in sporting events. In 1968, the introduction of the Ampex VR-3000, the first truly portable VTR, revolutionized video recording. It was used at the '68 Summer Olympics in Mexico City to follow the world's cross-country runners for the first time in broadcast history. In 1969, Ampex introduced Videofile, still in use today at Scotland Yard for the electronic storage and retrieval of fingerprints.
In 1972, Ampex introduced the ACR-25, the first automated robotic library system for the recording and playback of television commercials. Each commercial was recorded on an individual cartridge. These cartridges were then loaded into a large rotating carousel. Using sophisticated mechanics and vacuum pneumatics, the "carts" were loaded into and extracted from the machine with an 8-second cycle time for spots under 61 seconds. This freed TV stations from loading individual machines with spots in real time, or preparing spot reels in advance of a broadcast. TV newsrooms also began to use the ACR-25 to run news stories because of its random access capability. The ACR-25 used AVR-1 signal, servo, and timebase systems, and a machine-programming control bay designed by Ampex engineer E. Stanley Busby. Both machines had a lockup time of 200 milliseconds, as distinct from the industry-standard 5-second pre-roll. This was accomplished with optical-vacuum reel servos that left the vacuum capstan negligible inertial mass to control, and predictive digital servos that could re-frame vertically at horizontal rate, as well as timebase correction with a window exceeding 64 microseconds (compared to the VR-2000's window of less than 5 microseconds). Earlier, in 1970, Ampex had started its own record label, Ampex Records. Its biggest hit was "We Gotta Get You A Woman" by Todd Rundgren (as "Runt"), reaching #20 on the charts in 1970. In 1978, the Ampex Video Art (AVA) video graphics system was used by artist LeRoy Neiman on air during Super Bowl XII. AVA, the first video paint system, allowed the graphic artist, using an electronic pen, to illustrate in a new medium: video. This innovation paved the way for today's high-quality electronic graphics, such as those used in video games. In 1983, Ampex introduced the DCRS digital cassette recorder, offering compact cassette storage with the equivalent of 16 digital or 8 DDR instrumentation reels on one cassette. Also, Partial Response Maximum Likelihood (PRML) data decoding technology had its first use in Ampex's DCRsi recorders. This technology is now commonly used in high-performance computer disk drives and other high-density magnetic data storage devices. In 1985, Ampex introduced the DIS 120i and DIS 160i dual-port data/instrumentation recorders. These made it possible for the first time to capture real-time instrumentation data and then use the same recorder to process the data in a computer environment through its second port using the SCSI-2 protocol. In 1992, Ampex introduced DST (high-performance computer mass storage products able to store half the Library of Congress in a small amount of floor space) and DCT, the first digital component post-production system using image compression technology to produce high-quality images. In 2005, Ampex received its 12th Emmy award for its invention of slow-motion color recording and playback. Also honored with Lifetime Achievement Awards were the members of the engineering team that created the videotape recorder when they worked for Ampex: Charles Andersen, Ray Dolby, Shelby Henderson, Fred Pfost, and the late Charles Ginsburg and Alex Maxey. Sticky-shed syndrome Some master tapes and other recordings, predominantly from the 1970s and 1980s, have degraded due to the so-called sticky-shed syndrome.
When sticky-shed syndrome occurs, the binding agent deteriorates, resulting in the magnetic coating coming off the base and either sticking to the backing of the tape layer wound on top of it (resulting in dropout), or being scraped off and deposited on the tape heads, lifting the head away from the tape and degrading the treble. The problem has been reported on a number of makes of tape (usually back-coated tapes), including Ampex tapes. Ampex filed a patent for a baking process (run at a controlled temperature, typically for 16 hours) to attempt to recover such tapes, allowing them to be played once more and the recordings transferred to new media. The problem has been reported on tapes of types 406/407, 456/457 and 2020/373. Branding In 1959, Ampex acquired Orradio Industries, which became the Ampex Magnetic Tape Division. In 1995, Ampex divested this division, then called the Ampex Recording Media Corporation. This became Quantegy, Inc., which later changed its name to the current Quantegy Recording Solutions. In January 2005, having previously filed for bankruptcy protection, Quantegy closed its manufacturing facility in Opelika. In October 2014, Ampex Data Systems Corporation was sold to Delta Information Systems, retaining the rights to the Ampex name. In 2017, Ampex established a second business unit, Ampex Intelligent Systems (AIS), in Colorado Springs, Colorado, and branded its Silicon Valley business unit Ampex Data Systems (ADS). Record labels Ampex Records started in 1970. Its biggest hit was "We Gotta Get You A Woman" by Todd Rundgren (as "Runt"), reaching #20 on the Billboard Hot 100 chart in 1970. Ampex also originated three subsidiary labels: Bearsville, Big Tree, and Lizard. Ampex Records ceased around 1973; Bearsville and Big Tree switched distribution to Warner Bros. Records and Bell Records, respectively, with Lizard becoming an independent entity. Later on, Big Tree was picked up by Atlantic Records. Legal history In 2005, iNEXTV, a wholly owned subsidiary of Ampex Corporation, brought a defamation lawsuit against a poster on an Internet message board who had posted messages critical of the company (Ampex Corp. v. Cargle (2005), Cal.App.4th). The poster, a former employee, responded with an anti-SLAPP motion and eventually recovered his attorney fees. The case was notable in that it involved the legality of speech in an electronic public forum. Current situation Since being sold to Delta Information Systems in 2014, two former subsidiaries of Ampex Corporation continue business as part of the Ampex legacy: Ampex Data Systems Corporation (ADSC), headquartered in Silicon Valley, and Ampex Japan Ltd. These are the only two Ampex businesses that still exist as more than "in name only" entities. ADSC has two business units. The Silicon Valley unit, Ampex Data Systems (ADS), continues the Ampex tradition in the storage industry by manufacturing ruggedized, high-capacity, high-performance digital data storage systems. The Colorado Springs, Colorado unit, Ampex Intelligent Systems (AIS), is the center for the company's line of industrial control system cyber security products and services and its artificial intelligence/machine learning technology, which is available in all of the products. The Ampex video system is now obsolete, but many thousands of quadruplex videotape recordings remain. Machines that survive are used to transfer archival recordings to modern digital video formats. Ampex Corporation supported the Ampex Museum of Magnetic Recording, started by Peter Hammar in 1982.
The contents of that museum were donated to Stanford in 2001. A project is now underway to curate Ampex artifacts in physical and digital form. It aims to establish a permanent home for the Ampex Museum in Redwood City, with digital artifacts curated at AmpexMuseum.org. The project is being funded by contributions from former Ampex employees. Photo gallery See also Ampex Golden Reel Award Erhard Kietz References Further reading Ampex Corporation Records, ca. 1944–1999 (c. 577 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries FindLaw Ampex Corp v. Cargle (2005) External links Video storage Companies based in Hayward, California Electronics companies of the United States Defunct record labels of the United States Electronics companies established in 1944 American companies established in 1944 1944 establishments in California Record labels established in 1970 Record labels disestablished in 1973 Companies that filed for Chapter 11 bankruptcy in 2008 Video equipment manufacturers 2014 mergers and acquisitions
249886
https://en.wikipedia.org/wiki/Leonard%20Adleman
Leonard Adleman
Leonard Adleman (born December 31, 1945) is an American computer scientist. He is one of the creators of the RSA encryption algorithm, for which he received the 2002 Turing Award, often called the Nobel Prize of computer science. He is also known for the creation of the field of DNA computing. Biography Leonard M. Adleman was born to a Jewish family in California. His family had originally immigrated to the United States from modern-day Belarus, from the Minsk area. He grew up in San Francisco and attended the University of California, Berkeley, where he received his B.A. degree in mathematics in 1968 and his Ph.D. degree in EECS in 1976. He was also the mathematical consultant on the movie Sneakers. In 1996, he became a member of the National Academy of Engineering for contributions to the theory of computation and cryptography. He is also a member of the National Academy of Sciences. Adleman is also an amateur boxer and has sparred with James Toney. Discovery In 1994, his paper Molecular Computation of Solutions To Combinatorial Problems described the experimental use of DNA as a computational system. In it, he solved a seven-node instance of the Hamiltonian path problem, an NP-complete problem similar to the travelling salesman problem. While the solution to a seven-node instance is trivial, this paper is the first known instance of the successful use of DNA to compute an algorithm. DNA computing has been shown to have potential as a means to solve several other large-scale combinatorial search problems. Adleman is widely referred to as the Father of DNA Computing. In 2002, he and his research group managed to solve a 'nontrivial' problem using DNA computation. Specifically, they solved a 20-variable SAT problem having more than 1 million potential solutions. They did it in a manner similar to the one Adleman used in his seminal 1994 paper. First, a mixture of DNA strands logically representative of the problem's solution space was synthesized. This mixture was then operated upon algorithmically using biochemical techniques to winnow out the 'incorrect' strands, leaving behind only those strands that 'satisfied' the problem. Analysis of the nucleotide sequence of these remaining strands revealed 'correct' solutions to the original problem. He is one of the original discoverers of the Adleman–Pomerance–Rumely primality test. Fred Cohen, in his 1984 paper Experiments with Computer Viruses, credited Adleman with coining the term "computer virus". As of 2017, Adleman is working on the mathematical theory of Strata. He is a computer science professor at the University of Southern California. Awards For his contribution to the invention of the RSA cryptosystem, Adleman, along with Ron Rivest and Adi Shamir, was a recipient of the 1996 Paris Kanellakis Theory and Practice Award and the 2002 Turing Award. Adleman was elected a Fellow of the American Academy of Arts and Sciences in 2006 and named a 2021 ACM Fellow.
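The generate-and-filter procedure described in the Discovery section above can be sketched in conventional code. The following minimal Python example applies the same idea to a toy 3-variable SAT instance, enumerating the full solution space and then winnowing out candidates clause by clause; it illustrates only the logical procedure, not the biochemistry, and the formula is made up for the example.

from itertools import product

# Toy CNF instance (made up for illustration). Each literal is
# (variable_index, negated); each candidate assignment plays the role
# of one synthesized DNA strand in Adleman's experiment.
clauses = [
    [(0, False), (1, True)],   # x0 OR NOT x1
    [(1, False), (2, False)],  # x1 OR x2
    [(0, True), (2, True)],    # NOT x0 OR NOT x2
]

# "Synthesize" the full solution space: all 2^3 truth assignments.
candidates = list(product([False, True], repeat=3))

# "Winnow": clause by clause, discard every assignment that fails it,
# just as the biochemical steps removed non-satisfying strands.
for clause in clauses:
    candidates = [a for a in candidates
                  if any(a[var] != neg for var, neg in clause)]

print("Satisfying assignments:", candidates)

The surviving candidates correspond to the strands whose sequences would be read out at the end of the wet-lab procedure; for a 20-variable problem the enumerated space grows to 2^20, the "more than 1 million potential solutions" mentioned above.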
See also List of famous programmers Important publications in cryptography References External links Adleman's homepage Turing Award Citation Mathematical consultant for movie Sneakers American computer programmers American science writers People of Belarusian-Jewish descent 1945 births Living people Modern cryptographers Public-key cryptographers Scientists from the San Francisco Bay Area Turing Award laureates University of Southern California faculty Writers from San Francisco Jewish scientists Jewish biologists UC Berkeley College of Engineering alumni Fellows of the American Academy of Arts and Sciences Fellows of the Association for Computing Machinery Members of the United States National Academy of Engineering Members of the United States National Academy of Sciences 20th-century American scientists 21st-century American scientists Computer security academics UC Berkeley College of Letters and Science alumni
252857
https://en.wikipedia.org/wiki/Classified%20information
Classified information
Classified information is material that a government body deems to be sensitive information that must be protected. Access is restricted by law or regulation to particular groups of people with the necessary security clearance and need to know, and mishandling of the material can incur criminal penalties. A formal security clearance is required to view or handle classified documents or to access classified data. The clearance process requires a satisfactory background investigation. Documents and other information must be properly marked "by the author" with one of several (hierarchical) levels of sensitivity, e.g. restricted, confidential, secret, and top secret. The choice of level is based on an impact assessment; governments have their own criteria, including how to determine the classification of an information asset and rules on how to protect information classified at each level. This process often includes security clearances for personnel handling the information. Some corporations and non-government organizations also assign levels of protection to their private information, either from a desire to protect trade secrets, or because of laws and regulations governing various matters such as personal privacy, sealed legal proceedings and the timing of financial information releases. With the passage of time much classified information can become less sensitive, and may be declassified and made public. Since the late twentieth century there has been freedom of information legislation in some countries, whereby the public is deemed to have the right to all information that is not considered to be damaging if released. Sometimes documents are released with information still considered confidential obscured (redacted). Some political science and legal experts question whether the definition of classified ought to be information that would cause injury to the cause of justice, human rights, etc., rather than information that would cause injury to the national interest. The distinction matters for deciding when classifying information is in the collective best interest of a just society, and when it merely serves the interest of a society acting unjustly to protect its people, government, or administrative officials from legitimate recourses consistent with a fair and just social contract. Government classification The purpose of classification is to protect information. Higher classifications protect information that might endanger national security. Classification formalises what constitutes a "state secret" and accords different levels of protection based on the expected damage the information might cause in the wrong hands. However, classified information is frequently "leaked" to reporters by officials for political purposes. Several U.S. presidents have leaked sensitive information to get their point across to the public. Typical classification levels Although the classification systems vary from country to country, most have levels corresponding to the following British definitions (from the highest level to lowest). Top Secret is the highest level of classified information. Information is further compartmented, so that specific access requires a code word appended after "top secret"; this provides a formal way to restrict access to especially sensitive information. Such material would cause "exceptionally grave damage" to national security if made publicly available.
Prior to 1942, the United Kingdom and other members of the British Empire used Most Secret, but this was later changed to match the United States' category name of Top Secret in order to simplify Allied interoperability. The Washington Post reported in an investigation entitled Top Secret America that, as of 2010, "An estimated 854,000 people ... hold top-secret security clearances" in the United States. Secret Secret material would cause "serious damage" to national security if it were publicly available. In the United States, operational "Secret" information can be marked with an additional "LIMDIS" to limit distribution. Confidential Confidential material would cause "damage" or be prejudicial to national security if publicly available. Restricted Restricted material would cause "undesirable effects" if publicly available. Some countries do not have such a classification in public sectors, such as commercial industries. Such a level is also known as "Private Information". Official Official (equivalent to the US DOD classification FOUO – For Official Use Only) material forms the generality of government business, public service delivery and commercial activity. This includes a diverse range of information, of varying sensitivities, and with differing consequences resulting from compromise or loss. OFFICIAL information must be secured against a threat model that is broadly similar to that faced by a large private company. The OFFICIAL SENSITIVE classification replaced the Restricted classification in April 2014 in the UK; OFFICIAL replaced the previously used UNCLASSIFIED marking. Unclassified Unclassified is technically not a classification level, but this is a feature of some classification schemes, used for government documents that do not merit a particular classification or which have been declassified. This is because the information is low-impact, and therefore does not require any special protection, such as vetting of personnel. A plethora of pseudo-classifications exist under this category. Clearance Clearance is a general classification that comprises a variety of rules controlling the level of permission required to view some classified information, and how it must be stored, transmitted, and destroyed. Additionally, access is restricted on a "need to know" basis. Simply possessing a clearance does not automatically authorize the individual to view all material classified at that level or below that level. The individual must present a legitimate "need to know" in addition to the proper level of clearance. Compartmented information In addition to the general risk-based classification levels, additional compartmented constraints on access exist, such as (in the U.S.) Special Intelligence (SI), which protects intelligence sources and methods, No Foreign dissemination (NOFORN), which restricts dissemination to U.S. nationals, and Originator Controlled dissemination (ORCON), which ensures that the originator can track possessors of the information. Information in these compartments is usually marked with specific keywords in addition to the classification level. Government information about nuclear weapons often has an additional marking to show it contains such information (CNWDI). International When a government agency shares information with an agency or group of another country's government, they will generally employ a special classification scheme that both parties have previously agreed to honour. For example, the marking ATOMAL is applied to U.S.
RESTRICTED DATA or FORMERLY RESTRICTED DATA and United Kingdom ATOMIC information that has been released to NATO. ATOMAL information is marked COSMIC TOP SECRET ATOMAL (CTSA), NATO SECRET ATOMAL (NSAT), or NATO CONFIDENTIAL ATOMAL (NCA). NATO classifications Sensitive information shared amongst NATO allies has four levels of security classification; from most to least classified: COSMIC TOP SECRET (CTS), NATO SECRET (NS), NATO CONFIDENTIAL (NC), and NATO RESTRICTED (NR). A special case exists with regard to NATO UNCLASSIFIED (NU) information. Documents with this marking are NATO property (copyright) and must not be made public without NATO permission. COSMIC is an abbreviation for "Control Of Secret Material in an International Command". International organizations The European Union has four levels: EU TOP SECRET, EU SECRET, EU CONFIDENTIAL, EU RESTRICTED. (Note that usually the French terms are used.) TRÈS SECRET UE/EU TOP SECRET: information and material the unauthorised disclosure of which could cause exceptionally grave prejudice to the essential interests of the European Union or of one or more of the Member States. SECRET UE/EU SECRET: information and material the unauthorised disclosure of which could seriously harm the essential interests of the European Union or of one or more of the Member States. CONFIDENTIEL UE/EU CONFIDENTIAL: information and material the unauthorised disclosure of which could harm the essential interests of the European Union or of one or more of the Member States. RESTREINT UE/EU RESTRICTED: information and material the unauthorised disclosure of which could be disadvantageous to the interests of the European Union or of one or more of the Member States. The Organisation for Joint Armament Cooperation, a European defence organisation, has three levels of classification: OCCAR SECRET, OCCAR CONFIDENTIAL, and OCCAR RESTRICTED. ECIPS, the European Centre for Information Policy and Security, has four levels of security information: COSMIC (TOP SECRET), EC-SECRET, EC-CONFIDENTIAL and EC-COMMITTEES. By country Most countries employ some sort of classification system for certain government information. For example, in Canada, information that the U.S. would classify SBU (Sensitive but Unclassified) is called "protected" and further subcategorised into levels A, B, and C. Australia On 19 July 2011, the National Security (NS) classification marking scheme and the Non-National Security (NNS) classification marking scheme in Australia were unified into one structure. The Australian Government Security Classification system now comprises TOP SECRET, SECRET, CONFIDENTIAL and PROTECTED. A new dissemination limiting markers (DLMs) scheme was also introduced for information where disclosure may be limited or prohibited by legislation, or where it may otherwise require special handling. The DLM marking scheme comprises For Official Use Only (FOUO), Sensitive, Sensitive: Personal, Sensitive: Legal, and Sensitive: Cabinet. Documents marked Sensitive: Cabinet, relating to discussions in Federal Cabinet, are treated as PROTECTED at minimum due to their higher sensitivity. Brazil There are three levels of document classification under Brazilian Information Access Law: ultrassecreto (top secret), secreto (secret) and reservado (restricted). A top secret (ultrassecreto) government-issued document may be classified for a period of 25 years, which may be extended up to another 25 years. Thus, no document remains classified for more than 50 years.
This is mandated by the 2011 Information Access Law (Lei de Acesso à Informação), a change from the previous rule, under which documents could have their classification time length renewed indefinitely, effectively shuttering state secrets from the public. The 2011 law applies retroactively to existing documents. Canada Background and hierarchy The Government of Canada employs two main types of sensitive information designation: Classified and Protected. The access and protection of both types of information are governed by the Security of Information Act, effective December 24, 2001, replacing the Official Secrets Act 1981. To access the information, a person must have the appropriate security clearance and the need to know. In addition, the caveat "Canadian Eyes Only" is used to restrict access to Classified or Protected information only to Canadian citizens with the appropriate security clearance and need to know. Special operational information SOI is not a classification of data per se. It is defined under the Security of Information Act, and unauthorised release of such information constitutes a higher breach of trust, with a penalty of life imprisonment. SOIs include: military operations in respect of a potential, imminent or present armed conflict; the identity of a confidential source of information, intelligence or assistance to the Government of Canada; tools used for information gathering or intelligence; the object of a covert investigation, or a covert collection of information or intelligence; the identity of any person who is under covert surveillance; encryption and cryptographic systems; and information or intelligence to, or received from, a foreign entity or terrorist group. Classified information Classified information can be designated Top Secret, Secret or Confidential. These classifications are only used on matters of national interest. Top Secret: applies when compromise might reasonably cause exceptionally grave injury to the national interest. The possible impact must be great, immediate and irreparable. Secret: applies when compromise might reasonably cause serious injury to the national interest. Confidential: disclosure might reasonably cause injury to the national interest. Protected information Protected information is not classified. It pertains to any sensitive information that does not relate to national security and cannot be disclosed under the access and privacy legislation because of the potential injury to particular public or private interests. Protected C (extremely sensitive protected information): designates extremely sensitive information which, if compromised, could reasonably be expected to cause extremely grave injury outside the national interest. Examples include bankruptcy, identities of informants in criminal investigations, etc. Protected B (particularly sensitive protected information): designates information that could cause severe injury or damage to the people or group involved if it was released. Examples include medical records, annual personnel performance reviews, income tax returns, etc. Protected A (low-sensitivity protected information): designates low-sensitivity information that should not be disclosed to the public without authorization and could reasonably be expected to cause injury or embarrassment outside the national interest. Examples of Protected A information include employee identification numbers, pay deposit banking information, etc.
Federal Cabinet (Queen's Privy Council for Canada) papers are either protected (e.g., overhead slides prepared to make presentations to Cabinet) or classified (e.g., draft legislation, certain memos). People's Republic of China The Criminal Law of the People's Republic of China (which is not operative in the Special Administrative Regions of Hong Kong and Macau) makes it a crime to release a state secret. Regulation and enforcement is carried out by the National Administration for the Protection of State Secrets. Under the 1989 "Law on Guarding State Secrets", state secrets are defined as those that concern: major policy decisions on state affairs; the building of national defence and the activities of the armed forces; diplomatic activities, activities related to foreign countries, and those to be maintained as commitments to foreign countries; national economic and social development; science and technology; activities for preserving state security and the investigation of criminal offences; and any other matters classified as "state secrets" by the national State Secrets Bureau. Secrets can be classified into three categories: Top secret (绝密), defined as "vital state secrets whose disclosure would cause extremely serious harm to state security and national interests"; Highly secret (机密), defined as "important state secrets whose disclosure would cause serious harm to state security and national interests"; and Secret (秘密), defined as "ordinary state secrets whose disclosure would cause harm to state security and national interests". France In France, classified information is defined by article 413-9 of the Penal Code. The three levels of military classification are: Très Secret Défense (Very Secret Defence), information deemed extremely harmful to national defense and relating to governmental priorities in national defense; no service or organisation can elaborate, process, stock, transfer, display or destroy information or protected supports classified at this level without authorization from the Prime Minister or the national secretary for National Defence, and partial or exhaustive reproduction is strictly forbidden. Secret Défense (Secret Defence), information deemed very harmful to national defense; such information cannot be reproduced without authorisation from the emitting authority, except in exceptional emergencies. Confidentiel Défense (Confidential Defence), information deemed potentially harmful to national defense, or that could lead to uncovering information classified at a higher level of security. Less sensitive information is "protected". The levels are Confidentiel personnels Officiers ("Confidential officers"), Confidentiel personnels Sous-Officiers ("Confidential non-commissioned officers"), Diffusion restreinte ("restricted information"), Diffusion restreinte administrateur ("administrative restricted information"), and Non Protégé (unprotected). A further caveat, "spécial France" (reserved France), restricts the document to French citizens (in its entirety or by extracts). This is not a classification level. Declassification of documents can be done by the Commission consultative du secret de la défense nationale (CCSDN), an independent authority. Transfer of classified information is done with double envelopes, the outer layer being plastified and numbered, and the inner of strong paper. Reception of the document involves examination of the physical integrity of the container and registration of the document.
In foreign countries, the document must be transferred through specialised military mail or diplomatic bag. Transport is done by an authorised courier or security-cleared person for mail under 20 kg. The letter must bear a seal mentioning "PAR VALISE ACCOMPAGNEE-SACOCHE". Once a year, an inventory of classified information and storage media is drawn up for ministers by the competent authorities. Once their usage period has expired, documents are transferred to archives, where they are either destroyed (by incineration, crushing, or overvoltage) or stored. In case of unauthorized release of classified information, the competent authorities are the Ministry of Interior, the Haut fonctionnaire de défense et de sécurité ("high civil servant for defence and security") of the relevant ministry, and the General secretary for National Defence. Violation of such secrets is an offence punishable with 7 years of imprisonment and a 100,000 euro fine; if the offence is committed through imprudence or negligence, the penalties are 3 years of imprisonment and a 45,000 euro fine. Hong Kong The Security Bureau is responsible for developing policies in regard to the protection and handling of confidential government information. In general, the system used in Hong Kong is very similar to the UK system, developed from the colonial Hong Kong era. Four classifications exist in Hong Kong, from highest to lowest in sensitivity: Top Secret (絕對機密), Secret (高度機密), Confidential (機密), with the variant Temporary Confidential (臨時保密), and Restricted (限閱文件/內部文件), with the variants Restricted (staff) (限閱文件(人事)), Restricted (tender) (限閱文件 (投標)) and Restricted (administration) (限閱文件 (行政)). Restricted documents are not classified per se, but only those who have a need to know will have access to such information, in accordance with the Personal Data (Privacy) Ordinance. New Zealand New Zealand uses the Restricted classification, which is lower than Confidential. People may be given access to Restricted information on the strength of an authorisation by their head of department, without being subjected to the background vetting associated with Confidential, Secret and Top Secret clearances. New Zealand's security classifications and the national-harm requirements associated with their use are roughly similar to those of the United States. In addition to national security classifications there are two additional security classifications, In Confidence and Sensitive, which are used to protect information of a policy and privacy nature. There are also a number of information markings used within ministries and departments of the government, to indicate, for example, that information should not be released outside the originating ministry. Because of strict privacy requirements around personal information, personnel files are controlled in all parts of the public and private sectors. Information relating to the security vetting of an individual is usually classified at the In Confidence level. Romania In Romania, classified information is referred to as "state secrets" (secrete de stat) and is defined by the Penal Code as "documents and data that manifestly appear to have this status or have been declared or qualified as such by decision of Government". There are three levels of classification: Secret, Top Secret, and Top Secret of Particular Importance. The levels are set by the Romanian Intelligence Service and must be aligned with NATO regulations; in case of conflicting regulations, the latter are applied with priority.
Dissemination of classified information to foreign agents or powers is punishable by up to life imprisonment, if such dissemination threatens Romania's national security. Russia In the Russian Federation, a state secret (Государственная тайна) is information protected by the state concerning its military, foreign policy, economic, intelligence, counterintelligence, operational, investigative and other activities, dissemination of which could harm state security. Sweden The Swedish classification has been updated due to increased NATO/PfP cooperation. All classified defence documents will now have both a Swedish classification (Kvalificerat hemlig, Hemlig, Konfidentiell or Begränsat Hemlig) and an English classification (Top Secret, Secret, Confidential, or Restricted). The term skyddad identitet, "protected identity", is used in the case of protection of a threatened person, basically implying "secret identity", accessible only to certain members of the police force and explicitly authorised officials. Switzerland At the federal level, classified information in Switzerland is assigned one of three levels, which are from lowest to highest: INTERNAL, CONFIDENTIAL, SECRET. Respectively, these are, in German, INTERN, VERTRAULICH, GEHEIM; in French, INTERNE, CONFIDENTIEL, SECRET; in Italian, AD USO INTERNO, CONFIDENZIALE, SEGRETO. As in other countries, the choice of classification depends on the potential impact that the unauthorised release of the classified document would have on Switzerland, the federal authorities or the authorities of a foreign government. According to the Ordinance on the Protection of Federal Information, information is classified as INTERNAL if its "disclosure to unauthorised persons may be disadvantageous to national interests." Information classified as CONFIDENTIAL could, if disclosed, compromise "the free formation of opinions and decision-making of the Federal Assembly or the Federal Council," jeopardise national monetary/economic policy, put the population at risk or adversely affect the operations of the Swiss Armed Forces. Finally, the unauthorised release of SECRET information could seriously compromise the ability of either the Federal Assembly or the Federal Council to function, or impede the ability of the Federal Government or the Armed Forces to act. Turkey According to the related regulations in Turkey, there are four levels of document classification: çok gizli (top secret), gizli (secret), özel (confidential) and hizmete özel (restricted). The fifth "inscription" is tasnif dışı, which means unclassified. United Kingdom Until 2013, the United Kingdom used five levels of classification; from lowest to highest, they were: PROTECT, RESTRICTED, CONFIDENTIAL, SECRET and TOP SECRET (formerly MOST SECRET). The Cabinet Office provides guidance on how to protect information, including the security clearances required for personnel. Staff may be required to sign to confirm their understanding and acceptance of the Official Secrets Acts 1911 to 1989, although the Act applies regardless of signature. PROTECT is not in itself a security protective marking level (such as RESTRICTED or greater), but is used to indicate information which should not be disclosed because, for instance, the document contains tax, national insurance, or other personal information. Government documents without a classification may be marked as UNCLASSIFIED or NOT PROTECTIVELY MARKED.
This system was replaced in April 2014 by the Government Security Classifications Policy, which has a simpler model: TOP SECRET, SECRET, and OFFICIAL. OFFICIAL SENSITIVE is a security marking which may be followed by one of three authorised descriptors: COMMERCIAL, LOCSEN (location sensitive) or PERSONAL. SECRET and TOP SECRET may include a caveat such as UK EYES ONLY. Scientific discoveries may also be classified via the D-Notice system if they are deemed to have applications relevant to national security. These may later emerge as technology improves; for example, the specialised processors and routing engines used in graphics cards are said to be loosely based on formerly top secret military chips designed for code breaking and image processing. They may or may not have safeguards built in to generate errors when specific tasks are attempted, and this is independent of the card's operating system. United States The U.S. classification system is currently established under Executive Order 13526 and has three levels of classification: Confidential, Secret, and Top Secret. The U.S. had a Restricted level during World War II but no longer does. U.S. regulations state that information received from other countries at the Restricted level should be handled as Confidential. A variety of markings are used for material that is not classified, but whose distribution is limited administratively or by other laws, e.g., For Official Use Only (FOUO), or Sensitive but Unclassified (SBU). The Atomic Energy Act of 1954 provides for the protection of information related to the design of nuclear weapons. The term "Restricted Data" is used to denote certain nuclear technology. Information about the storage, use or handling of nuclear material or weapons is marked "Formerly Restricted Data". These designations are used in addition to level markings (Confidential, Secret and Top Secret). Information protected by the Atomic Energy Act is protected by law, and information classified under the Executive Order is protected by executive privilege. The U.S. government insists it is "not appropriate" for a court to question whether any document is legally classified. In the 1973 trial of Daniel Ellsberg for releasing the Pentagon Papers, the judge did not allow any testimony from Ellsberg, claiming it was "irrelevant", because the assigned classification could not be challenged. The charges against Ellsberg were ultimately dismissed after it was revealed that the government had broken the law in secretly breaking into the office of Ellsberg's psychiatrist and in tapping his telephone without a warrant. Ellsberg insists that the legal situation in the U.S. today is worse than it was in 1973, and that Edward Snowden could not get a fair trial. The State Secrets Protection Act of 2008 might have given judges the authority to review such questions in camera, but the bill was not passed. When a government agency acquires classified information through covert means, or designates a program as classified, the agency asserts "ownership" of that information and considers any public availability of it to be a violation of their ownership, even if the same information was acquired independently through "parallel reporting" by the press or others.
For example, although the CIA drone program has been widely discussed in public since the early 2000s, and reporters have personally observed and reported on drone missile strikes, the CIA still considers the very existence of the program to be classified in its entirety, and any public discussion of it technically constitutes exposure of classified information. "Parallel reporting" was an issue in determining what constitutes "classified" information during the Hillary Clinton email controversy, when Assistant Secretary of State for Legislative Affairs Julia Frifield noted, "When policy officials obtain information from open sources, 'think tanks,' experts, foreign government officials, or others, the fact that some of the information may also have been available through intelligence channels does not mean that the information is necessarily classified." Table of equivalent classification markings in various countries Corporate classification Private corporations often require written confidentiality agreements and conduct background checks on candidates for sensitive positions. In the U.S., the Employee Polygraph Protection Act prohibits private employers from requiring lie detector tests, with a few exceptions. Policies dictating methods for marking and safeguarding company-sensitive information (e.g. "IBM Confidential") are common, and some companies have more than one level. Such information is protected under trade secret laws. New product development teams are often sequestered and forbidden to share information about their efforts with un-cleared fellow employees, the original Apple Macintosh project being a famous example. Other activities, such as mergers and financial report preparation, generally involve similar restrictions. However, corporate security generally lacks the elaborate hierarchical clearance and sensitivity structures and the harsh criminal sanctions that give government classification systems their particular tone. Traffic Light Protocol The Traffic Light Protocol was developed by the Group of Eight countries to enable the sharing of sensitive information between government agencies and corporations. This protocol has since been accepted as a model for trusted information exchange by over 30 other countries. The protocol provides four "information sharing levels" for the handling of sensitive information. See also Economic Espionage Act of 1996 (U.S.) Espionage Espionage Act of 1917 (U.S.) Eyes only Five Eyes Golden Shield Project Government Security Classifications Policy (UK) Illegal number Information security Official Secrets Act (UK, India, Ireland, Malaysia, New Zealand) Security of Information Act (Canada) State Secrets Privilege (US) Wassenaar Arrangement WikiLeaks UKUSA Agreement References External links Defence Vetting Agency, which carries out national security checks in the UK. Peter Galison, "Removing Knowledge", in Critical Inquiry no. 31 (Autumn 2004). Goldman, Jan, & Susan Maret. Intelligence and Information Policy for National Security: Key Terms and Concepts. Rowman & Littlefield, 2016. Lerner, Brenda Wilmoth, & K. Lee Lerner, eds. Terrorism: Essential Primary Sources. Thomson Gale, 2006. Los Alamos table of equivalent US and UK classifications. Maret, Susan. On Their Own Terms: A Lexicon with an Emphasis on Information-Related Terms Produced by the U.S. Federal Government. FAS, 6th ed., 2016. Marking Classified National Security Information, ISOO booklet. The National Security Archive – a collection of declassified documents acquired through the FOIA.
Parliament of Montenegro, Law on confidentiality of data. Parliament of Serbia, Law on confidentiality of data. U.S. Department of Defense, National Industrial Security Program Operating Manual (DoD 5220.22-M), explaining rules and policies for handling classified information. Information sensitivity
253111
https://en.wikipedia.org/wiki/ARPANET
ARPANET
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable access to remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the network design. He incorporated Donald Davies' concepts and designs for packet switching, and sought input from Paul Baran. ARPA awarded the contract to build the network to Bolt Beranek & Newman, which developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology. The first computers were connected in 1969 and the Network Control Program was implemented in 1970. The network was declared operational in 1971. Further software development enabled remote login, file transfer and email. The network expanded rapidly, and operational control passed to the Defense Communications Agency in 1975. Internetworking research in the early 1970s, led by Bob Kahn at DARPA and Vint Cerf at Stanford University and later DARPA, formulated the Transmission Control Program, which incorporated concepts from the French CYCLADES project. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983, after the Department of Defense made it standard for all military computer networking. Access to the ARPANET was expanded in 1981, when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities, and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and future commercialization of an expanded world-wide network, known as the Internet. History Inspiration Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated, end-to-end electronic connection between the two communicating stations. The connection is established by switching systems that connect multiple intermediate call legs between these systems for the duration of the call. The traditional model of the circuit-switched telecommunication network was challenged in the early 1960s by Paul Baran at the RAND Corporation, who had been researching systems that could sustain operation during partial destruction, such as by nuclear war. He developed the theoretical model of distributed adaptive message block switching. However, the telecommunication establishment rejected the development in favor of existing models. Donald Davies at the United Kingdom's National Physical Laboratory (NPL) independently arrived at a similar concept in 1965.
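The core idea Baran and Davies both arrived at is that a message is divided into small blocks, each carrying enough addressing information to be routed independently and reassembled at the destination. The following is a minimal sketch of that division step; the field names and sizes are illustrative and do not reflect any historical message format:

```python
from typing import List, Tuple

def packetize(message: bytes, source: int, dest: int,
              payload_size: int = 128) -> List[Tuple[int, int, int, bytes]]:
    """Split a message into blocks, each tagged with a source address,
    destination address, and sequence number so the receiver can reorder
    and reassemble them. Sizes here are arbitrary."""
    return [(source, dest, seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets) -> bytes:
    # Blocks may arrive out of order; sort by sequence number.
    return b"".join(p[3] for p in sorted(packets, key=lambda p: p[2]))

msg = b"LO" * 100
assert reassemble(reversed(packetize(msg, source=1, dest=70))) == msg
```

Because each block is self-describing, intermediate nodes can forward blocks over whichever links are available, which is what gives a packet-switched network its resilience to link failures.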
The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt, Beranek and Newman (BBN) in April 1963, in memoranda discussing the concept of the "Intergalactic Computer Network". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). He convinced Ivan Sutherland and Bob Taylor that this network concept was very important and merited development, although Licklider left ARPA before any contracts were assigned for development. Sutherland and Taylor continued their interest in creating the network, in part to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and in part to quickly distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to a separate computer that ARPA was funding: one for the System Development Corporation (SDC) Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology. Taylor recalled the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, 'Oh man!', it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET." Donald Davies' work caught the attention of ARPANET developers at the Symposium on Operating Systems Principles in October 1967. He gave the first public presentation, having coined the term packet switching, in August 1968 and incorporated the concept into the NPL network in England. The NPL network and the ARPANET were the first two networks in the world to use packet switching, and were themselves interconnected in 1973. Roberts said the ARPANET and other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design. Creation In February 1966, Bob Taylor successfully lobbied ARPA's Director Charles M. Herzfeld to fund a network project. Herzfeld redirected one million dollars from a ballistic missile defense program to Taylor's budget. Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET. Roberts asked Frank Westervelt to explore the initial design questions for a network. In April 1967, ARPA held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed. Roberts' proposal was that all mainframe computers would connect to one another directly. The other investigators were reluctant to dedicate these computing resources to network administration. Wesley Clark proposed that minicomputers should be used as an interface to create a message switching network.
Roberts modified the ARPANET plan to incorporate Clark's suggestion and named the minicomputers Interface Message Processors (IMPs). The plan was presented at the inaugural Symposium on Operating Systems Principles in October 1967. Donald Davies' work on packet switching and the NPL network, presented by a colleague (Roger Scantlebury), came to the attention of the ARPA investigators at this conference. Roberts applied Davies' concept of packet switching for the ARPANET, and sought input from Paul Baran. The NPL network was using line speeds of 768 kbit/s, and the proposed line speed for the ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s. By mid-1968, Roberts and Barry Wessler had written a final version of the IMP specification, based on a Stanford Research Institute (SRI) report that ARPA had commissioned to provide detailed specifications describing the ARPANET communications network. Roberts gave a report to Taylor on 3 June, who approved it on 21 June. After approval by ARPA, a Request for Quotation (RFQ) was issued to 140 potential bidders. Most computer science companies regarded the ARPA proposal as outlandish, and only twelve submitted bids to build a network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors, and awarded the contract to build the network to Bolt, Beranek and Newman Inc. (BBN) on 7 April 1969. The initial, seven-person BBN team were much aided by the technical specificity of their response to the ARPA RFQ, and thus quickly produced the first working system. This team was led by Frank Heart and included Robert Kahn. The BBN-proposed network closely followed Roberts' ARPA plan: a network composed of small computers called Interface Message Processors (or IMPs), similar to the later concept of routers, that functioned as gateways interconnecting local resources. At each site, the IMPs performed store-and-forward packet switching functions, and were interconnected with leased lines via telecommunication data sets (modems), with initial data rates of 50 kbit/s. The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months. The BBN team continued to interact with the NPL team, with meetings between them taking place in the U.S. and the U.K. The first-generation IMPs were built by BBN Technologies using a ruggedized version of the Honeywell DDP-516 computer, configured with 24 KB of expandable magnetic-core memory and a 16-channel Direct Multiplex Control (DMC) direct memory access unit. The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also featured a special set of 24 indicator lamps showing the status of the IMP communication channels. Each IMP could support up to four local hosts, and could communicate with up to six remote IMPs via early Digital Signal 0 leased telephone lines. The network connected one computer in Utah with three in California. Later, the Department of Defense allowed the universities to join the network for sharing hardware and software resources. Debate on design goals According to Charles Herzfeld, ARPA Director (1965–1967): Nonetheless, according to Stephen J.
Lukasik, who as Deputy Director (1967–1970) and Director of DARPA (1970–1975) was "the person who signed most of the checks for Arpanet's development": The ARPANET incorporated distributed computation, and frequent re-computation, of routing tables. This increased the survivability of the network in the face of significant interruption. Automatic routing was technically challenging at the time. The ARPANET was designed to survive subordinate-network losses: the principal concern was that the switching nodes and network links were unreliable, even without any nuclear attacks. The Internet Society agrees with Herzfeld in a footnote in their online article, A Brief History of the Internet: Paul Baran, the first to put forward a theoretical model for communication using packet switching, conducted the RAND study referenced above. Though the ARPANET did not exactly share Baran's project's goal, he said his work did contribute to the development of the ARPANET. Minutes taken by Elmer Shapiro of Stanford Research Institute at the ARPANET design meeting of 9–10 October 1967 indicate that a version of Baran's routing method ("hot potato") might be used, consistent with the NPL team's proposal at the Symposium on Operating System Principles in Gatlinburg. Implementation The first four nodes were designated as a testbed for developing and debugging the 1822 protocol, which was a major undertaking. While they were connected electronically in 1969, network applications were not possible until the Network Control Program was implemented in 1970, enabling the first two host-host protocols, remote login (Telnet) and file transfer (FTP), which were specified and implemented between 1969 and 1973. The network was declared operational in 1971. Network traffic began to grow once email was established at the majority of sites by around 1973. Initial four hosts The first four IMPs were installed at: the University of California, Los Angeles (UCLA), where Leonard Kleinrock had established a Network Measurement Center, with an SDS Sigma 7 being the first computer attached to it; the Augmentation Research Center at Stanford Research Institute (now SRI International), where Douglas Engelbart had created the new NLS system, an early hypertext system, and would run the Network Information Center (NIC), with the SDS 940 that ran NLS, named "Genie", being the first host attached; the University of California, Santa Barbara (UCSB), with the Culler-Fried Interactive Mathematics Center's IBM 360/75, running OS/MVT, being the machine attached; and the University of Utah School of Computing, where Ivan Sutherland had moved, running a DEC PDP-10 operating on TENEX. The first successful host-to-host connection on the ARPANET was made between Stanford Research Institute (SRI) and UCLA, by SRI programmer Bill Duvall and UCLA student programmer Charley Kline, at 10:30 pm PST on 29 October 1969 (6:30 UTC on 30 October 1969). Kline connected from UCLA's SDS Sigma 7 host computer (in Boelter Hall room 3420) to the Stanford Research Institute's SDS 940 host computer. Kline typed the command "login", but initially the SDS 940 crashed after he typed two characters. About an hour later, after Duvall adjusted parameters on the machine, Kline tried again and successfully logged in. Hence, the first two characters successfully transmitted over the ARPANET were "lo". The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute.
By 5 December 1969, the initial four-node network was established. Elizabeth Feinler created the first Resource Handbook for the ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET. Growth and evolution Roberts engaged Howard Frank to consult on the topological design of the network. Frank made recommendations to increase throughput and reduce costs in a scaled-up network. By March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971 (when the network included 23 university and government hosts); 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days. Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used. Larry Roberts saw the ARPANET and NPL projects as complementary and sought in 1970 to connect them via a satellite link. Peter Kirstein's research group at University College London (UCL) was subsequently chosen in 1971, in place of NPL, for the UK connection. In June 1973, a transatlantic satellite link connected the ARPANET to the Norwegian Seismic Array (NORSAR), via the Tanum Earth Station in Sweden, and onward via a terrestrial circuit to a TIP at UCL. UCL provided a gateway for an interconnection with the NPL network, the first interconnected network, and subsequently the SRCnet, the forerunner of the UK's JANET network. 1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor (TIP), which provided terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. The size of core memory was later increased, to 32 kB for the IMPs and 56 kB for TIPs, in 1973. In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared in a few sites. In 1981, BBN introduced IMP software running on its own C/30 processor product. Network performance In 1968, Roberts contracted with Kleinrock to measure the performance of the network and find areas for improvement. Building on his earlier work on queueing theory, Kleinrock specified mathematical models of the performance of packet-switched networks, which underpinned the development of the ARPANET as it expanded rapidly in the early 1970s.
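The flavor of such queueing analysis can be illustrated with the classic M/M/1 result that underlies models of this kind: for packets arriving at rate λ on a line that can service μ packets per second, the average time a packet spends queued and in service is T = 1/(μ − λ). The numbers below are purely illustrative, not measurements from the ARPANET:

```python
def mm1_delay(service_rate: float, arrival_rate: float) -> float:
    """Average time in an M/M/1 queue (seconds), valid only while the
    line is not saturated (arrival_rate < service_rate)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

# A 50 kbit/s line carrying (hypothetical) 1000-bit packets can
# service 50 packets per second.
service = 50_000 / 1000
for load in (10, 25, 45):                 # offered packets per second
    print(f"{load:2d} pkt/s -> {mm1_delay(service, load) * 1000:5.1f} ms")
# 10 pkt/s ->  25.0 ms; 25 pkt/s ->  40.0 ms; 45 pkt/s -> 200.0 ms
```

The sharp growth in delay as a line approaches saturation is exactly the kind of behavior such models predicted, and it guided decisions about line capacities and topology as the network grew.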
Operation The ARPANET was a research project that was communications-oriented, rather than user-oriented, in design. Nonetheless, in the summer of 1975, the ARPANET was declared "operational". The Defense Communications Agency took control, since ARPA was intended to fund advanced research. At about this time, the first ARPANET encryption devices were deployed to support classified traffic. The transatlantic connectivity with NORSAR and UCL later evolved into the SATNET. The ARPANET, SATNET and PRNET were interconnected in 1977. The ARPANET Completion Report, published in 1981 jointly by BBN and ARPA, concludes that: CSNET, expansion Access to the ARPANET was expanded in 1981, when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). Adoption of TCP/IP The DoD made TCP/IP standard for all military computer networking in 1980. NORSAR and University College London left the ARPANET and began using TCP/IP over SATNET in early 1982. On January 1, 1983, known as flag day, the TCP/IP protocols became the standard for the ARPANET, replacing the earlier Network Control Program. MILNET, phasing out In September 1984, work was completed on restructuring the ARPANET, giving U.S. military sites their own Military Network (MILNET) for unclassified defense department communications. Both networks carried unclassified information, and were connected at a small number of controlled gateways which would allow total separation in the event of an emergency. MILNET was part of the Defense Data Network (DDN). Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out. Decommissioning In 1985, the National Science Foundation (NSF) funded the establishment of national supercomputing centers at several universities, and provided network access and network interconnectivity with the NSFNET project in 1986. NSFNET became the Internet backbone for government agencies and universities. The ARPANET project was formally decommissioned in 1990. The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as July 1990. In the wake of the decommissioning of the ARPANET on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled "Requiem of the ARPANET": Legacy The ARPANET was related to many other research projects, which either influenced the ARPANET design, or were ancillary projects or spun out of the ARPANET. Senator Al Gore authored the High Performance Computing and Communication Act of 1991, commonly referred to as "The Gore Bill", after hearing the 1988 concept for a National Research Network submitted to Congress by a group chaired by Leonard Kleinrock. The bill was passed on 9 December 1991 and led to the National Information Infrastructure (NII), which Gore called the information superhighway. Inter-networking protocols developed by ARPA and implemented on the ARPANET paved the way for the future commercialization of a new world-wide network, known as the Internet. The ARPANET project was honored with two IEEE Milestones, both dedicated in 2009. Software and protocols IMP functionality Because it was never a goal for the ARPANET to support IMPs from vendors other than BBN, the IMP-to-IMP protocol and message format were not standardized. The IMPs did nonetheless communicate amongst themselves to perform link-state routing, to do reliable forwarding of messages, and to provide remote monitoring and management functions to the ARPANET's Network Control Center. Initially, each IMP had a 6-bit identifier and supported up to 4 hosts, which were identified with a 2-bit index. An ARPANET host address therefore consisted of both the port index on its IMP and the identifier of the IMP, written either in port/IMP notation or as a single byte; for example, the address of MIT-DMG (notable for hosting development of Zork) could be written as either 1/6 or 70. An upgrade in early 1976 extended the host and IMP numbering to 8 bits and 16 bits, respectively.
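Under the original scheme, the single-byte form evidently packs the 2-bit port index into the high bits and the 6-bit IMP number into the low bits, since 1/6 yields 1 × 64 + 6 = 70. A small sketch under that layout follows; the packing order is inferred from the MIT-DMG example above, not taken from the 1822 specification itself:

```python
def encode_host(port: int, imp: int) -> int:
    """Pack a 2-bit host port index and a 6-bit IMP number into one byte.
    Layout inferred from the MIT-DMG example (1/6 -> 70)."""
    assert 0 <= port < 4 and 0 <= imp < 64
    return (port << 6) | imp

def decode_host(addr: int) -> tuple:
    return addr >> 6, addr & 0x3F       # (port index, IMP number)

assert encode_host(1, 6) == 70          # MIT-DMG: 1/6 == 70
assert decode_host(70) == (1, 6)
```

The tight 2-bit and 6-bit fields explain why the network could initially address at most 4 hosts per IMP and 64 IMPs, and why the 1976 upgrade to wider fields was needed as the network grew.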
In addition to their primary routing and forwarding responsibilities, the IMPs ran several background programs, titled TTY, DEBUG, PARAMETER-CHANGE, DISCARD, TRACE, and STATISTICS. These were given host numbers in order to be addressed directly, and provided functions independently of any connected host. For example, "TTY" allowed an on-site operator to send ARPANET packets manually via the teletype connected directly to the IMP. 1822 protocol The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data message being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message (RFNM) acknowledgement to the sending host's IMP. Network Control Program Unlike modern Internet datagrams, the ARPANET was designed to reliably transmit 1822 messages and to inform the host computer when it lost a message; by contrast, the contemporary IP is unreliable, whereas TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Program (NCP), which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated in the OSI model. NCP was developed under the leadership of Stephen D. Crocker, then a graduate student at UCLA. Crocker created and led the Network Working Group (NWG), which was made up of a collection of graduate students at universities and research laboratories sponsored by ARPA to carry out the development of the ARPANET and the software for the host computers that supported applications. The various application protocols, such as TELNET for remote time-sharing access and the File Transfer Protocol (FTP), along with rudimentary electronic mail protocols, were developed and eventually ported to run over the TCP/IP protocol suite, or, in the case of email, replaced by the Simple Mail Transfer Protocol. TCP/IP Steve Crocker formed a "Networking Working Group" in 1969 with Vint Cerf, who also joined an International Networking Working Group in 1972. These groups considered how to interconnect packet switching networks with different specifications, that is, internetworking. Stephen J.
Lukasik directed DARPA to focus on internetworking research in the early 1970s. Research led by Bob Kahn at DARPA and Vint Cerf at Stanford University and later DARPA resulted in the formulation of the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974. The following year, testing began through concurrent implementations at Stanford, BBN and University College London. At first a monolithic design, the software was redesigned as a modular protocol stack in version 3 in 1978. Version 4 was installed in the ARPANET for production use in January 1983, replacing NCP. The development of the complete Internet protocol suite by 1989, and partnerships with the telecommunication and computer industry, laid the foundation for the adoption of TCP/IP as a comprehensive protocol suite and as the core component of the emerging Internet. Network applications NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service, and permitted independent advances in the underlying protocols. Telnet was developed in 1969, beginning with RFC 15 and later extended in RFC 855. The original specification for the File Transfer Protocol was written by Abhay Bhushan and published on 16 April 1971. By 1973, the File Transfer Protocol (FTP) specification had been defined and implemented, enabling file transfers over the ARPANET. In 1971, Ray Tomlinson of BBN sent the first network e-mail. Within a few years, e-mail came to represent a very large part of the overall ARPANET traffic. The Network Voice Protocol (NVP) specifications were defined in 1977, and implemented. However, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol (packet voice) was decades away. Password protection The Purdy polynomial hash algorithm was developed for the ARPANET to protect passwords in 1971, at the request of Larry Roberts, head of ARPA at that time. It computed a polynomial of degree 2^24 + 17 modulo the 64-bit prime p = 2^64 − 59. The algorithm was later used by Digital Equipment Corporation (DEC) to hash passwords in the VMS operating system, and is still being used for this purpose. Rules and etiquette Because of its government funding, certain forms of traffic were discouraged or prohibited. Leonard Kleinrock claims to have committed the first illegal act on the Internet, having sent a request for the return of his electric razor after a meeting in England in 1973. At the time, use of the ARPANET for personal reasons was unlawful. In 1978, against the rules of the network, Gary Thuerk of Digital Equipment Corporation (DEC) sent out the first mass email to approximately 400 potential clients via the ARPANET. He claimed that this resulted in $13 million worth of sales in DEC products, and it highlighted the potential of email marketing. A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette: In popular culture Computer Networks: The Heralds of Resource Sharing, a 30-minute documentary film featuring Fernando J. Corbató, J. C. R. Licklider, Lawrence G. Roberts, Robert Kahn, Frank Heart, William R. Sutherland, Richard W. Watson, John R. Pasta, Donald W. Davies, and economist George W.
Mitchell. "Scenario", an episode of the U.S. television sitcom Benson (season 6, episode 20, dated February 1985), was the first instance of a popular TV show directly referencing the Internet or its progenitors. The show includes a scene in which the ARPANET is accessed. There is an electronic music artist known as "Arpanet", Gerald Donald, one of the members of Drexciya. The artist's 2002 album Wireless Internet features commentary on the expansion of the internet via wireless communication, with songs such as "NTT DoCoMo", dedicated to the mobile communications giant based in Japan. Thomas Pynchon mentions the ARPANET in his 2009 novel Inherent Vice, which is set in Los Angeles in 1970, and in his 2013 novel Bleeding Edge. The 1993 television series The X-Files featured the ARPANET in a season 5 episode, titled "Unusual Suspects". John Fitzgerald Byers offers to help Susan Modeski (known as Holly ... "just like the sugar") by hacking into the ARPANET to obtain sensitive information. In the spy-drama television series The Americans, a Russian scientist defector offers access to ARPANET to the Russians in a plea not to be repatriated (season 2, episode 5, "The Deal"). Episode 7 of season 2 is named "ARPANET" and features Russian infiltration to bug the network. In the television series Person of Interest, main character Harold Finch hacked the ARPANET in 1980 using a homemade computer during his first efforts to build a prototype of the Machine. This corresponds with the real-life virus that occurred in October of that year, which temporarily halted ARPANET functions. The ARPANET hack was first discussed in the episode 2PiR (stylised 2R), where a computer science teacher called it the most famous hack in history and one that was never solved. Finch later mentioned it to Person of Interest Caleb Phipps, and his role was first indicated when he showed knowledge that it was done by "a kid with a homemade computer", which Phipps, who had researched the hack, had never heard before. In the third season of the television series Halt and Catch Fire, the character Joe MacMillan explores the potential commercialization of the ARPANET. See also .arpa, a top-level domain used exclusively for technical infrastructure purposes Computer Networks: The Heralds of Resource Sharing, a 1972 documentary film History of the Internet List of Internet pioneers Usenet, "A Poor Man's ARPAnet" OGAS References Sources Further reading Oral histories Focuses on Kahn's role in the development of computer networking from 1967 through the early 1980s. Beginning with his work at Bolt Beranek and Newman (BBN), Kahn discusses his involvement as the ARPANET proposal was being written and then implemented, and his role in the public demonstration of the ARPANET. The interview continues into Kahn's involvement with networking when he moved to IPTO in 1972, where he was responsible for the administrative and technical evolution of the ARPANET, including programs in packet radio, the development of a new network protocol (TCP/IP), and the switch to TCP/IP to connect multiple networks. Cerf describes his involvement with the ARPA network, and his relationships with Bolt Beranek and Newman, Robert Kahn, Lawrence Roberts, and the Network Working Group. Baran describes his work at RAND, and discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET. Kleinrock discusses his work on the ARPANET.
Lukasik discusses his tenure at the Advanced Research Projects Agency (ARPA) and the development of computer networks and the ARPANET. Frank describes his work on the ARPANET, including his interaction with Roberts and the IPT Office. Detailed technical reference works External links Timeline. Personal anecdote of the first message ever sent over the ARPANET. American inventions History of the Internet Computer-related introductions in 1969 Internet properties established in 1969 Internet properties disestablished in 1990 1969 establishments in the United States Internet in the United States 1990 disestablishments in the United States
260025
https://en.wikipedia.org/wiki/Niskayuna%2C%20New%20York
Niskayuna, New York
Niskayuna is a town in Schenectady County, New York, United States. The population was 23,278 at the 2020 census. The town is located in the southeast part of the county, east of the city of Schenectady, and is the easternmost town in the county. The current Town Supervisor is Jaime Puccioni. History The Town of Niskayuna was created on March 7, 1809 from the town of Watervliet, with an initial population of 681. The name of the town was derived from early patents to Dutch settlers: Nis-ti-go-wo-ne or Co-nis-tig-i-one, both derived from the Mohawk language. The 19th-century historians Howell and Munsell mistakenly identified Conistigione as an Indian tribe, but they were a band of Mohawk people known by the term for this location. The words translate roughly as "extensive corn flats", as the Mohawk had for centuries cultivated maize fields in the fertile bottomlands along today's Mohawk River. They were the easternmost of the Five Nations of the Iroquois Confederacy. Among the Mohawk chiefs who lived in the area were Ron-warrigh-woh-go-wa (meaning in English the great fault finder or grumbler), Ka-na-da-rokh-go-wa (a great eater), Ro-ya-na (a chief), As-sa-ve-go (big knife), and A-voon-ta-go-wa (big tree). Of these, Ron-warrigh-woh-go-wa strongly objected to selling communal lands to the whites. He ensured that the Mohawk retained the rights of hunting and fishing on lands they deeded to the Dutch and other whites. He was reported to have said that "after the whites had taken possession of our lands, they will make Kaut-sore [literally spoon-food or soup] of our bodies." He generally aided the settlers during the mid-18th century against the Canadians in the French and Indian War, the North American front of the Seven Years' War. The first European settlers of the town were Dutch colonists who chose to locate outside the manor of Rensselaerwyck to avoid the oversight of the patroons and the trading government of New Netherland. Harmon Vedder obtained a patent for some land in 1664, soon after the founders gained land in 1661 in what developed as the village and later city of Schenectady. The traders of Fort Orange retained their monopoly, forbidding the settlers in the Schenectady area from fur trading. They developed mostly as farmers. Among the early settlers were the ethnic Dutch Van Brookhoven, Claase, Clute, Consaul, Groot, Jansen, Krieger (Cregier), Pearse, Tymerson, Vedder, Van Vranken, and Vrooman families. Captain Martin Cregier, the first burgomaster of New Amsterdam, later settled in Niskayuna; he died in 1712. Following the Revolutionary War, Yankee settlers entered New York, settling in the Mohawk Valley and to the west. The Erie Canal of 1825 and later enlargements brought increased traffic and trade through the valley. During the 19th and 20th centuries, industries developed along the Mohawk River, especially concentrated in Schenectady in this county. Farming continued in outlying areas. The headquarters of General Electric and Westinghouse Electric developed in the city of Schenectady, which became a center of broad-reaching innovation in uses of electricity and a variety of consumer products. After World War II, the Knolls Atomic Power Laboratory was opened in 1946 in Niskayuna, under a contract between General Electric and the US government. In 1973, the General Electric Engineering Development Center moved from downtown Schenectady to River Road in Niskayuna.
Today, it is one of the two world headquarters of GE Global Research, with the other in Bangalore, India. Due to the high-level scientific and technological jobs associated with these businesses, Niskayuna has a high level of education among its residents and one of the higher per capita incomes of towns in the capital area. The following sites in the town are listed on the National Register of Historic Places: George Westinghouse Jones House, Niskayuna Railroad Station, Niskayuna Reformed Church, and Rosendale Common School. Geography The northern and eastern town lines are defined by the Mohawk River, with Saratoga County, New York, on the opposite bank. The south town line is the town of Colonie in Albany County. Lock 7 of the Erie Canal is located in the town. The town is bordered by the city of Schenectady to the west. According to the United States Census Bureau, the town has a total area of , of which is land and , or 5.92%, is water. Niskayuna previously received the designation of Tree City USA, though it is not listed on the current Tree City USA roster. Demographics As of the census of 2020, there were 23,278 people, 7,285 single-family homes, 1,415 apartments, and a small number of town houses and condominiums. The population density was 1,438.3 people per square mile (555.3/km2). There were 8,046 housing units at an average density of 570.2 per square mile (220.2/km2). The town's population was 51.7% female and 48.3% male. The racial makeup of the town was 90.7% White, 6.0% Asian, 1.9% African-American, and 1.6% "Other." There were 7,787 households, of which 36.2% had children under the age of 18 living with them, 64.2% were married couples living together, 7.6% had a female householder with no husband present, and 25.6% were non-families. 22.1% of all households were made up of individuals, and 11.5% had someone living alone who was 65 years of age or older. The average household size was 2.56 and the average family size was 3.02. In the town, the population was spread out, with 26.1% under the age of 18, 4.2% from 18 to 24, 25.4% from 25 to 44, 27.1% from 45 to 64, and 17.1% who were 65 years of age or older. The median age was 42 years. For every 100 females, there were 93.3 males. For every 100 females age 18 and over, there were 87.8 males. The median income for a household in the town was $93,800, and the median income for a family was $94,539. Males had a median income of $59,738 versus $39,692 for females. The per capita income for the town was $33,257. The town has many residents who commute about fifteen miles to work in Albany, the capital of New York State. Niska Day Since 1980, the annual community holiday "Niska-Day" (or Nisky-Day) has traditionally been celebrated on the first Saturday after the third Friday in May. The festival begins in the early morning with a family foot race. This is followed by a parade and a fair. The day concludes with fireworks (weather permitting). Community groups pick a new theme each year (e.g., in 2007, "Niska-unity"). The town's goal is to bring families together for a celebration that helps them recognize and appreciate their shared identity as residents of the town of Niskayuna. It takes place at the Craig Elementary School soccer fields. The celebration was established in 1980 by the Niskayuna Community Action Program (N-CAP), responding to a school district report on mental health needs, to reinforce shared community identification. Unlike many municipalities, the town of Niskayuna does not sponsor an official observation of the Fourth of July.
"Niska-Day" serves as the local substitute. Communities and locations Aqueduct – A hamlet at the northern tip of the town. Avon Crest – A large suburban development south of Troy-Schenectady Road. Avon Crest North – An extension of Avon Crest to the north of Troy-Schenectady Road. Catherine's Woods Estates – A small neighborhood on the east side of the town near the Mohawk River. Edison Woods – A small, newer, upscale suburban development near the Knolls Atomic Power Laboratory. Forest Oaks – A small, upscale development off Pearse Road which borders Albany County. Grand Blvd Estates – A neighborhood on either side of the tree-lined Grand Boulevard mall. Its center mall was once the route of a main trolley line from the Hillside Avenue trolley barns to downtown Schenectady. Hawthorne Hill – A suburban community east of Schenectady. Hexam Gardens - a small development between Balltown Rd, Rt. 7, and the Reist Bird Sanctuary. Karen Crest – A small development in the southwestern part of town near Hillside School. Merlin Park - A residential area bordered by Route 7 (south), Rosendale Road (north), Mohawk Road (west) and WTRY Road (east). Niska Isle – A peninsula along the Mohawk River Niskayuna – A hamlet and census-designated place in the southeast part of the town. "Old Niskayuna" – The area of the town west of Balltown Road and north of Union Street. The term is a misnomer; "Old Niskayuna" is actually the Niskayuna hamlet at the east end of town that once extended well into what is now the town of Colonie. Orchard Park – A small neighborhood situated between GE Global Research and Niskayuna High School and bordered by Balltown Road on the west and River Road to the north. Rosendale Estates – A large suburban development in the central part of the town near Rosendale Elementary school and Iroquois Middle School. Stanford Heights – A hamlet in the southwest corner of the town that contains a historic mansion once owned by the parents of Leland Stanford and then later by their son Charles Stanford. Windsor Estates – Another small, upscale development near GE Global Research and the Knolls Atomic Power Laboratory (KAPL). The front entrance leads to Van Antwerp Road, Niskayuna High School, and Town Hall, while the back entrance leads to River Road. Woodcrest – A suburban neighborhood located off VanAntwerp Road connected west of Rosendale Estates. Notable people Colin Angle – co-founder and CEO of iRobot Jeff Blatnick (1957–2012) – wrestler, Olympic gold medalist Brian Chesky – co-founder of Airbnb Joe Crummey – radio talk show host André Davis – NFL wide receiver (played for Houston Texans) William A. Edelstein (1944–2014) – physicist, the primary inventor of spin-warp imaging, which is still used in all commercial MRI systems Kate Fagan - ESPN sports reporter and podcast host Ivar Giaever – physicist, 1972 Nobel laureate Jordan Juron, professional ice hockey player Steve Katz, musician, Blues Project, and Blood, Sweat and Tears Gilbert King (born 1962), author of Devil in the Grove (2012), winning 2013 Pulitzer Prize for General Non-Fiction Marc LaBelle – musician, singer, songwriter, Dirty Honey Lyn Lifshin - poet Clarence Linder (1903–1994) – General Electric executive Jean Mulder – linguist specializing in Australian English Maureen O'Sullivan (1911–1998) – actress, mother of Mia Farrow Gabriella Pizzolo – Actress who appeared in Matilda on Broadway and as Suzie from the hit Netflix series Stranger Things. 
Ron Rivest – cryptographer, co-inventor of the RSA encryption algorithm Kayla Treanor – women's lacrosse player for Syracuse and USA Lacrosse Garrett Whitley – baseball prospect, drafted in the first round of the 2015 MLB Draft by the Tampa Bay Rays Andrew Yang – politician, entrepreneur, and Democratic presidential candidate Literary references Herman Melville, in his novel Moby-Dick, refers to a sailor on the ship Jeroboam who, according to a story relayed by Stubb, the second mate on the Pequod, "had been originally nurtured among the crazy society of Neskyeuna Shakers, where he had been a great prophet." In popular culture Niskayuna appears in a driving montage in The Simpsons episode "D'oh Canada." Notes External links Niskayuna Central School District Towns in New York (state) Towns in Schenectady County, New York Populated places established in 1809 1809 establishments in New York (state)
264466
https://en.wikipedia.org/wiki/Electronic%20business
Electronic business
Electronic business (or "online business" or "e-business") is any kind of business or commercial transaction that includes sharing information across the internet. Commerce constitutes the exchange of products and services between businesses, groups, and individuals, and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of information and communication technology to enable the external activities and relationships of the business with individuals, groups, and other businesses, while e-business refers to business conducted with the help of the internet. Electronic business differs from electronic commerce in that it does not only deal with online transactions of selling and buying a product and/or service but also enables the conduct of business processes (inbound/outbound logistics, manufacturing and operations, marketing and sales, customer service) within the value chain through internal or external networks. The term "e-business" was coined by IBM's marketing and Internet team in 1996. Market participants in Electronic Business Electronic business can take place between a very large number of market participants; it can be between business and consumer, private individuals, public administrations, or any other organizations such as NGOs. These various market participants can be divided into three main groups: 1) Business (B) 2) Consumer (C) 3) Administration (A) All of them can be either buyers or service providers within the market. There are nine possible combinations for electronic business relationships. B2C and B2B belong to e-commerce, while A2B and A2A belong to the e-government sector, which is also a part of electronic business. Supply chain management and E-business With the development of the e-commerce industry, business activities are becoming more and more complex to coordinate, so efficiency is crucial for the success of e-commerce. Hence, well-developed supply chain management is a key component of e-commerce, because the e-commerce industry focuses not only on building an appropriate web site but also on suitable infrastructure, a well-developed supply chain strategy, and so on. By definition, supply chain management refers to the management of the flow of goods and services, and all activities connected with transforming raw materials into final products. The goal of the business is to maximize customer value and gain a competitive advantage over others. Supply chain management in the e-commerce industry mainly focuses on manufacturing, supplying the raw materials, managing supply and demand, distribution, and so on. Effective supply chain management in e-commerce often enables companies to leverage new opportunities for maximizing their profit by satisfying and meeting customer expectations. With well-developed supply chain management, a company has a better chance of success by forming the right partnerships and supply network, automating the business, and so on. The enabling role of e-business technologies in supply chain organizations in the development of intra- and inter-organizational collaboration, and its impact on performance, is discussed in depth in an article by Nada Sanders.
To sum up, effective supply chain management in the e-commerce industry is needed for three main reasons: ensuring high service levels and stock availability; encouraging positive customer experiences and reviews, and building a brand reputation; and cost efficiency. History One of the founding pillars of electronic business was the development of Electronic Data Interchange (EDI). This system replaced traditional mailing and faxing of documents with a digital transfer of data from one computer to another, without any human intervention. Michael Aldrich is considered the developer of the predecessor to online shopping. In 1979, the entrepreneur connected a television set to a transaction processing computer with a telephone line and called it "teleshopping", meaning shopping at a distance. From the mid-nineties, major advancements were made in the commercial use of the Internet. Amazon, which launched in 1995, started as an online bookstore and grew to become the largest online retailer worldwide, selling food, toys, electronics, apparel and more. Other successful online marketplaces include eBay and Etsy. In 1994, IBM, with its agency Ogilvy & Mather, began to use its foundation in IT solutions and expertise to market itself as a leader in conducting business on the Internet through the term "e-business." Then-CEO Louis V. Gerstner, Jr. was prepared to invest $1 billion to market this new brand. After conducting worldwide market research in October 1997, IBM began with an eight-page piece in The Wall Street Journal that would introduce the concept of "e-business" and advertise IBM's expertise in the new field. IBM decided not to trademark the term "e-business" in the hope that other companies would use the term and create an entirely new industry. However, this proved to be too successful, and by 2000, to differentiate itself, IBM launched a $300 million campaign about its "e-business infrastructure" capabilities. Since that time, the terms "e-business" and "e-commerce" have been loosely interchangeable and have become a part of the common vernacular. According to the U.S. Department of Commerce, estimated retail e-commerce sales in Q1 2020 represented almost 12% of total U.S. retail sales, against 4% in Q1 2010. Business model The transformation toward e-business is complex, and for it to succeed there is a need to balance strategy, an adapted business model (e-intermediary, marketplaces), the right processes (sales, marketing) and technology (supply chain management, customer relationship management). When organizations go online, they have to decide which e-business models best suit their goals. A business model is defined as the organization of product, service and information flows, and the source of revenues and benefits for suppliers and customers. The concept of the e-business model is the same, but applied to an online presence. Revenue model A key component of the business model is the revenue model or profit model, which is a framework for generating revenues. It identifies which revenue source to pursue, what value to offer, how to price the value, and who pays for the value. It primarily identifies what product or service will be created in order to generate revenues and the ways in which the product or service will be sold.
Without a well-defined revenue model, that is, a clear plan of how to generate revenues, new businesses are more likely to struggle due to costs which they cannot sustain. By having a revenue model, a business can focus on a target audience, fund development plans for a product or service, establish marketing plans, begin a line of credit and raise capital. E-commerce E-commerce (short for "electronic commerce") is trading in products or services using computer networks, such as the Internet. Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection. Modern electronic commerce typically uses the World Wide Web for at least one part of the transaction's life cycle, although it may also use other technologies such as e-mail. Customer Relationship Management in e-business Customer Relationship Management (CRM) is the strategy that is used to build relationships and interactions with current and potential customers. CRM provides better customer service, allowing companies to analyze their past, current and future customers on a variety of levels. It is one of the elements essential for any business, including e-commerce, because it allows companies to grow and succeed. It cannot be done without technology. It is the formation of bonds between customers and the company. CRM impacts e-commerce sites by becoming an essential part of business success. Interactively collecting and considering customer data helps to build a company's e-CRM capability, which then leads to the company's corporate success. The goal of CRM is to establish a profitable, long-term, one-to-one relationship with customers by understanding their needs and expectations. This strategy uses two different approaches: software applications and software as a service. E-commerce CRM (e-CRM) primarily focuses on customer experiences and sales that are conducted online. Most e-CRM software has the ability to analyze customer information, sales patterns, record and store data, and a website's metrics, for example: conversion rates, customer click-through rates, e-mail subscription opt-ins, and which products customers are interested in. Concerns While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has recently focused on the economic effects of consolidation from Internet businesses, since these businesses employ far fewer people per dollar of sales than traditional retailers. Security E-business systems naturally have greater security risks than traditional business systems, therefore it is important for e-business systems to be fully protected against these risks. A far greater number of people have access to e-businesses through the internet than would have access to a traditional business. Customers, suppliers, employees, and numerous other people use any particular e-business system daily and expect their confidential information to stay secure.
Concerns
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has recently focused on the economic effects of consolidation from Internet businesses, since these businesses employ far fewer people per dollar of sales than traditional retailers.

Security
E-business systems naturally have greater security risks than traditional business systems, so it is important for e-business systems to be fully protected against these risks. A far greater number of people have access to e-businesses through the internet than would have access to a traditional business. Customers, suppliers, employees, and numerous other people use any particular e-business system daily and expect their confidential information to stay secure. Hackers are one of the great threats to the security of e-businesses. Common security concerns for e-businesses include keeping business and customer information private and confidential, the authenticity of data, and data integrity. Methods of protecting e-business security and keeping information secure include physical security measures as well as data storage, data transmission, anti-virus software, firewalls, and encryption, to list a few.

Privacy and confidentiality
Confidentiality is the extent to which businesses make personal information available to other businesses and individuals. With any business, confidential information must remain secure and be accessible only to the intended recipient. However, this becomes even more difficult when dealing with e-businesses specifically. To keep such information secure means protecting any electronic records and files from unauthorized access, as well as ensuring safe transmission and data storage of such information. Tools such as encryption and firewalls manage this specific concern within e-business.

Authenticity
E-business transactions pose greater challenges for establishing authenticity due to the ease with which electronic information may be altered and copied. Both parties in an e-business transaction want to have the assurance that the other party is who they claim to be, especially when a customer places an order and then submits a payment electronically. One common way to ensure this is to limit access to a network or trusted parties by using virtual private network (VPN) technology. Authenticity is established even more strongly when a combination of techniques is used, checking "something you know" (e.g. a password or PIN), "something you have" (e.g. a credit card), or "something you are" (e.g. voice recognition or another biometric method). Many times in e-business, however, identity is verified by checking the purchaser's "something you have" (the credit card) together with "something you know" (the card number).

Data integrity
Data integrity answers the question "Can the information be changed or corrupted in any way?" This leads to the assurance that the message received is identical to the message sent. A business needs to be confident that data is not changed in transit, whether deliberately or by accident. To help with data integrity, firewalls protect stored data against unauthorized access, while simply backing up data allows recovery should the data or equipment be damaged.

Non-repudiation
This concern deals with the existence of proof in a transaction. A business must have the assurance that the receiving party or purchaser cannot deny that a transaction has occurred, and this means having sufficient evidence to prove the transaction. One way to address non-repudiation is using digital signatures. A digital signature not only ensures that a message or document has been electronically signed by the person, but, since a digital signature can only be created by one person, it also ensures that this person cannot later deny that they provided their signature.
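A minimal sketch of how a digital signature supports non-repudiation, assuming the third-party Python cryptography package (pip install cryptography); the order text is made up for illustration.

```python
# Sign a message with a private key; anyone holding the public key can
# verify it, and verification fails if the message or signature changes.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

order = b"purchase order: 2 units, account 1017"  # hypothetical message

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Only the holder of the private key can produce this signature.
signature = private_key.sign(order, pss, hashes.SHA256())

try:
    public_key.verify(signature, order, pss, hashes.SHA256())
    print("signature valid: the signer cannot plausibly deny sending it")
except InvalidSignature:
    print("message or signature was altered")
```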
Access control
When certain electronic resources and information are limited to only a few authorized individuals, a business and its customers must have the assurance that no one else can access the systems or information. There are a variety of techniques to address this concern, including firewalls, access privileges, user identification and authentication techniques (such as passwords and digital certificates), and virtual private networks (VPN).

Availability
This concern is specifically pertinent to a business's customers, as certain information must be available when customers need it. Messages must be delivered in a reliable and timely fashion, and information must be stored and retrieved as required. Because the availability of service is important for all e-business websites, steps must be taken to prevent disruption of service by events such as power outages and damage to physical infrastructure. Measures to address this include data backup, fire-suppression systems, uninterruptible power supply (UPS) systems, virus protection, and ensuring that there is sufficient capacity to handle the demands posed by heavy network traffic.

Cost structure
The business internet which supports e-business carries a maintenance cost of about $2 trillion in outsourced IT spending in the United States alone. With each website custom crafted and maintained in code, the maintenance burden is enormous. In the twenty-first century, new businesses can be expected to help standardize the look and feel of businesses' internet presence in order to reduce this maintenance cost. The cost structure of an e-business varies greatly with the industry it operates in. There are two major categories with common characteristics. The first group consists of fully digital businesses that do not provide any products or services outside the digital world, for example software companies and social networks. For those, the most significant operational cost is the maintenance of the platform, and these costs are almost unrelated to each additional customer the business acquires, making the marginal cost of serving one more customer almost zero. This is one of the major advantages of that kind of business. The second group consists of businesses that provide services or products outside the digital world, such as online shops; for those, costs are much harder to determine. Common advantages over traditional businesses include lower marketing costs, lower inventory costs, lower payroll and lower rent.

Security solutions
When it comes to security solutions, sustainable electronic business requires support for data integrity, strong authentication, and privacy. Numerous measures can be taken to protect an e-business, starting with basics such as switching to HTTPS from the older, more vulnerable HTTP protocol. Other areas that require full attention are securing servers and admin panels, payment gateway security, antivirus and anti-malware software, firewalls, regular updates, and data backups.
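As a small sketch of the HTTPS point above, the snippet below redirects any plain-HTTP request to its HTTPS equivalent. It assumes the third-party Flask framework (pip install flask); the route is hypothetical, and this is not a complete hardening recipe (behind a reverse proxy, for instance, the scheme must be read from forwarded headers instead).

```python
# Redirect plain-HTTP traffic to HTTPS before any route handler runs.
from flask import Flask, request, redirect

app = Flask(__name__)

@app.before_request
def enforce_https():
    if not request.is_secure:
        # 301 tells browsers to remember the secure location.
        return redirect(request.url.replace("http://", "https://", 1),
                        code=301)

@app.route("/checkout")
def checkout():
    return "secure checkout page"
```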
Access and data integrity
There are several different ways to prevent access to the data that is kept online. One way is to use anti-virus software, which most people use to protect their networks regardless of the data they hold. E-businesses should use it so that they can be sure that the information sent to and received by their systems is clean. A second way to protect the data is to use firewalls and network protection. A firewall is used to restrict access to private networks, as well as public networks that a company may use. The firewall also has the ability to log attempts to enter the network and provide warnings as they happen. Firewalls are very beneficial for keeping third parties out of the network. Businesses that use Wi-Fi need to consider further forms of protection because these networks are easier for someone to access; they should look into protected access, virtual private networks, or internet protocol security. Another option is an intrusion detection system, which raises an alert when there are possible intrusions. Some companies set up traps, or "honeypots", to attract people and are then able to tell when someone is trying to hack into that area.

Encryption
Encryption, which is a part of cryptography, involves transforming texts or messages into a code which is unreadable. These messages have to be decrypted in order to be understandable or usable. There is a key that identifies the data to a certain person or company. With public-key encryption, two keys are used: one is public and one is private. The public one is used for encryption and the private one for decryption. The strength of the encryption can be adjusted and should be based on the sensitivity of the information; the key can be anything from a simple shift of letters to a completely random mix-up of letters. Encryption is relatively easy to implement because there is software that a company can purchase. A company needs to be sure that its keys are registered with a certificate authority.

Digital certificates
The point of a digital certificate is to identify the owner of a document. This way the receiver knows that it is an authentic document. Companies can use these certificates in several different ways. They can be used as a replacement for user names and passwords, with each employee given one to access the documents they need from wherever they are. These certificates also use encryption, though they are a little more complicated than normal encryption: they embed identifying information within the code in order to assure the authenticity of the documents, along with the confidentiality and data integrity which always accompany encryption. Digital certificates are not commonly used because they are confusing for people to implement. There can be complications when using different browsers, which means multiple certificates are needed. The process is being adjusted so that it is easier to use.

Digital signatures
A final way to secure information online is to use a digital signature. If a document has a digital signature on it, no one else is able to edit the information without being detected; if the document is edited, the signature no longer verifies, revealing the tampering. In order to use a digital signature, one must use a combination of cryptography and a message digest. A message digest is used to give the document a unique value. That value is then encrypted with the sender's private key.
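A minimal sketch of the message-digest idea, using Python's standard hashlib; the document text is made up. In a full digital signature, it is this digest that would be encrypted with the sender's private key, as described above.

```python
# A message digest condenses a document into a fixed-size "fingerprint";
# any change to the document, however small, yields a different digest.
import hashlib

document = b"Purchase agreement, version 1"
digest = hashlib.sha256(document).hexdigest()

tampered = hashlib.sha256(b"Purchase agreement, version 2").hexdigest()
assert digest != tampered   # the alteration is immediately detectable
print(digest)
```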
Advantages & Disadvantages

Advantages
E-business offers many advantages, mostly connected to making doing business easier. The benefits of implementing e-business tools lie in the streamlining of business processes, not so much in the use of technology itself. Here are some:
- Easy to set up: an electronic business is easy to set up, even from home; the only requirements are software, a device and an internet connection.
- Flexible business hours: there are no time barriers of the kind a location-based business encounters, since the internet is available to everyone all the time. Products and services can be accessed by everyone with an internet connection.
- Cheaper than traditional business: an electronic business is less costly to run than a traditional business, though it is more expensive to set up. Transaction costs are also cheaper.
- No geographical boundaries: the greatest benefit is the possibility of geographical dispersion. Anyone can order anything from anywhere at any time.
- Government subsidies: digitalisation is strongly encouraged by governments, which provide the necessary support.
- New market entry: an electronic business has great potential to enable entry into a previously unknown market that a traditional business could not reach.
- Lower levels of inventory: electronic business enables companies to lower their level of inventory by digitalizing their assets (for example, Netflix no longer sells physical DVDs but offers online streaming content instead).
- Lower costs of marketing and sales: e-commerce allows the actors of the industry to advertise their product or service offering (for example, house rental) at generally lower costs than by promoting their business physically.

Disadvantages
Despite all the benefits, there are also some disadvantages that need to be addressed. The most common limitations of electronic business are:
- Lack of personal touch: the products cannot be examined or felt before the final purchase. The traditional model offers a more personal customer experience, while in electronic business that is mostly not the case; the personal touch is also missing from online transactions.
- Delivery time: a traditional business enables instant satisfaction, as the buyer obtains the product the moment the purchase is made, while in electronic business that is not possible; there will always be a waiting period before the product is received. For example, Amazon offers one-day delivery. This does not resolve the issue completely, but it is an improvement.
- Security issues: scams can be mentioned as a factor in people's distrust of electronic business. Hackers can easily obtain customers' financial and personal details. Some customers still find it hard to trust electronic businesses because of security, reliability and integrity issues.

See also
- Electronic commerce
- Electronic Commerce Modeling Language
- Very Large Business Applications
- Digital economy
- Types of E-commerce
- Shopping cart software
Premier Election Solutions
Premier Election Solutions, formerly Diebold Election Systems, Inc. (DESI), was a subsidiary of Diebold that made and sold voting machines. In 2009, it was sold to competitor ES&S. In 2010, Dominion Voting Systems purchased the primary assets of Premier, including all intellectual property, software, firmware and hardware for Premier's current and legacy optical scan, central scan, and touch screen voting systems, and all versions of the GEMS election management system, from ES&S. At the time ES&S spun off the company due to monopoly charges, its systems were in use in 1,400 jurisdictions in 33 states, serving nearly 28 million people.

History
DESI was run by Bob Urosevich, starting in 1976. In 1979, Bob Urosevich founded, and served as the President of (through 1992), American Information Systems, now known as Election Systems & Software, Inc. (ES&S), which became a chief competitor to DESI. Todd Urosevich, Bob's brother, was Vice President, Aftermarket Sales, of Election Systems & Software, Inc. In 1995, Bob Urosevich started I-Mark Systems, whose product was a touch screen voting system utilizing a smart card and biometric encryption authorization technology. Global Election Systems, Inc. (GES) acquired I-Mark in 1997, and on 31 July 2000, Bob Urosevich was promoted from Vice President of Sales and Marketing and New Business Development to President and Chief Operating Officer. On January 22, 2002, Diebold announced the acquisition of GES, then a manufacturer and supplier of electronic voting terminals and solutions. The total purchase price, in stock and cash, was $24.7 million. Global Election Systems subsequently changed its name to Diebold Election Systems, Inc.

Name change
In late 2006, Diebold decided to remove its name from the front of the voting machines in what its spokesperson called "a strategic decision on the part of the corporation". In August 2007, Diebold Election Systems changed its name to "Premier Election Solutions" ("PES").

Acquisition by Election Systems & Software
Election Systems & Software (ES&S) acquired Premier Election Solutions on September 3, 2009. ES&S President and CEO Aldo Tesi said combining the two companies would result in better products and services for customers and voters.

Acquisition by Dominion
Following the acquisition, the Department of Justice and 14 individual states launched investigations into the transaction on antitrust grounds. In March 2010, the Department of Justice filed a civil antitrust lawsuit against ES&S, requiring it to divest the voting equipment systems assets it had acquired from Premier Election Solutions in order to restore competition. The company sold the assets to Dominion Voting Systems. Dominion Voting Systems acquired Premier on May 19, 2010. "We are extremely pleased to conclude this transaction, which will restore much-needed competition to the American voting systems market and will allow Dominion to expand its capabilities and operational footprint to every corner of the United States," said John Poulos, CEO of Dominion. The transaction was approved by the Department of Justice and nine state attorneys general.

Controversies
O'Dell's fundraising
In August 2003, Walden O'Dell, chief executive of Diebold, announced that he had been a top fundraiser for President George W. Bush and had sent a get-out-the-funds letter to Ohio Republicans. In the letters he said he was "committed to helping Ohio deliver its electoral votes to the president next year."
Although he clarified his statement as merely a poor choice of words, critics of Diebold and/or the Republican party interpreted this as, at minimum, an indication of a conflict of interest, and at worst as implying a risk to the fair counting of ballots. He responded to the critics by pointing out that the company's election machines division is run out of Texas by a registered Democrat. Nonetheless, O'Dell vowed to lower his political profile lest his personal actions harm the company. O'Dell resigned his post of chairman and chief executive of Diebold on December 12, 2005, following reports that the company was facing securities fraud litigation surrounding charges of insider trading.

Security and concealment issues
In January 2003, Diebold Election Systems' proprietary software and election files, together with hardware and software specifications, program files, and voting program patches, were leaked from the company's file transfer protocol site; on 7 August 2003 the material was further leaked to Wired (magazine). In 2004, Avi Rubin, a professor of computer science at Johns Hopkins University and Technical Director of the Information Security Institute, analyzed the source code used in these voting machines and reported "this voting system is far below even the most minimal security standards applicable in other contexts." Following the publication of this paper, the State of Maryland hired Science Applications International Corporation (SAIC) to perform another analysis of the Diebold voting machines. SAIC concluded "[t]he system, as implemented in policy, procedure, and technology, is at high risk of compromise." In January 2004, RABA Technologies, a security company in Columbia, Maryland, did a security analysis of the Diebold AccuVote, confirming many of the problems found by Rubin and finding some new vulnerabilities. In June 2005, the Tallahassee Democrat reported that Black Box Voting, a nonprofit election watchdog group founded by Bev Harris, hired Finnish computer expert Harri Hursti and, when given access to Diebold optical scan vote-counting computers, conducted a project in which vote totals were altered by replacing the memory card that stores voting results with one that had been tampered with. Although the machines are supposed to record changes to data stored in the system, they showed no record of tampering after the memory cards were swapped. In response, a spokesperson for the Florida Department of State said, "Information on a blog site is not viable or credible." In early 2006, a study for the state of California corroborated and expanded on the problem; on page 2 the California report states that: "Memory card attacks are a real threat: We determined that anyone who has access to a memory card of the AV-OS, and can tamper it (i.e. modify its contents), and can have the modified cards used in a voting machine during election, can indeed modify the election results from that machine in a number of ways. The fact that the results are incorrect cannot be detected except by a recount of the original paper ballots" and "Harri Hursti's attack does work: Mr. Hursti's attack on the AV-OS is definitely real. He was indeed able to change the election results by doing nothing more than modifying the contents of a memory card. He needed no passwords, no cryptographic keys, and no access to any other part of the voting system, including the GEMS election management server." A new vulnerability, this time in the TSx DRE machines, was reported in May 2006.
According to Professor Rubin, the machines are "much, much easier to attack than anything we've previously said... On a scale of one to 10, if the problems we found before were a six, this is a 10. It's a totally different ballgame." According to Rubin, the system is intentionally designed so that anyone with access can update the machine software, without a pass code or other security protocol. Diebold officials said that although any problem can be avoided by keeping a close watch on the machines, they were developing a fix. Michael I. Shamos, a professor of computer science at Carnegie Mellon University who is a proponent of electronic voting and the examiner of electronic voting systems for Pennsylvania, stated "It's the most severe security flaw ever discovered in a voting system." Douglas W. Jones, a professor of computer science at the University of Iowa, stated "This is the barn door being wide open, while people were arguing over the lock on the front door." Diebold spokesman David Bear played down the seriousness of the situation, asserting that "For there to be a problem here, you're basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software. I don't believe these evil elections people exist." On October 30, 2006, researchers from the University of Connecticut demonstrated new vulnerabilities in the Diebold AccuVote-OS optical scan voting terminal, showing that the system can be compromised even if its removable memory card is sealed in place. On September 13, 2006, Professor Edward Felten, Director of the Center for Information Technology Policy at Princeton University, and graduate students Ariel Feldman and Alex Halderman discovered severe security flaws in a Diebold AccuVote-TS voting machine. Their findings claimed, "Malicious software running on a single voting machine can steal votes with little if any risk of detection. The malicious software can modify all of the records, audit logs, and counters kept by the voting machine, so that even careful forensic examination of these records will find nothing amiss." On November 2, 2006, HBO premiered a documentary entitled "Hacking Democracy", concerning the vulnerability of electronic voting machines (primarily Diebold) to hacking and inaccurate vote totals. The company argued that the film was factually inaccurate and urged HBO to air a disclaimer explaining that it had not verified any of the claims. However, corroboration and validation for the exploits shown in Hacking Democracy was published in a report for the state of California (see above). In January 2007, a photo of the key used to open Diebold voting machines was posted on the company's website. It was found possible to duplicate the key based on the photo. The key unlocks a compartment which contains a removable memory card, leaving the machine vulnerable to tampering. A report commissioned by Ohio's top elections official, released on December 15, 2007, found that all five voting systems used in Ohio (made by Election Systems and Software, Premier Election Solutions (formerly Diebold Election Systems), and Hart InterCivic) had critical flaws that could undermine the integrity of the 2008 general election. On July 17, 2008, Stephen Spoonamore made the claim that he had "fresh evidence regarding election fraud on Diebold electronic voting machines during the 2002 Georgia gubernatorial and senatorial elections."
Spoonamore is "the founder and until recently the CEO of Cybrinth LLC, an information technology policy and security firm that serves Fortune 100 companies." He claims that Diebold Election Systems Inc. COO Bob Urosevich personally installed a computer patch on voting machines in two counties in Georgia, and that the patch did not fix the problem it was supposed to fix. Reports have indicated that then Georgia Secretary of State Cathy Cox did not know the patch was installed until after the election. States rejecting Diebold In 2004, after an initial investigation into the company's practices, Secretary of State of California Kevin Shelley issued a ban on one model of Diebold voting machines in that state. California Attorney General Bill Lockyer, joined the state of California into a false claims suit filed in November 2003 by Bev Harris and Alameda County citizen Jim March. The suit charged that Diebold had given false information about the security and reliability of Diebold Election Systems machines that were sold to the state. To settle the case, Diebold agreed to pay $2.6 million and to implement certain reforms. On August 3, 2007, California Secretary of State Debra Bowen decertified Diebold and three other electronic voting systems after a "top-to-bottom review of the voting machines certified for use in California in March 2007." In April 2007, the Maryland General Assembly voted to replace paperless touchscreen voting machines with paper ballots counted by optical scanners, effective in time for the 2010 general (November) elections. The law, signed by the Governor in May 2007, was made contingent on the provision of funding by no later than April 2008. The Governor included such funding in his proposed budget in January 2008, but the funding was defeated by the state House in July 2008. In March 2009, California Secretary of State Debra Bowen decertified Diebold's GEMS version 1.18.19 after the Humboldt County Election Transparency Project discovered that GEMS had silently dropped 197 ballots from its tabulation of a single precinct in Eureka, California. The discovery was made after project members conducted an independent count using the ballot counting program Ballot Browser. Leaked memos In September 2003, a large number of internal Diebold memos, dating back to 1999, were posted to the BlackBoxVoting.org web site, resulting in the site being shut down due to a Diebold cease and desist order. Later, other website organizations Why War? and the Swarthmore Coalition for the Digital Commons, a group of student activists at Swarthmore College posted the memos. U.S. Representative Dennis Kucinich, a Democrat from Ohio, placed portions of the files on his websites. Diebold attempted to stop the publication of these internal memos by sending cease-and-desist letters to each site hosting these documents, demanding that they be removed. Diebold claimed the memos as their copyrighted material, and asserted that anyone who published the memos online was in violation of the Online Copyright Infringement Liability Limitation Act provisions of the Digital Millennium Copyright Act found in §512 of the United States Copyright Act. When it turned out that some of the challenged groups would not back down, Diebold retracted their threat. Those who had been threatened by Diebold then sued for court costs and damages, in OPG v. Diebold. 
This suit eventually led to a victory for the plaintiffs against Diebold, when in October 2004 Judge Jeremy Fogel ruled that Diebold had abused its copyrights in its efforts to suppress the embarrassing memos.

Stephen Heller
In January and February 2004, a whistleblower named Stephen Heller brought to light memos from Jones Day, Diebold's attorneys, informing Diebold that it was in breach of California law by continuing to use illegal and uncertified software in California voting machines. California Attorney General Bill Lockyer filed civil and criminal suits against the company, which were dropped when Diebold settled out of court for $2.6 million. In February 2006, Heller was charged with three felonies for this action. On November 20, 2006, Heller made a plea agreement to pay $10,000 to Jones Day, write an apology, and receive three years' probation.

Diebold and Kenneth Blackwell's conflict of interest
Ohio State Senator Jeff Jacobson, a Republican, asked Ohio Secretary of State Ken Blackwell, also a Republican, in July 2003 to disqualify Diebold's bid to supply voting machines for the state after security problems were discovered in its software, but was refused. Blackwell had ordered Diebold touch screen voting machines, reversing an earlier decision by the state to purchase only optical scan voting machines which, unlike the touch screen devices, would leave a "paper trail" for recount purposes. Blackwell was found, in April 2006, to own 83 shares of Diebold stock, down from 178 shares purchased in January 2005, which he attributed to an unidentified financial manager at Credit Suisse First Boston who, without his knowledge, had violated his instructions to avoid potential conflicts of interest. When Cuyahoga County's primary was held on May 2, 2006, officials ordered the hand-counting of more than 18,000 paper ballots after Diebold's new optical scan machines produced inconsistent tabulations, leaving several local races in limbo for days and eventually resulting in a reversal of the outcome of one race for state representative. Blackwell ordered an investigation by the Cuyahoga County Board of Elections; Ohio Democrats demanded that Blackwell, who was also the Republican gubernatorial candidate in 2006, recuse himself from the investigation due to conflicts of interest, but Blackwell did not do so. The Republican head of the Franklin County, Ohio Board of Elections, Matt Damschroder, said a Diebold contractor came to him and bragged of a $50,000 check he had written to Blackwell's "political interests."

See also
- Black box voting
- ChoicePoint
- Diebold
- Electoral fraud
- Electronic voting
- Hacking Democracy
- 2018 United States elections (2018 Georgia elections)
- Voting machine

External links
- Official site of Diebold-Procomp Brazil
- Official site of Dominion Voting

Research and reports
- Security Analysis of the Diebold AccuVote-TS Voting Machine, Princeton University
- Analysis of an Electronic Voting System, Avi Rubin at Johns Hopkins University
- The Case of the Diebold FTP Site by Douglas W. Jones, Professor of Computer Science at the University of Iowa
- Voting system report by Science Applications International Corporation
- Maryland Voting Systems Study, RTI International, December 2, 2010
- Top-to-Bottom Review of voting systems by the government of California
- Online Policy Group v. Diebold case file from Electronic Frontier Foundation
- Diebold takes down blackboxvoting.org, Egan Orion, The Inquirer, September 24, 2003
- Con Job at Diebold Subsidiary, Associated Press, Wired, December 17, 2003
Octopus card
The Octopus card is a reusable contactless stored value smart card for making electronic payments in online or offline systems in Hong Kong. Launched in September 1997 to collect fares for the territory's mass transit system, the Octopus card system is the second contactless smart card system in the world, after the Korean Upass, and has since grown into a widely used payment system for all public transport in Hong Kong, leading to the development of the Navigo card in Paris, the Oyster card in London, the Opal card in New South Wales, NETS FlashPay and EZ-Link in Singapore, and many other similar systems around the world. The Octopus card has also grown to be used for payment in many retail shops in Hong Kong, including most convenience stores, supermarkets, and fast food restaurants. Other common Octopus payment applications include parking meters, car parks, petrol stations, vending machines, fee payment at public libraries and swimming pools, and more. The cards are also commonly used for non-payment purposes, such as school attendance and access control for office buildings and housing estates. The Octopus card won the Chairman's Award of the World Information Technology and Services Alliance's 2006 Global IT Excellence Award for, among other things, being the world's leading complex automatic fare collection and contactless smartcard payment system. According to Octopus Cards Limited, operator of the Octopus card system, there are more than 36 million cards in circulation, nearly five times the population of Hong Kong. The cards are used by 98 per cent of the population of Hong Kong aged 15 to 64. The system handles more than 15 million transactions, worth over HK$220 million, on a daily basis.

History
Prior to the Octopus card, Hong Kong's Mass Transit Railway (MTR) adopted a system of recirculating magnetic plastic cards as fare tickets when it started operations in 1979. Another of the territory's railway networks, the Kowloon-Canton Railway (KCR), adopted the same magnetic cards in 1984, and the stored value version was renamed the Common Stored Value Ticket. In 1989, the Common Stored Value Ticket system was extended to Kowloon Motor Bus (KMB) buses providing a feeder service to MTR and KCR stations and to Citybus, and was also extended to a limited number of non-transport applications, such as payments at photobooths and for fast food vouchers. The MTR Corporation eventually decided to adopt more advanced technologies, and in 1993 announced that it would move towards using contactless smartcards. To gain wider acceptance, it partnered with four other major transit companies in Hong Kong to create a joint-venture business, then known as Creative Star Limited, in 1994 to operate the Octopus system. After three years of trials, the Octopus card was launched on 1 September 1997. Three million cards were issued within the first three months of the system's launch. The quick success of the system was driven by the fact that the MTR and KCR required all holders of Common Stored Value Tickets to replace their tickets with Octopus cards within three months or have their tickets made obsolete. Another reason was the coin shortage in Hong Kong in 1997: with the transfer of Hong Kong away from British rule, there was a belief that the older Queen's Head coins in Hong Kong would rise in value, so many people hoarded these older coins and waited for their value to increase.
The Octopus system was quickly adopted by other Creative Star joint venture partners, and KMB reported that by 2000, most bus journeys were completed using an Octopus card, with few coins used. Boarding a bus in Hong Kong without using the Octopus card requires giving exact change, making it cumbersome compared to using the Octopus card. By November 1998, 4.6 million cards were issued, and this rose to 9 million by January 2002. In 2000, the Hong Kong Monetary Authority granted a deposit-taking company license to the operator, removing previous restrictions that prohibited Octopus from generating more than 15 percent of its turnover from non-transit-related functions. This allowed the Octopus card to be widely adopted for non-transit-related sales transactions. On 29 June 2003, the Octopus card found another application when the Hong Kong Government started to replace all of its 18,000 parking meters with a new Octopus card-operated system. The replacement was completed on 21 November 2004.

First generation card replacement
In order to enhance the level of security and technology, the Octopus company launched its "Replacement of First Generation On-Loan Octopus" programme in 2015. First generation cards have no bracket in their card number. On successful transactions with first generation cards, the card reader emits a "do" sound three times to remind cardholders to replace their cards. At the initial stage in 2015, holders of first generation cards could voluntarily replace their cards at an Octopus Service Point without charge. In 2017, the Octopus company issued a final call on card replacements, and first generation cards became unusable in stages starting from January 2018. Cardholders can replace these cards without charge at MTR or KMB Customer Service Centres and Octopus Service Points.

Etymologies and logo
The Cantonese name for the Octopus card, Baat Daaht Tùng, translates literally as "eight-arrived pass", where Baat Daaht may translate as "reaching everywhere". Less literally, the meaning is taken as the "go-everywhere pass". It was selected by the head of the MTR Corporation, the parent company of Octopus Cards Limited, in a naming competition held in 1996. The number eight refers to the cardinal and ordinal directions, and to a common four-character idiom loosely translated as "reachable in all directions". Eight is also considered a lucky number in Chinese culture, and the phrase can possibly be associated with a similar-sounding phrase meaning "getting wealthy" in Cantonese. The English name Octopus card was also selected from the naming competition. Coincidentally, the English name echoes the number eight in the Chinese name, since an octopus has eight tentacles. The logo used on the card features a Möbius strip in the shape of an infinity symbol.

Card usage
The Octopus card was originally introduced for fare payment on the MTR; however, the use of the card quickly expanded to other retail businesses in Hong Kong. The card is now commonly used in most public transport, fast food restaurants, supermarkets, vending machines, convenience stores, photo booths, parking meters, car parks and many other retail businesses where small payments are frequently made by customers. With over 33 million Octopus cards in circulation as of 2018, the Octopus card is used by 99 per cent of Hongkongers.
Notable businesses that started accepting Octopus cards at a very early stage include PARKnSHOP, Wellcome, Watsons, 7-Eleven, Starbucks, McDonald's, and Circle K. As of 21 November 2004, all parking meters in Hong Kong had been converted. They no longer accepted coins, and Octopus became the only form of payment accepted until 2021, when a new parking meter was introduced which also accepts contactless payment (Visa, MasterCard and UnionPay), FPS and QR code payment (AlipayHK, WeChat Pay HK and UnionPay QR code). Octopus cards also double as access control cards in buildings and for school administrative functions. At certain office buildings, residential buildings, and schools, use of an Octopus card is required for entry.

Payments
Making or recording a payment with the card for public transport or purchases at Octopus-enabled retailers can be done by holding the card against, or waving it over, an Octopus card reader from up to a few centimetres away. The reader acknowledges payment by emitting a beep and displaying the amount deducted and the remaining balance of the card. Standard transaction time for readers used for public transport is 0.3 seconds, while that of readers used for retailers is 1 second. When using the MTR heavy rail system, the entry point is noted when a passenger enters, and the appropriate amount, based on the distance travelled, is deducted when the passenger validates the card again at the exit point. The MTR usually charges less for journeys made using an Octopus card instead of conventional single-journey tickets. For example, the adult fare for a single journey from Chai Wan to Tung Chung is HK$25.7 with an Octopus card and HK$28.5 with a single journey ticket. Other public transport operators also offer intermittent discounts for using Octopus cards on higher fares and round-trip transits on select routes. On 6 November 2005, Octopus Cards Limited launched Octopus Rewards, a program that allows cardholders to earn rewards at merchants that are partners in the program. Participating merchants provide consumers with tailor-made offers and privileges. The rewards that the program offers are in the form of points, or reward dollars, stored on the card. Once a card is registered for the program, the cardholder may accumulate reward points by making purchases at participating merchants, and payments may be made in the form of cash, credit cards, or Octopus cards themselves. The rate at which reward points are earned per dollar-amount purchase differs by the merchant at which the purchases are made. At Wellcome, for example, one point is earned for every purchase of HK$200, and at Watsons, points are earned at a rate of 0.5 percent of the dollar amount of a purchase. Once these reward dollars are accumulated, they may be redeemed as payment for purchases at partner merchants at a rate of at least HK$1 per reward dollar. To redeem the accumulated reward dollars, cardholders must use the entire value amount in whole, and may not elect to use it partially. If the purchase price is lower than the amount of reward dollars available, the amount difference remains stored on the card. Founding partners for the Octopus Rewards program include HSBC, UA Cinemas and Wellcome.
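A toy sketch of the redemption rule just described: reward dollars must be used in full, and if the purchase is smaller than the reward balance, the difference stays on the card. The function name and figures are illustrative only, not the actual Octopus Rewards logic.

```python
# Redeem reward dollars in full against a purchase, per the rule above.
def redeem(purchase_hkd: float, reward_dollars: float) -> tuple[float, float]:
    """Return (cash_still_due, reward_dollars_remaining)."""
    if purchase_hkd >= reward_dollars:
        return purchase_hkd - reward_dollars, 0.0
    # Purchase smaller than the reward balance: the difference stays stored.
    return 0.0, reward_dollars - purchase_hkd

print(redeem(150.0, 40.0))  # (110.0, 0.0): pay HK$110 cash, rewards used up
print(redeem(30.0, 40.0))   # (0.0, 10.0): HK$10 of reward value remains
```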
Taxis
Most taxis in Hong Kong did not accept the Octopus card in the past. On 27 June 2006, after 10 years of negotiations between Octopus Cards Limited and the taxi industry, the first trial of taxis equipped with Octopus card readers was launched in the New Territories with taxis operated by the Yellow Taxi Group. But it was reported on 30 October that of the 20 taxis that participated in the trial, eight had dropped out. Part of the reason was technical: drivers had to return to the office every day for accounting. The Octopus card company said it would upgrade the system to allow automatic account updating in the future. Wong Yu-ting, managing director of the Yellow Taxi Group, also noted that they had been "trying to convince restaurants and retailers" to offer discounts to Octopus taxi passengers, but that the Transport Department had been a major obstacle. The Transport Department is against this approach for legal reasons, as taxi fare discounts are illegal in Hong Kong. Also, unlike taxi corporations overseas, most drivers are self-employed; they and the taxi owners prefer to account for their takings and rent on a daily basis, while Octopus transfers money through a bank after a working day. Installation and service fees are also among their concerns. In March 2018, Octopus Cards Limited announced its plans to re-enter the taxi payment market with a new mobile app for taxi drivers. The mobile app is able to receive funds when the passenger's Octopus card is tapped against the device's NFC reader, or by allowing passengers to scan the O! ePay driver QR code. In October 2020, Octopus Cards Limited announced the launch of Octopus Mobile POS, a more compact version of the Octopus reader, to help taxi drivers and small and medium-sized retail merchants accept cashless payments. The new Octopus Mobile POS, working seamlessly with the mobile app, was especially timely during the COVID-19 pandemic, as it can provide merchants and customers with peace of mind by avoiding potential virus transmission. By July 2021, over 15,000 taxi drivers had installed Octopus Mobile POS.

Outside Hong Kong
Usage of the Octopus card was extended to Macau and the Chinese city of Shenzhen in 2006. In collaboration with China UnionPay, Octopus Cards Limited introduced Octopus card usage to two Fairwood restaurants in Shenzhen in August 2006. In 2008, five Café de Coral locations in Shenzhen also started accepting Octopus. Value cannot be reloaded to Octopus cards in Shenzhen, but the Automatic Add Value Service is available to automatically deduct money from a customer's credit card to reload an Octopus card. The two Fairwood restaurants in Shenzhen that were enabled for Octopus card payments are located at Luohu Commercial City and Shenzhen railway station. Shenzhen became the first city outside Hong Kong in which Octopus cards may be accepted as payment. In Macau, the Octopus card was introduced in December 2006, when two Kentucky Fried Chicken restaurants in the territory adopted its usage as payment. Similar to its usage in Shenzhen, an Octopus card may not be reloaded in Macau, and the currency exchange rate between the Macanese pataca and the Hong Kong dollar when using an Octopus card is MOP1:HKD1. The two Kentucky Fried Chicken restaurants in Macau that adopted the Octopus card for payment are located at the Rua Do Campo and the Sands Casino. Shenzhen Tong cards are now widely used in Shenzhen instead, and a combined Shenzhen Tong – Hong Kong Octopus card, called the Hu Tong Xing, is available with separate RMB and HKD purses. The Macau Pass is now widely used in Macau.
Balance enquiries, reloading and refunds
Balance enquiry
Enquiry machines are available at all MTR stations, and Octopus Service Points are available at some MTR stations. Placing any Octopus product on one of these machines displays the balance along with a history of the last ten transactions. Card users may also check the balance and the last three months of transactions on a mobile phone with the Octopus app; for some Octopus products, such as the 20th Anniversary Special Edition, up to 40 transactions can be checked in the app. After each payment, the remaining value and the amount deducted are also shown on the card reader, as well as on the receipt.

Top-up
Money can be credited to the card in a number of ways. In general, all cards can be topped up in multiples of HK$50. Elder cards and Personalised Octopus cards with "Student" or "Person with Disabilities" status can be topped up in multiples of HK$10, but this has to be done at an MTR Customer Service Centre. "Add Value Machines" are installed at all MTR stations and accept cash only. Alternatively, cards may be topped up with cash at authorised service providers such as PARKnSHOP, Wellcome, Watsons, 7-Eleven, Circle K, and Café de Coral, and also at customer service centres and ticketing offices at transport stations. In selected stores, change from a cash payment can also be added to an Octopus card. In addition, large numbers of spare coins can be added to an Octopus card at "Coin Carts", vehicles operated by the Hong Kong Monetary Authority. The Octopus "Automatic Add Value Service" (AAVS) is an automatic top-up method. This service allows money to be automatically deducted from a credit card and credited to an Octopus card when the value of the Octopus card falls below zero. The credit card used must be one offered by one of the 22 financial institutions that participate in AAVS; participating banks include HSBC, Bank of China, and Hang Seng Bank. Depending on the applicant's chosen default, HK$150, HK$250 or HK$500 is added to the card each time value is automatically added. The "O! ePay" mobile wallet, which is also run by the Octopus company, also allows users to transfer money between itself and Octopus cards; transfers can be as small as 10 cents. An Octopus card may store a maximum value of HK$3,000, with an On-Loan card having an initial deposit value of HK$50 and a Sold card having no initial deposit value. Negative value is incurred on a card if it is used with insufficient funds: both types of cards may carry a maximum negative value of HK$35 or HK$50 (depending on the card's issue date) before value needs to be added to them again for use. At the time, the maximum cost of a trip on any of the rail networks except the Airport Express and first class of the MTR East Rail line was HK$34.8, the cost of travelling between East Tsim Sha Tsui station and Lo Wu station; the current maximum cost is HK$55.3, the cost of travelling between Disneyland Resort station and either Lo Wu station or Lok Ma Chau station.
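The stored-value rules just described can be summarised in a toy model: a HK$3,000 cap, top-ups in multiples of HK$50, and a bounded negative balance of HK$35 or HK$50 depending on issue date. This is an illustrative simplification (it ignores the deposit and concessionary top-up rules), not the actual card logic.

```python
class OctopusBalance:
    """Toy model of the stored-value rules described above."""
    MAX_VALUE = 3000.0                         # HK$3,000 stored-value cap

    def __init__(self, negative_limit=35.0):   # HK$35 or HK$50 by issue date
        self.negative_limit = negative_limit
        self.balance = 0.0

    def top_up(self, amount):
        if amount % 50:
            raise ValueError("top-ups are in multiples of HK$50")
        if self.balance + amount > self.MAX_VALUE:
            raise ValueError("would exceed the HK$3,000 cap")
        self.balance += amount

    def pay(self, fare):
        # A card already in negative value must be topped up before use;
        # one payment may push the balance below zero, down to the limit.
        if self.balance < 0 or self.balance - fare < -self.negative_limit:
            raise ValueError("insufficient value: top up first")
        self.balance -= fare

card = OctopusBalance()
card.top_up(50)
card.pay(55.3)   # the maximum fare quoted above; balance is now HK$-5.3
```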
Card refund
An Octopus card may be returned to any MTR Customer Service Centre for a refund of the remaining credit stored on it. A handling fee may be charged for the refund: HK$11 for an anonymous On-Loan card that has been in use for fewer than 90 days, and HK$10 for a Personalised On-Loan card that was issued on or after 1 November 2004. A refund is provided immediately when an anonymous card is returned, unless it has more than HK$500 stored on it. A Personalised On-Loan card, or an anonymous On-Loan card with more than HK$500 stored on it, needs to be sent back to Octopus Cards Limited for refund processing, in which case the refund for a Personalised On-Loan card is available in eight days and that for an anonymous On-Loan card in five days. If a damaged On-Loan card is returned for refund, a HK$30 levy is charged to the cardholder.

Octopus Administrative Fee
Starting from 1 October 2020, Octopus charges a HK$15 Inactive Octopus Administrative Fee every year until the stored value and deposit are fully deducted; the card is then decommissioned and cannot be reactivated. An On-Loan Adult Octopus issued on or after 1 October 2017 that has not had any add-value or payment transaction for three years becomes an Inactive Octopus. The administrative fee applies only to the Anonymous On-Loan Adult Octopus and to Personalised On-Loan Octopus cards for customers aged 18 to 59 without concession.

Types of cards
There are two main types of Octopus card (On-Loan and Sold), and two less common types (the Airport Express Tourist and the MTR Airport Staff).

Main types of cards
On-Loan cards
On-Loan cards are issued for day-to-day use, primarily for fare payment on transport systems. They are further classified into Child, Adult, Elder, and Personalised categories, with the first three based on age and carrying different amounts of fare concession. With the exception of the Personalised cards, On-Loan cards are anonymous: no personal information, bank account, or credit card details are stored on the card, and no identification is required for the purchase of these cards. If an owner loses a card, only the stored value and the deposit of the card are lost. On-Loan Octopus cards may be purchased at all MTR stations, the KMB Customer Service Centre, New World First Ferry (NWFF) Octopus Service Centres, and the New World First Bus (NWFB) Customer Service Centre. A Student On-Loan Octopus card was initially issued, but was discontinued in 2005.

Personalised cards
The Personalised card is available on registration. The name and, optionally, a photo of the holder are imprinted on the card. Personalised cards can function automatically as a Child, Adult, or Elder card by recognising the cardholder's age stored on the card, hence accounting for different concessionary fares. As of 2003, there were 380,000 holders of Personalised Octopus cards. In addition to all the functions of an ordinary card, this card can be used as a key card for access to residential and office buildings. If a Personalised card is lost, the holder may report the loss by phone or on the web to prevent unauthorised use of the card. A refund or a new Personalised card, depending on the holder's choice, will then be mailed to the holder. The refund contains the deposit and the value that remained on the card three hours after the loss was reported, minus a HK$30 card cost and a HK$20 handling fee. A Personalised card with "student status" is available for students in Hong Kong. To be eligible for this card, the applicant must be a full-time Hong Kong student aged between 12 and 25. This type of Personalised card is automatically issued to a student who applies for student concessionary privileges. Additionally, these cards can be used for school administrative tasks such as the recording of student attendance and the management of library loans.

Sold cards
In contrast to On-Loan cards, Sold cards are sponsored and branded cards.
They are souvenir cards that are frequently released by Octopus Cards Limited. The designs for these cards usually come from fictional characters in popular culture, or they are inspired by Chinese cultural events such as Chinese New Year. These cards are sold at a premium, have limited or no initial stored value, and cannot be refunded, but they can otherwise be used as ordinary cards. An example of the Sold card is the McMug and McDull collection. Launched at the end of January 2007 to coincide with the beginning of the Year of the Pig, it features two differently designed versions of the card and is sold for HK$138 per set. Each set comes with an Adult Octopus card, a pouch for the card, a matching strap and a McMug or McDull ornament. Octopus Cards Limited has launched new collections of these cards for such occasions as the Mid-Autumn Festival, the passing of the year 2004, and the release of the movie DragonBlade. Sold Octopus cards may be purchased at selected MTR stations and at all 7-Eleven stores.

Alternative designs (Sold cards)
Other than the Octopus card itself, operator Octopus Cards Limited also sells watches and mobile phone covers that function as anonymous Octopus cards. The types of watches available include wrist watches, pocket watches, and watch key chains. The mobile phone covers were specifically designed for Nokia models 3310 and 3330, and iPhone 4 and 4S. As with the card itself, these products are used by waving them over a card reader. They may be reloaded with money value in the same ways as the card itself, including automatic reloading via the Automatic Add Value Service, with the exception that they cannot be reloaded at Add Value Machines due to their shapes. An Octopus watch or mobile phone cover may store a maximum of HK$3,000, but has no initial stored value at the time of purchase. It may carry a maximum negative value of HK$35, as with an Octopus card. These products are not refundable for their costs, but the remaining value stored on them may be refunded if they are damaged, with the damaged product itself also returned to the customer. In June 2007, a new set of limited edition products was announced, featuring Mini Octopus cards and Child Octopus Wristbands. The Mini Octopus cards, available in Adult and Elder editions, measure 4.7 cm by 3 cm (1.85 in by 1.18 in) and work as regular (anonymous) Adult and Elder Octopus cards, respectively. The Child Octopus Wristbands are plastic wristbands with a watch-like round face and work as regular Child Octopus cards. The same value-adding abilities and limitations as for the aforementioned watches and mobile phone covers apply.

Special-purpose cards
The special-purpose card, the Airport Express Tourist Octopus, was introduced by Octopus Cards Limited to target tourists in Hong Kong. Two versions of this card are offered: a HK$250 card with a free single ride on the Airport Express, the Mass Transit Railway (MTR) train line that runs between Hong Kong International Airport and the urban areas of Hong Kong, and a HK$350 card with two free single rides included. The airport journeys are valid for 180 days from the date of purchase. Both versions allow three days of unlimited rides on the MTR and include a HK$50 refundable deposit. Usable value on these cards may be added if necessary. These tourist Octopus cards may be used only by tourists staying in Hong Kong for 14 or fewer days; users may be required to produce a passport showing their arrival date in Hong Kong.
The Airport Express Tourist Octopus is available for purchase at all MTR stations. The other special-purpose card, the MTR Airport Staff Octopus, is available to the staff of Hong Kong International Airport and AsiaWorld-Expo, a convention centre close to the airport, for commuting at a reduced fare between the airport and MTR stations via the Airport Express. Staff who apply for the card may use it for a discount of up to 64 percent on Airport Express single journey fares. The MTR Airport Staff Octopus is available upon application via the company for which the staff member works. Some local banks in Hong Kong have integrated the Octopus card into their ATM cards and credit cards.

Samsung Pay
Since 14 December 2017, a cardless Octopus, named "Smart Octopus", has been available with Samsung Pay, a mobile payment platform provided by Samsung. Using the phone's NFC function and magnetic secure transmission (MST) technology, users can tap selected Samsung devices on Octopus readers, paying in a similar way to a normal physical Octopus card. Users can choose to transfer their card data from an existing anonymous On-Loan Adult or Elder Octopus to the Smart Octopus. All card value and reward points are transferred and held in the Samsung Pay app; the physical card is then deactivated and can no longer be used. Users can also choose to purchase a new Adult or Elder Smart Octopus in the app. Smart Octopus provides features, such as instant transaction notification and an in-app top-up function, that were not originally available with a physical card. Initially, the in-app top-up method charged a 2.5% handling fee; the fee was removed in June 2020, when support for Apple Pay was launched.

Apple Pay
Since 2 June 2020, Octopus cards can be added to Apple Pay in iOS 13.5. Developers had first discovered code in iOS associating Octopus with Apple Pay. On 11 July 2019, it was announced that Octopus would be coming to iPhone and Apple Watch in late 2019, but the official launch was delayed until June 2020. As Octopus cards use FeliCa technology, only the Apple Watch Series 3 or later and the iPhone 8 or later are supported. Octopus for Tourists was launched in August 2020. Users can choose to create an Octopus card inside Apple Pay by topping up with the credit cards stored there, or to transfer data from an existing physical Octopus card. It supports Apple Pay's Express Transit function, which allows payments to be made from the iPhone or Apple Watch without needing to switch on the phone or authenticate the payment with Face ID, Touch ID, or a password.

Technology
The Octopus system was designed by the Australia-based company ERG Group (now Vix Technology). The company was selected in 1994 to lead the development of the Octopus project and was responsible for the building and installation of the components of the Octopus system. Operations, maintenance and development were undertaken by Octopus Cards Limited, and in 2005 it replaced the central transaction clearing house with its own system. The Octopus card uses the Sony 13.56 MHz FeliCa radio frequency identification (RFID) chip; Hong Kong was the world's first major public transport system to use this technology. It is a "touch and go" system, so users need only hold the card in close proximity to the reader, and physical contact is not required. Data is transmitted at up to 212 kbit/s (the maximum speed for Sony FeliCa chips), compared to 9.6 kbit/s for other smart card systems like Mondex and Visa Cash. The card has a storage capacity of 1 KB to 64 KB, compared to the 125 bytes provided by a traditional magnetic stripe card. Octopus uses a nonstandard system for RFID instead of the more popular ISO/IEC 14443 standards, since there were no such standards in the nascent industry during its development in 1997. The operating range of the reader/writer is between 30 and 100 mm (1.18 and 3.94 in), depending on the model being used. Octopus is specifically designed so that card transactions are relayed for clearing on a store-and-forward basis, without any requirement for reader units to have real-time round-trip communications with a central database or computer. The stored data about a transaction may be transmitted by network after hours or, in the case of offline mobile readers, may be retrieved by a handheld device, for example a Pocket PC.
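A minimal sketch of this store-and-forward design, in which each reader logs transactions locally with no real-time link and uploads them in a batch later. The class and names are illustrative, not the actual Octopus implementation.

```python
# Each offline reader stores transactions locally and forwards them in a
# batch, e.g. after hours or when a handheld device collects the data.
from collections import deque

class OfflineReader:
    def __init__(self, reader_id):
        self.reader_id = reader_id
        self.pending = deque()       # locally stored transactions

    def record(self, card_id, amount_hkd):
        # No network round-trip here; the deduction is only logged locally.
        self.pending.append({"reader": self.reader_id,
                             "card": card_id,
                             "amount": amount_hkd})

    def forward(self, clearing_house):
        # Batch upload of everything recorded since the last collection.
        while self.pending:
            clearing_house.append(self.pending.popleft())

clearing_house = []                  # stand-in for the central clearing house
bus_reader = OfflineReader("minibus-42")
bus_reader.record("card-001", 4.5)
bus_reader.record("card-002", 4.5)
bus_reader.forward(clearing_house)   # relayed for clearing at day's end
```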
The card has a storage capacity of 1 KB to 64 KB, compared with the 125 bytes provided by a traditional magnetic stripe card. Octopus uses a nonstandard system for RFID instead of the more popular ISO/IEC 14443 standards, since there were no such standards in the nascent industry during its development in 1997. The operating range of the reader/writer is between 30 and 100 mm (1.18 and 3.94 in), depending on the model being used. Octopus is specifically designed so that card transactions are relayed for clearing on a store and forward basis, without any requirement for reader units to have realtime round-trip communications with a central database or computer. The stored data about a transaction may be transmitted over the network after hours or, in the case of offline mobile readers, retrieved by a handheld device, for example a Pocket PC. In practice, different data collection mechanisms are used by different transport operators, depending on the nature of their business. The MTR equips its stations with local area networks that connect the components that deal with Octopus cards: turnstiles, Add Value Machines, value-checking machines and customer service terminals. Transactions from these stations are relayed to the MTR's Kowloon Bay headquarters through a frame relay wide area network, and from there onwards to the central clearing house system (CCHS) for clearing. Similar arrangements are in place for retailers such as 7-Eleven. Handheld devices are used to scan offline mobile readers, including those installed on minibuses. Buses use either handheld devices or a wireless system, depending on the operator.

Security

The Octopus card uses encryption for all airborne communication and performs mutual authentication between the card and reader based on the ISO 9798-2 three-pass mutual authentication protocol. In other words, data communications are established only when the card and reader have mutually authenticated based on a shared secret access key. This means that the security of the Octopus card system would be jeopardized should the access key be exposed: a stolen Octopus card reader could be used with stolen Octopus software, for example, to add value (up to HK$3,000) to any Octopus card without authorization. Nevertheless, as of 2003, the Octopus card and system had never been hacked. Octopus card readers include a fail-safe that prevents a reader from initiating a transaction when more than one card is detected at the same time. On 11 February 2009, Sing Tao Daily reported that this fail-safe had been abused for fare evasion at railway station turnstiles. A large number of dishonest mainland Chinese passengers at Sheung Shui station and Lo Wu station were stacking four or more cards before passing through the turnstile, pretending to touch the reader correctly while deliberately triggering the fail-safe to avoid any deduction of card value. If caught by station staff, they could then claim a hardware malfunction and present an Octopus card showing an unsuccessful transaction.
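The three-pass mutual authentication mentioned at the start of this section can be sketched roughly as below. This is a simplified illustration of the ISO 9798-2 message flow, not Octopus's actual implementation: for brevity it uses an HMAC where the standard specifies symmetric encipherment, and the shared access key is a stand-in.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(16)    # stand-in for the shared secret access key

def token(*parts: bytes) -> bytes:
    """Proof of key knowledge over the given nonces (HMAC instead of encipherment)."""
    return hmac.new(KEY, b"|".join(parts), hashlib.sha256).digest()

# Pass 1: the reader challenges the card with a fresh random nonce.
r_reader = secrets.token_bytes(8)

# Pass 2: the card replies with its own nonce plus a token proving it
# knows the key and saw the reader's challenge.
r_card = secrets.token_bytes(8)
card_token = token(r_card, r_reader)
assert hmac.compare_digest(card_token, token(r_card, r_reader))    # reader verifies

# Pass 3: the reader proves knowledge of the key to the card, with the
# nonces in the opposite order so the card's own token cannot be replayed.
reader_token = token(r_reader, r_card)
assert hmac.compare_digest(reader_token, token(r_reader, r_card))  # card verifies
```

Only after both checks succeed would the card and reader go on to exchange encrypted transaction data.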
Operator

The Octopus card system is owned and operated by Octopus Cards Limited, a wholly owned subsidiary of Octopus Holdings Limited. The company was founded as Creative Star Limited in 1994 to oversee the development and implementation of the Octopus card system, and was renamed Octopus Cards Limited in 2002. Creative Star was formed as a joint-venture company by five major transit companies in Hong Kong: MTR Corporation, Kowloon-Canton Railway Corporation, Kowloon Motor Bus, Citybus, and Hongkong and Yaumati Ferry. In January 2001, the shares of Hongkong and Yaumati Ferry in the company were transferred to New World First Bus and New World First Ferry. In the same year, the company, together with the MTR Corporation, was transformed from its previous non-profit status into a profit-making enterprise. Owing to the expansion of the company's businesses, Octopus Holdings Limited was established in 2005 and Octopus Cards Limited was restructured as its subsidiary. The business of Octopus Cards Limited, being a payment business, is regulated by the Hong Kong Monetary Authority, while Octopus' non-payment businesses are not subject to such regulation and are operated by other subsidiaries of Octopus Holdings Limited that are independent of Octopus Cards Limited. As of 2007, Octopus Holdings Limited was a joint-venture business owned by five transport companies in Hong Kong: 57.4 per cent by the MTR Corporation, 22.1 per cent by the Kowloon-Canton Railway Corporation, 12.4 per cent by Kowloon Motor Bus, 5 per cent by Citybus, and 3.1 per cent by New World First Bus. Since the Government of Hong Kong owns 76.54 per cent of the MTR Corporation (as of 31 December 2005) and wholly owns the Kowloon-Canton Railway Corporation, it is the biggest effective shareholder of Octopus Holdings Limited, and thus also of Octopus Cards Limited. Initially, Octopus Cards Limited, then known as Creative Star Limited, was restricted to processing at most 15 per cent of Octopus card transactions for non-transport purposes, as it operated under the Banking Ordinance. On 20 April 2000, the Hong Kong Monetary Authority authorised the company for deposit-taking, which allowed 50 per cent of Octopus card transactions to be unrelated to transport. About HK$416 million was deposited in the Octopus system at any given time as of 2000.

Awards

The Octopus card is recognised internationally, having won the Chairman's Award of the World Information Technology and Services Alliance's 2006 Global IT Excellence Award for being the world's leading complex automatic fare collection and contactless smartcard payment system, and for its innovative use of technologies.

Issues

EPS add-value glitch

In February 2007 it was found that when customers added value to their cards at self-service add-value points located in MTR and Light Rail stations, their bank accounts were debited even if the transactions had been cancelled. Octopus Cards Limited claimed that the fault was due to an upgrade of communication systems. Initially, two cases were reported. The company then announced that the use of the payment system, Electronic Payment Services (EPS), at add-value service points would be suspended until further notice, and that it had started an investigation into the causes of the problem. On 27 July 2007 it was announced that the faulty transactions could be traced back to 2000, and that a total of HK$3.7 million had been wrongly deducted in 15,270 cases. The company reported that there might be cases dating from before 2000, but that records of only the past seven years' transactions had been kept. The company stated that it would co-operate with EPS Company Limited, the operator of Electronic Payment Services, and with banks, to contact the customers involved and arrange refunds within ten weeks.
On 21 December 2007 the company announced that it would permanently cease all transactions using EPS, because it could not guarantee that such problems would not occur again.

Privacy abuse

On 15 July 2010, despite Octopus' claims never to have sold customer data, a former employee of a mainland Chinese insurance company claimed that the insurer had purchased records of 2.4 million Octopus users. On 20 July, Octopus acknowledged selling customers' personal details to the insurer and to CPP, and started an internal review of its data practices. Octopus Holdings had made HK$44 million (about US$5.7 million) from such sales over 4.5 years. Roderick Woo Bun, Hong Kong's Privacy Commissioner for Personal Data, gave radio interviews and called for a transparent investigation, but his term expired at the end of July 2010. Allan Chiang Yam-wang was announced as the incoming Privacy Commissioner. This news was met with protests and international outrage, owing to his prior history of privacy invasions involving cameras used to spy on his employees at the Post Office, and the disclosure of hundreds of job applicants' personal data to corporations. Outgoing Privacy Commissioner Woo pledged to finish a preliminary report on the Octopus privacy abuse before his term ended, and called for a new law making it a criminal offence for companies to sell personal data.

Technology

As NFC and cardless payment have become more popular in mainland China, some commentators have questioned the slow pace of technological development at Octopus. Some suggest that the company's market monopoly and the government's attitude towards it are contributing factors, as social transportation subsidies are distributed solely through the Octopus system. Octopus CEO Sunny Cheung responded that new technology, such as QR code payment, integration of the Octopus card with Samsung Pay, and peer-to-peer lending, would be launched within a short period of time.

See also

Digital currency
List of smart cards

References
Cordell Hull
Cordell Hull (October 2, 1871 – July 23, 1955) was an American politician from Tennessee and the longest-serving U.S. Secretary of State, holding the position for 11 years (1933–1944) in the administration of President Franklin Delano Roosevelt during most of World War II. Before that appointment he represented Tennessee for two years in the United States Senate and twenty-two years in the House of Representatives. Hull received the Nobel Peace Prize in 1945 for his role in establishing the United Nations, and was referred to by President Roosevelt as the "Father of the United Nations".

Early life and education

Cordell Hull was born in a log cabin in Olympus, Tennessee, which is now part of Pickett County, Tennessee, but was then part of Overton County. He was the third of the five sons of William Paschal Hull (1840–1923) and Mary Elizabeth Hull (née Riley) (1841–1903). His brothers were named Orestes (1868), Sanadius (1870), Wyoming (1875), and Roy (1881). Hull's father reportedly tracked down and killed a man because of a blood feud. His mother was a descendant of Isaac Riley, who was granted land in Pickett County near Byrdstown for Revolutionary War service, as well as of Samuel Wood, who emigrated from Leicestershire, England, on the ship Hopewell and fought in the Virginia Militia. Hull's mother's family (Riley-Wood) had numerous ancestors who fought in the Revolutionary War. Hull devoted a section in his memoirs "Cabin on the Hill" to dispelling an old rumor that his mother was part Cherokee, and subsequently documented family history has confirmed his ancestry. Hull gave his first speech at the age of 16. At the age of 19, he became the elected chairman of the Clay County Democratic Party. Hull studied at National Normal University (later merged with Wilmington College, Ohio) from 1889 until 1890. In 1891, he graduated from Cumberland School of Law at Cumberland University and was admitted to the bar.

Early career

Hull served in the Tennessee House of Representatives from 1893 until 1897. During the Spanish–American War, he served in Cuba as a captain in the Fourth Regiment of the Tennessee Volunteer Infantry. From 1903 to 1907, Hull served as a local judge; later he was elected to the United States House of Representatives, where he served 11 terms (1907–1921 and 1923–1931) totaling 22 years. As a member of the powerful Ways and Means Committee, he fought for low tariffs and claimed authorship of the federal income tax laws of 1913 and 1916 and the inheritance tax of 1916. After his defeat in the congressional election of 1920, he served as chairman of the Democratic National Committee. He was one of several candidates for president at the 1928 Democratic National Convention, which ultimately chose Al Smith as nominee. Hull was influential in advising Albert Gore, Sr. to run for the U.S. Congress in 1938. In all, Hull recorded twenty-four years of combined service in the House and the Senate.

Secretary of State

Hull won election to the Senate in 1930, but resigned from it in 1933 to become Secretary of State. Roosevelt named him Secretary of State and appointed him to lead the American delegation to the London Economic Conference, but the conference collapsed when Roosevelt publicly rejected its main plans. In 1943, Hull served as United States delegate to the Moscow Conference. At all times, his main objective was to enlarge foreign trade and lower tariffs.
The more important issue of the American role in World War II was handled by Roosevelt, who worked through Sumner Welles, the second-ranking official at the State Department; Hull did not attend the summit meetings that Roosevelt held with Churchill and Stalin. In 1943, Hull finally destroyed Welles's career by threatening to expose his homosexuality. In a 1937 speech, New York City Mayor Fiorello H. La Guardia said that brown-shirted Nazis ought to be featured as the "climax" of a chamber of horrors at the upcoming World's Fair. The Nazi government organ Der Angriff called the mayor a "Jewish Ruffian" who had been bribed by Jewish and Communistic agents and was a criminal disguised as an officeholder. In the ensuing exchanges, Hull sent a letter of regret to Berlin for the intemperate comments on both sides, but he also explained the principle of freedom of speech. As the response of Nazi propaganda organs rose in pitch to include characterizing American women as "prostitutes", Hull sent a letter of protest to Berlin, which elicited an "explanation" but no apology. In 1938, Hull engaged in a famous dialogue with Mexican Foreign Minister Eduardo Hay concerning the failure of Mexico to compensate Americans who had lost farmland during the agrarian reforms of the late 1920s. He insisted that compensation must be "prompt, adequate and effective." Though the Mexican Constitution guaranteed compensation for expropriation or nationalization, nothing had yet been paid. While Hay admitted Mexico's responsibility, he replied that there is "no rule universally accepted in theory nor carried out in practice which makes obligatory the payment of immediate compensation...." The so-called "Hull formula" has been adopted in many treaties concerning international investment but is still controversial, especially in Latin American countries, which have historically subscribed to the Calvo doctrine. That doctrine holds that compensation is to be decided by the host country and that, as long as there is equality between nationals and foreigners and no discrimination, there can be no claim in international law. The tension between the Hull formula and the Calvo doctrine remains important in the law of international investment. Hull pursued the "Good Neighbor Policy" with Latin American nations, which has been credited with preventing Nazi subterfuge in that region. Hull and Roosevelt also maintained relations with Vichy France, which Hull credited with allowing General Henri Giraud's forces to join the Allied forces in the North African campaign against Germany and Italy. Hull also handled formal statements with foreign governments. Notably, he sent the Hull note just prior to the attack on Pearl Harbor; it was formally titled "Outline of Proposed Basis for Agreement Between the United States and Japan." Hull received news of the attack while he was outside his office. The Japanese ambassador Kichisaburō Nomura and Japan's special envoy Saburō Kurusu were waiting to see Hull with a 14-part message from the Japanese government officially notifying him of a breakdown in negotiations. The United States had broken Japanese encryption, and Hull knew the message's contents. He blasted the diplomats: "In all my fifty years of public service I have never seen a document that was more crowded with infamous falsehoods and distortions." Hull chaired the Advisory Committee on Postwar Foreign Policy, which was created in February 1942.
When the Free French Forces of Charles de Gaulle occupied the islands of Saint-Pierre and Miquelon, south of Newfoundland, in December 1941, Hull lodged a very strong protest and even went as far as referring to the Gaullist naval forces as "the so-called Free French." His request to have the Vichy governor reinstated was met with strong criticism in the American press. However, the islands remained under the Free French until the end of the war.

Jews and SS St. Louis incident

In 1939, Hull advised President Roosevelt to reject the SS St. Louis, a German ocean liner carrying 936 Jews seeking asylum from Germany. Hull's decision sent the Jews back to Europe on the eve of the Holocaust. Some historians estimate that 254 of the passengers were ultimately murdered by the Nazis.

. . . there were two conversations on the subject between (Secretary of the Treasury) Morgenthau and Secretary of State Cordell Hull. In the first, at 3:17 PM on 5 June 1939, Hull made it clear to Morgenthau that the passengers could not legally be issued U.S. tourist visas as they had no return addresses. Furthermore, Hull made it clear to Morgenthau that the issue at hand was between the Cuban government and the passengers. The U.S., in effect, had no role. In the second conversation, at 3:54 PM on June 6, 1939, Morgenthau said they did not know where the ship was and he inquired whether it was "proper to have the Coast Guard look for it". Hull responded by saying that he didn't see any reason why it could not. Hull then informed him that he did not think that Morgenthau would want the search for the ship to get into the newspapers. Morgenthau said "Oh no. No, no. They would just—oh, they might send a plane to do patrol work. There would be nothing in the papers." Hull responded "Oh, that would be all right."

In September 1940, First Lady Eleanor Roosevelt maneuvered with another State Department official to bypass Hull's refusal to allow Jewish refugees aboard a Portuguese ship, the SS Quanza, to receive visas to enter the U.S. Through her efforts, the Jewish refugees disembarked on September 11, 1940, in Virginia. In a similar incident, American Jews sought to raise money to prevent the mass murder of Romanian Jews but were blocked by the State Department. "In wartime, in order to send money out of the United States, two government agencies had to sign a simple release – the Treasury Department under Henry Morgenthau and the State Department under Secretary Cordell Hull. Morgenthau signed immediately. The State Department delayed, delayed, and delayed, as more Jews were dying in the Transnistria camps." In 1940, Jewish representatives in the USA lodged an official complaint against the discriminatory policies the State Department was using against the Jews. The results were fatal: the Secretary of State Cordell Hull gave strict orders to every USA consulate worldwide forbidding the issuing of visas to Jews ... At the same time a Jewish congressman petitioned President Roosevelt, requesting his permission to allow twenty thousand Jewish children from Europe to enter the USA. The President totally ignored this petition as well as its sender.

Establishing the United Nations

Hull was the underlying force and architect in the creation of the United Nations, as recognized by the 1945 Nobel Prize for Peace, an honor for which Franklin D. Roosevelt nominated him. During World War II, Hull and Roosevelt worked toward the development of a world organization to prevent a third world war.
Hull and his staff drafted the "Charter of the United Nations" in mid-1943.

Later years

Hull resigned on November 30, 1944, due to failing health. To this day he remains the longest-serving U.S. Secretary of State, having held the post for eleven years and nine months. Roosevelt described Hull upon his departure as "the one person in all the world who has done his most to make this great plan for peace (the United Nations) an effective fact". The Norwegian Nobel Committee honored Hull with the Nobel Peace Prize in 1945 in recognition of his efforts for peace and understanding in the Western Hemisphere, his trade agreements, and his work to establish the United Nations. In January 1948, Hull published his two-volume memoirs, an excerpt from which appeared in the New York Times.

Personal life and death

In 1917, at the age of 45, he married Rose Frances (Witz) Whitney (1875–1954), a widow from an Austrian Jewish family of Staunton, Virginia. The couple had no children. Mrs. Hull died at age 79 in Staunton, Virginia, in 1954; she is buried at Washington National Cathedral in Washington, D.C. Hull died on July 23, 1955, at age 83, at his home in Washington, D.C., after a lifelong struggle with familial remitting-relapsing sarcoidosis (often confused with tuberculosis). He is buried in the vault of the Chapel of St. Joseph of Arimathea in the Washington National Cathedral.

Legacy

Hull's memory is preserved by the Cordell Hull Dam on the Cumberland River near Carthage, Tennessee. The dam impounds Cordell Hull Lake, covering approximately 12,000 acres (49 km2). His law school, Cumberland School of Law, continues to honor him with a Cordell Hull Speaker's Forum and the Moot Court Room. Cordell Hull Birthplace State Park, near Byrdstown, Tennessee, was established in 1997 to preserve Hull's birthplace and various personal effects Hull had donated to the citizens of Pickett County, including his Nobel Peace Prize. A segment of Kentucky highway routes 90, 63, and 163, from Interstate 65 at Mammoth Cave National Park south to the Tennessee state line, is named the "Cordell Hull Highway". The Cordell Hull Building, on Capitol Hill in Nashville, Tennessee, is a secure 10-story building that contains the offices of the Tennessee Legislature. The Eisenhower Executive Office Building (formerly the Old Executive Office Building) in Washington, D.C., next to the White House, contains the ornately decorated "Cordell Hull Room" on the second floor, which is used for meetings; the room was Cordell Hull's office when he served as U.S. Secretary of State. Hull is one of the presidential cabinet members who appear as characters in the musical Annie.

References

Sources

Primary
Cordell Hull. Memoirs (January 1948), Volume I, Volume II
The Papers of Cordell Hull

Secondary
Dallek, Robert. Franklin D. Roosevelt and American Foreign Policy, 1932–1945 (Oxford University Press, 1979).
Pratt, Julius W. Cordell Hull, 1933–44, 2 vol. (1964); the major scholarly biography.
Biography from the U.S. Congress biography page.
O'Sullivan, Christopher D. Sumner Welles, Postwar Planning and the Quest for a New World Order (Columbia University Press, 2008).
Gellman, Irwin F. Secret Affairs: FDR, Cordell Hull, and Sumner Welles (Enigma Books, 2002).
Woolner, David B. "The Frustrated Idealists: Cordell Hull, Anthony Eden and the Search for Anglo-American Cooperation, 1933–1938" (PhD dissertation, McGill University, 1996); bibliography pp. 373–91.
External links

The Cordell Hull Foundation, a non-profit NGO, based around furthering international peace and co-operation.
The Cordell Hull Institute, a U.S. think-tank focusing on furthering debate in international economic development and trade.
The Cordell Hull Museum, located in Byrdstown, Tennessee, focusing on Hull's life and work.
Cordell Hull Birthplace State Park
Antivirus software
Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other kinds of malware, antivirus software started to protect against other computer threats as well. In particular, modern antivirus software can protect users from malicious browser helper objects (BHOs), browser hijackers, ransomware, keyloggers, backdoors, rootkits, trojan horses, worms, malicious LSPs, dialers, fraud tools, adware, and spyware. Some products also include protection from other computer threats, such as infected and malicious URLs, spam, scam and phishing attacks, online identity (privacy) threats, online banking attacks, social engineering techniques, advanced persistent threats (APTs), and botnet DDoS attacks.

History

1949–1980 period (pre-antivirus days)

Although the roots of the computer virus date back as early as 1949, when the Hungarian-born scientist John von Neumann published the "Theory of self-reproducing automata", the first known computer virus appeared in 1971 and was dubbed the "Creeper virus". This computer virus infected Digital Equipment Corporation's (DEC) PDP-10 mainframe computers running the TENEX operating system. The Creeper virus was eventually deleted by a program created by Ray Tomlinson and known as "The Reaper". Some people consider "The Reaper" the first antivirus software ever written; that may be the case, but it is important to note that the Reaper was actually a virus itself, specifically designed to remove the Creeper virus. The Creeper virus was followed by several other viruses. The first known to have appeared "in the wild" was "Elk Cloner", in 1981, which infected Apple II computers. In 1983, the term "computer virus" was coined by Fred Cohen in one of the first ever published academic papers on computer viruses. Cohen used the term to describe programs that "affect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself" (a more recent, and more precise, definition of a computer virus has been given by the Hungarian security researcher Péter Szőr: "a code that recursively replicates a possibly evolved copy of itself"). The first IBM PC compatible "in the wild" computer virus, and one of the first real widespread infections, was "Brain" in 1986. Since then, the number of viruses has grown exponentially. Most of the computer viruses written in the early and mid-1980s were limited to self-reproduction and had no specific damage routine built into their code. That changed as more and more programmers became acquainted with computer virus programming and created viruses that manipulated or even destroyed data on infected computers. Before internet connectivity was widespread, computer viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online.

1980–1990 period (early days)

There are competing claims for the innovator of the first antivirus product. Possibly the first publicly documented removal of an "in the wild" computer virus (i.e. the "Vienna virus") was performed by Bernd Fix in 1987.
In 1987, Andreas Lüning and Kai Figge, who had founded G Data Software in 1985, released their first antivirus product, for the Atari ST platform. In 1987, the Ultimate Virus Killer (UVK) was also released; this was the de facto industry-standard virus killer for the Atari ST and Atari Falcon, the last version of which (version 9.0) was released in April 2004. In 1987, in the United States, John McAfee founded the McAfee company (later part of Intel Security) and, at the end of that year, released the first version of VirusScan. Also in 1987, in Czechoslovakia, Peter Paško, Rudolf Hrubý, and Miroslav Trnka created the first version of NOD antivirus. In 1987, Fred Cohen wrote that there is no algorithm that can perfectly detect all possible computer viruses. Finally, at the end of 1987, the first two heuristic antivirus utilities were released: Flushot Plus by Ross Greenberg and Anti4us by Erwin Lanting. In his O'Reilly book Malicious Mobile Code: Virus Protection for Windows, Roger Grimes described Flushot Plus as "the first holistic program to fight malicious mobile code (MMC)." However, the kind of heuristics used by early AV engines was totally different from those used today. The first product with a heuristic engine resembling modern ones was F-PROT in 1991. Early heuristic engines were based on dividing the binary into different sections, such as the data section and the code section (in a legitimate binary, the code section usually starts from the same location). Indeed, the initial viruses re-organized the layout of the sections, or overrode the initial portion of a section in order to jump to the very end of the file, where the malicious code was located, then jumped back to resume execution of the original code. This was a very specific pattern, not used at the time by any legitimate software, and it represented an elegant heuristic for catching suspicious code. Other kinds of more advanced heuristics were later added, such as suspicious section names, incorrect header sizes, regular expressions, and partial in-memory pattern matching. In 1988, the growth of antivirus companies continued. In Germany, Tjark Auerbach founded Avira (H+BEDV at the time) and released the first version of AntiVir (named "Luke Filewalker" at the time). In Bulgaria, Vesselin Bontchev released his first freeware antivirus program (he later joined FRISK Software). Frans Veldman also released the first version of ThunderByte Antivirus, also known as TBAV (he sold his company to Norman Safeground in 1998). In Czechoslovakia, Pavel Baudiš and Eduard Kučera started avast! (ALWIL Software at the time) and released their first version of avast! antivirus. In June 1988, in South Korea, Ahn Cheol-Soo released his first antivirus software, called V1 (he founded AhnLab later, in 1995). Finally, in the autumn of 1988, in the United Kingdom, Alan Solomon founded S&S International and created his Dr. Solomon's Anti-Virus Toolkit (although he launched it commercially only in 1991; in 1998 Solomon's company was acquired by McAfee). In November 1988, a professor at the Panamerican University in Mexico City named Alejandro E. Carriles copyrighted the first antivirus software in Mexico, under the name "Byte Matabichos" (Byte Bugkiller), to help solve the rampant virus infestation among students. Also in 1988, a mailing list named VIRUS-L was started on the BITNET/EARN network, where new viruses and the possibilities of detecting and eliminating them were discussed.
Some members of this mailing list were Alan Solomon, Eugene Kaspersky (Kaspersky Lab), Friðrik Skúlason (FRISK Software), John McAfee (McAfee), Luis Corrons (Panda Security), Mikko Hyppönen (F-Secure), Péter Szőr, Tjark Auerbach (Avira) and Vesselin Bontchev (FRISK Software). In 1989, in Iceland, Friðrik Skúlason created the first version of F-PROT Anti-Virus (he founded FRISK Software only in 1993). Meanwhile, in the United States, Symantec (founded by Gary Hendrix in 1982) launched its first Symantec antivirus for Macintosh (SAM). SAM 2.0, released in March 1990, incorporated technology allowing users to easily update SAM to intercept and eliminate new viruses, including many that didn't exist at the time of the program's release. At the end of the 1980s, in the United Kingdom, Jan Hruska and Peter Lammer founded the security firm Sophos and began producing their first antivirus and encryption products. In the same period, VirusBuster was founded in Hungary (it has since been incorporated by Sophos).

1990–2000 period (emergence of the antivirus industry)

In 1990, in Spain, Mikel Urizarbarrena founded Panda Security (Panda Software at the time). In Hungary, the security researcher Péter Szőr released the first version of Pasteur antivirus. In Italy, Gianfranco Tonello created the first version of VirIT eXplorer antivirus, then founded TG Soft one year later. In 1990, the Computer Antivirus Research Organization (CARO) was founded. In 1991, CARO released the "Virus Naming Scheme", originally written by Friðrik Skúlason and Vesselin Bontchev. Although this naming scheme is now outdated, it remains the only existing standard that most computer security companies and researchers have ever attempted to adopt. CARO's members include Alan Solomon, Costin Raiu, Dmitry Gryaznov, Eugene Kaspersky, Friðrik Skúlason, Igor Muttik, Mikko Hyppönen, Morton Swimmer, Nick FitzGerald, Padgett Peterson, Peter Ferrie, Righard Zwienenberg and Vesselin Bontchev. In 1991, in the United States, Symantec released the first version of Norton AntiVirus. In the same year, in the Czech Republic, Jan Gritzbach and Tomáš Hofer founded AVG Technologies (Grisoft at the time), although they released the first version of their Anti-Virus Guard (AVG) only in 1992. In Finland, F-Secure (founded in 1988 as Data Fellows by Petri Allas and Risto Siilasmaa) released the first version of its antivirus product; F-Secure claims to be the first antivirus firm to have established a presence on the World Wide Web. In 1991, the European Institute for Computer Antivirus Research (EICAR) was founded to further antivirus research and improve the development of antivirus software. In 1992, in Russia, Igor Danilov released the first version of SpiderWeb, which later became Dr. Web. In 1994, AV-TEST reported that there were 28,613 unique malware samples (based on MD5) in its database. Over time other companies were founded. In 1996, in Romania, Bitdefender was founded and released the first version of Anti-Virus eXpert (AVX). In 1997, in Russia, Eugene Kaspersky and Natalya Kaspersky co-founded the security firm Kaspersky Lab. In 1996, there was also the first "in the wild" Linux virus, known as "Staog". In 1999, AV-TEST reported that there were 98,428 unique malware samples (based on MD5) in its database.

2000–2005 period

In 2000, Rainer Link and Howard Fuhs started the first open source antivirus engine, called OpenAntivirus Project.
In 2001, Tomasz Kojm released the first version of ClamAV, the first ever open source antivirus engine to be commercialised. In 2007, ClamAV was bought by Sourcefire, which in turn was acquired by Cisco Systems in 2013. In 2002, in the United Kingdom, Morten Lund and Theis Søndergaard co-founded the antivirus firm BullGuard. In 2005, AV-TEST reported that there were 333,425 unique malware samples (based on MD5) in its database.

2005–2014 period

In 2007, AV-TEST reported 5,490,960 new unique malware samples (based on MD5) for that year alone. In 2012 and 2013, antivirus firms reported new malware samples at rates ranging from 300,000 to over 500,000 per day. Over the years it has become necessary for antivirus software to use several different strategies (e.g. specific email and network protection or low-level modules) and detection algorithms, as well as to check an increasing variety of files rather than just executables, for several reasons. Powerful macros used in word processor applications, such as Microsoft Word, presented a risk: virus writers could use the macros to write viruses embedded within documents, which meant that computers could now also be at risk from infection by opening documents with hidden attached macros. The possibility of embedding executable objects inside otherwise non-executable file formats can make opening those files a risk. Later email programs, in particular Microsoft's Outlook Express and Outlook, were vulnerable to viruses embedded in the email body itself; a user's computer could be infected by just opening or previewing a message. In 2005, F-Secure was the first security firm to develop an anti-rootkit technology, called BlackLight. Because most users are usually connected to the Internet on a continual basis, Jon Oberheide first proposed a cloud-based antivirus design in 2008. In February 2008, McAfee Labs added the industry-first cloud-based anti-malware functionality to VirusScan under the name Artemis. It was tested by AV-Comparatives in February 2008 and officially unveiled in August 2008 in McAfee VirusScan. Cloud AV created problems for the comparative testing of security software: part of the AV definitions was out of the testers' control (on constantly updated AV company servers), making results non-repeatable. As a result, the Anti-Malware Testing Standards Organisation (AMTSO) started working on a method of testing cloud products, which was adopted on May 7, 2009. In 2011, AVG introduced a similar cloud service, called Protective Cloud Technology.

2014–present (rise of next-gen)

Following the 2013 release of the APT 1 report from Mandiant, the industry has seen a shift towards signature-less approaches to the problem, capable of detecting and mitigating zero-day attacks. Numerous approaches to address these new forms of threats have appeared, including behavioral detection, artificial intelligence, machine learning, and cloud-based file detonation. According to Gartner, the rise of new entrants such as Carbon Black, Cylance and CrowdStrike is expected to force EPP incumbents into a new phase of innovation and acquisition. One method, from Bromium, involves micro-virtualization to protect desktops from malicious code execution initiated by the end user. Another approach, from SentinelOne and Carbon Black, focuses on behavioral detection by building a full context around every process execution path in real time, while Cylance leverages an artificial intelligence model based on machine learning. Increasingly, these signature-less approaches have been defined by the media and analyst firms as "next-generation" antivirus and are seeing rapid market adoption as certified antivirus replacement technologies by firms such as Coalfire and DirectDefense. In response, traditional antivirus vendors such as Trend Micro, Symantec and Sophos have incorporated "next-gen" offerings into their portfolios, as analyst firms such as Forrester and Gartner have called traditional signature-based antivirus "ineffective" and "outdated".

Identification methods

One of the few solid theoretical results in the study of computer viruses is Frederick B. Cohen's 1987 demonstration that there is no algorithm that can perfectly detect all possible viruses. However, using different layers of defense, a good detection rate may be achieved.
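The self-referential trick at the heart of Cohen's impossibility argument can be sketched informally in a few lines. This is a toy illustration rather than the formal proof: it assumes a hypothetical perfect detector, is_virus, and constructs a program that contradicts whatever verdict the detector gives.

```python
def is_virus(source: str) -> bool:
    """A hypothetical perfect detector, assumed to exist for contradiction."""
    raise NotImplementedError

# A program that asks the detector about its own source code and then
# does the opposite of the verdict.
CONTRARY = """
if is_virus(CONTRARY):
    pass          # judged malicious, so it does nothing harmful
else:
    replicate()   # judged harmless, so it spreads like a virus
"""

# If is_virus(CONTRARY) returns True, the program is actually harmless;
# if it returns False, the program replicates. Either way the detector
# is wrong, so no perfect detector can exist.
```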
There are several methods which antivirus engines can use to identify malware.

Sandbox detection: a behaviour-based detection technique that, instead of detecting the behavioural fingerprint at run time, executes the program in a virtual environment, logging the actions it performs. Depending on the actions logged, the antivirus engine can determine whether the program is malicious; if not, the program is then executed in the real environment. Although this technique has proved quite effective, its heaviness and slowness mean it is rarely used in end-user antivirus solutions.

Data mining techniques: one of the latest approaches applied in malware detection. Data mining and machine learning algorithms are used to try to classify the behaviour of a file (as either malicious or benign) given a series of features extracted from the file itself.

Signature-based detection

Traditional antivirus software relies heavily upon signatures to identify malware. In essence, when a malware sample arrives in the hands of an antivirus firm, it is analysed by malware researchers or by dynamic analysis systems; once it is determined to be malware, a suitable signature of the file is extracted and added to the signature database of the antivirus software. Although the signature-based approach can effectively contain malware outbreaks, malware authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and, more recently, "metamorphic" viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as not to match virus signatures in the dictionary.

Heuristics

Many viruses start as a single infection and, through either mutation or refinements by other attackers, can grow into dozens of slightly different strains, called variants. Generic detection refers to the detection and removal of multiple threats using a single virus definition. For example, the Vundo trojan has several family members, depending on the antivirus vendor's classification; Symantec classifies members of the Vundo family into two distinct categories, Trojan.Vundo and Trojan.Vundo.B. While it may be advantageous to identify a specific virus, it can be quicker to detect a virus family through a generic signature or through an inexact match to an existing signature. Virus researchers find common areas that all viruses in a family share uniquely and can thus create a single generic signature. These signatures often contain non-contiguous code, using wildcard characters where differences lie. These wildcards allow the scanner to detect viruses even if they are padded with extra, meaningless code. A detection that uses this method is said to be "heuristic detection."
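A generic signature with wildcards amounts to a simple byte-level pattern scan. The sketch below is a minimal illustration of the idea, not any vendor's engine: the signature bytes are invented, and None marks a wildcard ("don't care") position.

```python
# An invented generic signature: fixed bytes with None as a wildcard.
SIGNATURE = [0x55, 0x8B, None, 0x83, None, None, 0xE8]

def matches_at(data: bytes, offset: int) -> bool:
    """Check the signature against the data starting at a given offset."""
    if offset + len(SIGNATURE) > len(data):
        return False
    return all(expected is None or data[offset + i] == expected
               for i, expected in enumerate(SIGNATURE))

def scan(data: bytes) -> bool:
    """Slide the signature across the whole file image."""
    return any(matches_at(data, i) for i in range(len(data)))
```

Because the wildcard positions match any byte, variants of a family that differ only at those positions are still detected by the one signature.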
Rootkit detection

Anti-virus software can attempt to scan for rootkits. A rootkit is a type of malware designed to gain administrative-level control over a computer system without being detected. Rootkits can change how the operating system functions and in some cases can tamper with the anti-virus program and render it ineffective. Rootkits are also difficult to remove, in some cases requiring a complete re-installation of the operating system.

Real-time protection

Real-time protection, on-access scanning, background guard, resident shield, autoprotect, and other synonyms refer to the automatic protection provided by most antivirus, anti-spyware, and other anti-malware programs. This monitors computer systems for suspicious activity such as computer viruses, spyware, adware, and other malicious objects. Real-time protection detects threats in opened files and scans apps in real time as they are installed on the device, as well as when a CD is inserted, an email is opened, the web is browsed, or a file already on the computer is opened or executed.
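On-access scanning is commonly built on file-system event notification. The sketch below shows the general shape of such a hook using the third-party Python watchdog package; the scan_file function is a hypothetical stand-in for a real detection engine, and production products hook far deeper, typically via kernel-level file-system filter drivers.

```python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def scan_file(path: str) -> None:
    """Hypothetical stand-in for a real detection engine."""
    print(f"scanning {path}")

class OnAccessHandler(FileSystemEventHandler):
    # Scan files as they appear or change: the core of "resident
    # shield" style protection described above.
    def on_created(self, event):
        if not event.is_directory:
            scan_file(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            scan_file(event.src_path)

observer = Observer()
observer.schedule(OnAccessHandler(), path="/tmp", recursive=True)
observer.start()    # watches in a background thread until stopped
```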
Issues of concern

Unexpected renewal costs

Some commercial antivirus software end-user license agreements include a clause stating that the subscription will be automatically renewed, and the purchaser's credit card automatically billed, at renewal time without explicit approval. For example, McAfee requires users to unsubscribe at least 60 days before the expiration of the present subscription, while BitDefender sends notifications to unsubscribe 30 days before the renewal. Norton AntiVirus also renews subscriptions automatically by default.

Rogue security applications

Some apparent antivirus programs are actually malware masquerading as legitimate software, such as WinFixer, MS Antivirus, and Mac Defender.

Problems caused by false positives

A "false positive" or "false alarm" is when antivirus software identifies a non-malicious file as malware. When this happens, it can cause serious problems. For example, if an antivirus program is configured to immediately delete or quarantine infected files, as is common on Microsoft Windows antivirus applications, a false positive in an essential file can render the Windows operating system or some applications unusable. Recovering from such damage to critical software infrastructure incurs technical support costs, and businesses can be forced to close whilst remedial action is undertaken. Examples of serious false positives:

May 2007: a faulty virus signature issued by Symantec mistakenly removed essential operating system files, leaving thousands of PCs unable to boot.
May 2007: the executable file required by Pegasus Mail on Windows was falsely detected by Norton AntiVirus as being a Trojan and was automatically removed, preventing Pegasus Mail from running. Norton AntiVirus had falsely identified three releases of Pegasus Mail as malware, and would delete the Pegasus Mail installer file when that happened. In response to this, Pegasus Mail stated:
April 2010: McAfee VirusScan detected svchost.exe, a normal Windows binary, as a virus on machines running Windows XP with Service Pack 3, causing a reboot loop and loss of all network access.
December 2010: a faulty update on the AVG anti-virus suite damaged 64-bit versions of Windows 7, rendering them unable to boot, owing to an endless boot loop that was created.
October 2011: Microsoft Security Essentials (MSE) removed the Google Chrome web browser, rival to Microsoft's own Internet Explorer. MSE flagged Chrome as a Zbot banking trojan.
September 2012: Sophos' anti-virus suite identified various update mechanisms, including its own, as malware. If it was configured to automatically delete detected files, Sophos Antivirus could render itself unable to update and required manual intervention to fix the problem.
September 2017: the Google Play Protect anti-virus started identifying Motorola's Moto G4 Bluetooth application as malware, causing Bluetooth functionality to become disabled.

System and interoperability related issues

Running (the real-time protection of) multiple antivirus programs concurrently can degrade performance and create conflicts. However, using a concept called multiscanning, several companies (including G Data Software and Microsoft) have created applications which can run multiple engines concurrently. It is sometimes necessary to temporarily disable virus protection when installing major updates such as Windows Service Packs or updating graphics card drivers. Active antivirus protection may partially or completely prevent the installation of a major update. Anti-virus software can cause problems during the installation of an operating system upgrade, e.g. when upgrading to a newer version of Windows "in place", without erasing the previous version of Windows. Microsoft recommends that anti-virus software be disabled to avoid conflicts with the upgrade installation process. Active anti-virus software can also interfere with a firmware update process. The functionality of a few computer programs can be hampered by active anti-virus software. For example, TrueCrypt, a disk encryption program, states on its troubleshooting page that anti-virus programs can conflict with TrueCrypt and cause it to malfunction or operate very slowly. Anti-virus software can impair the performance and stability of games running on the Steam platform. Support issues also exist around antivirus application interoperability with common solutions like SSL VPN remote access and network access control products. These technology solutions often have policy assessment applications that require an up-to-date antivirus to be installed and running. If the antivirus application is not recognized by the policy assessment, whether because it has been updated or because it is not part of the policy assessment library, the user will be unable to connect.

Effectiveness

Studies in December 2007 showed that the effectiveness of antivirus software had decreased in the previous year, particularly against unknown or zero-day attacks. The computer magazine c't found that detection rates for these threats had dropped from 40–50% in 2006 to 20–30% in 2007. At that time, the only exception was the NOD32 antivirus, which managed a detection rate of 68%. According to the ZeuS tracker website, the average detection rate for all variants of the well-known ZeuS trojan is as low as 40%. The problem is magnified by the changing intent of virus authors. Some years ago it was obvious when a virus infection was present; at the time, viruses were written by amateurs and exhibited destructive behavior or pop-ups. Modern viruses are often written by professionals, financed by criminal organizations. In 2008, Eva Chen, CEO of Trend Micro, stated that the anti-virus industry has over-hyped how effective its products are, and so has been misleading customers, for years.
Independent testing on all the major virus scanners consistently shows that none provides 100% virus detection. The best ones provided as high as 99.9% detection for simulated real-world situations, while the lowest provided 91.1% in tests conducted in August 2013. Many virus scanners also produce false positive results, identifying benign files as malware. Although methods may differ, some notable independent quality testing agencies include AV-Comparatives, ICSA Labs, West Coast Labs, Virus Bulletin, AV-TEST and other members of the Anti-Malware Testing Standards Organization.

New viruses

Anti-virus programs are not always effective against new viruses, even those that use non-signature-based methods that should detect new viruses. The reason for this is that virus designers test their new viruses on the major anti-virus applications to make sure that they are not detected before releasing them into the wild. Some new viruses, particularly ransomware, use polymorphic code to avoid detection by virus scanners. Jerome Segura, a security analyst with ParetoLogic, explained:

A proof-of-concept virus has used the graphics processing unit (GPU) to avoid detection by anti-virus software. The potential success of this involves bypassing the CPU in order to make it much harder for security researchers to analyse the inner workings of such malware.

Rootkits

Detecting rootkits is a major challenge for anti-virus programs. Rootkits have full administrative access to the computer and are invisible to users and hidden from the list of running processes in the task manager. Rootkits can modify the inner workings of the operating system and tamper with antivirus programs.

Damaged files

If a file has been infected by a computer virus, anti-virus software will attempt to remove the virus code from the file during disinfection, but it is not always able to restore the file to its undamaged state. In such circumstances, damaged files can only be restored from existing backups or shadow copies (this is also true for ransomware); installed software that is damaged requires re-installation (however, see System File Checker).

Firmware infections

Any writeable firmware in the computer can be infected by malicious code. This is a major concern, as an infected BIOS could require the actual BIOS chip to be replaced to ensure the malicious code is completely removed. Anti-virus software is not effective at protecting firmware and the motherboard BIOS from infection. In 2014, security researchers discovered that USB devices contain writeable firmware which can be modified with malicious code (dubbed "BadUSB"), which anti-virus software cannot detect or prevent. The malicious code can run undetected on the computer and could even infect the operating system prior to it booting up.

Performance and other drawbacks

Antivirus software has some drawbacks, the first of which is that it can impact a computer's performance. Furthermore, inexperienced users can be lulled into a false sense of security when using the computer, considering their computers to be invulnerable, and may have problems understanding the prompts and decisions that antivirus software presents them with. An incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection, it must be fine-tuned to minimize misidentifying harmless software as malicious (false positives).
Antivirus software itself usually runs at the highly trusted kernel level of the operating system to allow it access to all potentially malicious processes and files, creating a potential avenue of attack. The US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ) intelligence agencies have been exploiting anti-virus software to spy on users. Anti-virus software has highly privileged and trusted access to the underlying operating system, which makes it a much more appealing target for remote attacks. Additionally, anti-virus software is "years behind security-conscious client-side applications like browsers or document readers. It means that Acrobat Reader, Microsoft Word or Google Chrome are harder to exploit than 90 percent of the anti-virus products out there", according to Joxean Koret, a researcher with Coseinc, a Singapore-based information security consultancy.

Alternative solutions

Antivirus software running on individual computers is the most common method employed to guard against malware, but it is not the only solution. Other solutions can also be employed by users, including Unified Threat Management (UTM), hardware and network firewalls, cloud-based antivirus, and online scanners.

Hardware and network firewall

Network firewalls prevent unknown programs and processes from accessing the system. However, they are not antivirus systems and make no attempt to identify or remove anything. They may protect against infection from outside the protected computer or network, and limit the activity of any malicious software which is present by blocking incoming or outgoing requests on certain TCP/IP ports. A firewall is designed to deal with broader system threats that come from network connections into the system and is not an alternative to a virus protection system.

Cloud antivirus

Cloud antivirus is a technology that uses lightweight agent software on the protected computer, while offloading the majority of data analysis to the provider's infrastructure. One approach to implementing cloud antivirus involves scanning suspicious files using multiple antivirus engines. This approach was proposed by an early implementation of the cloud antivirus concept called CloudAV. CloudAV was designed to send programs or documents to a network cloud where multiple antivirus and behavioral detection programs are used simultaneously in order to improve detection rates. Parallel scanning of files using potentially incompatible antivirus scanners is achieved by spawning a virtual machine per detection engine, thereby eliminating any possible issues. CloudAV can also perform "retrospective detection", whereby the cloud detection engine rescans all files in its file access history when a new threat is identified, thus improving new-threat detection speed. Finally, CloudAV is a solution for effective virus scanning on devices that lack the computing power to perform the scans themselves. Some examples of cloud anti-virus products are Panda Cloud Antivirus and Immunet. Comodo Group has also produced cloud-based anti-virus.
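The multi-engine fan-out behind a CloudAV-style service can be sketched as follows. This is a minimal illustration under assumed interfaces: the individual engine functions are invented stand-ins, not CloudAV's actual components, and a real service would run each engine in an isolated virtual machine rather than a thread.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented stand-ins for independent detection engines; each returns
# True when it considers the sample malicious.
def engine_a(sample: bytes) -> bool:
    return b"EICAR" in sample

def engine_b(sample: bytes) -> bool:
    return sample.startswith(b"MZ") and b"suspicious" in sample

def engine_c(sample: bytes) -> bool:
    return len(sample) == 0          # toy rule, for illustration only

ENGINES = [engine_a, engine_b, engine_c]

def cloud_scan(sample: bytes) -> bool:
    """Fan the sample out to every engine in parallel; flag the file
    if any engine reports it as malicious."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        verdicts = list(pool.map(lambda engine: engine(sample), ENGINES))
    return any(verdicts)
```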
Online scanning

Some antivirus vendors maintain websites with free online scanning capability of the entire computer, critical areas only, local disks, folders or files. Periodic online scanning is a good idea for those who run antivirus applications on their computers, because those applications are frequently slow to catch threats. One of the first things that malicious software does in an attack is disable any existing antivirus software, and sometimes the only way to know of an attack is by turning to an online resource that is not installed on the infected computer.

Specialized tools

Virus removal tools are available to help remove stubborn infections or certain types of infection. Examples include Avast Free Anti-Malware, AVG Free Malware Removal Tools, and Avira AntiVir Removal Tool. It is also worth noting that sometimes antivirus software can produce a false positive result, indicating an infection where there is none. A bootable rescue disk, such as a CD or USB storage device, can be used to run antivirus software outside of the installed operating system, in order to remove infections while they are dormant. A bootable antivirus disk can be useful when, for example, the installed operating system is no longer bootable or has malware that is resisting all attempts to be removed by the installed antivirus software. Examples of such bootable disks include the Bitdefender Rescue CD, Kaspersky Rescue Disk 2018, and Windows Defender Offline (integrated into Windows 10 since the Anniversary Update). Most of this rescue CD software can also be installed onto a USB storage device that is bootable on newer computers.

Usage and risks

According to an FBI survey, major businesses lose $12 million annually dealing with virus incidents. A survey by Symantec in 2009 found that a third of small to medium-sized businesses did not use antivirus protection at that time, whereas more than 80% of home users had some kind of antivirus installed. According to a sociological survey conducted by G Data Software in 2010, 49% of women did not use any antivirus program at all.

See also

Anti-virus and anti-malware software
CARO, the Computer Antivirus Research Organization
Comparison of antivirus software
Comparison of computer viruses
EICAR, the European Institute for Computer Antivirus Research
Firewall software
Internet security
Linux malware
Quarantine (computing)
Sandbox (computer security)
Timeline of computer viruses and worms
Virus hoax
271151
https://en.wikipedia.org/wiki/Rich%20Skrenta
Rich Skrenta
Richard Skrenta (born June 6, 1967 in Pittsburgh, Pennsylvania) is a computer programmer and Silicon Valley entrepreneur who created the web search engine blekko. Biography Richard J. Skrenta Jr. was born in Pittsburgh on June 6, 1967. In 1982, at age 15, as a high school student at Mt. Lebanon High School, Skrenta wrote the Elk Cloner virus that infected Apple II machines. It is widely believed to have been one of the first large-scale self-spreading personal computer viruses ever created. In 1989, Skrenta graduated with a B.A. in computer science from Northwestern University. Between 1989 and 1991, Skrenta worked at Commodore Business Machines with Amiga Unix. In 1989, Skrenta started working on a multiplayer simulation game. In 1994, it was launched under the name Olympia as a pay-for-play PBEM game by Shadow Island Games. Between 1991 and 1995, Skrenta worked at Unix System Labs and from 1996 to 1998 with IP-level encryption at Sun Microsystems. He later left Sun and became one of the founders of DMOZ. He stayed on board after the Netscape acquisition, and continued to work on the directory as well as Netscape Search, AOL Music, and AOL Shopping. After his stint at AOL, Skrenta went on to cofound Topix LLC, a Web 2.0 company in the news aggregation & forums market. In 2005, Skrenta and his fellow cofounders sold a 75% share of Topix to a newspaper consortium made up of Tribune, Gannett, and Knight Ridder. In the late 2000s, Skrenta headed the startup company Blekko Inc., which operated an Internet search engine. Blekko received early investment support from Marc Andreessen and began public beta testing on November 1, 2010. In 2015, IBM acquired both the Blekko company and search engine for its Watson computer system. Skrenta was involved in the development of VMS Monster, an old MUD for VMS. VMS Monster was part of the inspiration for TinyMUD. He is also known for his role in developing TASS, an ancestor of tin, the popular threaded Usenet newsreader for Unix systems. References External links Skrenta.com American computer programmers MUD developers 1967 births Living people Northwestern University alumni People from Mt. Lebanon, Pennsylvania DMOZ
272097
https://en.wikipedia.org/wiki/In-band%20on-channel
In-band on-channel
In-band on-channel (IBOC) is a hybrid method of transmitting digital radio and analog radio broadcast signals simultaneously on the same frequency. The name refers to the new digital signals being broadcast in the same AM or FM band (in-band), and associated with an existing radio channel (on-channel). By utilizing additional digital subcarriers or sidebands, digital information is "multiplexed" on existing signals, thus avoiding re-allocation of the broadcast bands. IBOC relies on unused areas of the existing spectrum to send its signals. This is particularly useful in North American-style FM, where channels are widely spaced at 200 kHz but use only about 50 kHz of that bandwidth for the audio signal. In most countries, FM channel spacing may be as close as 100 kHz, and on AM it is only 10 kHz. While these all offer some room for additional digital broadcasts, most attention on IBOC is in the FM band in North American systems; in Europe and many other countries, entirely new bands were allocated for all-digital systems. Digital radio standards generally allow multiple program channels to be multiplexed into a single digital stream. In North American FM, this normally allows two or three high-fidelity signals in one channel, or one high-fidelity signal and several additional channels at medium-fidelity levels that are much higher quality than AM. For even greater capacity, some existing subcarriers can be taken off the air to make additional bandwidth available in the modulation baseband. On FM for instance, this might mean removing stereo from the analog signal, relying on the digital version of that signal to provide stereo where available, and making room for another digital channel. Due to the lack of available bandwidth in AM, IBOC is incompatible with analog stereo, although this is rarely used today, and additional channels are limited to highly compressed voice such as traffic and weather. Eventually, stations can go from digital/analog-hybrid mode to all-digital, by eliminating the baseband monophonic audio. FM methods On FM there are currently three methods of IBOC broadcasting in use, mainly in the United States. HD Radio Broadcasting The first, and currently only, digital technology approved for use on AM and FM broadcast frequencies by the Federal Communications Commission in the United States is the proprietary HD Radio system developed by iBiquity Digital Corporation, which transmits energy beyond the allotted ±100 kHz FM channel. This creates potential interference issues with adjacent channels. This is the most widespread system in use, with approximately 1,556 stations transmitting HD Radio in the US, plus over 800 new multicast channels (as of January 2010). There is a one-time license fee payable to iBiquity Digital for the use of its intellectual property, as well as costs for new equipment, which ranged from $50,000 to $100,000 US (2010) per station. FMeXtra A second system is FMeXtra by Digital Radio Express, which instead uses subcarriers within the existing signal and was introduced more recently. The system is compatible with HD Radio in hybrid mode, but not in all-digital mode, and with RBDS. The stereo subcarrier can be removed to make more space available for FMeXtra in the modulation baseband. However, the system is not compatible with other existing 67–92 kHz subcarriers, which have mostly fallen into disuse. The system is far less expensive and less complicated to implement, needing only to be plugged into the existing exciter, and requiring no licensing fees.
FMeXtra has generally all the user features of HD Radio, including multicast capability: the ability to broadcast several different audio programs simultaneously. It uses the aacPlus (HE-AAC) codec. FMeXtra can control listening with conditional access and encryption. DRM Digital Radio Mondiale allows for simultaneous transmission of multiple data streams alongside an audio signal. The DRM mode for VHF provides data rates of between 35 kbit/s and 185 kbit/s and up to four simultaneous data streams, allowing 5.1 surround audio to be broadcast alongside other multimedia content - images, video or HTML content are typical examples. While it is not backward compatible with existing FM receiver equipment (broadcasts are digitally encoded using HE-AAC or xHE-AAC), its ability to operate within the internationally agreed FM spectrum of 88-108 MHz makes DRM a viable candidate for future adoption when countries begin to switch off their analogue broadcasts. AM methods HD Radio Broadcasting iBiquity also created a mediumwave HD Radio system for AM, which is the only system approved by the Federal Communications Commission for digital AM broadcasting in the United States. The HD Radio system employs injected digital sidebands above and below the audible portion of the analog audio on the primary carrier. This system also phase modulates the carrier in quadrature and injects more digital information on this phase-modulated portion of the carrier. It is based on the principle of AM stereo, putting a digital signal where the C-QUAM system would put the analog stereo decoding information. DRM Digital Radio Mondiale has had much more success in creating an AM system, and one that could be much less expensive to implement than any proprietary HD Radio system, although it requires a new frequency. It is the only system to have been accepted for mediumwave and also shortwave (and possibly longwave) use by the International Telecommunication Union (ITU) in regions I and III, but not yet in region II, the Americas. The HD Radio system has also been approved by the International Telecommunication Union. CAM-D CAM-D is yet another method, though it is more of an extension of the current system. Developed by AM stereo pioneer Leonard R. Kahn, it encodes the treble on very small digital sidebands which do not cause interference to adjacent channels, and mixes it back with the analog baseband. Unlike the other two, it is not intended to be capable of multichannel operation, opting for quality over quantity. Unlike the HD system that iBiquity calls "hybrid digital", the CAM-D system truly is a direct hybrid of both analog and digital. Some engineers believe that CAM-D may be compatible with analog AM stereo with the right engineering. Critics of CAM-D point to several drawbacks: being primarily analog, the system will be just as subject to artificial interference and noise as the current AM system; there are virtually no receivers available for the system and, at present, no major manufacturer has announced even the intention to begin production of them; and the cost of retrofitting with CAM-D is more than that of simply buying a new, HD-ready solid state transmitter. IBOC Versus DAB While the United Kingdom and many other countries have chosen the Eureka 147 standard of digital audio broadcasting (DAB) for creating a digital radio service, the United States has selected IBOC technology for its digital AM and FM stations.
The band commonly used for terrestrial DAB is part of VHF band III, which does not suffer from L-band's significant line-of-sight problems. However, it is not available in North America since that span is occupied by TV channels 7 to 13 and the amateur radio 1.25 meter (222 MHz) band. The stations currently occupying that spectrum did not wish to give up their space, since VHF offers several benefits over UHF: relatively lower power, long distance propagation (up to 100 miles (160 km) with a rooftop antenna), and a longer wavelength that is more robust and less affected by interference. In Canada, the Canadian Radio-television and Telecommunications Commission (CRTC) is continuing to follow the analog standard, so the channels remain unavailable there as well. HD Radio testing has been authorized in Canada, as well as other countries around the world. There was also concern that AM and FM stations' branding, using their current frequencies, would be lost to new channel numbers, though virtual channels such as on digital television would eliminate this. Also, several competing stations would have to share a transmitter that multiplexes them all into one ensemble with the same coverage area (though many FM stations are already diplexed in large cities such as New York). A further concern to FM station operators was that AM stations could suddenly be in competition with the same high audio quality, although FM would still have the advantage of higher data rates (300 kbit/s versus 60 kbit/s in the HD Radio standard) due to greater bandwidth (100 kHz versus 10 kHz); a quick calculation from these figures appears at the end of this article. The most significant advantage for IBOC is its relative ease of implementation. Existing analog radios are not rendered obsolete and the consumer and industry may transition to digital at a rational pace. In addition, the technology infrastructure is in place: most major broadcast equipment manufacturers are implementing IBOC technology and 60+ receiver manufacturers are selling IBOC reception devices. In the UK, Denmark, Norway and Switzerland, which are the leading countries with regard to implementing DAB, stereo radio stations on DAB using the first-generation MPEG-1 Audio Layer II (MP2) codec have lower sound quality than FM, prompting a number of complaints. The typical bit rate for DAB programs is only 128 kbit/s using the first-generation codec, the less-robust MP2 standard, which requires at least double that rate to be considered near-CD quality. An updated version of the Eureka-147 standard called DAB+ has been implemented. Using the more efficient, high-quality MPEG-4 codec HE-AAC v2, this compression method allows the DAB+ system to carry more channels or have better sound quality at the same bit rate as the original DAB system. It is the DAB+ implementation which will be under consideration for new station designs and not the earlier DAB scheme using the MUSICAM codec. The DAB+ system was coordinated and developed by the World DAB Forum, formed in 1997 from the old organization. It gives the Eureka-147 system a similar quality per bit rate to the IBOC system and hence (arguably) a better sound quality than FM. Challenges AM IBOC in the United States still faces some serious technological challenges, including nighttime interference with other stations. iBiquity initially used an audio compression system known as PAC (also used at a higher bitrate in Sirius satellite radio, see Digital Audio Radio Service), but in August 2003 a switch to HDC (based upon AAC) was made to rectify these problems.
HDC has been customized for IBOC, and it is also likely that the patent rights and royalties for every transmitter and receiver can be retained longer by creating a more proprietary system. Digital Radio Mondiale is also developing an IBOC system, likely to be used worldwide with AM shortwave radio, and possibly with broadcast AM and FM. Neither of those has been approved yet for ITU region 2 (the Americas). The system, however, unlike HD Radio, does not permit the existing analog signal and the digital signal to live together in the same channel. DRM requires an additional channel to maintain both signals. Both AM and FM IBOC signals cause interference to adjacent-channel stations, but not within the station's interference-free protected contours designated by the U.S. Federal Communications Commission (FCC). This has led to derogatory terms such as IBAC (in-band adjacent-channel) and IBUZ (since the interference sounds like a buzz). The range of a station on an HD Radio receiver is somewhat less than its analog signal. In June 2008, a group of US broadcasters and equipment manufacturers requested that the U.S. FCC increase the permissible FM IBOC power from the then-current 1% to a maximum of 10% of the analog power. On January 29, 2010, the FCC approved the request. In addition, tropospheric ducting and e-skip can reduce the range of the digital signal, as well as the analog. IBOC digital radios using iBiquity's standard are being marketed under the brand "HD Radio" to highlight the purported quality of reception. As of June 2008, over 60 different receiver models have been made, and stations have received blanket (no longer individual and experimental) authorization from the U.S. FCC to transmit in a multiplexed multichannel mode on FM. Originally, the use of HD Radio transmission on AM was limited to daytime only, and not allowed at night due to potential problems with skywave radio propagation. The FCC lifted this restriction in early 2007. DRM, however, is being used across Europe on shortwave, which is entirely AM skywave, without issue. With the proper receiver, many of those stations can be heard in North America as well, without the analog signal. IBOC around the world Argentina HD Radio technology was tested in 2004 with initial trials in Buenos Aires. Further testing of the technology began in early 2007. Bangladesh Government broadcaster BETAR began broadcasting HD Radio on its 100.0 MHz frequency on 9 November 2016 from its Agargaon site in Dhaka. The transmission uses a 10 kW GatesAir system. The 100.0 MHz frequency carries programs from the BBC World Service amongst others. HD 1, 2, 3 and 4 are configured. A second transmission will also have HD Radio added on 88.8 MHz from the same site. Bosnia Trials and tests of HD Radio technology began in Sarajevo in March 2007. Brazil HD Radio and DRM trials in Brazil started in the mid-2000s. No regular HD Radio or DRM transmissions are allowed in Brazil as the digital radio standard in that country is not yet defined. One- or two-year experimental licenses were given to some broadcasters. A joint study by the government (Ministry of Communications and ANATEL) and the National Metrology Institute (Inmetro) was done, and the Digital Radio Consultative Council concluded that HD Radio and DRM do not match the analog transmission coverage when operating with 20 dB less power. New trials are expected to occur before any decision about the Brazilian digital radio standard. Brazil is considering Digital Radio Mondiale or HD Radio for adoption.
Canada After having L-band DAB for several years, the Canadian Radio-television and Telecommunications Commission (CRTC) and Canadian Broadcasting Corporation (CBC) have also looked at the use of HD Radio, given its gradual progress in the neighbouring U.S. The CBC began HD Radio testing in September 2006, focusing on transmissions from Toronto and Peterborough, Ontario. The CRTC has since revised its policy on digital radio to allow HD Radio operations. Use of HD Radio is now widespread in dense urban markets like Toronto, Vancouver and Ottawa, with some use on the AM band as well. Czech Republic Initial testing of the HD Radio system commenced in Prague in February 2007. China In China, Hunan Broadcasting Company started FMeXtra transmissions in Changsha in April 2007, and plans to deploy others throughout Hunan province. SARFT (State Administration for Radio, Film and Television) is currently testing HD Radio in Beijing with a view to its acceptance in that country. Colombia Caracol Radio began testing HD Radio technology in both the AM and FM bands in early 2008. El Salvador El Salvador plans to adopt HD Radio as its digital radio standard. Europe In September 2007 the European HD Radio Alliance (EHDRA) was formed by broadcasters and other interested groups to promote the adoption of HD Radio technology by European broadcasters, regulators and standards organizations. France France began broadcasting an HD Radio signal in March 2006 and plans to multicast two or more channels. The radio stations that use IBOC HD in France are SIRTI and NRJ Group. The owner of the transmitter is Towercast. The frequency of IBOC HD radio is 88.2 MHz. In May 2006, the Towercast group added a single channel of digital audio on 93.9 MHz. Germany Radio Regenbogen began HD Radio operations on 102.8 MHz in Heidelberg on December 3, 2007 pursuant to government testing authority. Indonesia Forum Radio Jaringan Indonesia tested IBOC HD transmission from March 2006 to May 2006. The IBOC HD station in Jakarta was Delta FM (99.1 MHz). In April 2006, Radio Sangkakala (in Surabaya), the first AM HD Radio station in Asia, went on the air on 1062 kHz. Jamaica Radio Jamaica began operating full-time with both HD Radio AM and FM signals in the city of Kingston in 2008. Mexico All Mexican radio stations within 320 km of the U.S. border are allowed to transmit their programs on the AM and FM bands utilizing HD Radio technology. Approximately six Mexican AM and FM stations are already operating with HD Radio technology along Mexico's border area with the US. Grupo Imagen commenced HD Radio transmissions on XHDL-FM and XEDA-FM as well as Instituto Mexicano de la Radio on XHIMR-FM, XHIMER-FM and XHOF-FM in Mexico City in June 2012. New Zealand HD Radio transmission in Auckland, New Zealand was started on October 19, 2005. The frequency of IBOC HD radio is 106.1 MHz. The transmitter is located at the Sky Tower. Following successful testing, the Radio Broadcasters Association (RBA) initiated a comprehensive trial of HD Radio technology in December 2006. The aim of the trial was to assess the coverage potential of the HD Radio system and to make a recommendation on the suitability of the technology for adoption. Philippines The first HD Radio station in the Philippines began broadcasting on November 9, 2005. The Philippines National Telecommunications Commission finalized its rules for FM digital radio operations on November 11, 2007.
Poland An HD Radio trial began in Warsaw in 2006 in order to demonstrate the technology to local radio stations. Puerto Rico WPRM-FM was the first station in San Juan, Puerto Rico (part of the US) to adopt HD Radio, in April 2005. WRTU in San Juan also commenced broadcasting with HD Radio technology in 2007. Switzerland FM testing sponsored by Radio Sunshine and Ruoss AG began in Lucerne in April 2006. HD Radio operations in Switzerland continue and are spotlighted each year during "HD Radio Days", an annual gathering in Lucerne of European broadcasters and manufacturers for the purpose of discussing the rollout of the technology in Europe. Radio Sunshine has now switched to DAB+ due to the high penetration of DAB+ and lack of interest in HD Radio. DAB+ penetration in Switzerland reached 99.5% as of 2018. Thailand HD Radio transmission in Thailand was started in April 2006. Radio of Thailand created a public IBOC HD radio network targeting mass transit commuters in Thailand's capital of Bangkok. To receive the broadcasts, more than 10,000 HD Radio receivers were installed in buses. Ukraine The first FM HD Radio broadcasts in Kiev went on the air in October 2006 on two FM stations operated by the First Ukrainian Radio Group. Vietnam Voice of Vietnam (VOV) commenced AM and FM HD Radio transmissions in Hanoi in June 2008, including multicasting, in anticipation of making HD Radio technology a standard. United States As of June 2008, more than 1,700 HD Radio stations were broadcasting 2,432 HD Radio channels. HD Radio technology is the only digital technology approved by the FCC for digital AM and FM broadcasting in the US. Over 60 different HD Radio receivers are on sale in over 12,000 stores nationwide, including Apple, Best Buy, Target, and Wal-Mart. As of May 2007, FMeXtra is on several dozen stations. Several hundred stations belonging to the Idea Bank consortium will also have FMeXtra installed. See also Digital Audio Broadcasting Digital Audio Radio Service Digital Radio Mondiale In-band adjacent-channel (IBAC) ISDB References External links FCC info on IBOC IBOC interference recordings Radio broadcasting
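The data-rate and bandwidth figures quoted in the IBOC-versus-DAB comparison above (300 kbit/s in roughly 100 kHz of FM bandwidth versus 60 kbit/s in roughly 10 kHz of AM bandwidth for the HD Radio standard) imply a spectral efficiency in bits per second per hertz. A minimal arithmetic sketch follows; the inputs are the article's own round numbers, not exact channel-mask values:

    # Spectral efficiency implied by the round figures quoted above for the
    # HD Radio standard (illustrative arithmetic only, not exact channel masks).
    def spectral_efficiency(bit_rate_bps, bandwidth_hz):
        """Bits per second carried per hertz of occupied bandwidth."""
        return bit_rate_bps / bandwidth_hz

    fm_hd = spectral_efficiency(300_000, 100_000)  # FM HD Radio: 300 kbit/s in ~100 kHz
    am_hd = spectral_efficiency(60_000, 10_000)    # AM HD Radio: 60 kbit/s in ~10 kHz

    print(f"FM HD Radio: {fm_hd:.1f} bit/s/Hz")    # 3.0
    print(f"AM HD Radio: {am_hd:.1f} bit/s/Hz")    # 6.0

By these round numbers the AM mode actually packs more bits per hertz; FM's practical advantage lies in the much larger absolute bandwidth available per channel.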
274770
https://en.wikipedia.org/wiki/PSK
PSK
PSK may refer to: Organisations Revolutionary Party of Kurdistan (PŞK), a Kurdish separatist guerrilla group in Turkey Kurdistan Socialist Party (PSK), a Kurdish party in Turkey Phi Sigma Kappa, a fraternity Österreichische Postsparkasse (P.S.K.), a postal savings bank in Austria Post Südstadt Karlsruhe, a German sports club Science and technology Phase-shift keying, a digital modulation technique Pre-shared key, a method to set encryption keys Polysaccharide-K, a protein-bound polysaccharide Other uses "P.S.K. What Does It Mean?", a song by Schoolly D Pekerja Seks Komersial ("commercial sex worker"), an Indonesian term for a prostitute
285512
https://en.wikipedia.org/wiki/State%20%28computer%20science%29
State (computer science)
In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system. The set of states a system can occupy is known as its state space. In a discrete system, the state space is countable and often finite. The system's internal behaviour or interaction with its environment consists of separately occurring individual actions or events, such as accepting input or producing output, that may or may not cause the system to change its state. Examples of such systems are digital logic circuits and components, automata and formal languages, computer programs, and computers. The output of a digital circuit or deterministic computer program at any time is completely determined by its current inputs and its state. Digital logic circuit state Digital logic circuits can be divided into two types: combinational logic, whose output signals depend only on the present input signals, and sequential logic, whose outputs are a function of both the current inputs and the past history of inputs. In sequential logic, information from past inputs is stored in electronic memory elements, such as flip-flops. The stored contents of these memory elements, at a given point in time, are collectively referred to as the circuit's state and contain all the information about the past to which the circuit has access. Since each binary memory element, such as a flip-flop, has only two possible states, one or zero, and there is a finite number of memory elements, a digital circuit has only a certain finite number of possible states. If N is the number of binary memory elements in the circuit, the maximum number of states a circuit can have is 2^N. Program state Similarly, a computer program stores data in variables, which represent storage locations in the computer's memory. The contents of these memory locations, at any given point in the program's execution, are called the program's state. A more specialized definition of state is used for computer programs that operate serially or sequentially on streams of data, such as parsers, firewalls, communication protocols and encryption. Serial programs operate on the incoming data characters or packets sequentially, one at a time. In some of these programs, information about previous data characters or packets received is stored in variables and used to affect the processing of the current character or packet. This is called a stateful protocol, and the data carried over from the previous processing cycle is called the state. In others, the program has no information about the previous data stream and starts fresh with each data input; this is called a stateless protocol. Imperative programming is a programming paradigm (a way of designing a programming language) that describes computation in terms of the program state and of the statements which change the program state. In declarative programming languages, the program describes the desired results and doesn't specify changes to the state directly. Finite state machines The output of a sequential circuit or computer program at any time is completely determined by its current inputs and current state. Since each binary memory element has only two possible states, 0 or 1, the total number of different states a circuit can assume is finite, and fixed by the number of memory elements. If there are N binary memory elements, a digital circuit can have at most 2^N distinct states.
The concept of state is formalized in an abstract mathematical model of computation called a finite state machine, used to design both sequential digital circuits and computer programs. Examples An example of an everyday device that has a state is a television set. To change the channel of a TV, the user usually presses a "channel up" or "channel down" button on the remote control, which sends a coded message to the set. In order to calculate the new channel that the user desires, the digital tuner in the television must have stored in it the number of the current channel it is on. It then adds one or subtracts one from this number to get the number for the new channel, and adjusts the TV to receive that channel. This new number is then stored as the current channel. Similarly, the television also stores a number that controls the level of volume produced by the speaker. Pressing the "volume up" or "volume down" buttons increments or decrements this number, setting a new level of volume. Both the current channel and current volume numbers are part of the TV's state. They are stored in non-volatile memory, which preserves the information when the TV is turned off, so when it is turned on again the TV will return to its previous station and volume level. As another example, the state of a microprocessor is the contents of all the memory elements in it: the accumulators, storage registers, data caches, and flags. When computers such as laptops go into a hibernation mode to save energy by shutting down the processor, the state of the processor is stored on the computer's hard disk, so it can be restored when the computer comes out of hibernation, and the processor can take up operations where it left off. See also Data (computing) References Cognition Models of computation
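The television example above maps directly onto a small state machine: the stored channel and volume numbers are the state, and each button press is an input event that computes the new state from the old one. A minimal sketch; the channel floor and volume limits are arbitrary illustrative choices:

    # Minimal sketch of the TV-set state example: the remembered channel and
    # volume numbers are the state; button presses are events that update it.
    class TelevisionState:
        def __init__(self, channel=1, volume=5):
            self.channel = channel   # stored state
            self.volume = volume     # stored state

        def channel_up(self):
            self.channel += 1                        # new state from old state

        def channel_down(self):
            self.channel = max(1, self.channel - 1)  # illustrative floor

        def volume_up(self):
            self.volume = min(10, self.volume + 1)   # illustrative ceiling

        def volume_down(self):
            self.volume = max(0, self.volume - 1)

    tv = TelevisionState()
    tv.channel_up()
    tv.volume_up()
    print(tv.channel, tv.volume)  # 2 6: the system "remembers" past inputs

Persisting the two numbers to non-volatile storage on power-off and reloading them on power-on, as the article describes, is what lets the set return to its previous station and volume level.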
287831
https://en.wikipedia.org/wiki/Terrestrial%20Trunked%20Radio
Terrestrial Trunked Radio
Terrestrial Trunked Radio (TETRA; formerly known as Trans-European Trunked Radio), a European standard for a trunked radio system, is a professional mobile radio and two-way transceiver specification. TETRA was specifically designed for use by government agencies and emergency services (police forces, fire departments, ambulance services) for public safety networks, by rail transport staff for train radios, and by transport services and the military. TETRA is the European version of trunked radio, similar to Project 25. TETRA is a European Telecommunications Standards Institute (ETSI) standard, first version published in 1995; it is mentioned by the European Radiocommunications Committee (ERC). Description TETRA uses time-division multiple access (TDMA) with four user channels on one radio carrier and 25 kHz spacing between carriers. Both point-to-point and point-to-multipoint transfer can be used. Digital data transmission is also included in the standard, though at a low data rate. TETRA Mobile Stations (MS) can communicate in direct-mode operation (DMO) or in trunked-mode operation (TMO) using switching and management infrastructure (SwMI) made up of TETRA base stations (TBS). As well as allowing direct communications in situations where network coverage is not available, DMO also includes the possibility of using a sequence of one or more TETRA terminals as relays. This functionality is called DMO gateway (from DMO to TMO) or DMO repeater (from DMO to DMO). In emergency situations this feature allows direct communications underground or in areas of bad coverage. In addition to voice and dispatch services, the TETRA system supports several types of data communication. Status messages and short data services (SDS) are provided over the system's main control channel, while packet-switched data or circuit-switched data communication uses specifically assigned channels. TETRA provides for authentication of terminals towards infrastructure and vice versa. For protection against eavesdropping, air interface encryption and end-to-end encryption are available. The common mode of operation is a group calling mode in which a single button push will connect the user to the users in a selected call group and/or a dispatcher. It is also possible for the terminal to act as a one-to-one walkie talkie but without the normal range limitation since the call still uses the network. TETRA terminals can act as mobile phones (cell phones), with a full-duplex direct connection to other TETRA users or the PSTN. Emergency buttons, provided on the terminals, enable users to transmit emergency signals to the dispatcher, overriding any other activity taking place at the same time. Advantages The main advantages of TETRA over other technologies (such as GSM) are: The much lower frequency used gives longer range, which in turn permits very high levels of geographic coverage with a smaller number of transmitters, thus cutting infrastructure costs. During a voice call, the communications are not interrupted when moving to another network site. TETRA networks also allow a number of fall-back modes, such as the ability for a base station to process local calls, a feature that dPMR networks typically provide as well. So called 'mission critical' networks can be built with TETRA where all aspects are fail-safe/multiple-redundant. In the absence of a network, mobiles/portables can use 'direct mode' whereby they share channels directly (walkie-talkie mode).
Gateway mode - where a single mobile with connection to the network can act as a relay for other nearby mobiles that are out of range of the infrastructure. A dedicated transponder system is not required in order to achieve this functionality, unlike with analogue radio systems. TETRA also provides a point-to-point function that traditional analogue emergency services radio systems did not provide. This enables users to have a one-to-one trunked 'radio' link between sets without the need for the direct involvement of a control room operator/dispatcher. Unlike cellular technologies, which connect one subscriber to one other subscriber (one-to-one), TETRA is built to do one-to-one, one-to-many and many-to-many. These operational modes are directly relevant to public safety and professional users. Security TETRA supports terminal registration, authentication, air-interface encryption and end-to-end encryption. Rapid deployment (transportable) network solutions are available for disaster relief and temporary capacity provision. Network solutions are available in both reliable circuit-switched (telephone-like) architectures and flat, IP architectures with soft (software) switches. Further information is available from the TETRA Association (formerly TETRA MoU) and the standards can be downloaded for free from ETSI. Disadvantages Its main disadvantages are: It requires a linear amplifier to meet the stringent RF specifications that allow it to exist alongside other radio services. Data transfer is slow by modern standards: up to 7.2 kbit/s per timeslot in the case of point-to-point connections, and 3.5 kbit/s per timeslot in the case of IP encapsulation. Both options permit the use of between one and four timeslots. Different implementations include one of the previous connectivity capabilities, both, or none, and one timeslot or more. These rates are nominally faster than those the competing technologies DMR, dPMR, and P25 are capable of. The latest version of the standard supports 115.2 kbit/s in 25 kHz or up to 691.2 kbit/s in an expanded 150 kHz channel. To overcome these limitations, many software vendors have begun to consider hybrid solutions where TETRA is used for critical signalling while large data synchronization and transfer of images and video is done over 3G / LTE. Usage There were 114 countries using TETRA systems in Europe, the Middle East, Africa, Asia Pacific, the Caribbean and Latin America. The TETRA system is in use by the public sector in numerous countries; TETRA being an open standard, each of these networks can use any mix of TETRA mobile terminals from a wide range of suppliers. Technical details Radio aspects For its modulation, TETRA uses differential quadrature phase-shift keying. The symbol (baud) rate is 18,000 symbols per second, and each symbol maps to 2 bits, thus resulting in 36,000 bit/s gross. As a form of phase shift keying is used to transmit data during each burst, it would seem reasonable to expect the transmit power to be constant. However, it is not. This is because the sidebands, which are essentially a repetition of the data in the main carrier's modulation, are filtered off with a sharp filter so that unnecessary spectrum is not used up. This results in an amplitude modulation and is why TETRA requires linear amplifiers. The resulting ratio of peak to mean (RMS) power is 3.65 dB. If non-linear (or insufficiently linear) amplifiers are used, the sidebands re-appear and cause interference on adjacent channels.
Commonly used techniques for achieving the necessary linearity include Cartesian loops and adaptive predistortion. The base stations normally transmit continuously and (simultaneously) receive continuously from various mobiles on different carrier frequencies; hence the TETRA system is a frequency-division duplex (FDD) system. TETRA also uses FDMA/TDMA (see above) like GSM. The mobiles normally transmit on only one slot in four and receive on one slot in four (instead of one slot in eight for GSM). Speech signals in TETRA are sampled at 8 kHz and then compressed with a vocoder using algebraic code-excited linear prediction (ACELP). This creates a data stream of 4.567 kbit/s. This data stream is error-protection encoded before transmission to allow correct decoding even in noisy (erroneous) channels. The data rate after coding is 7.2 kbit/s. A traffic channel occupies one slot in 17 of every 18 frames. A single slot consists of 255 usable symbols; the remaining time is used up with synchronisation sequences and turning on/off, etc. A single frame consists of 4 slots, and a multiframe (whose duration is 1.02 seconds) consists of 18 frames. Hyperframes also exist, but are mostly used for providing synchronisation to encryption algorithms. The downlink (i.e., the output of the base station) is normally a continuous transmission consisting of either specific communications with mobile(s), synchronisation or other general broadcasts. All slots are usually filled with a burst even if idle (continuous mode). Although the system uses 18 frames per multiframe, only 17 of these are used for traffic channels, with the 18th frame reserved for signalling, Short Data Service messages (like SMS in GSM) or synchronisation. The frame structure in TETRA (17.65 frames per second) follows from the 18,000 symbols/s, 255 symbols/slot and 4 slots/frame, and is the cause of the perceived "amplitude modulation" at about 17 Hz, which is especially apparent in mobiles/portables that transmit on only one slot in four (this arithmetic is verified in the short sketch at the end of this article). They use the remaining three slots to switch frequency, receive a burst from the base station two slots later, and then return to their transmit frequency (TDMA). Radio frequencies Air interface encryption To provide confidentiality, the TETRA air interface is encrypted using one of the TETRA Encryption Algorithm (TEA) ciphers. The encryption provides confidentiality (protection against eavesdropping) as well as protection of signalling. Currently, four different ciphers are defined. These TEA ciphers should not be confused with the block cipher Tiny Encryption Algorithm. The TEA ciphers have different availability due to export and use restrictions. Few details are published concerning these proprietary ciphers. Riess mentions in early TETRA design documents that encryption should be done with a stream cipher, due to the property of not propagating transmission errors. Parkinson later confirms this and explains that TEA is a stream cipher with 80-bit keys. TEA1 and TEA4 provide basic-level security, and are meant for commercial use. The TEA2 cipher is restricted to European public safety organisations. The TEA3 cipher is for situations where TEA2 is suitable but not available. Cell selection Cell re-selection (or hand-over) in images This first representation demonstrates where the slow reselect threshold (SRT), the fast reselect threshold (FRT), and the propagation-delay-exceeded parameters are most likely to be. These are represented in association with the decaying radio carrier as the distance increases from the TETRA base station.
From this illustration, the SRT and FRT triggering points are associated with the decaying radio signal strength of the respective cell carriers. The thresholds are situated so that the cell reselection procedures occur on time and ensure communication continuity for ongoing calls. Initial cell selection The next diagram illustrates a given TETRA radio cell's initial selection. The initial cell selection is performed by procedures located in the MLE and in the MAC. When the cell selection is made, and possible registration is performed, the mobile station (MS) is said to be attached to the cell. The mobile is allowed to initially select any suitable cell that has a positive C1 value; i.e., the received signal level is greater than the "minimum receive level for access" parameter. The initial cell selection procedure shall ensure that the MS selects a cell in which it can reliably decode downlink data (i.e., on a main control channel/MCCH), and which has a high probability of uplink communication. The minimum condition that shall have to be met is that C1 > 0. Access to the network shall be conditional on the successful selection of a cell. At mobile switch-on, the mobile makes its initial cell selection of one of the base stations, which indicates the initial exchanges at activation (refer to EN 300 392-2, clause 16.3.1, Activation and control of underlying MLE service). Per clause 18.5.12, Minimum RX access level: the minimum receive access level information element shall indicate the minimum received signal level required at the SwMI in a cell, either the serving cell or a neighbour cell, as defined in table 18.24. Cell improvable The next diagram illustrates where a given TETRA radio cell becomes improvable. The serving cell becomes improvable when the following occurs: the C1 of the serving cell is below the value defined in the radio network parameter "cell reselection parameters: slow reselect threshold" for a period of 5 seconds, and the C1 or C2 of a neighbour cell exceeds the C1 of the serving cell by the value defined in the radio network parameter "cell reselection parameters: slow reselect hysteresis" for a period of 5 seconds. Cell usable The next diagram illustrates where a given TETRA radio cell becomes usable. A neighbour cell becomes radio usable when the cell has a downlink radio connection of sufficient quality. The following conditions must be met in order to declare a neighbour cell radio usable: The neighbour cell has a path loss parameter C1 or C2 that is, for a period of 5 seconds, greater than the fast reselect threshold plus the fast reselect hysteresis, and the service level provided by the neighbour cell is higher than that of the serving cell. No successful cell reselection shall have taken place within the previous 15 seconds unless MM requests a cell reselection. The MS-MLE shall check the criterion for serving cell relinquishment as often as one neighbour cell is scanned or monitored. The following conditions will cause the MS to rate the neighbour cell as having a higher service level than the current serving cell: The MS subscriber class is supported on the neighbour cell but not on the serving cell. The neighbour cell is a priority cell and the serving cell is not. The neighbour cell supports a service (that is, TETRA standard speech, packet data, or encryption) that is not supported by the serving cell and the MS requires that service to be available. The cell service level indicates that the neighbour cell is less loaded than the serving cell.
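The selection and reselection rules above reduce to threshold comparisons on the path-loss parameter C1 (roughly, the received signal level minus the minimum receive access level). A schematic sketch of the decision logic follows; the threshold values are hypothetical placeholders, the -105 dBm figure is taken from the radio-link-failure discussion in the next subsections, and a real MS would additionally require each condition to hold for 5 seconds and would evaluate C2, hysteresis on both thresholds, and the service-level checks listed above:

    # Schematic sketch of TETRA cell (re)selection tests on the path-loss
    # parameter C1 (received level minus minimum receive access level, in dB).
    # Thresholds here are hypothetical; a real MS also requires each condition
    # to hold for 5 seconds and evaluates C2, hysteresis and service level.

    MIN_RX_ACCESS_LEVEL_DBM = -105   # figure from the radio-link-failure text

    def c1(rx_level_dbm, min_rx_access_level_dbm=MIN_RX_ACCESS_LEVEL_DBM):
        return rx_level_dbm - min_rx_access_level_dbm

    def initially_selectable(rx_level_dbm):
        return c1(rx_level_dbm) > 0          # initial selection needs C1 > 0

    def serving_cell_improvable(serving_c1, neighbour_c1,
                                slow_reselect_threshold=10,   # hypothetical dB
                                slow_reselect_hysteresis=5):  # hypothetical dB
        return (serving_c1 < slow_reselect_threshold and
                neighbour_c1 > serving_c1 + slow_reselect_hysteresis)

    def radio_link_failure(serving_c1):
        return serving_c1 < 0                # serving cell below -105 dBm

    print(initially_selectable(-95))                  # True: C1 = 10
    print(serving_cell_improvable(c1(-98), c1(-88)))  # True: 7 < 10, 17 > 12
    print(radio_link_failure(c1(-107)))               # True: C1 = -2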
Cell relinquishable (abandonable) The next diagram illustrates where a given TETRA radio cell becomes relinquishable (abandonable). The serving cell becomes relinquishable when the following occurs: the C1 of the serving cell is below the value defined in the radio network parameter "cell reselection parameters: fast reselect threshold" for a period of 5 seconds, and the C1 or C2 of a neighbour cell exceeds the C1 of the serving cell by the value defined in the radio network parameter "cell reselection parameters: fast reselect hysteresis" for a period of 5 seconds. No successful cell reselection shall have taken place within the previous 15 seconds unless Mobility Management (MM) requests a cell reselection. The MS-MLE shall check the criterion for serving cell relinquishment as often as one neighbour cell is scanned or monitored. Radio down-link failure When the FRT threshold is breached, the MS is in a situation where it is essential to relinquish (or abandon) the serving cell and obtain another of at least usable quality. That is to say, the mobile station is aware that the radio signal is decaying rapidly, and must reselect a cell rapidly, before communications are terminated because of radio link failure. When the mobile station radio signal breaches the minimum receive level, the radio is no longer in a position to maintain acceptable communications for the user, and the radio link is broken. Radio link failure: (C1 < 0). Using the suggested values, this would be satisfied with the serving cell level below −105 dBm. Cell reselection procedures are then activated in order to find a suitable radio base station. Man-machine interface (MMI) Virtual MMI for terminals Any given TETRA radio terminal using Java (Java ME/CLDC) based technology provides the end user with the communication rights necessary to fulfil his or her work role on any short-duration assignment. For dexterity, flexibility, and evolvability, the public transportation radio engineering department has chosen to use the open-source Java language specification administered by Sun and the associated working groups in order to produce a transport application tool kit. Service acquisition allows different authorised agents to establish communication channels between different services by calling the service identity, without possessing complete knowledge of the ISSI, GSSI, or any other TETRA-related communication establishment numbering plan. Service acquisition is administered through a centralised communication rights service or role allocation server, interfaced into the TETRA core network. In summary, the TETRA MMI aims are to: Allow any given agent, while on duty, to use any given radio terminal without equipment constraints. Provide specific transportation application software to the end-user agents (service acquisition, fraud, and aggression control). This transport application tool kit has been produced successfully with TETRA communication technology and provides for the public transport application requirements mentioned hereafter. The home (main) menu presents the end user with three possibilities: service acquisition, status SDS, and end-user parameters. Service acquisition provides a means of virtually personalising the end user to any given radio terminal and onto the TETRA network for as long as the end user keeps the terminal in his or her possession.
Status SDS provides the end user with a mechanism for generating a 440 Hz repeating tone that signals a fraud occurrence to members within the same (dynamic or static) Group Short Subscriber Identity (GSSI) or to a specific Individual Short Subscriber Identity (ISSI) for the duration of the assignment (an hour, a morning patrol or a given short period allocated to the assignment). The advantage is that each of the end users may attach themselves to any given terminal and group for short durations without requiring any major reconfiguration by means of radio software programming tools. The aggression feature functions similarly, but with a higher tone frequency (880 Hz) and a quicker repetition rate, so as to highlight the urgency of the alert. The parameters tab provides an essential means for the terminal end-user to pre-configure the target (preprogrammed ISSI or GSSI) destination communication number. With this pre-programmed destination number, the end-user shall liaise with the destination radio terminal or role allocation server, and may communicate in the group or with a dedicated server by which the service acquisition requests are received, preprocessed, and ultimately dispatched through the TETRA core network. This simplifies the reconfiguration or configuration-recycling process, allowing flexibility on short assignments. The parameters tab also provides a means of choosing between preselected tones to match the work group requirements for the purposes of fraud and aggression alerts. Selecting any given key available from the keypad to serve as an aggression or fraud quick key is also made possible through the transport application software tool kit. It is recommended to use the asterisk and hash keys for the fraud and aggression quick keys respectively. For the fraud and aggression tones, it is also recommended to use a 440 Hz slow repeating tone (blank space 500 milliseconds) and an 880 Hz fast repeating tone (blank space 250 milliseconds) respectively. The tone options are as follows: 440 Hz, 620 Hz, 880 Hz, and 1060 Hz. The parameters page provides an aid or help menu, and the last tab within parameters briefly describes the tool kit version and the history of the transport application tool kit to date. TETRA Enhanced Data Service (TEDS) The TETRA Association, working with ETSI, developed the TEDS standard, a wideband data solution which enhances TETRA with much higher capacity and throughput for data. In addition to those provided by TETRA, TEDS uses a range of adaptive modulation schemes and a number of different carrier sizes from 25 kHz to 150 kHz. Initial implementations of TEDS will be in the existing TETRA radio spectrum, and will likely employ 50 kHz channel bandwidths as this enables an equivalent coverage footprint for voice and TEDS services. TEDS performance is optimised for wideband data rates, wide area coverage and spectrum efficiency. Advances in DSP technology have led to the introduction of multi-carrier transmission standards employing QAM modulation. The WiMAX, Wi-Fi and TEDS standards are part of this family. Refer also to: Mobile Information Device Profile (JSR-118 and JSR-37); Wireless Messaging API (JSR-120); Connected Limited Device Configuration (JSR-139); and Java Technology for the Wireless Industry (JSR-185). Comparison to Project 25 Project 25 and TETRA are utilised for public safety and private-sector radio networks worldwide; however, they have some differences in technical features and capacities.
TETRA: optimized for high-population-density areas, with good spectral efficiency (four time slots in 25 kHz, i.e., four communications channels per 25 kHz channel, an efficient use of spectrum). It supports full-duplex voice, data and messaging, but is generally unavailable for simulcast and the VHF band; however, particular vendors have introduced simulcast and VHF into their TETRA platforms. P25: optimized for wider-area coverage with low population density, and with support for simulcast; however, it is more limited in data support. (Phase 1 P25 radio systems operate in 12.5 kHz analogue, digital or mixed mode, and P25 Phase II uses a 2-timeslot TDMA structure in 12.5 kHz channels.) Currently, P25 is deployed in more than 53 countries and TETRA in more than 114 countries. See also Digital mobile radio, a TDMA digital radio standard from ETSI Digital private mobile radio (dPMR), an FDMA digital radio standard from ETSI NXDN, a two-way FDMA digital radio protocol from Icom and JVC Kenwood P25 (Project 25), a TIA APCO standard (USA) TETRAPOL (previously MATRA) References External links Report on the health effects of TETRA prepared for the Home Office TETRA in use by radio amateurs TETRA and Critical Communications Association (TCCA) Radiocommunication objectives and requirements for Public Protection and Disaster Relief (PPDR) Trunked radio systems Mobile telecommunications standards Emergency communication Rail transport mobile telecommunications standards
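The burst-structure figures given in the Radio aspects section above are internally consistent, and a few lines of arithmetic confirm the quoted 17.65 frames per second, the 1.02-second multiframe and the 36,000 bit/s gross rate. A minimal verification sketch using only the numbers stated in the article:

    # Verify the TETRA burst-structure arithmetic quoted in "Radio aspects".
    SYMBOL_RATE = 18_000          # symbols per second
    BITS_PER_SYMBOL = 2           # DQPSK: each symbol carries 2 bits
    SYMBOLS_PER_SLOT = 255
    SLOTS_PER_FRAME = 4
    FRAMES_PER_MULTIFRAME = 18

    gross_bit_rate = SYMBOL_RATE * BITS_PER_SYMBOL                  # 36,000 bit/s
    frame_rate = SYMBOL_RATE / (SYMBOLS_PER_SLOT * SLOTS_PER_FRAME)
    multiframe_duration = FRAMES_PER_MULTIFRAME / frame_rate

    print(gross_bit_rate)                 # 36000
    print(round(frame_rate, 2))           # 17.65 frames per second
    print(round(multiframe_duration, 2))  # 1.02 seconds

The roughly 17.65 Hz frame rate is also the source of the perceived "amplitude modulation" at about 17 Hz noted above, since a mobile transmits in only one of the four slots of each frame.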
288521
https://en.wikipedia.org/wiki/Apple%20Mail
Apple Mail
Apple Mail (officially known as simply Mail) is an email client included by Apple Inc. with its operating systems macOS, iOS, iPadOS and watchOS. Apple Mail grew out of NeXTMail, which was originally developed by NeXT as part of its NeXTSTEP operating system, after Apple's acquisition of NeXT in 1997. The current version of Apple Mail utilizes SMTP for message sending, POP3, Exchange and IMAP for message retrieval, and S/MIME for end-to-end message encryption. It is also preconfigured to work with popular email providers, such as Yahoo! Mail, AOL Mail, Gmail, Outlook and iCloud (formerly MobileMe), and it supports Exchange. iOS features a mobile version of Apple Mail with added Exchange ActiveSync (EAS) support, though it notoriously lacked the ability to attach files to reply emails until the release of iOS 9. EAS is not supported in the macOS version of Apple's Mail app, the main issue being that sent messages will incorrectly be duplicated in the sent messages folder, which then propagates via sync to all other devices including iOS. Features of Apple Mail include the ability to configure the software to receive all of a user's email accounts in one list, to file emails into folders, to search for emails, and to automatically append signatures to outgoing emails. It also integrates with the Contacts list, Calendar, Maps and other apps. History NeXTMail Apple Mail was originally developed by NeXT as NeXTMail, the email application for its NeXTSTEP operating system. It supported rich text formatting with images and voice messaging, and MIME emails. It also supported a text-based user interface (TUI) to allow for backwards compatibility. When Apple began to adapt NeXTSTEP to become Mac OS X, both the operating system and the application went through various stages as they were developed. In a beta version (codenamed "Rhapsody") and various other early pre-releases of Mac OS X, Mail was known as MailViewer. However, with the third developer release of Mac OS X, the application returned to being known simply as Mail. First release Apple Mail was included in all versions of macOS up to and including Mac OS X Panther, which was released on October 24, 2003. It was integrated with other Apple applications such as Address Book, iChat, and iCal. Some of its features that remain in the most recent version of Mail include rules for mailboxes, junk mail filtering and multiple account management. Mac OS X Tiger In Mac OS X Tiger (version 10.4), Mail version 2 included a proprietary single-message-per-file format (with the filename extension .emlx) to permit indexing by Spotlight. Additional features were: "Smart mailboxes" that used Spotlight technology to sort mail into folders. The ability to flag messages with a low, normal or high priority and to use these priorities in mailbox rules and smart mailboxes. Tools for resizing photos before they are sent, to avoid oversized email attachments. The ability to view emailed pictures as a full-screen slideshow. Parental controls to specify who is allowed to send email to children. HTML message composition. The new version also changed the UI for the buttons in the toolbar. Whereas previous buttons had free-standing defined shapes, the new buttons featured shapes within a lozenge-shaped capsule. According to many users, and even Apple's own human interface guidelines at the time, this was worse for usability. An open-source third-party application that reverted the icons to their former shapes was available.
Nevertheless, Apple updated their guidelines to include capsule-shaped buttons, and the new UI persisted. Mac OS X Leopard In Mac OS X Leopard (version 10.5), Mail version 3 included personalized stationery, handled in standard HTML format. In addition, it offered notes and to-dos (which could be synced with iCal) as well as a built-in RSS reader. It also introduced IMAP IDLE support for account inboxes. Mac OS X Snow Leopard Mac OS X Snow Leopard (version 10.6) brought Microsoft Exchange Server 2007 support. Mac OS X Lion In Mac OS X Lion (version 10.7), Mail featured a redesigned iPad-like user interface with full-screen capabilities, an updated message search interface, support for Microsoft Exchange Server 2010 and Yahoo! Mail (via IMAP). Also added was the capability to group messages by subject in a similar fashion to Mail on iOS 4. The bounce function, where unwanted emails could be bounced back to the sender, was dropped, as was support for Exchange push email. OS X Mountain Lion In OS X Mountain Lion (version 10.8), Mail received VIP tagging, Safari-style inline search for words within an email message, the ability to sync with iCloud and new sharing features. Notes was split off into a stand-alone application. The RSS reader and to-dos were discontinued. OS X Mavericks In OS X Mavericks (version 10.9), Mail ceased support for plain-text MIME multipart/alternative messages and solely retained the HTML or rich-text version. OS X Yosemite In OS X Yosemite (version 10.10), Mail introduced Markup (inline annotation of PDF or image files) and Mail Drop (which automatically uploads attachments to iCloud and sends a link in the message instead of the whole file). OS X El Capitan In OS X El Capitan (version 10.11), a filter was added to the message list to filter by various options such as Unread, Flagged, or messages with attachments. The conversation display was also redesigned and various disk space saving optimizations were implemented. Streaming notification support for Exchange accounts was also added. macOS High Sierra In macOS 10.13 (High Sierra), Mail reached version 11.5, a version that was not further upgraded within High Sierra (at least as of 2021). macOS Mojave Support for macOS's new "dark mode" was added to Mail. macOS Catalina Added support for Block Sender, Unsubscribe, Mute Thread and layout options. macOS Big Sur In macOS Big Sur, the Mail logo was changed to be more consistent with the iOS version, depicting a white envelope on a blue background. See also NeXTMail GNUMail Comparison of email clients Comparison of feed aggregators References External links Mail.app 3.0 screen shot of 2007 MacOS email clients MacOS IOS software WatchOS software Software based on WebKit IOS 1997 mergers and acquisitions Mail
288635
https://en.wikipedia.org/wiki/PEP
PEP
PEP may refer to:

Computing

Packetized Ensemble Protocol, used by Telebit modems
pretty Easy privacy (pEp), encryption project
Python Enhancement Proposal, for the Python programming language
Packet Exchange Protocol in Xerox Network Systems
Performance-enhancing proxy, mechanisms to improve end-to-end TCP performance
Policy Enforcement Point in XACML

Organizations

Philippine Entertainment Portal
Political and Economic Planning, a British think tank formed in 1931
Politically exposed person, a financial classification
Priority Enforcement Program, in US immigration enforcement
Promoting Enduring Peace, a UN organization
Propellants, Explosives, Pyrotechnics, a journal
Provincial Emergency Program (British Columbia)

Biology and medicine

Polyestradiol phosphate, an estrogen used to treat prostate cancer
Polymorphic eruption of pregnancy, or pruritic urticarial papules and plaques of pregnancy
Post-exposure prophylaxis, preventive medical treatment
Post-ERCP pancreatitis, a complication after endoscopic retrograde cholangiopancreatography
Phosphoenolpyruvic acid, a biochemical compound

Physics

Pep reaction, proton–electron–proton reaction
Peak envelope power of a transmitter

People

Pep Guardiola, Spanish football manager and former player

Other uses

Pairwise error probability in digital communications
Passaporte Electrónico Português, Portuguese electronic passport
New York Stock Exchange symbol for PepsiCo
Personal equity plan, a former UK account type
Positron-Electron Project at the Stanford Linear Accelerator Center
Post – eCommerce – Parcel, divisions of Deutsche Post
Primary Entry Point, a station of the US Emergency Alert System
Primate Equilibrium Platform, used in animal experimentation
Prototype Electro-Pneumatic family of trains, British Rail Classes 445 and 446
Pulsed energy projectile, a non-lethal weapon

See also

Pep (disambiguation)
290623
https://en.wikipedia.org/wiki/Scribus
Scribus
Scribus () is free and open-source desktop publishing (DTP) software available for most desktop operating systems. It is designed for layout, typesetting, and preparation of files for professional-quality image-setting equipment. Scribus can also create animated and interactive PDF presentations and forms. Example uses include writing newspapers, brochures, newsletters, posters, and books.

The Scribus 1.4 series is the current stable release series, while the 1.5 series is where development is made available in preparation for the next stable release series, version 1.6. Scribus is built on the Qt toolkit and released under the GNU General Public License. Native versions are available for the Unix, Linux, BSD, macOS, Haiku, Microsoft Windows, and OS/2 (including ArcaOS and eComStation) operating systems.

General feature overview

Scribus supports most major bitmap formats, including TIFF, JPEG, and PSD. Vector drawings can be imported or directly opened for editing. The long list of supported formats includes Encapsulated PostScript, SVG, Adobe Illustrator, and Xfig. Professional type/image-setting features include CMYK colors and ICC color management. It has a built-in scripting engine using Python (see the example script below). It is available in 60 languages.

High-level printing is achieved using its own internal level 3 PostScript driver, including support for font embedding and sub-setting with TrueType, Type 1, and OpenType fonts. The internal driver supports full Level 2 PostScript constructs and a large subset of Level 3 constructs. PDF support includes transparency, encryption, and a large set of the PDF 1.5 specification, including layers (OCG), as well as PDF/X-3, including interactive PDF form fields, annotations, and bookmarks.

The native file format, called SLA, is based on XML, as were older versions of the format. Text can be imported from OpenDocument (ODT) text documents (such as from LibreOffice Writer), OpenOffice.org XML (OpenOffice.org Writer's SXW files), Microsoft Word's DOC, PDB, and HTML formats (although some limitations apply). ODT files can typically be imported along with their paragraph styles, which are then created in Scribus. HTML tags which modify text, such as bold and italic, are supported. Word and PDB documents are only imported as plain text. ScribusGenerator is a mail merge-like extension to Scribus.

Forthcoming Scribus 1.6 (by way of Scribus 1.5 development branch)

Scribus 1.5.1 added PDF/X-4 support. Initially, Scribus did not properly support complex script rendering and so could not be used with Unicode text for languages written in Arabic, Hebrew, Indic, and South East Asian writing systems, even though it supported Unicode character encoding. In August 2012, it was announced that a third party had developed a system to support complex Indic scripts. In May 2015, it was announced that the ScribusCTL project had started to improve complex layout by integrating the OpenType text-shaping engine HarfBuzz into the official Scribus 1.5.1svn branch. In July 2016, it was announced that the text layout engine had been rewritten from scratch in preparation for support of complex scripts coming in Scribus 1.5.3 and later. In December 2016, Scribus announced support for advanced OpenType features in 1.5.3svn, as well as for complex scripts and right-to-left text direction. Scribus 1.4.7 did not have OpenType alternative glyph support, so ligatures, for example, were not inserted automatically; this became available from v1.5.3.
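The built-in Python scripting engine mentioned in the feature overview exposes the running application as a scribus module. The following minimal script creates a one-page A4 document with a text frame; the calls follow the 1.5-series Scripter API, so exact names and signatures may differ between Scribus versions, and the output path is a made-up example:

# Run from inside Scribus (Script > Execute Script...); the scribus module
# only exists within the application's embedded interpreter.
import scribus

scribus.newDocument(
    scribus.PAPER_A4,            # page size
    (15, 15, 15, 15),            # margins: left, right, top, bottom
    scribus.PORTRAIT,            # orientation
    1,                           # number of the first page
    scribus.UNIT_MILLIMETERS,    # measurement unit
    scribus.PAGE_1,              # single-page layout
    scribus.FIRSTPAGERIGHT,      # first page order
    1,                           # number of pages
)
frame = scribus.createText(15, 15, 180, 40)        # x, y, width, height
scribus.setText("Hello from the Scribus scripter", frame)
scribus.saveDocAs("/tmp/hello.sla")                # example output path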
Support for other programs and formats

Scribus cannot read or write the native file formats of other DTP programs such as QuarkXPress or InDesign; the developers consider that reverse engineering those file formats would be prohibitively complex and could risk legal action from the makers of those programs.

Due to licensing issues, the software package does not include support for the Pantone color matching system (PMS), which is included in some commercial DTP applications. Pantone colors can, however, be obtained and incorporated within Scribus without licensing issues. Scribus is shipped with more than 100 color palettes, most donated by various commercial color vendors, but also including scientific, national, and government color standards.

Forthcoming Scribus 1.6 (by way of Scribus 1.5 development branch)

Support for importing Microsoft Publisher documents is incorporated into version 1.5, and support for QuarkXPress Tag files, InDesign's IDML, and InCopy's ICML formats was added to the development branch. Scribus 1.5.3 onwards contains more than 300 color palettes. The German organisation freieFarbe e.V. built the HLC Colour Atlas, a palette of real colours based on CIELAB; this free colour palette is available in Scribus 1.5.4 and later. Scribus 1.5.6 supports native PDF export with embedded OpenType fonts and PDF 1.6, and Python 3 is now the default for scripts. Scribus 1.5.7 improves undo and redo behaviour; Qt 5.14 is the new base for compilation, and third-party components have been updated to newer versions. The next version, 1.5.8, is expected to be the last step before 1.6.0; from the developers' point of view, version 1.5.7 is stable. No new releases with backports are planned for the 1.4 tree, given the approaching end of Qt 4 support on most systems.

Books

Books about Scribus are available in several languages, including an official manual for v1.3, published through FLES Books in 2009.

Significant users

Janayugom, a Malayalam daily newspaper in Kerala, India, migrated all desktop publishing to Scribus and Gimp in November 2019, saving over 10 million Indian rupees (approximately US$130,000).

References

External links

Tutorials
From Jacci Howard Bear at LifeWire
Book_HowToSCRIBUS-Digital.pdf, for Scribus 1.4, from the American Amateur Press Association
Scribus 2013 hexagon
Scribus 1.5.5: create a book cover
Scribus 1.4.6: A. J. Publishing using Scribus, by Dave Tribby
Scribus 1.5.5: path and Bézier curves
Articles
Free Desktop Publishing with Scribus at World Label
Open source desktop publishing with Scribus by William von Hagen at IBM
293355
https://en.wikipedia.org/wiki/Mac%20OS%20X%20Panther
Mac OS X Panther
Mac OS X Panther (version 10.3) is the fourth major release of macOS, Apple's desktop and server operating system. It followed Mac OS X 10.2 and preceded Mac OS X Tiger. It was released on October 24, 2003.

System requirements

Panther's system requirements are:

PowerPC G3, G4, or G5 processor (at least 233 MHz)
Built-in USB
At least 128 MB of RAM (256 MB recommended, minimum of 96 MB supported unofficially)
At least 1.5 GB of available hard disk space
CD drive
Internet access requires a compatible service provider; iDisk requires a .Mac account

Video conferencing requires:

333 MHz or faster PowerPC G3, G4, or G5 processor
Broadband internet access (100 kbit/s or faster)
Compatible FireWire DV camera or web camera

Since a New World ROM was required for Mac OS X Panther, certain older computers (such as beige Power Mac G3s and "Wall Street" PowerBook G3s) were unable to run Panther by default. Third-party software (such as XPostFacto) can, however, override checks made during the install process; otherwise, installation or upgrades from Jaguar fail on these older machines. Panther still fully supported the Classic environment for running older Mac OS 9 applications, but made Classic application windows double-buffered, interfering with some applications written to draw directly to the screen.

New and changed features

End-user features

Apple advertised that Mac OS X Panther had over 150 new features, including:

Finder: Updated with a brushed-metal interface, a new live search engine, customizable Sidebar, secure deletion, colored labels (resurrected from classic Mac OS) in the filesystem and Zip support built in. The Finder icon was also changed.
Fast user switching: Allows a user to remain logged in while another user logs in, and quickly switch among several sessions.
Exposé: Helps the user manage windows by showing them all as thumbnails.
TextEdit: TextEdit is now also compatible with Microsoft Word (.doc) documents.
Xcode developer tools: Faster compile times with gcc 3.3.
Preview: Increased speed of PDF rendering.
QuickTime: Now supports the Pixlet high-definition video codec.

New applications in Panther

Font Book: A font manager which simplifies viewing character maps and adding new fonts that can be used systemwide. The app also allows the user to organize fonts into collections.
FileVault: On-the-fly encryption and decryption of a user's home folder.
iChat AV: The new version of iChat, now with built-in audio and video conferencing.
X11: X11 is built into Panther.
Safari: A new web browser, developed to replace Internet Explorer for Mac when the contract between Apple and Microsoft ended, although Internet Explorer for Mac was still available. Safari 1.0 was included in an update to Jaguar, but it became the default browser in Panther.

Other

Microsoft Windows interoperability improvements, including out-of-the-box support for Active Directory and SecurID-based VPNs.
Built-in fax support.

References
293363
https://en.wikipedia.org/wiki/RSA%20SecurID
RSA SecurID
RSA SecurID, formerly referred to as SecurID, is a mechanism developed by RSA for performing two-factor authentication for a user to a network resource.

Description

The RSA SecurID authentication mechanism consists of a "token", either hardware (e.g. a key fob) or software (a soft token), which is assigned to a computer user and which creates an authentication code at fixed intervals (usually 60 seconds) using a built-in clock and the card's factory-encoded, almost random key (known as the "seed"). The seed is different for each token, and is loaded into the corresponding RSA SecurID server (RSA Authentication Manager, formerly ACE/Server) as the tokens are purchased. On-demand tokens are also available, which provide a tokencode via email or SMS delivery, eliminating the need to provision a token to the user.

The token hardware is designed to be tamper-resistant to deter reverse engineering. When software implementations of the same algorithm ("software tokens") appeared on the market, public code had been developed by the security community allowing a user to emulate RSA SecurID in software, but only if they have access to a current RSA SecurID code and the original 64-bit RSA SecurID seed file introduced to the server. Later, the 128-bit RSA SecurID algorithm was published as part of an open source library. In the RSA SecurID authentication scheme, the seed record is the secret key used to generate one-time passwords. Newer versions also feature a USB connector, which allows the token to be used as a smart card-like device for securely storing certificates.

A user authenticating to a network resource, say a dial-in server or a firewall, needs to enter both a personal identification number and the number being displayed at that moment on their RSA SecurID token. Though increasingly rare, some systems using RSA SecurID disregard PIN implementation altogether, and rely on password/RSA SecurID code combinations. The server, which also has a real-time clock and a database of valid cards with the associated seed records, authenticates a user by computing what number the token is supposed to be showing at that moment in time and checking this against what the user entered.

On older versions of SecurID, a "duress PIN" may be used: an alternate code which creates a security event log showing that a user was forced to enter their PIN, while still providing transparent authentication. Using the duress PIN would allow one successful authentication, after which the token would automatically be disabled. The "duress PIN" feature has been deprecated and is not available on currently supported versions.

While the RSA SecurID system adds a layer of security to a network, difficulty can occur if the authentication server's clock becomes out of sync with the clock built into the authentication tokens. Normal token clock drift is accounted for automatically by the server by adjusting a stored "drift" value over time. If the out-of-sync condition is not a result of normal hardware token clock drift, correcting the synchronization of the Authentication Manager server clock with the out-of-sync token (or tokens) can be accomplished in several different ways. If the server clock had drifted and the administrator made a change to the system clock, the tokens can either be resynchronized one-by-one, or the stored drift values adjusted manually. The drift adjustment can be done on individual tokens or in bulk using a command line utility.
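RSA's token-code function itself is proprietary (current tokens apply the AES-128 block cipher to the seed and the time), so the sketch below only illustrates the general shape of time-synchronized code generation. It follows the open TOTP-style HMAC construction rather than RSA's algorithm, and the seed shown is a made-up placeholder:

import hmac, hashlib, struct, time

def time_based_code(seed: bytes, interval: int = 60, digits: int = 6) -> str:
    # Derive the current time window, then a short numeric code from it.
    # Illustrative only: this is the open TOTP construction, not RSA's algorithm.
    counter = int(time.time() // interval)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# The server stores the same seed and computes the expected code for the
# current window (allowing for a little clock drift) before comparing.
seed = bytes.fromhex("3132333435363738393031323334353637383930")  # placeholder
print(time_based_code(seed))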
RSA Security has pushed forth an initiative called "Ubiquitous Authentication", partnering with device manufacturers such as IronKey, SanDisk, Motorola, Freescale Semiconductor, Redcannon, Broadcom, and BlackBerry to embed the SecurID software into everyday devices such as USB flash drives and cell phones, to reduce cost and the number of objects that the user must carry.

Theoretical vulnerabilities

Token codes are easily stolen, because no mutual authentication exists (anything that can steal a password can also steal a token code). This is significant, since it is the principal threat most users believe they are solving with this technology.

The simplest practical vulnerability with any password container is losing the special key device or the activated smartphone with the integrated key function. Such vulnerability cannot be healed with any single token container device within the preset time span of activation. All further consideration presumes loss prevention, e.g. by an additional electronic leash or a body sensor and alarm.

While RSA SecurID tokens offer a level of protection against password replay attacks, they are not designed to offer protection against man-in-the-middle type attacks when used alone. If the attacker manages to block the authorized user from authenticating to the server until the next token code is valid, they will be able to log into the server. Risk-based analytics (RBA), a new feature in the latest version (8.0), provides significant protection against this type of attack if the user is enabled and authenticating on an agent enabled for RBA. RSA SecurID does not prevent man-in-the-browser (MitB) based attacks.

The SecurID authentication server tries to prevent password sniffing and simultaneous login by declining both authentication requests if two valid credentials are presented within a given time frame. This has been documented in an unverified post by John G. Brainard. However, if the attacker removes the user's ability to authenticate, the SecurID server will assume that it is the user who is actually authenticating and hence will allow the attacker's authentication through. Under this attack model, the system security can be improved using encryption/authentication mechanisms such as SSL.

Although soft tokens may be more convenient, critics indicate that the tamper-resistant property of hard tokens is unmatched in soft token implementations, which could allow seed record secret keys to be duplicated and user impersonation to occur. Hard tokens, on the other hand, can be physically stolen (or acquired via social engineering) from end users. The small form factor makes hard token theft much more viable than laptop/desktop scanning. A user will typically wait more than one day before reporting the device as missing, giving the attacker plenty of time to breach the unprotected system. This could only occur, however, if the user's UserID and PIN are also known. Risk-based analytics can provide additional protection against the use of lost or stolen tokens, even if the user's UserID and PIN are known by the attackers.

Batteries go flat periodically, requiring complicated replacement and re-enrollment procedures.

Reception and competing products

As of 2003, RSA SecurID commanded over 70% of the two-factor authentication market, and 25 million devices had been produced to date. A number of competitors, such as VASCO, make similar security tokens, mostly based on the open OATH HOTP standard.
A study on OTP published by Gartner in 2010 mentions OATH and SecurID as the only competitors.

Other network authentication systems, such as OPIE and S/Key (sometimes more generally known as OTP, as S/Key is a trademark of Telcordia Technologies, formerly Bellcore), attempt to provide the "something you have" level of authentication without requiring a hardware token.

March 2011 system compromise

On 17 March 2011, RSA announced that they had been victims of "an extremely sophisticated cyber attack". Concerns were raised specifically in reference to the SecurID system, saying that "this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation". However, their formal Form 8-K submission indicated that they did not believe the breach would have a "material impact on its financial results". The breach cost EMC, the parent company of RSA, $66.3 million, which was taken as a charge against second-quarter earnings. It covered costs to investigate the attack, harden its IT systems and monitor transactions of corporate customers, according to EMC Executive Vice President and Chief Financial Officer David Goulden, in a conference call with analysts.

The breach into RSA's network was carried out by hackers who sent phishing emails to two targeted, small groups of employees of RSA. Attached to the email was a Microsoft Excel file containing malware. When an RSA employee opened the Excel file, the malware exploited a vulnerability in Adobe Flash. The exploit allowed the hackers to use the Poison Ivy RAT to gain control of machines and access servers in RSA's network.

There are some hints that the breach involved the theft of RSA's database mapping token serial numbers to the secret token "seeds" that were injected to make each one unique. Reports of RSA executives telling customers to "ensure that they protect the serial numbers on their tokens" lend credibility to this hypothesis. Barring a fatal weakness in the cryptographic implementation of the token code generation algorithm (which is unlikely, since it involves the simple and direct application of the extensively scrutinized AES-128 block cipher), the only circumstance under which an attacker could mount a successful attack without physical possession of the token is if the token seed records themselves had been leaked. RSA stated it did not release details about the extent of the attack so as to not give potential attackers information they could use in figuring out how to attack the system.

On 6 June 2011, RSA offered token replacements or free security monitoring services to any of its more than 30,000 SecurID customers, following an attempted cyber breach on defense customer Lockheed Martin that appeared to be related to the SecurID information stolen from RSA. In spite of the resulting attack on one of its defense customers, company chairman Art Coviello said that "We believe and still believe that the customers are protected".

Resulting attacks

In April 2011, unconfirmed rumors cited L-3 Communications as having been attacked as a result of the RSA compromise. In May 2011, this information was used to attack Lockheed Martin systems. However, Lockheed Martin claims that due to "aggressive actions" by the company's information security team, "No customer, program or employee personal data" was compromised by this "significant and tenacious attack". The Department of Homeland Security and the US Defense Department offered help to determine the scope of the attack.
References

External links

Official RSA SecurID website
Technical details
Sample SecurID Token Emulator with token Secret Import, I.C. Wiener, Bugtraq post.
Apparent Weaknesses in the Security Dynamics Client/Server Protocol, Adam Shostack, 1996.
Usenet thread discussing new SecurID details, Vin McLellan, et al., comp.security.misc.
Unofficial SecurID information and some reverse-engineering attempts, Yahoo Groups securid-users.
Analysis of possible risks from the 2011 compromise
Published attacks against the SecurID hash function
Cryptanalysis of the Alleged SecurID Hash Function (PDF), Alex Biryukov, Joseph Lano, and Bart Preneel.
Improved Cryptanalysis of SecurID (PDF), Scott Contini and Yiqun Lisa Yin.
Fast Software-Based Attacks on SecurID (PDF), Scott Contini and Yiqun Lisa Yin.
293450
https://en.wikipedia.org/wiki/Involution%20%28mathematics%29
Involution (mathematics)
In mathematics, an involution, involutory function, or self-inverse function is a function f that is its own inverse: f(f(x)) = x for all x in the domain of f. Equivalently, applying f twice produces the original value.

The term anti-involution refers to involutions based on antihomomorphisms (see below), that is, maps satisfying f(xy) = f(y) f(x) together with f(f(x)) = x.

General properties

Any involution is a bijection.

The identity map is a trivial example of an involution. Common examples in mathematics of nontrivial involutions include multiplication by −1 in arithmetic, the taking of reciprocals, complementation in set theory and complex conjugation. Other examples include circle inversion, rotation by a half-turn, and reciprocal ciphers such as the ROT13 transformation and the Beaufort polyalphabetic cipher.

The number of involutions, including the identity involution, on a set with n elements is given by a recurrence relation found by Heinrich August Rothe in 1800: a(0) = a(1) = 1 and a(n) = a(n − 1) + (n − 1)a(n − 2) for n > 1. The first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232; these numbers are called the telephone numbers, and they also count the number of Young tableaux with a given number of cells.

The composition f∘g of two involutions f and g is an involution if and only if they commute: f∘g = g∘f.

Every involution on an odd number of elements has at least one fixed point. More generally, for an involution on a finite set of elements, the number of elements and the number of fixed points have the same parity.

Involution throughout the fields of mathematics

Pre-calculus

Basic examples of involutions are the functions f(x) = −x and f(x) = 1/x, as well as their composition f(x) = −1/x. These are not the only pre-calculus involutions. Another one, within the positive reals, is f(x) = ln((e^x + 1)/(e^x − 1)).

The graph of an involution (on the real numbers) is line-symmetric over the line y = x. This is due to the fact that the inverse of any general function will be its reflection over the 45° line y = x, which can be seen by "swapping" x with y. If, in particular, the function is an involution, then it will serve as its own reflection.

Other elementary involutions are useful in solving functional equations.

Euclidean geometry

A simple example of an involution of the three-dimensional Euclidean space is reflection through a plane. Performing a reflection twice brings a point back to its original coordinates. Another involution is reflection through the origin; this is not a reflection in the above sense, and so is a distinct example. These transformations are examples of affine involutions.

Projective geometry

An involution is a projectivity of period 2, that is, a projectivity that interchanges pairs of points. Any projectivity that interchanges two points is an involution.

The three pairs of opposite sides of a complete quadrangle meet any line (not through a vertex) in three pairs of an involution. This theorem has been called Desargues's Involution Theorem. Its origins can be seen in Lemma IV of the lemmas to the Porisms of Euclid in Volume VII of the Collection of Pappus of Alexandria.

If an involution has one fixed point, it has another, and consists of the correspondence between harmonic conjugates with respect to these two points. In this instance the involution is termed "hyperbolic", while if there are no fixed points it is "elliptic". In the context of projectivities, fixed points are called double points.

Another type of involution occurring in projective geometry is a polarity, which is a correlation of period 2.

Linear algebra

In linear algebra, an involution is a linear operator T on a vector space such that T^2 = I.
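For instance, the operator that swaps two coordinates and fixes the rest is a linear involution, as the following short NumPy sketch illustrates:

import numpy as np

# T swaps the first two basis vectors and fixes the third: a linear involution.
T = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

assert np.allclose(T @ T, np.eye(3))   # T composed with itself is the identity
print(np.linalg.eigvals(T))            # eigenvalues of an involution are +1 or -1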
Except in characteristic 2, such operators are diagonalizable for a given basis with just 1s and −1s on the diagonal of the corresponding matrix. If the operator is orthogonal (an orthogonal involution), it is orthonormally diagonalizable.

For example, suppose that a basis for a vector space V is chosen, and that e1 and e2 are basis elements. There exists a linear transformation f which sends e1 to e2, and sends e2 to e1, and which is the identity on all other basis vectors. It can be checked that f(f(x)) = x for all x in V. That is, f is an involution of V.

For a specific basis, any linear operator can be represented by a matrix T. Every matrix has a transpose, obtained by swapping rows for columns. This transposition is an involution on the set of matrices.

The definition of involution extends readily to modules. Given a module M over a ring R, an R-endomorphism f of M is called an involution if f^2 is the identity homomorphism on M. Involutions are related to idempotents; if 2 is invertible, then they correspond in a one-to-one manner.

Quaternion algebra, groups, semigroups

In a quaternion algebra, an (anti-)involution is defined by the following axioms: if we consider a transformation x ↦ f(x), then it is an involution if f(f(x)) = x (it is its own inverse), f(x + y) = f(x) + f(y) and f(λx) = λf(x) (it is linear), and f(xy) = f(x)f(y). An anti-involution does not obey the last axiom but instead satisfies f(xy) = f(y)f(x).

This former law is sometimes called antidistributive. It also appears in groups as (xy)^−1 = (y^−1)(x^−1). Taken as an axiom, it leads to the notion of semigroup with involution, of which there are natural examples that are not groups, for example square matrix multiplication (i.e. the full linear monoid) with transpose as the involution.

Ring theory

In ring theory, the word involution is customarily taken to mean an antihomomorphism that is its own inverse function. Examples of involutions in common rings:

complex conjugation on the complex plane
multiplication by j in the split-complex numbers
taking the transpose in a matrix ring.

Group theory

In group theory, an element of a group is an involution if it has order 2; i.e. an involution is an element a such that a ≠ e and a^2 = e, where e is the identity element. Originally, this definition agreed with the first definition above, since members of groups were always bijections from a set into itself; i.e., group was taken to mean permutation group. By the end of the 19th century, group was defined more broadly, and accordingly so was involution.

A permutation is an involution precisely if it can be written as a finite product of non-overlapping transpositions.

The involutions of a group have a large impact on the group's structure. The study of involutions was instrumental in the classification of finite simple groups.

An element x of a group G is called strongly real if there is an involution t with x^t = x^−1 (where x^t = t^−1·x·t).

Coxeter groups are groups generated by involutions with the relations determined only by relations given for pairs of the generating involutions. Coxeter groups can be used, among other things, to describe the possible regular polyhedra and their generalizations to higher dimensions.

Mathematical logic

The operation of complement in Boolean algebras is an involution. Accordingly, negation in classical logic satisfies the law of double negation: ¬¬A is equivalent to A.

Generally in non-classical logics, negation that satisfies the law of double negation is called involutive. In algebraic semantics, such a negation is realized as an involution on the algebra of truth values.
Examples of logics which have involutive negation are Kleene and Bochvar three-valued logics, Łukasiewicz many-valued logic, fuzzy logic IMTL, etc. Involutive negation is sometimes added as an additional connective to logics with non-involutive negation; this is usual, for example, in t-norm fuzzy logics.

The involutiveness of negation is an important characterization property for logics and the corresponding varieties of algebras. For instance, involutive negation characterizes Boolean algebras among Heyting algebras. Correspondingly, classical Boolean logic arises by adding the law of double negation to intuitionistic logic. The same relationship holds also between MV-algebras and BL-algebras (and so correspondingly between Łukasiewicz logic and fuzzy logic BL), IMTL and MTL, and other pairs of important varieties of algebras (resp. corresponding logics).

In the study of binary relations, every relation has a converse relation. Since the converse of the converse is the original relation, the conversion operation is an involution on the category of relations. Binary relations are ordered through inclusion. While this ordering is reversed with the complementation involution, it is preserved under conversion.

Computer science

The XOR bitwise operation with a given value for one parameter is an involution. XOR masks were once used to draw graphics on images in such a way that drawing them twice on the background reverts the background to its original state. The NOT bitwise operation is also an involution, and is a special case of the XOR operation where one parameter has all bits set to 1.

Another example is a bit mask and shift function operating on color values stored as integers, say in the form RGB, that swaps R and B, resulting in the form BGR: f(f(RGB)) = RGB and f(f(BGR)) = BGR.

The RC4 cryptographic cipher is an involution, as its encryption and decryption operations use the same function.

Practically all mechanical cipher machines implement a reciprocal cipher, an involution on each typed-in letter. Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.

See also

Automorphism
Idempotence
ROT13

References

Further reading
293685
https://en.wikipedia.org/wiki/GNUnet
GNUnet
GNUnet is a software framework for decentralized, peer-to-peer networking and an official GNU package. The framework offers link encryption, peer discovery, resource allocation, communication over many transports (such as TCP, UDP, HTTP, HTTPS, WLAN and Bluetooth) and various basic peer-to-peer algorithms for routing, multicast and network size estimation.

GNUnet's basic network topology is that of a mesh network. GNUnet includes a distributed hash table (DHT) which is a randomized variant of Kademlia that can still efficiently route in small-world networks. GNUnet offers a "F2F topology" option for restricting connections to only the users' trusted friends. The users' friends' own friends (and so on) can then indirectly exchange files with the users' computer, never using its IP address directly.

GNUnet uses Uniform Resource Identifiers (not approved by IANA, although an application has been made). GNUnet URIs consist of two major parts: the module and the module-specific identifier. A GNUnet URI is of the form gnunet://module/identifier, where module is the module name and identifier is a module-specific string.

The primary codebase is written in C, but there are bindings in other languages to produce an API for developing extensions in those languages. GNUnet is part of the GNU Project. It has gained interest in the hacker community after the PRISM revelations.

GNUnet consists of several subsystems, of which the Transport and Core subsystems are essential. The Transport subsystem provides insecure link-layer communications, while Core provides peer discovery and encryption. On top of the Core subsystem, various applications are built. GNUnet includes various P2P applications in the main distribution of the framework, including filesharing, chat and VPN; additionally, a few external projects (such as secushare) are also extending the GNUnet infrastructure.

GNUnet is unrelated to the older Gnutella P2P protocol. Gnutella is not an official GNU project, while GNUnet is.

Transport

Originally, GNUnet used UDP for its underlying transport. Now the GNUnet transport subsystem provides multiple options, such as TCP and SMTP. The communication port, officially registered at IANA, is 2086 (tcp + udp).

Trust system

GNUnet provides a trust system based on an excess-based economic model. The idea of employing an economic system is taken from the MojoNation network. The GNUnet network has no trusted entities, so it is impossible to maintain a global reputation. Instead, each peer maintains its own trust for each of its local links.

When resources, such as bandwidth and CPU time, are in excess, a peer provides them to all requesting neighbors without reducing trust or otherwise charging them. When a node is under stress, it drops requests from the neighbor nodes with lower internal trust values. However, when a peer has fewer resources than are needed to fulfill everyone's requests, it denies the requests of the neighbors it trusts less and charges the others by reducing their trust.

File sharing

The primary application at this point is anonymous, censorship-resistant file-sharing, allowing users to anonymously publish or retrieve information of all kinds. The GNUnet protocol which provides anonymity is called GAP (GNUnet anonymity protocol). GNUnet FS can additionally make use of GNU libextractor to automatically annotate shared files with metadata.

File encoding

Files shared with GNUnet are ECRS (An Encoding for Censorship-Resistant Sharing) coded. All content is represented as GBlocks. Each GBlock contains 1024 bytes.
There are several types of GBlocks, each of which serves a particular purpose. Any GBlock B is uniquely identified by its RIPEMD-160 hash H(B).

DBlocks store actual file contents and nothing else. A file is split at 1024-byte boundaries and the resulting chunks are stored in DBlocks. DBlocks are linked together into a Merkle tree by means of IBlocks, which store DBlock identifiers. Blocks are encrypted with a symmetric key derived from H(B) when they are stored in the network (a sketch of this keying scheme appears below).

Queries and replies

The GNUnet anonymity protocol consists of queries and replies. Depending on the load of the forwarding node, messages are forwarded to zero or more nodes. Queries are used to search for content and request data blocks.

A query contains a resource identifier, a reply address, a priority and a TTL (time-to-live). The resource identifier of a datum D is the triple-hash H(H(H(D))). A peer that replies to a query provides H(H(D)) to prove that it indeed has the requested resource without providing D to intermediate nodes, so intermediate nodes cannot decrypt D.

The reply address is the major difference compared to the Freenet protocol. While in Freenet a reply always propagates back using the same path as the query, in GNUnet the path may be shorter. A peer receiving a query may drop it, forward it without rewriting the reply address, or indirect it by replacing the reply address with its own address. By indirecting queries, a peer provides cover traffic for its own queries, while by forwarding them a peer avoids being a link in reply propagation and preserves its bandwidth. This feature allows the user to trade anonymity for efficiency.

The user can specify an anonymity level for each publish, search and download operation. An anonymity level of zero can be used to select non-anonymous file-sharing. GNUnet's DHT infrastructure is only used if non-anonymous file-sharing is specified. The anonymity level determines how much cover traffic a peer must have to hide the user's own actions. The priority specifies how much of its trust the user wants to spend in case of a resource shortage. The TTL is used to prevent queries from staying in the network for too long.

File sharing URIs

The fs module identifier consists of either chk, sks, ksk or loc followed by a slash and a category-specific value. Most URIs contain hashes, which are encoded in base32hex.

chk identifies files, typically: gnunet://fs/chk/[file hash].[query hash].[file size in bytes]
The file hash is the hash of the plaintext file, which allows decrypting it once it is downloaded. The query hash is the hash of the topmost GBlock, which allows downloading the whole tree of GBlocks that contain the encrypted file. The file size is required to determine the shape of the tree.

sks identifies files within namespaces, typically: gnunet://fs/sks/NAMESPACE/IDENTIFIER

ksk identifies search queries, typically: gnunet://fs/ksk/KEYWORD[+KEYWORD]*

loc identifies a datum on a specific machine, typically: gnunet://fs/loc/PEER/QUERY.TYPE.KEY.SIZE

Examples

A type of GNUnet filesharing URI pointing to a specific copy of the GNU GPL license text:
gnunet://fs/chk/9E4MDN4VULE8KJG6U1C8FKH5HA8C5CHSJTILRTTPGK8MJ6VHORERHE68JU8Q0FDTOH1DGLUJ3NLE99N0ML0N9PIBAGKG7MNPBTT6UKG.1I823C58O3LKS24LLI9KB384LH82LGF9GUQRJHACCUINSCQH36SI4NF88CMAET3T3BHI93D4S0M5CC6MVDL1K8GFKVBN69Q6T307U6O.17992

Another type of GNUnet filesharing URI, pointing to the search results of a search with keyword "gpl":
gnunet://fs/ksk/gpl
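The keying scheme described above can be sketched in a few lines of Python. SHA-256 stands in for the hash function (the scheme as described here used RIPEMD-160), and the function names are illustrative rather than GNUnet's actual API:

import hashlib

BLOCK_SIZE = 1024  # GBlocks are 1024 bytes

def H(data):
    # Stand-in hash; the scheme described above used RIPEMD-160.
    return hashlib.sha256(data).digest()

def encode_file(data):
    """Split data into DBlocks; derive each block's key and query identifier.

    Illustrative sketch of the ECRS idea: a block B is encrypted under a key
    derived from H(B), and peers request it under the triple hash H(H(H(B))),
    so intermediaries that route the query never learn the plaintext.
    """
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    out = []
    for block in blocks:
        key = H(block)              # only holders of B can derive the key
        query_id = H(H(H(block)))   # what is asked for on the network
        out.append((key, query_id))
    return out

for key, query_id in encode_file(b"x" * 3000):
    print(query_id.hex()[:16])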
GNU Name System

GNUnet includes an implementation of the GNU Name System (GNS), a decentralized and censorship-resistant replacement for DNS. In GNS, each user manages their own zones and can delegate subdomains to zones managed by other users. Lookups of records defined by other users are performed using GNUnet's DHT.

Protocol translation

GNUnet can tunnel IP traffic over the peer-to-peer network. If necessary, GNUnet can perform IPv4-IPv6 protocol translation in the process. GNUnet provides a DNS application-level gateway to proxy DNS requests and map addresses to the desired address family as necessary. This way, GNUnet offers a possible technology to facilitate the IPv6 transition. Furthermore, in combination with GNS, GNUnet's protocol translation system can be used to access hidden services: IP-based services that run locally at some peer in the network and which can only be accessed by resolving a GNS name.

Social API

Gabor X Toth published in early September 2013 a thesis presenting the design of a social messaging service for the GNUnet peer-to-peer framework that offers scalability, extensibility, and end-to-end encrypted communication. The scalability property is achieved through multicast message delivery, while extensibility is made possible by using PSYC (Protocol for SYnchronous Conferencing), which provides an extensible RPC (Remote Procedure Call) syntax that can evolve over time without having to upgrade the software on all nodes in the network. Another key feature provided by the PSYC layer is stateful multicast channels, which are used to store, for example, user profiles. End-to-end encrypted communication is provided by the mesh service of GNUnet, upon which the multicast channels are built. Pseudonymous users and social places in the system have cryptographic identities, identified by their public keys; these are mapped to human-memorable names using GNS (GNU Name System), where each pseudonym has a zone pointing to its places. That is the required building block for turning the GNUnet framework into a fully peer-to-peer social networking platform.

Chat

A chat has been implemented in the CADET module, for which a GTK interface for GNOME exists, specifically designed for the emerging Linux phones (such as the Librem 5 or the PinePhone).

See also

InterPlanetary File System
Comparison of file-sharing applications
Synchronous conferencing

Notes

References

Further references

External links
294065
https://en.wikipedia.org/wiki/Block%20size%20%28cryptography%29
Block size (cryptography)
In modern cryptography, symmetric key ciphers are generally divided into stream ciphers and block ciphers. Block ciphers operate on a fixed-length string of bits. The length of this bit string is the block size. Both the input (plaintext) and output (ciphertext) are the same length; the output cannot be shorter than the input (this follows logically from the pigeonhole principle and the fact that the cipher must be reversible), and it is undesirable for the output to be longer than the input.

Until the announcement of NIST's AES contest, the majority of block ciphers followed the example of the DES in using a block size of 64 bits (8 bytes). However, the birthday paradox tells us that after accumulating a number of blocks equal to the square root of the total number possible, there will be an approximately 50% chance of two or more being the same, which would start to leak information about the message contents. For a 64-bit block there are 2^64 possible values, whose square root is 2^32. Thus, even when used with a proper encryption mode (e.g. CBC or OFB), only 2^32 × 8 B = 32 GB of data can be safely sent under one key. In practice a greater margin of security is desired, restricting a single key to the encryption of much less data, say a few hundred megabytes. Once that seemed like a fair amount of data, but today it is easily exceeded. If the cipher mode does not properly randomise the input, the limit is even lower.

Consequently, AES candidates were required to support a block length of 128 bits (16 bytes). This should be acceptable for up to 2^64 × 16 B = 256 exabytes of data, and should suffice for quite a few years to come. The winner of the AES contest, Rijndael, supports block and key sizes of 128, 192, and 256 bits, but in AES the block size is always 128 bits. The extra block sizes were not adopted by the AES standard.

Many block ciphers, such as RC5, support a variable block size. The Luby-Rackoff construction and the Outerbridge construction can both increase the effective block size of a cipher. Joan Daemen's 3-Way and BaseKing have unusual block sizes of 96 and 192 bits, respectively.

See also

Ciphertext stealing
Format-preserving encryption
294099
https://en.wikipedia.org/wiki/3-Way
3-Way
In cryptography, 3-Way is a block cipher designed in 1994 by Joan Daemen. It is closely related to BaseKing; the two are variants of the same general cipher technique.

3-Way has a block size of 96 bits, notably not a power of two such as the more common 64 or 128 bits. The key length is also 96 bits. The figure 96 arises from the use of three 32-bit words in the algorithm, from which the cipher's name is also derived. When 3-Way was invented, 96-bit keys and blocks were quite strong, but more recent ciphers have a 128-bit block, and few now have keys shorter than 128 bits. 3-Way is an 11-round substitution–permutation network.

3-Way is designed to be very efficient in a wide range of platforms, from 8-bit processors to specialized hardware, and has some elegant mathematical features which enable nearly all the decryption to be done in exactly the same circuits as the encryption.

3-Way, just as its counterpart BaseKing, is vulnerable to related-key cryptanalysis. John Kelsey, Bruce Schneier, and David Wagner showed how it can be broken with one related-key query and about 2^22 chosen plaintexts.

References

External links

SCAN's entry for 3-Way
Chapter 7 of Daemen's thesis (gzipped Postscript)
294108
https://en.wikipedia.org/wiki/SEAL%20%28cipher%29
SEAL (cipher)
In cryptography, SEAL (Software-Optimized Encryption Algorithm) is a stream cipher optimised for machines with a 32-bit word size and plenty of RAM, with a reported performance of around 4 cycles per byte. SEAL is actually a pseudorandom function family, in that it can easily generate arbitrary portions of the keystream without having to start from the beginning. This makes it particularly well suited for applications like encrypting hard drives.

The first version was published by Phillip Rogaway and Don Coppersmith in 1994. The current version, published in 1997, is 3.0. SEAL is covered by two patents in the United States, both of which are assigned to IBM.

References

"Software-efficient pseudorandom function and the use thereof for encryption"
"Computer readable device implementing a software-efficient pseudorandom function encryption"
294149
https://en.wikipedia.org/wiki/A5/1
A5/1
A5/1 is a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard. It is one of several implementations of the A5 security protocol. It was initially kept secret, but became public knowledge through leaks and reverse engineering. A number of serious weaknesses in the cipher have been identified.

History and usage

A5/1 is used in Europe and the United States. A5/2 was a deliberate weakening of the algorithm for certain export regions. A5/1 was developed in 1987, when GSM was not yet considered for use outside Europe, and A5/2 was developed in 1989. Though both were initially kept secret, the general design was leaked in 1994, and the algorithms were entirely reverse engineered in 1999 by Marc Briceno from a GSM telephone. In 2000, around 130 million GSM customers relied on A5/1 to protect the confidentiality of their voice communications.

Security researcher Ross Anderson reported in 1994 that "there was a terrific row between the NATO signal intelligence agencies in the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with the Warsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design."

Description

A GSM transmission is organised as sequences of bursts. In a typical channel and in one direction, one burst is sent every 4.615 milliseconds and contains 114 bits available for information. A5/1 is used to produce for each burst a 114-bit sequence of keystream which is XORed with the 114 bits prior to modulation. A5/1 is initialised using a 64-bit key together with a publicly known 22-bit frame number. Older fielded GSM implementations using Comp128v1 for key generation had 10 of the key bits fixed at zero, resulting in an effective key length of 54 bits. This weakness was rectified with the introduction of Comp128v3, which yields proper 64-bit keys. When operating in GPRS/EDGE mode, higher-bandwidth radio modulation allows for larger 348-bit frames, and A5/3 is then used in a stream cipher mode to maintain confidentiality.

A5/1 is based around a combination of three linear-feedback shift registers (LFSRs) with irregular clocking. The three shift registers are specified as follows:

R1: 19 bits long, feedback taps at bit positions 13, 16, 17 and 18, clocking bit 8
R2: 22 bits long, feedback taps at bit positions 20 and 21, clocking bit 10
R3: 23 bits long, feedback taps at bit positions 7, 20, 21 and 22, clocking bit 10

The bits are indexed with the least significant bit (LSB) as 0.

The registers are clocked in a stop/go fashion using a majority rule. Each register has an associated clocking bit. At each cycle, the clocking bit of all three registers is examined and the majority bit is determined. A register is clocked if its clocking bit agrees with the majority bit. Hence at each step at least two registers are clocked, and each register steps with probability 3/4.

Initially, the registers are set to zero. Then for 64 cycles, the 64-bit secret key K is mixed in according to the following scheme: in cycle i, the i-th key bit is added to the least significant bit of each register using XOR, and each register is then clocked. Similarly, the 22 bits of the frame number are added in 22 cycles. Then the entire system is clocked using the normal majority clocking mechanism for 100 cycles, with the output discarded. After this is completed, the cipher is ready to produce two 114-bit sequences of output keystream, the first 114 bits for downlink and the last 114 for uplink.
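A short Python sketch of the majority-rule clocking just described, using the register lengths, taps and clocking bits listed above (illustrative only, not a complete or optimized A5/1 implementation):

# Registers are lists of bits, index 0 being the least significant bit.
SPEC = [
    # (length, feedback taps, clocking bit)
    (19, (13, 16, 17, 18), 8),   # R1
    (22, (20, 21), 10),          # R2
    (23, (7, 20, 21, 22), 10),   # R3
]

def clock_register(reg, taps):
    feedback = 0
    for t in taps:
        feedback ^= reg[t]
    return [feedback] + reg[:-1]          # shift, new bit enters at position 0

def majority_step(regs):
    clk = [regs[i][SPEC[i][2]] for i in range(3)]
    maj = int(clk[0] + clk[1] + clk[2] >= 2)   # majority of the clocking bits
    for i in range(3):
        if clk[i] == maj:                      # a register moves only if it agrees
            regs[i] = clock_register(regs[i], SPEC[i][1])
    return regs[0][-1] ^ regs[1][-1] ^ regs[2][-1]   # keystream bit: XOR of MSBs

# In real A5/1 the registers are first loaded with the key and frame number;
# all-zero registers are used here purely to demonstrate the stepping rule.
regs = [[0] * SPEC[i][0] for i in range(3)]
burst_keystream = [majority_step(regs) for _ in range(114)]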
Security

A number of attacks on A5/1 have been published, and the American National Security Agency is able to routinely decrypt A5/1 messages according to released internal documents. Some attacks require an expensive preprocessing stage, after which the cipher can be broken in minutes or seconds. Originally, the weaknesses were passive attacks using the known plaintext assumption. In 2003, more serious weaknesses were identified which can be exploited in the ciphertext-only scenario, or by an active attacker. In 2006, Elad Barkan, Eli Biham and Nathan Keller demonstrated attacks against A5/1, A5/3, or even GPRS that allow attackers to tap GSM mobile phone conversations and decrypt them either in real time, or at any later time.

According to professor Jan Arild Audestad, during the standardization process, which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years. It is now believed that 128 bits would in fact also still be secure until the advent of quantum computing. Audestad, Peter van der Arend, and Thomas Haug say that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 54 bits.

Known-plaintext attacks

The first attack on A5/1 was proposed by Ross Anderson in 1994. Anderson's basic idea was to guess the complete content of the registers R1 and R2 and about half of the register R3. In this way the clocking of all three registers is determined, and the second half of R3 can be computed.

In 1997, Golic presented an attack based on solving sets of linear equations which has a time complexity of 2^40.16 (the units are in terms of the number of solutions of a system of linear equations which are required).

In 2000, Alex Biryukov, Adi Shamir and David Wagner showed that A5/1 can be cryptanalysed in real time using a time-memory tradeoff attack, based on earlier work by Jovan Golic. One tradeoff allows an attacker to reconstruct the key in one second from two minutes of known plaintext, or in several minutes from two seconds of known plaintext, but he must first complete an expensive preprocessing stage which requires 2^48 steps to compute around 300 GB of data. Several tradeoffs between preprocessing, data requirements, attack time and memory complexity are possible.

The same year, Eli Biham and Orr Dunkelman also published an attack on A5/1 with a total work complexity of 2^39.91 A5/1 clockings, given 2^20.8 bits of known plaintext. The attack requires 32 GB of data storage after a precomputation stage of 2^38.

Ekdahl and Johansson published an attack on the initialisation procedure which breaks A5/1 in a few minutes using two to five minutes of conversation plaintext. This attack does not require a preprocessing stage. In 2004, Maximov et al. improved this result to an attack requiring "less than one minute of computations, and a few seconds of known conversation". The attack was further improved by Elad Barkan and Eli Biham in 2005.

Attacks on A5/1 as used in GSM

In 2003, Barkan et al. published several attacks on GSM encryption. The first is an active attack: GSM phones can be convinced to use the much weaker A5/2 cipher briefly. A5/2 can be broken easily, and the phone uses the same key as for the stronger A5/1 algorithm. A second attack on A5/1 is outlined, a ciphertext-only time-memory tradeoff attack which requires a large amount of precomputation.
In 2006, Elad Barkan, Eli Biham and Nathan Keller published the full version of their 2003 paper, with attacks against A5/X ciphers, claiming a very practical ciphertext-only cryptanalysis of GSM-encrypted communication.

In 2007, the universities of Bochum and Kiel started a research project to create a massively parallel FPGA-based cryptographic accelerator, COPACOBANA. COPACOBANA was the first commercially available solution using fast time-memory trade-off techniques that could be used to attack the popular A5/1 and A5/2 algorithms, used in GSM voice encryption, as well as the Data Encryption Standard (DES). It also enables brute force attacks against GSM, eliminating the need for large precomputed lookup tables.

In 2008, the group The Hackers Choice launched a project to develop a practical attack on A5/1. The attack requires the construction of a large look-up table of approximately 3 terabytes. Together with the scanning capabilities developed as part of the sister project, the group expected to be able to record any GSM call or SMS encrypted with A5/1, and within about 3–5 minutes derive the encryption key and hence listen to the call and read the SMS in clear. But the tables weren't released.

A similar effort, the A5/1 Cracking Project, was announced at the 2009 Black Hat security conference by cryptographers Karsten Nohl and Sascha Krißler. It created the look-up tables using Nvidia GPGPUs via a peer-to-peer distributed computing architecture. Starting in the middle of September 2009, the project ran the equivalent of 12 Nvidia GeForce GTX 260 cards. According to the authors, the approach can be used on any cipher with a key size of up to 64 bits.

In December 2009, the A5/1 Cracking Project attack tables for A5/1 were announced by Chris Paget and Karsten Nohl. The tables use a combination of compression techniques, including rainbow tables and distinguished point chains. These tables constituted only parts of the 1.7 TB completed table, and had been computed during three months using 40 distributed CUDA nodes and then published over BitTorrent. More recently, the project announced a switch to faster ATI Evergreen code, together with a change in the format of the tables, and Frank A. Stevenson announced breaks of A5/1 using the ATI-generated tables.

Documents leaked by Edward Snowden in 2013 state that the NSA "can process encrypted A5/1".

See also

A5/2
KASUMI, also known as A5/3
Cellular Message Encryption Algorithm

Notes

References

External links
295744
https://en.wikipedia.org/wiki/Adaptive%20Server%20Enterprise
Adaptive Server Enterprise
SAP ASE (Adaptive Server Enterprise), originally known as Sybase SQL Server, and also commonly known as Sybase DB or Sybase ASE, is a relational model database server developed by Sybase Corporation, which later became part of SAP AG. ASE was developed for the Unix operating system, and is also available for Microsoft Windows.

In 1988, Sybase, Microsoft and Ashton-Tate began development of a version of SQL Server for OS/2, but Ashton-Tate later left the group and Microsoft went on to port the system to Windows NT. When the agreement expired in 1993, Microsoft purchased a license for the source code and began to sell this product as Microsoft SQL Server. MS SQL Server and Sybase SQL Server share many features and syntax peculiarities.

History

Originally developed for Unix operating system platforms in 1987, Sybase Corporation's primary relational database management system product was initially marketed under the name Sybase SQL Server. In 1988, SQL Server for OS/2 was co-developed for the PC by Sybase, Microsoft, and Ashton-Tate. Ashton-Tate divested its interest, and Microsoft became the lead partner after porting SQL Server to Windows NT. Microsoft and Sybase sold and supported the product through version 4.2.1.

Sybase released SQL Server 4.2 in 1992. This release included internationalization and localization and support for symmetric multiprocessing systems.

In 1993, the co-development licensing agreement between Microsoft and Sybase ended, and the companies parted ways while continuing to develop their respective versions of SQL Server. Sybase released Sybase SQL Server 10.0, which was part of the System 10 product family, which also included Back-up Server, Open Client/Server APIs, SQL Monitor, SA Companion and OmniSQL Gateway. Microsoft continued on with Microsoft SQL Server.

Sybase provides native low-level programming interfaces to its database server which use a protocol called Tabular Data Stream. Prior to version 10, DBLIB (DataBase LIBrary) was used. Version 10 and onwards uses CTLIB (ClienT LIBrary).

In 1995, Sybase released SQL Server 11.0. Starting with version 11.5, released in 1996, Sybase moved to differentiate its product from Microsoft SQL Server by renaming it to Adaptive Server Enterprise. Sybase 11.5 added asynchronous prefetch and the case expression in SQL, and the optimizer could now use a descending index to avoid the need for a worktable and a sort. The Logical Process Manager was added to allow prioritization by assigning execution attributes and engine affinity.

In 1998, ASE 11.9.2 was rolled out with support for data-page locking, data-row (row-level) locking, distributed joins and improved SMP performance. Indexes could now be created in descending order on a column, and the readpast concurrency option and repeatable-read transaction isolation were added. A lock timeout option and task-to-engine affinity were added, and query optimization was now delayed until a cursor is opened and the values of the variables are known.

In 1999, ASE 12.0 was released, providing support for Java, high availability and distributed transaction management. Merge joins were added; previously, all joins were nested-loop joins. In addition, cache partitions were added to improve performance.

In 2001, ASE 12.5 was released, providing features such as dynamic memory allocation, an EJB container, and support for XML, Secure Sockets Layer (SSL) and LDAP. Also added were compressed backups, unichar UTF-16 support and multiple logical page sizes (2K, 4K, 8K, or 16K).

In 2005, Sybase released ASE 15.0.
It included support for partitioning table rows in a database across individual disk devices, and "virtual columns" which are computed only when required. In ASE 15.0, many parameters that had been static (requiring a server reboot for changes to take effect) were made dynamic (changes take effect immediately). This improved performance and reduced downtime. For example, one parameter that was made dynamic was "tape retention in days" (the number of days that a backup is kept on the tape media without overwriting the existing contents in the production environment). On January 27, 2010, Sybase released ASE 15.5. It included support for in-memory and relaxed-durability databases, distributed transaction management in the shared-disk cluster, faster compression for backups, as well as Backup Server support for the IBM Tivoli Storage Manager. Deferred name resolution for user-defined stored procedures, FIPS 140-2 login password encryption, incremental data transfer, bigdatetime and bigtime datatypes and tempdb groups were also added. In July 2010, Sybase became a wholly owned subsidiary of SAP America. On September 13, 2011, Sybase released ASE 15.7 at Techwave. It included support for new security features (Application Functionality Configuration Groups, extensions for securing logins, roles, and password management, login profiles, dual control of encryption keys, and unattended startup), a new threaded kernel, compression for large object (LOB) and regular data, end-to-end CIS Kerberos authentication, ALTER ... MODIFY OWNER, external passwords and hidden text, abstract plans in cached statements, the ability to shrink log space, in-row/off-row LOB storage, use of the large object text, unitext, and image datatypes in stored procedures, LOB locators in Transact-SQL statements, select for update to exclusively lock rows for subsequent updates within the same transaction and for updatable cursors, non-materialized non-null columns with a default value, fully recoverable DDL (select into, alter table commands that require data movement, reorg rebuild), the merge command, expanded variable-length rows, and allowing Unicode noncharacters. In April 2014, SAP released ASE 16. It included support for partition locking, CIS support for HANA, relaxed query limits, query plan optimization with star joins, dynamic thread assignment, sort and hash join operator improvements, full-text auditing, auditing for authorization checks inside stored procedures, create or replace functionality, query plan and execution statistics in HTML, index compression, full database encryption, locking, run-time locking, metadata and latch enhancements, multiple trigger support, residual data removal, configuration history tracking, CRC checks for dump database, and the ability to calculate the transaction log growth rate for a specified time period. Structure A single standalone installation of ASE typically comprises one "dataserver" and one corresponding "backup server". In a multi-server installation, many dataservers can share one backup server. A dataserver consists of system databases and user databases. The minimum system databases that are mandatory for the normal operation of a dataserver are 'master', 'tempdb', 'model', 'sybsystemdb' and 'sybsystemprocs'. The 'master' database holds critical system-related information, including logins, passwords, and dataserver configuration parameters. 'tempdb' is used to store data required for the intermediate processing of queries, as well as other temporary data. 'model' is used as a template for creating new databases. 'sybsystemprocs' consists of system-supplied stored procedures that query system tables and manipulate data in them. ASE is a single-process, multithreaded dataserver application.
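Because ASE speaks the Tabular Data Stream protocol mentioned above, generic TDS clients can often query it. A minimal sketch in Python that lists the databases hosted by a dataserver, assuming the third-party pymssql package (a FreeTDS-based client) and illustrative connection details; whether a given client build speaks ASE's TDS 5.0 dialect varies, so treat this as a sketch rather than a supported configuration:

    import pymssql  # FreeTDS-based TDS client; ASE connectivity depends on the build

    # Connection details are illustrative, not defaults.
    conn = pymssql.connect(server="asehost", port=5000, user="sa",
                           password="secret", database="master")
    cur = conn.cursor()
    # sysdatabases in 'master' holds one row per database on the dataserver.
    cur.execute("select name from master..sysdatabases")
    for (name,) in cur.fetchall():
        print(name)
    conn.close()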
Editions There are several editions, including an express edition that is free for productive use but limited to four server engines and 50 GB of disk space per server. See also SQL Anywhere Sybase List of relational database management systems Comparison of relational database management systems References External links SAP Sybase ASE official website SAP Sybase ASE online documentation SAP ASE Community What's New from 15.7 to 16.0.3.7 Proprietary database management systems Relational database management systems SAP SE Computer-related introductions in 1987 RDBMS software for Linux
295981
https://en.wikipedia.org/wiki/DOCSIS
DOCSIS
Data Over Cable Service Interface Specification (DOCSIS) is an international telecommunications standard that permits the addition of high-bandwidth data transfer to an existing cable television (CATV) system. It is used by many cable television operators to provide cable Internet access over their existing hybrid fiber-coaxial (HFC) infrastructure. History DOCSIS was originally developed by CableLabs and contributing companies, including Arris, BigBand Networks, Broadcom, Cisco, Comcast, Conexant, Correlant, Cox, Harmonic, Intel, Motorola, Netgear, Terayon, Time Warner Cable, and Texas Instruments. Versions Released in March 1997, DOCSIS 1.0 included functional elements from preceding proprietary cable modems. Released in April 1999, DOCSIS 1.1 standardized the quality of service (QoS) mechanisms that were outlined in DOCSIS 1.0. Released in December 2001, DOCSIS 2.0 (abbreviated D2) enhanced upstream data rates in response to increased demand for symmetric services such as IP telephony. Released in August 2006, DOCSIS 3.0 (abbreviated D3) significantly increased data rates (both upstream and downstream) and introduced support for Internet Protocol version 6 (IPv6). First released in October 2013, and subsequently updated several times, the DOCSIS 3.1 suite of specifications supports capacities of up to 10 Gbit/s downstream and 1 Gbit/s upstream using 4096-QAM. The new specifications eliminated 6 MHz and 8 MHz wide channel spacing and instead use narrower (25 kHz or 50 kHz wide) orthogonal frequency-division multiplexing (OFDM) subcarriers; these can be bonded inside a block of spectrum that could end up being about 200 MHz wide. DOCSIS 3.1 technology also includes power-management features that will enable the cable industry to reduce its energy usage, and the DOCSIS-PIE algorithm to reduce bufferbloat. In the United States, broadband provider Comcast announced in February 2016 that several cities within its footprint would have DOCSIS 3.1 availability before the end of the year. At the end of 2016, Mediacom announced it would become the first major U.S. cable company to fully transition to the DOCSIS 3.1 platform. DOCSIS 4.0 improves on DOCSIS 3.1 by using the full spectrum of the cable plant (0 MHz to ~1.8 GHz) at the same time in both upstream and downstream directions. This technology enables multi-gigabit symmetrical services while retaining backward compatibility with DOCSIS 3.1. CableLabs released the full specification in October 2017. Previously branded as DOCSIS 3.1 Full Duplex, these technologies have been rebranded as part of DOCSIS 4.0. Cross-version compatibility has been maintained across all versions of DOCSIS, with devices falling back to the highest version supported in common by both endpoints: the cable modem (CM) and the cable modem termination system (CMTS). For example, if one has a cable modem that only supports DOCSIS 1.0 and the system is running 2.0, the connection will be established at DOCSIS 1.0 data rates. Comparison In 1994, the IEEE 802.14 working group was chartered to develop a media access control standard over HFC. In 1995, Multimedia Cable Network System (MCNS) was formed. The original partners were TCI, Time Warner Cable, Comcast, and Cox. Later, Continental Cablevision and Rogers joined the group. In June 1996, the SCTE formed the Data Standards Subcommittee to begin work on establishing national standards for high-speed data over cable plant. In July 1997, the SCTE DSS voted in the affirmative on document DSS 97-2, a standard based on the well-known DOCSIS specification.
The standard was also submitted to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and was adopted as ITU-T J.112 Annex B. European alternative As frequency allocation bandwidth plans differ between United States and European CATV systems, DOCSIS standards earlier than 3.1 have been modified for use in Europe. These modifications were published under the name EuroDOCSIS. The differences between the bandwidths exist because European cable TV conforms to PAL/DVB-C standards of 8 MHz RF channel bandwidth and North American cable TV conforms to NTSC/ATSC standards, which specify 6 MHz per channel. The wider channel bandwidth in EuroDOCSIS architectures permits more bandwidth to be allocated to the downstream data path (toward the user). EuroDOCSIS certification testing is executed by the Belgian company Excentis (formerly known as tComLabs), while DOCSIS certification testing is executed by CableLabs. Typically, customer premises equipment receives "certification", while CMTS equipment receives "qualification". International standards The ITU Telecommunication Standardization Sector (ITU-T) has approved the various versions of DOCSIS as international standards. DOCSIS 1.0 was ratified as ITU-T Recommendation J.112 Annex B (1998), but it was superseded by DOCSIS 1.1, which was ratified as ITU-T Recommendation J.112 Annex B (2001). Subsequently, DOCSIS 2.0 was ratified as ITU-T Recommendation J.122. Most recently, DOCSIS 3.0 was ratified as ITU-T Recommendation J.222 (J.222.0, J.222.1, J.222.2, J.222.3). Note: While ITU-T Recommendation J.112 Annex B corresponds to DOCSIS/EuroDOCSIS 1.1, Annex A describes an earlier European cable modem system ("DVB EuroModem") based on ATM transmission standards. Annex C describes a variant of DOCSIS 1.1 that is designed to operate in Japanese cable systems. The ITU-T Recommendation J.122 main body corresponds to DOCSIS 2.0, J.122 Annex F corresponds to EuroDOCSIS 2.0, and J.122 Annex J describes the Japanese variant of DOCSIS 2.0 (analogous to Annex C of J.112). Features DOCSIS provides a great variety of options at Open Systems Interconnection (OSI) layers 1 and 2, the physical and data link layers. Physical layer Channel width: Downstream: All versions of DOCSIS earlier than 3.1 use either 6 MHz channels (e.g. North America) or 8 MHz channels ("EuroDOCSIS"). DOCSIS 3.1 uses channel bandwidths of up to 192 MHz in the downstream. Upstream: DOCSIS 1.0/1.1 specifies channel widths between 200 kHz and 3.2 MHz. DOCSIS 2.0 & 3.0 specify 6.4 MHz, but can use the earlier, narrower channel widths for backward compatibility. DOCSIS 3.1 uses channel bandwidths of up to 96 MHz in the upstream. Modulation: Downstream: All versions of DOCSIS prior to 3.1 specify that 64-level or 256-level QAM (64-QAM or 256-QAM) be used for modulation of downstream data, using the ITU-T J.83 Annex B standard for 6 MHz channel operation and the DVB-C modulation standard for 8 MHz (EuroDOCSIS) operation. DOCSIS 3.1 adds 16-QAM, 128-QAM, 512-QAM, 1024-QAM, 2048-QAM and 4096-QAM, with optional support of 8192-QAM/16384-QAM. Upstream: Upstream data uses QPSK or 16-level QAM (16-QAM) for DOCSIS 1.x, while QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM are used for DOCSIS 2.0 & 3.0. DOCSIS 2.0 & 3.0 also support 128-QAM with trellis-coded modulation in S-CDMA mode (with an effective spectral efficiency equivalent to that of 64-QAM). DOCSIS 3.1 supports data modulations from QPSK up to 1024-QAM, with optional support for 2048-QAM and 4096-QAM.
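The per-channel figures quoted in the Throughput section below follow directly from this arithmetic: the raw bit rate of a single-carrier QAM channel is the symbol rate multiplied by the bits carried per symbol (log2 of the QAM order). A minimal sketch of the calculation; the symbol rates used are the ones commonly cited for ITU-T J.83B (6 MHz), EuroDOCSIS/DVB-C (8 MHz), and 6.4 MHz upstream channels:

    import math

    def raw_rate_mbps(symbol_rate_msym: float, qam_order: int) -> float:
        """Raw channel bit rate in Mbit/s: symbol rate times bits per symbol."""
        return symbol_rate_msym * math.log2(qam_order)

    print(raw_rate_mbps(5.360537, 256))  # ~42.88 Mbit/s, 6 MHz downstream, 256-QAM
    print(raw_rate_mbps(6.952, 256))     # ~55.62 Mbit/s, 8 MHz EuroDOCSIS downstream
    print(raw_rate_mbps(5.12, 64))       # 30.72 Mbit/s, 6.4 MHz upstream, 64-QAM

With DOCSIS 3.0 channel bonding (described below), these per-channel rates simply add across the bonded channels.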
Data link layer DOCSIS employs a mixture of deterministic access methods for upstream transmissions, specifically TDMA for DOCSIS 1.0/1.1 and both TDMA and S-CDMA for DOCSIS 2.0 and 3.0, with a limited use of contention for bandwidth requests. Because of this, DOCSIS systems experience relatively few collisions, in contrast to the pure contention-based MAC CSMA/CD employed in older Ethernet systems (of course, there is no contention in switched Ethernet). For DOCSIS 1.1 and above, the data link layer also includes extensive quality-of-service (QoS) features that help to efficiently support applications that have specific traffic requirements such as low latency, e.g. voice over IP. DOCSIS 3.0 features channel bonding, which enables multiple downstream and upstream channels to be used together at the same time by a single subscriber. Throughput The first three versions of the DOCSIS standard support a downstream throughput with 256-QAM of up to 42.88 Mbit/s per 6 MHz channel (approximately 38 Mbit/s after overhead), or 55.62 Mbit/s per 8 MHz channel for EuroDOCSIS (approximately 50 Mbit/s after overhead). The upstream throughput possible is 30.72 Mbit/s per 6.4 MHz channel (approximately 27 Mbit/s after overhead), or 10.24 Mbit/s per 3.2 MHz channel (approximately 9 Mbit/s after overhead). DOCSIS 3.1 supports a downstream throughput with 4096-QAM and 25 kHz subcarrier spacing of up to 1.89 Gbit/s per 192 MHz OFDM channel. The upstream throughput possible is 0.94 Gbit/s per 96 MHz OFDMA channel. Network layer DOCSIS modems are managed via an Internet Protocol (IP) address. The 'DOCSIS 2.0 + IPv6' specification allowed support for IPv6 on DOCSIS 2.0 modems via a firmware upgrade. DOCSIS 3.0 added management over IPv6. Throughput Maximum raw throughput including overhead (maximum payload throughput after overhead). Figures assume 256-QAM modulation for downstream and 64-QAM for upstream on DOCSIS 3.0, and 4096-QAM modulation for OFDM/OFDMA (first downstream/upstream methods) on DOCSIS 3.1, although real-world data rates may be lower due to variable modulation depending on SNR. Higher data rates are possible but require higher-order QAM schemes, which require a higher downstream modulation error ratio (MER). DOCSIS 3.1 was designed to support up to 8192-QAM/16384-QAM, but only support of up through 4096-QAM is mandatory to meet the minimum DOCSIS 3.1 standards. For DOCSIS 3.0, the theoretical maximum throughput for a given number of bonded channels is the per-channel rate multiplied by the channel count. Note that the number of channels a cable system can support depends on how the cable system is set up. For example, the amount of available bandwidth in each direction, the width of the channels selected in the upstream direction, and hardware constraints limit the maximum number of channels in each direction. Also note that, since in many cases DOCSIS capacity is shared among multiple users, most cable companies do not sell the maximum technical capacity available as a commercial product, to reduce congestion in case of heavy usage. The maximum downstream bandwidth depends on the version of DOCSIS used and, for DOCSIS 3.0, on the number of bonded channels, but the upstream channel widths are independent of whether DOCSIS or EuroDOCSIS is used. Equipment A DOCSIS architecture includes two primary components: a cable modem located at the customer premises, and a cable modem termination system (CMTS) located at the CATV headend.
Cable systems supporting on-demand programming use a hybrid fiber-coaxial system. Fiber optic lines bring digital signals to nodes in the system, where they are converted into RF channels and modem signals on coaxial trunk lines. The customer PC and associated peripherals are termed customer-premises equipment (CPE). The CPE are connected to the cable modem, which is in turn connected through the HFC network to the cable modem termination system (CMTS). The CMTS then routes traffic between the HFC and the Internet. Using the CMTS, the cable operator (or multiple-system operator, MSO) exercises full control over the cable modem's configuration; the CM configuration is changed to adjust for varying line conditions and customer service requirements. DOCSIS 2.0 is also used over microwave frequencies (10 GHz) in Ireland by Digiweb, using dedicated wireless links rather than an HFC network. At each subscriber premises the ordinary CM is connected to an antenna box which converts to/from microwave frequencies and transmits/receives on 10 GHz. Each customer has a dedicated link, but the transmitter mast must be in line of sight (most sites are on hilltops). The DOCSIS architecture is also used for fixed wireless with equipment using the 2.5–2.7 GHz Multichannel Multipoint Distribution Service (MMDS) microwave band in the U.S. Security DOCSIS includes media access control (MAC) layer security services in its Baseline Privacy Interface specifications. DOCSIS 1.0 used the initial Baseline Privacy Interface (BPI) specification. BPI was later improved with the release of the Baseline Privacy Interface Plus (BPI+) specification used by DOCSIS 1.1 and 2.0. Most recently, a number of enhancements to the Baseline Privacy Interface were added as part of DOCSIS 3.0, and the specification was renamed "Security" (SEC). The intent of the BPI/SEC specifications is to describe MAC layer security services for DOCSIS CMTS to cable modem communications. BPI/SEC security goals are twofold: Provide cable modem users with data privacy across the cable network Provide cable service operators with service protection (i.e. prevent unauthorized modems and users from gaining access to the network's RF MAC services) BPI/SEC is intended to prevent cable users from listening to each other. It does this by encrypting data flows between the CMTS and the cable modem. BPI and BPI+ use 56-bit Data Encryption Standard (DES) encryption, while SEC adds support for 128-bit Advanced Encryption Standard (AES). The AES key, however, is protected only by a 1024-bit RSA key. BPI/SEC is intended to allow cable service operators to refuse service to uncertified cable modems and unauthorized users. BPI+ strengthened service protection by adding digital-certificate-based authentication to its key exchange protocol, using a public key infrastructure (PKI) based on the digital certificate authorities (CAs) of the certification testers, currently Excentis (formerly known as tComLabs) for EuroDOCSIS and CableLabs for DOCSIS. Typically, the cable service operator manually adds the cable modem's MAC address to the customer's account, and the network allows access only to a cable modem that can attest to that MAC address using a valid certificate issued via the PKI. The earlier BPI specification (ANSI/SCTE 22-2) had limited service protection because the underlying key management protocol did not authenticate the user's cable modem.
Security in the DOCSIS network is vastly improved when only business-critical communications are permitted and end-user communication to the network infrastructure is denied. Successful attacks often occur when the CMTS is configured for backward compatibility with early pre-standard DOCSIS 1.1 modems. These modems were "software upgradeable in the field", but did not include valid DOCSIS or EuroDOCSIS root certificates. See also Data cable DOCSIS Set-top Gateway Ethernet over coax List of device bandwidths Multimedia over Coax Alliance References External links DOCSIS 1.0 specifications DOCSIS 1.1 specifications DOCSIS 2.0 specifications DOCSIS 3.0 specifications DOCSIS 3.1 specifications DOCSIS 4.0 specifications DOCSIS 3.1 This Rohde & Schwarz application note discusses the fundamental technological advances of DOCSIS 3.1. DOCSIS Tutorial (2009) at Volpe Firm Cable television technology Digital cable ITU-T recommendations Link protocols Telecommunications-related introductions in 1997
296370
https://en.wikipedia.org/wiki/Certificate%20authority
Certificate authority
In cryptography, a certificate authority or certification authority (CA) is an entity that issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made about the private key that corresponds to the certified public key. A CA acts as a trusted third party—trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. The format of these certificates is specified by the X.509 or EMV standard. One particularly common use for certificate authorities is to sign certificates used in HTTPS, the secure browsing protocol for the World Wide Web. Another common use is in issuing identity cards by national governments for use in electronically signing documents. Overview Trusted certificates can be used to create secure connections to a server via the Internet. A certificate is essential in order to defeat a malicious party that happens to be on the route to a target server and acts as if it were the target. Such a scenario is commonly referred to as a man-in-the-middle attack. The client uses the CA certificate to authenticate the CA signature on the server certificate, as part of the checks performed before establishing a secure connection. Usually, client software such as browsers includes a set of trusted CA certificates. This makes sense, as many users need to trust their client software. A malicious or compromised client can skip any security check and still fool its users into believing otherwise. The customers of a CA are server administrators who need a certificate that their servers will present to users. Commercial CAs charge money to issue certificates, and their customers expect the CA's certificate to be included in the majority of web browsers, so that secure connections to the certified servers work out of the box. The number of internet browsers, other devices and applications which trust a particular certificate authority is referred to as ubiquity. Mozilla, which is a non-profit organization, distributes several commercial CA certificates with its products. While Mozilla developed their own policy, the CA/Browser Forum developed similar guidelines for CA trust. A single CA certificate may be shared among multiple CAs or their resellers. A root CA certificate may be the base to issue multiple intermediate CA certificates with varying validation requirements. In addition to commercial CAs, some non-profits issue publicly trusted digital certificates without charge, for example Let's Encrypt. Some large cloud computing and web hosting companies are also publicly trusted CAs and issue certificates to services hosted on their infrastructure, for example Amazon Web Services, Cloudflare, and Google Cloud Platform. Large organizations or government bodies may have their own PKIs (public key infrastructure), each containing their own CAs. Any site using self-signed certificates acts as its own CA. Commercial banks that issue EMV payment cards are governed by the EMV Certificate Authority of the payment schemes, the networks that route payment transactions initiated at point-of-sale (POS) terminals to a card-issuing bank in order to transfer the funds from the card holder's bank account to the payment recipient's bank account. Each payment card presents, along with its card data, the card issuer's certificate to the POS. The issuer certificate is signed by the EMV CA certificate.
The POS retrieves the public key of the EMV CA from its storage and validates the issuer certificate and the authenticity of the payment card before sending the payment request to the payment scheme. Browsers and other types of clients characteristically allow users to add or remove CA certificates at will. While server certificates regularly last for a relatively short period, CA certificates last much longer, so, for repeatedly visited servers, it is less error-prone to import and trust the issuing CA than to confirm a security exemption each time the server's certificate is renewed. Less often, trusted certificates are used for encrypting or signing messages. CAs dispense end-user certificates too, which can be used with S/MIME. However, encryption requires the receiver's public key and, since authors and receivers of encrypted messages presumably know one another, the usefulness of a trusted third party remains confined to the signature verification of messages sent to public mailing lists. Providers Worldwide, the certificate authority business is fragmented, with national or regional providers dominating their home market. This is because many uses of digital certificates, such as for legally binding digital signatures, are linked to local law, regulations, and accreditation schemes for certificate authorities. However, the market for globally trusted TLS/SSL server certificates is largely held by a small number of multinational companies. This market has significant barriers to entry due to the technical requirements. While not legally required, new providers may choose to undergo annual security audits (such as WebTrust for certificate authorities in North America and ETSI in Europe) to be included as a trusted root by a web browser or operating system. 147 root certificates, representing 52 organizations, are trusted in the Mozilla Firefox web browser; 168 root certificates, representing 60 organizations, are trusted by macOS; and 255 root certificates, representing 101 organizations, are trusted by Microsoft Windows. As of Android 4.2 (Jelly Bean), Android contains over 100 CAs that are updated with each release. On November 18, 2014, a group of companies and nonprofit organizations, including the Electronic Frontier Foundation, Mozilla, Cisco, and Akamai, announced Let's Encrypt, a nonprofit certificate authority that provides free domain-validated X.509 certificates as well as software to enable installation and maintenance of certificates. Let's Encrypt is operated by the newly formed Internet Security Research Group, a California nonprofit recognized as federally tax-exempt. According to Netcraft, the industry standard for monitoring active TLS certificates, in May 2015: "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Comodo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."
The survey company W3Techs, which collects statistics on certificate authority usage among the Alexa top 10 million and the Tranco top 1 million websites, lists the five largest authorities by absolute usage share. Validation standards The commercial CAs that issue the bulk of certificates for HTTPS servers typically use a technique called "domain validation" to authenticate the recipient of the certificate. The techniques used for domain validation vary between CAs, but in general domain validation techniques are meant to prove that the certificate applicant controls a given domain name, not any information about the applicant's identity. Many certificate authorities also offer Extended Validation (EV) certificates as a more rigorous alternative to domain-validated certificates. Extended validation is intended to verify not only control of a domain name, but additional identity information to be included in the certificate. Some browsers display this additional identity information in a green box in the URL bar. One limitation of EV as a solution to the weaknesses of domain validation is that attackers could still obtain a domain-validated certificate for the victim domain, and deploy it during an attack; if that occurred, the difference observable to the victim user would be the absence of a green bar with the company name. There is some question as to whether users would be likely to recognise this absence as indicative of an attack being in progress: a test using Internet Explorer 7 in 2009 showed that the absence of IE7's EV warnings was not noticed by users; however, Microsoft's current browser, Edge, shows a significantly greater difference between EV and domain-validated certificates, with domain-validated certificates having a hollow, grey lock. Validation weaknesses Domain validation suffers from certain structural security limitations. In particular, it is always vulnerable to attacks that allow an adversary to observe the domain validation probes that CAs send. These can include attacks against the DNS, TCP, or BGP protocols (which lack the cryptographic protections of TLS/SSL), or the compromise of routers. Such attacks are possible either on the network near a CA, or near the victim domain itself. One of the most common domain validation techniques involves sending an email containing an authentication token or link to an email address that is likely to be administratively responsible for the domain. This could be the technical contact email address listed in the domain's WHOIS entry, or an administrative address such as admin@, administrator@, webmaster@, hostmaster@, or postmaster@ the domain. Some certificate authorities may accept confirmation using root@, info@, or support@ in the domain. The theory behind domain validation is that only the legitimate owner of a domain would be able to read emails sent to these administrative addresses. Domain validation implementations have sometimes been a source of security vulnerabilities. In one instance, security researchers showed that attackers could obtain certificates for webmail sites because a CA was willing to use an email address like ssladmin@domain.com for domain.com, but not all webmail systems had reserved the "ssladmin" username to prevent attackers from registering it. Prior to 2011, there was no standard list of email addresses that could be used for domain validation, so it was not clear to email administrators which addresses needed to be reserved. The first version of the CA/Browser Forum Baseline Requirements, adopted November 2011, specified a list of such addresses.
This allowed mail hosts to reserve those addresses for administrative use, though such precautions are still not universal. In January 2015, a Finnish man registered the username "hostmaster" at the Finnish version of Microsoft Live and was able to obtain a domain-validated certificate for live.fi, despite not being the owner of the domain name. Issuing a certificate A CA issues digital certificates that contain a public key and the identity of the owner. The matching private key is not made available publicly, but kept secret by the end user who generated the key pair. The certificate is also a confirmation or validation by the CA that the public key contained in the certificate belongs to the person, organization, server or other entity noted in the certificate. A CA's obligation in such schemes is to verify an applicant's credentials, so that users and relying parties can trust the information in the issued certificate. CAs use a variety of standards and tests to do so. In essence, the certificate authority is responsible for saying "yes, this person is who they say they are, and we, the CA, certify that". If the user trusts the CA and can verify the CA's signature, then they can also assume that a certain public key does indeed belong to whoever is identified in the certificate. Example Public-key cryptography can be used to encrypt data communicated between two parties. This can typically happen when a user logs on to any site that implements the HTTP Secure protocol. In this example, let us suppose that the user logs on to their bank's homepage www.bank.example to do online banking. When the user opens the www.bank.example homepage, they receive a public key along with all the data that their web browser displays. The public key could be used to encrypt data from the client to the server, but the safe procedure is to use it in a protocol that determines a temporary shared symmetric encryption key; messages in such a key exchange protocol can be enciphered with the bank's public key in such a way that only the bank server has the private key to read them. The rest of the communication then proceeds using the new (disposable) symmetric key, so when the user enters some information on the bank's page and submits the page (sends the information back to the bank), the data the user has entered will be encrypted by their web browser. Therefore, even if someone can access the (encrypted) data that was communicated from the user to www.bank.example, such an eavesdropper cannot read or decipher it. This mechanism is only safe if the user can be sure that it is the bank that they see in their web browser. If the user types in www.bank.example, but their communication is hijacked and a fake website (that pretends to be the bank website) sends the page information back to the user's browser, the fake web page can send a fake public key to the user (for which the fake site owns a matching private key). The user will fill the form with their personal data and will submit the page. The fake web page will then gain access to the user's data. This is what the certificate authority mechanism is intended to prevent. A certificate authority (CA) is an organization that stores public keys and the identities of their owners, and every party in a communication trusts this organization (and knows its public key). When the user's web browser receives the public key from www.bank.example, it also receives a digital signature of the key (with some more information, in a so-called X.509 certificate). The browser already possesses the public key of the CA and consequently can verify the signature and thus trust the certificate and the public key in it: since www.bank.example uses a public key that the certification authority certifies, a fake www.bank.example can only use the same public key. Since the fake www.bank.example does not know the corresponding private key, it cannot create the signature needed to verify its authenticity.
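This signature check can be reproduced with standard tooling. A minimal sketch, assuming Python's third-party cryptography package and PEM-encoded, RSA-signed certificates with illustrative file names; it checks only the issuer's signature, not validity dates, host names, or revocation:

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    # Load the server certificate and the CA certificate said to have issued it.
    with open("bank_example.pem", "rb") as f:
        server_cert = x509.load_pem_x509_certificate(f.read())
    with open("ca.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())

    # Verify the CA's signature over the to-be-signed part of the certificate;
    # raises InvalidSignature if this CA did not sign it.
    ca_cert.public_key().verify(
        server_cert.signature,
        server_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        server_cert.signature_hash_algorithm,
    )
    print("signature valid: certificate was issued by this CA")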
Security It is difficult to assure correctness of the match between data and entity when the data are presented to the CA (perhaps over an electronic network), and when the credentials of the person/company/program asking for a certificate are likewise presented. This is why commercial CAs often use a combination of authentication techniques, including leveraging government bureaus, the payment infrastructure, third parties' databases and services, and custom heuristics. In some enterprise systems, local forms of authentication such as Kerberos can be used to obtain a certificate which can in turn be used by external relying parties. Notaries are required in some cases to personally know the party whose signature is being notarized; this is a higher standard than is reached by many CAs. According to the American Bar Association outline on Online Transaction Management, the primary points of US federal and state statutes enacted regarding digital signatures have been to "prevent conflicting and overly burdensome local regulation and to establish that electronic writings satisfy the traditional requirements associated with paper documents." Further, the US E-Sign statute and the suggested UETA code help ensure that: a signature, contract or other record relating to such transaction may not be denied legal effect, validity, or enforceability solely because it is in electronic form; and a contract relating to such transaction may not be denied legal effect, validity or enforceability solely because an electronic signature or electronic record was used in its formation. Despite the security measures undertaken to correctly verify the identities of people and companies, there is a risk of a single CA issuing a bogus certificate to an imposter. It is also possible to register individuals and companies with the same or very similar names, which may lead to confusion. To minimize this hazard, the certificate transparency initiative proposes auditing all certificates in a public unforgeable log, which could help in the prevention of phishing. In large-scale deployments, Alice may not be familiar with Bob's certificate authority (perhaps they each have a different CA server), so Bob's certificate may also include his CA's public key signed by a different CA2, which is presumably recognizable by Alice. This process typically leads to a hierarchy or mesh of CAs and CA certificates. Certificate revocation Authorities in the WebPKI provide revocation services to allow invalidation of previously issued certificates. According to the Baseline Requirements of the CA/Browser Forum, CAs must maintain revocation status until certificate expiration. The status must be delivered using the Online Certificate Status Protocol (OCSP). Most revocation statuses on the Internet disappear soon after the expiration of the certificates. Authority revocation lists An authority revocation list (ARL) is a form of certificate revocation list (CRL) containing certificates issued to certificate authorities, in contrast to CRLs, which contain revoked end-entity certificates.
Industry organizations Certificate Authority Security Council (CASC) – In February 2013, the CASC was founded as an industry advocacy organization dedicated to addressing industry issues and educating the public on internet security. The founding members are the seven largest certificate authorities. Common Computing Security Standards Forum (CCSF) – In 2009, the CCSF was founded to promote industry standards that protect end users. Comodo Group CEO Melih Abdulhayoğlu is considered the founder of the CCSF. CA/Browser Forum – In 2005, a new consortium of certificate authorities and web browser vendors was formed to promote industry standards and baseline requirements for internet security. Comodo Group CEO Melih Abdulhayoğlu organized the first meeting and is considered the founder of the CA/Browser Forum. Baseline requirements The CA/Browser Forum publishes the Baseline Requirements, a list of policies and technical requirements for CAs to follow. These are a requirement for inclusion in the certificate stores of Firefox and Safari. CA compromise If the CA can be subverted, then the security of the entire system is lost, potentially subverting all the entities that trust the compromised CA. For example, suppose an attacker, Eve, manages to get a CA to issue to her a certificate that claims to represent Alice. That is, the certificate would publicly state that it represents Alice, and might include other information about Alice. Some of the information about Alice, such as her employer name, might be true, increasing the certificate's credibility. Eve, however, would have the all-important private key associated with the certificate. Eve could then use the certificate to send digitally signed email to Bob, tricking Bob into believing that the email was from Alice. Bob might even respond with encrypted email, believing that it could only be read by Alice, when Eve is actually able to decrypt it using the private key. A notable case of CA subversion like this occurred in 2001, when the certificate authority VeriSign issued two certificates to a person claiming to represent Microsoft. The certificates bore the name "Microsoft Corporation", so they could be used to spoof someone into believing that updates to Microsoft software came from Microsoft when they actually did not. The fraud was detected in early 2001. Microsoft and VeriSign took steps to limit the impact of the problem. In 2008, Comodo reseller Certstar sold a certificate for mozilla.com to Eddy Nigg, who had no authority to represent Mozilla. In 2011, fraudulent certificates were obtained from Comodo and DigiNotar, allegedly by Iranian hackers. There is evidence that the fraudulent DigiNotar certificates were used in a man-in-the-middle attack in Iran. In 2012, it became known that Trustwave had issued a subordinate root certificate that was used for transparent traffic management (man-in-the-middle), which effectively permitted an enterprise to sniff SSL internal network traffic using the subordinate certificate. Key storage An attacker who steals a certificate authority's private keys is able to forge certificates as if they were the CA, without needing ongoing access to the CA's systems. Key theft is therefore one of the main risks certificate authorities defend against. Publicly trusted CAs almost always store their keys on a hardware security module (HSM), which allows them to sign certificates with a key but generally prevents extraction of that key through both physical and software controls.
CAs typically take the further precaution of keeping the key for their long-term root certificates in an HSM that is kept offline, except when it is needed to sign shorter-lived intermediate certificates. The intermediate certificates, stored in an online HSM, can do the day-to-day work of signing end-entity certificates and keeping revocation information up to date. CAs sometimes use a key ceremony when generating signing keys, in order to ensure that the keys are not tampered with or copied. Implementation weakness of the trusted third party scheme The critical weakness in the way that the current X.509 scheme is implemented is that any CA trusted by a particular party can then issue certificates for any domain they choose. Such certificates will be accepted as valid by the trusting party whether they are legitimate and authorized or not. This is a serious shortcoming given that the most commonly encountered technology employing X.509 and trusted third parties is the HTTPS protocol. As all major web browsers are distributed to their end users pre-configured with a list of trusted CAs that numbers in the dozens, this means that any one of these pre-approved trusted CAs can issue a valid certificate for any domain whatsoever. The industry response to this has been muted. Given that the contents of a browser's pre-configured trusted CA list are determined independently by the party that distributes or causes to be installed the browser application, there is really nothing that the CAs themselves can do. This issue is the driving impetus behind the development of the DNS-based Authentication of Named Entities (DANE) protocol. If adopted in conjunction with Domain Name System Security Extensions (DNSSEC), DANE will greatly reduce, if not eliminate, the role of trusted third parties in a domain's PKI. See also Validation Authority Contact page People for Internet Responsibility Web of trust Chain of trust Digital signature DigiNotar certificate authority breach Comodo certificate authority breach References External links How secure is HTTPS today? How often is it attacked?, Electronic Frontier Foundation (25 October 2011) Public-key cryptography Key management Public key infrastructure Transport Layer Security
46313541
https://en.wikipedia.org/wiki/Tim%20Devine
Tim Devine
Tim Devine is an American music executive and entrepreneur. The founder of Webcastr, Devine is best known for his work as an A&R executive. Early life and education Devine spent his childhood in Chicago, Kansas City, New York and New Jersey and moved to Los Angeles when he was 12. At 8, he saw the Beatles on the Ed Sullivan Show, and decided to pursue a career in the music business. In junior and senior high school, he wrote about music for his school papers and worked at Licorice Pizza, a retail music chain. He continued as a music journalist through college, and freelanced for Phonograph Record, Rolling Stone, and the LA Free Press, among others. After a year at UCLA and a year at California State University, Northridge, Devine left Los Angeles to attend the University of California, Berkeley. At Berkeley, he was involved with the school's concert committee and served as the music director at KALX, the university's radio station, and as a fan, he attended historic concerts which included the final shows by the Sex Pistols and The Band's The Last Waltz. As a sophomore, he was hired by A&M Records as a college promotion representative, a position he held until 1978, when he graduated from Berkeley with a BA in Mass Communications/Political Science. Career Just prior to his graduation, Devine—who by then had a diverse background in the music business—was hired by Warner Communications for a management training program, and began working at Warner Bros. Records in Burbank, California. As part of the program, he spent a month in each of 12 departments within the company, including the A&R, promotion, and marketing departments, and worked with music industry veterans Mo Ostin, Lenny Waronker, Russ Titelman, Jerry Wexler, Bob Krasnow, Ed Rosenblatt, and Russ Thyret. Following the management trainee program, Devine was hired as a product manager for the label. In that capacity he was responsible for releases by Prince, Devo, Gang of Four, Van Morrison, Bob Marley, Pat Metheny, Laurie Anderson, Steve Winwood, and Little Feat, among others, and served as the product manager for U2's first two American releases, Boy and October. After six years at the company, Devine left Warner Bros. to briefly become a manager. His artist roster included Dream Syndicate, Gang of Four, Thin Lizzy and Ultravox. In 1984, Devine was appointed head of artist development for MCA Records, where he was involved with records by Tom Petty and the Heartbreakers, The Who, Oingo Boingo, Charlie Sexton, and the Pogues, among others. In 1987, Devine moved into an A&R position at Capitol Records. In a 2010 interview, he said: "I didn't want to jump into A&R until I really knew the full spectrum of marketing. My fundamental belief is that you can sign a great band and make a great record, but if nobody hears it, what's the point? I was ready to make the move because I wanted to get closer to the source of the artistic nucleus." Devine had significant success during his eight-year tenure at Capitol, where he signed Mazzy Star, Concrete Blonde, John Hiatt, and Lloyd Cole, among others, and worked in an A&R capacity on records by Paul McCartney, Beastie Boys, The Beach Boys, and Heart. He was widely recognized for his work with Bonnie Raitt, whom he signed in 1988. Made over the course of a year, her Capitol debut, Nick of Time, sold in excess of five million albums and won three Grammy Awards, including Album of the Year. Devine was also acknowledged for signing Blind Melon and serving as A&R on their self-titled debut.
Released in September 1992, the album yielded the international hit single "No Rain", and Blind Melon sold more than four million albums. Additionally, Devine produced numerous film and television soundtrack recordings for the label, including the soundtracks for Rain Man, Clueless, Bull Durham, Moonstruck, and Imagine: John Lennon. Devine was named senior vice president of A&R for Columbia Records in 1996; in 2002, as his role expanded, he was appointed General Manager of the label's West Coast division. He served in an A&R capacity on records by Aerosmith, The Offspring, Leonard Cohen, Ric Ocasek, Soul Asylum and Pete Yorn, and signed artists including OneRepublic (via Velvet Hammer), Switchfoot, Sinead O'Connor, Brandi Carlile, Cake, and the Afghan Whigs. Additionally, Devine signed Train, who sold more than two million albums and won two Grammy Awards with their first Columbia release, Drops of Jupiter. Devine orchestrated the label deals for Aware Records (John Mayer, Five for Fighting) and Rick Rubin's American Recordings, whose roster then included System of a Down, Johnny Cash, and the Black Crowes. He co-produced Columbia film soundtracks including the soundtracks for Orange County and I Know What You Did Last Summer, and executive produced the soundtrack for the PBS series Sessions at West 54th: Recorded Live in New York, which featured Sheryl Crow, Lou Reed, David Byrne and others. Devine also signed Katy Perry, whom he met through Glen Ballard in 2003. Ballard's label, Java, which would have released Perry's album, had been dropped by Island Def Jam, and Columbia bought the masters for Perry's unreleased Java record. The label planned to release the record with the addition of two songs, but after Devine brought in co-writers including Desmond Child, Greg Wells, Butch Walker, Scott Cutler/Anne Previn, The Matrix, Kara DioGuardi, Dr. Luke and Max Martin, the entire Java record was scrapped. Subsequently, Chairman Don Ienner and COO Michelle Anthony resigned from Sony Music, and Perry was among several artists who were dropped. Perry then signed with Capitol. Six of the songs written and recorded during the Columbia sessions ended up on Perry's One of the Boys. The only Columbia track released from Perry was "Simple", which Devine had pitched for the soundtrack for the film The Sisterhood of the Traveling Pants. In 2006, as Devine became increasingly interested in technology and digital media, he founded Webcastr.com, a 24-hour online digital multi-channel network that featured daily content from more than 200 channel providers including CBS News, the BBC, MTV News, Fox Sports, the Wall Street Journal, CBC, AFP (France), the New York Times, Newsweek, Warner Music Group, Sony/BMG Music, and others. Webcastr's viewership exceeded one million viewers per month in more than 175 countries. In 2014, Devine joined Scayl, an end-to-end encrypted email service. He is the senior vice president of business development and serves on the company's board of directors. Devine has been a featured speaker at Digital Hollywood, SXSW, and the New Music Seminar, and is a founding member of Organizing for America. He is featured in the 2015 documentary The Damned: Don't You Wish That We Were Dead. Selected discography References External links Scayl American music industry executives Living people Year of birth missing (living people)
298404
https://en.wikipedia.org/wiki/Michael%20O.%20Rabin
Michael O. Rabin
Michael Oser Rabin (born September 1, 1931) is an Israeli mathematician and computer scientist and a recipient of the Turing Award. Biography Early life and education Rabin was born in 1931 in Breslau, Germany (today Wrocław, in Poland), the son of a rabbi. In 1935, he emigrated with his family to Mandate Palestine. As a young boy, he was very interested in mathematics and his father sent him to the best high school in Haifa, where he studied under mathematician Elisha Netanyahu, who was then a high school teacher. Rabin graduated from the Hebrew Reali School in Haifa in 1948, and was drafted into the army during the 1948 Arab–Israeli War. The mathematician Abraham Fraenkel, who was a professor of mathematics in Jerusalem, intervened with the army command, and Rabin was discharged to study at the university in 1949. He received an M.Sc. from the Hebrew University of Jerusalem in 1953 and a Ph.D. from Princeton University in 1956. Career Rabin became Associate Professor of Mathematics at the University of California, Berkeley (1961–62) and MIT (1962–63). Before moving to Harvard University as Gordon McKay Professor of Computer Science in 1981, he was a professor at the Hebrew University. In the late 1950s, he was invited for a summer to do research for IBM at the Lamb Estate in Westchester County, New York with other promising mathematicians and scientists. It was there that he and Dana Scott wrote the paper "Finite Automata and Their Decision Problems". Soon, using nondeterministic automata, they were able to re-prove Kleene's result that finite state machines exactly accept regular languages. As to the origins of what was to become computational complexity theory, the next summer Rabin returned to the Lamb Estate. John McCarthy posed a puzzle to him about spies, guards, and passwords, which Rabin studied, and soon after he wrote an article, "Degree of Difficulty of Computing a Function and Hierarchy of Recursive Sets." Nondeterministic machines have become a key concept in computational complexity theory, particularly with the description of the complexity classes P and NP. Rabin then returned to Jerusalem, researching logic and working on the foundations of what would later be known as computer science. He was an associate professor and the head of the Institute of Mathematics at the Hebrew University at 29 years old, and a full professor by 33. Rabin recalls, "There was absolutely no appreciation of the work on the issues of computing. Mathematicians did not recognize the emerging new field". In 1960, he was invited by Edward F. Moore to work at Bell Labs, where Rabin introduced probabilistic automata that employ coin tosses in order to decide which state transitions to take. He showed examples of regular languages that required a very large number of states, but for which probabilistic automata give an exponential reduction in the number of states. In 1969, Rabin proved that the second-order theory of n successors is decidable. A key component of the proof implicitly showed determinacy of parity games, which lie in the third level of the Borel hierarchy. In 1975, Rabin finished his tenure as Rector of the Hebrew University of Jerusalem and went to the Massachusetts Institute of Technology in the USA as a visiting professor. Gary Miller was also there and had his polynomial-time test for primality based on the extended Riemann hypothesis.
While there, Rabin invented the Miller–Rabin primality test, a randomized algorithm that can determine very quickly (but with a tiny probability of error) whether a number is prime. Rabin's method was based on previous work of Gary Miller that solved the problem deterministically with the assumption that the generalized Riemann hypothesis is true, but Rabin's version of the test made no such assumption.
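A minimal sketch of the test in Python; the witness loop is the standard formulation, in which each random round wrongly passes a composite with probability at most 1/4, so the error bound shrinks geometrically with the number of rounds:

    import random

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        """Miller-Rabin: False means n is composite; True means n is prime
        with error probability at most 4**-rounds."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):  # handle small numbers directly
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)  # random base
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a witnesses that n is composite
        return True

    print(is_probable_prime(2**127 - 1))  # True: a Mersenne prime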
Fast primality testing is key in the successful implementation of most public-key cryptography, and in 2003 Miller, Rabin, Robert M. Solovay, and Volker Strassen were given the Paris Kanellakis Award for their work on primality testing. In 1976, he was invited by Joseph Traub to meet at Carnegie Mellon University and presented the primality test. After he gave that lecture, Traub said, "No, no, this is revolutionary, and it's going to become very important." In 1979, Rabin invented the Rabin cryptosystem, the first asymmetric cryptosystem whose security was proved equivalent to the intractability of integer factorization. In 1981, Rabin reinvented a weak variant of the technique of oblivious transfer invented by Wiesner under the name of multiplexing, allowing a sender to transmit a message to a receiver where the receiver has some probability between 0 and 1 of learning the message, with the sender being unaware whether the receiver was able to do so. In 1987, Rabin, together with Richard Karp, created one of the best-known efficient string search algorithms, the Rabin–Karp string search algorithm, known for its rolling hash. Rabin's more recent research has concentrated on computer security. He is currently the Thomas J. Watson Sr. Professor of Computer Science at Harvard University and Professor of Computer Science at Hebrew University. During the spring semester of 2007, he was a visiting professor at Columbia University teaching Introduction to Cryptography. Awards and honours Rabin is a foreign member of the United States National Academy of Sciences, a member of the French Academy of Sciences and a foreign member of the Royal Society. In 1976, the Turing Award was awarded jointly to Rabin and Dana Scott for a paper written in 1959, the citation for which states that the award was granted: For their joint paper "Finite Automata and Their Decision Problems," which introduced the idea of nondeterministic machines, which has proved to be an enormously valuable concept. Their (Scott & Rabin) classic paper has been a continuous source of inspiration for subsequent work in this field. In 1995, Rabin was awarded the Israel Prize, in computer sciences. In 2010, Rabin was awarded the Tel Aviv University Dan David Prize ("Future" category), jointly with Leonard Kleinrock and Gordon E. Moore, for Computers and Telecommunications. Rabin was awarded an Honorary Doctor of Science from Harvard University in 2017. See also Oblivious transfer Rabin automaton Rabin fingerprint Hyper-encryption List of Israel Prize recipients References External links Short Description in an Information Science Hall of Fame at University of Pittsburgh Oblivious transfer Quotes from some of Professor Rabin's classes Website for one of Rabin's courses Description of Rabin's research by Richard J. Lipton 1931 births Foreign associates of the National Academy of Sciences Israeli mathematicians Israeli computer scientists Hebrew Reali School alumni Einstein Institute of Mathematics alumni Hebrew University of Jerusalem faculty Columbia University faculty Turing Award laureates Dijkstra Prize laureates Israel Prize in computer sciences recipients Members of the Israel Academy of Sciences and Humanities Modern cryptographers Logicians Theoretical computer scientists Living people Foreign Members of the Royal Society Tarski lecturers International Association for Cryptologic Research fellows IBM Research computer scientists IBM employees Harvard University faculty ETH Zurich faculty Gödel Lecturers
299670
https://en.wikipedia.org/wiki/Jude%20Milhon
Jude Milhon
Judith [Jude] Milhon (March 12, 1939 – July 19, 2003), best known by her pseudonym St. Jude, was a self-taught programmer, civil rights advocate, writer, editor, advocate for women in computing, hacker, and author in the San Francisco Bay Area. Milhon coined the term cypherpunk and was a founding member of the cypherpunks. On July 19, 2003, Milhon died of cancer.

Life
Judith Milhon was born March 12, 1939, in Washington, D.C., to a military family of the Marine Corps, and was raised in Indiana. She married Robert Behling in 1961 and had one daughter, Tresca Behling, with him. Attracted to the growing countercultural movement, Milhon moved near Antioch College in Yellow Springs, Ohio, and established a communal household with her husband, young daughter, and friends. In 1968 she moved to San Francisco with her friend and partner Efrem Lipkin, and divorced her husband in 1970. At the time of her death in 2003 from cancer, she was survived by at least one child, Tresca Behling, and one grandchild, Emilio Zuniga, as well as her partner of over 40 years, Efrem Lipkin.

Professional projects
Milhon taught herself programming in 1967 and landed her first job at the Horn and Hardart vending machine company of New York before she moved to California to join the counterculture movement. She worked at the Berkeley Computer Company (an outgrowth of Project Genie), where she helped implement the communications controller of the BCC timesharing system. In 1971 she partnered with other local activists and technologists at Project One, where she was particularly drawn to the Resource One project, whose goal was to create the Bay Area's first public computerized bulletin board system. In 1973, a subset of the Resource One group, including Milhon, broke away to create Community Memory in Berkeley. Later, she also worked on BSD, a Unix-based operating system developed by the Computer Systems Research Group at UC Berkeley. She was a member of Computer Professionals for Social Responsibility, and the author of several books. She was a senior editor at the magazine Mondo 2000 and a frequent contributor to Boing Boing.

Bibliography
The Joy of Hacker Sex (proposed)
How to Mutate & Take Over the World: an Exploded Post-Novel (1997) (with R. U. Sirius) Random House
Cyberpunk Handbook: The Real Cyberpunk Fakebook (1995) (with R. U. Sirius and Bart Nagel) Random House
Hacking the Wetware: The NerdGirl’s Pillow Book (1994) (internet release of ebook)

Activism and Vision
St. Jude had her hand in many different causes. She was active in the 1960s Civil Rights Movement, helping to organize the march from Selma to Montgomery, Alabama. Dedicated to protest, Milhon was jailed for trespassing in Montgomery, Alabama, as well as for civil disobedience in Jackson, Mississippi. Activism within the cyber community was important to Milhon as well. She frequently urged women toward the internet and hacking while encouraging them to have "tough skin" in the face of harassment. At a time when the internet was dominated by men, she was an ardent advocate of the joys of hacking, cybersex and a woman's right to technology. She often said, "Girls need modems. Women may not be great at physical altercations, but we sure excel at rapid-fire keyboarding." Milhon once noted that there was a conspicuous lack of female hardware hackers, and while working at Community Memory she worked against this exclusion and worked to get new, inexperienced users to experiment with Community Memory.
She did so by writing open-ended questions in the system about available resources in the region (such as “Where can I get a decent bagel in the Bay Area (Berkeley particularly)?”), which would get curious users to try out the system. She also wrote "The Cyberpunk Handbook" and coined the term "cypherpunk" for computer users dedicated to online privacy through encryption.

References

External links
Milhon, Jude. (AOL homepage). Retrieved August 24, 2013. Archived August 14, 2007.
The WELL's Virtual Wake
Delio, Michelle. "Hackers Lose a Patron Saint", Wired News. July 22, 2003. Retrieved March 8, 2018.
Welton, Corey. "St. Jude Gets Verbose", Verbosity Magazine. August 1996. Retrieved March 4, 2006.
299686
https://en.wikipedia.org/wiki/Point-to-Point%20Protocol%20over%20Ethernet
Point-to-Point Protocol over Ethernet
The Point-to-Point Protocol over Ethernet (PPPoE) is a network protocol for encapsulating Point-to-Point Protocol (PPP) frames inside Ethernet frames. It appeared in 1999, in the context of the DSL boom, as a solution for tunneling packets over the DSL connection to the ISP's IP network, and from there to the rest of the Internet. A 2005 networking book noted that "Most DSL providers use PPPoE, which provides authentication, encryption, and compression." Typical use of PPPoE involves leveraging the PPP facilities for authenticating the user with a username and password, predominantly via the PAP protocol and less often via CHAP. On the customer premises, PPPoE may be implemented either in a unified residential gateway device that handles both DSL modem and IP routing functions or, in the case of a simple DSL modem (without routing support), behind it on a separate Ethernet-only router or even directly on a user's computer. (Support for PPPoE is present in most operating systems, from Windows XP and Linux to Mac OS X.) More recently, some GPON-based (instead of DSL-based) residential gateways also use PPPoE, although the status of PPPoE in the GPON standards is marginal.

PPPoE was developed by UUNET, Redback Networks (now Ericsson) and RouterWare (now Wind River Systems) and is available as an informational RFC 2516.

In the world of DSL, PPPoE was commonly understood to be running on top of ATM (or DSL) as the underlying transport, although no such limitation exists in the PPPoE protocol itself. Other usage scenarios are sometimes distinguished by appending another underlying transport as a suffix: for example, PPPoEoE when the transport is Ethernet itself, as in the case of Metro Ethernet networks. (In this notation, the original use of PPPoE would be labeled PPPoEoA, although it should not be confused with PPPoA, which is a different encapsulation protocol.)

PPPoE has been described in some books as a "layer 2.5" protocol, in some rudimentary sense similar to MPLS because it can be used to distinguish different IP flows sharing an Ethernet infrastructure, although the lack of PPPoE switches making routing decisions based on PPPoE headers limits applicability in that respect.

Original rationale
In late 1998, the DSL service model had yet to reach the large scale that would bring prices down to household levels. ADSL technology had been proposed a decade earlier. Potential equipment vendors and carriers alike recognized that broadband such as cable modem or DSL would eventually replace dialup service, but the hardware (both customer premises and LEC) faced a significant low-quantity cost barrier. Initial estimates for low-quantity deployment of DSL showed costs in the $300–$500 range for a DSL modem and a $300/month access fee from the telco, which was well beyond what a home user would pay. Thus the initial focus was on small and home business customers for whom a ~1.5 megabit T1 line (at the time $800–$1500 per month) was not economical, but who needed more than dialup or ISDN could deliver. If enough of these customers paved the way, quantities would drive the prices down to where the home-use dialup user might be interested.
Different usage profile
The problem was that small business customers had a different usage profile than a home-use dialup user, including:
Connecting an entire LAN to the Internet;
Providing services on a local LAN accessible from the far side of the connection;
Simultaneous access to multiple external data sources, such as a company VPN and a general purpose ISP;
Continuous usage throughout the workday, or even around the clock.
These requirements didn't lend themselves to the connection establishment lag of a dial-up process nor its one-computer-to-one-ISP model, nor even the many-to-one that NAT plus dial-up provided. A new model was required.

PPPoE is used mainly either:
with PPPoE-speaking Internet DSL services, where a PPPoE-speaking modem-router (residential gateway) connects to the DSL service. Here both the ISP and the modem-router need to speak PPPoE. (Note that in this case, the PPPoE-over-DSL side of things is occasionally referred to as PPPoEoA, for ‘PPPoE over ATM’.)
or when a PPPoE-speaking DSL modem is connected to a PPPoE-speaking Ethernet-only router using an Ethernet cable.

Time to market: simpler is better
One problem with creating a completely new protocol to fill these needs was time. The equipment was available immediately, as was the service, and a whole new protocol stack (Microsoft at the time was advocating fiber-based ATM-cells-to-the-desktop, and L2TP was brewing as well, but was nowhere near completion) would take so long to implement that the window of opportunity might slip by. Several decisions were made to simplify implementation and standardization in an effort to deliver a complete solution quickly.

Reuse existing software stacks
PPPoE hoped to merge the widespread Ethernet infrastructure with the ubiquitous PPP, allowing vendors to reuse their existing software and deliver products in the very near term. Essentially all operating systems at the time had a PPP stack, and the design of PPPoE allowed for a simple shim at the line-encoding stage to convert from PPP to PPPoE.

Simplify hardware requirements
Competing WAN technologies (T1, ISDN) required a router on the customer premises. PPPoE used a different Ethernet frame type, which allowed the DSL hardware to function as a simple bridge, passing some frames to the WAN and ignoring the others. Implementation of such a bridge is multiple orders of magnitude simpler than that of a router.

Informational RFC
RFC 2516 was initially released as an informational (rather than standards-track) RFC for the same reason: the adoption period for a standards-track RFC was prohibitively long.

Success
PPPoE was initially designed to provide a small LAN with individual independent connections to the Internet at large, but also such that the protocol itself would be lightweight enough that it wouldn't impinge on the hoped-for home usage market when it finally arrived. While success on the second matter may be debated (some complain that 8 bytes per packet is too much), PPPoE clearly succeeded in bringing sufficient volume to drive the price for service down to what a home user would pay.

Stages
PPPoE has two distinct stages:

PPPoE discovery
Since traditional PPP connections are established between two end points over a serial link or over an ATM virtual circuit that has already been established during dial-up, all PPP frames sent on the wire are sure to reach the other end. But Ethernet networks are multi-access: each node in the network can access every other node.
An Ethernet frame contains the hardware address of the destination node (MAC address), which helps the frame reach the intended destination. Hence, before exchanging PPP control packets to establish the connection over Ethernet, the MAC addresses of the two end points must be known to each other so that they can be encoded in these control packets. The PPPoE discovery stage does exactly this. It also establishes a Session ID that can be used for further exchange of packets.

PPP session
Once the MAC address of the peer is known and a session has been established, the session stage will start.

PPPoE discovery (PPPoED)
Although traditional PPP is a peer-to-peer protocol, PPPoE is inherently a client-server relationship since multiple hosts can connect to a service provider over a single physical connection. The discovery process consists of four steps between the host computer, which acts as the client, and the access concentrator at the Internet service provider's end, which acts as the server. They are outlined below. The fifth and last step is the way to close an existing session.

Client to server: Initiation (PADI)
PADI stands for PPPoE Active Discovery Initiation. If a user wants to "dial up" to the Internet using DSL, then their computer first must find the DSL access concentrator (DSL-AC) at the user's Internet service provider's point of presence (POP). Communication over Ethernet is only possible via MAC addresses. As the computer does not know the MAC address of the DSL-AC, it sends out a PADI packet via an Ethernet broadcast (MAC: ff:ff:ff:ff:ff:ff). This PADI packet contains the MAC address of the computer sending it.

Example of a PADI packet:

Frame 1 (44 bytes on wire, 44 bytes captured)
Ethernet II, Src: 00:50:da:42:d7:df, Dst: ff:ff:ff:ff:ff:ff
PPP-over-Ethernet Discovery
  Version: 1
  Type: 1
  Code: Active Discovery Initiation (PADI)
  Session ID: 0000
  Payload Length: 24
PPPoE Tags
  Tag: Service-Name
  Tag: Host-Uniq
    Binary Data: (16 bytes)

Src. (= source) holds the MAC address of the computer sending the PADI. Dst. (= destination) is the Ethernet broadcast address. The PADI packet can be received by more than one DSL-AC. Only DSL-AC equipment that can serve the "Service-Name" tag should reply.
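As a concrete illustration of the framing just shown, the following Python sketch assembles the same 44-byte PADI broadcast frame from its fields. The EtherType 0x8863 for the discovery stage, the PADI code 0x09, and the Service-Name (0x0101) and Host-Uniq (0x0103) tag types come from RFC 2516; the source MAC address and the 16-byte Host-Uniq value are placeholders matching the capture above.

import struct

ETH_P_PPPOE_DISC = 0x8863              # EtherType for the PPPoE discovery stage
PADI = 0x09                            # PPPoE Active Discovery Initiation code

def pppoe_tag(tag_type: int, value: bytes = b"") -> bytes:
    """One PPPoE TLV tag: 2-byte type, 2-byte length, then the value."""
    return struct.pack("!HH", tag_type, len(value)) + value

def build_padi(src_mac: bytes, host_uniq: bytes = b"") -> bytes:
    """Ethernet broadcast frame carrying a PADI with session ID 0."""
    tags = pppoe_tag(0x0101)                       # empty Service-Name: any service
    if host_uniq:
        tags += pppoe_tag(0x0103, host_uniq)       # Host-Uniq, echoed back by the AC
    header = struct.pack("!BBHH", 0x11, PADI, 0x0000, len(tags))  # ver=1, type=1
    ethernet = b"\xff" * 6 + src_mac + struct.pack("!H", ETH_P_PPPOE_DISC)
    return ethernet + header + tags

frame = build_padi(bytes.fromhex("0050da42d7df"), host_uniq=b"\x00" * 16)
assert len(frame) == 44                # as in the capture: 14-byte Ethernet header + 30
assert frame[18:20] == b"\x00\x18"     # payload length 24, as in the capture above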
Server to client: Offer (PADO)
PADO stands for PPPoE Active Discovery Offer. Once the user's computer has sent the PADI packet, the DSL-AC replies with a PADO packet, using the MAC address supplied in the PADI. The PADO packet contains the MAC address of the DSL-AC, its name (e.g. LEIX11-erx for the T-Com DSL-AC in Leipzig) and the name of the service. If more than one POP's DSL-AC replies with a PADO packet, the user's computer selects the DSL-AC for a particular POP using the supplied name or service.

Here is an example of a PADO packet:

Frame 2 (60 bytes on wire, 60 bytes captured)
Ethernet II, Src: 00:0e:40:7b:f3:8a, Dst: 00:50:da:42:d7:df
PPP-over-Ethernet Discovery
  Version: 1
  Type: 1
  Code: Active Discovery Offer (PADO)
  Session ID: 0000
  Payload Length: 36
PPPoE Tags
  Tag: AC-Name
    String Data: Ipzbr001
  Tag: Host-Uniq
    Binary Data: (16 bytes)

AC-Name -> String Data holds the AC name, in this case “Ipzbr001” (the Arcor DSL-AC in Leipzig). Src. holds the MAC address of the DSL-AC. The MAC address of the DSL-AC also reveals the manufacturer of the DSL-AC (in this case Nortel Networks).

Client to server: request (PADR)
PADR stands for PPPoE Active Discovery Request. A PADR packet is sent by the user's computer to the DSL-AC following receipt of an acceptable PADO packet from the DSL-AC. It confirms acceptance of the offer of a PPPoE connection made by the DSL-AC issuing the PADO packet.

Server to client: session-confirmation (PADS)
PADS stands for PPPoE Active Discovery Session-confirmation. The PADR packet above is confirmed by the DSL-AC with a PADS packet, and a Session ID is given out with it. The connection with the DSL-AC for that POP has now been fully established.

Either end to other end: termination (PADT)
PADT stands for PPPoE Active Discovery Termination. This packet terminates the connection to the POP. It may be sent either from the user's computer or from the DSL-AC.

Protocol overhead
PPPoE is used to connect a PC or a router to a modem via an Ethernet link, and it can also be used in Internet access over DSL on a telephone line in the PPPoE over ATM (PPPoEoA) over ADSL protocol stack. PPPoE over ATM has the highest overhead of the popular DSL delivery methods when compared with, for example, PPPoA (RFC 2364).

Use with DSL – PPPoE over ATM (PPPoEoA)
The amount of overhead added by PPPoEoA on a DSL link depends on the packet size, because of (i) the absorbing effect of ATM cell-padding (discussed below), which completely cancels out the additional overhead of PPPoEoA in some cases, (ii) PPPoEoA + AAL5 overhead, which can cause an entire additional 53-byte ATM cell to be required, and (iii) in the case of IP packets, PPPoE overhead added to packets that are near maximum length (‘MRU’), which may cause IP fragmentation; fragmentation in turn involves the first two considerations for both of the resulting IP fragments.

However, ignoring ATM and IP fragmentation for the moment, the protocol header overheads within the ATM payload due to choosing PPP + PPPoEoA can be as high as 44 bytes = 2 bytes (for PPP) + 6 (for PPPoE) + 18 (Ethernet MAC, variable) + 10 (RFC 2684 LLC, variable) + 8 (AAL5 CPCS). This overhead is that obtained when using the LLC header option described in RFC 2684 for PPPoEoA. Compare this with a vastly more header-efficient protocol, PPP + PPPoA (RFC 2364 VC-MUX) over ATM+DSL, which has a mere 10-byte overhead within the ATM payload: just 2 bytes for PPP + zero for RFC 2364 + 8 for the AAL5 CPCS trailer.

This figure of 44 bytes of AAL5 payload overhead can be reduced in two ways: (i) by choosing the RFC 2684 option of discarding the 4-byte Ethernet MAC FCS, which reduces the figure of 18 bytes above to 14, and (ii) by using the RFC 2684 VC-MUX option, whose overhead contribution is a mere 2 bytes compared with the 10-byte overhead of the LLC alternative. This overhead reduction can be a valuable efficiency improvement. Using VC-MUX instead of LLC, the ATM payload overhead is either 32 bytes (without Ethernet FCS) or 36 bytes (with FCS).

ATM AAL5 requires that an 8-byte-long ‘CPCS’ trailer always be present at the very end of the final cell (‘right justified’) of the run of ATM cells that make up the AAL5 payload packet. In the LLC case, the total ATM payload overhead is 2 + 6 + 18 + 10 + 8 = 44 bytes if the Ethernet MAC FCS is present, or 2 + 6 + 14 + 10 + 8 = 40 bytes with no FCS. In the more efficient VC-MUX case the ATM payload overhead is 2 + 6 + 18 + 2 + 8 = 36 bytes (with FCS), or 2 + 6 + 14 + 2 + 8 = 32 bytes (no FCS).

However, the true overhead in terms of the total amount of ATM payload data sent is not simply a fixed additional value; it can only be either zero or 48 bytes (leaving aside scenario (iii) mentioned earlier, IP fragmentation).
This is because ATM cells are of fixed length, with a payload capacity of 48 bytes, and adding a greater extra amount of AAL5 payload due to additional headers may require one more whole ATM cell to be sent containing the excess. The last one or two ATM cells contain padding bytes as required to ensure that each cell's payload is 48 bytes long.

An example: in the case of a 1500-byte IP packet sent over AAL5/ATM with PPPoEoA and RFC 2684 LLC, neglecting final cell padding for the moment, one starts with 1500 + 2 + 6 + 18 + 10 + 8 (AAL5 CPCS trailer) = 1544 bytes if the Ethernet FCS is present, or else 1500 + 2 + 6 + 14 + 10 + 8 = 1540 bytes with no FCS. To send 1544 bytes over ATM requires 33 48-byte ATM cells, since the available payload capacity of 32 cells × 48 bytes per cell = 1536 bytes is not quite enough. Compare this to the case of PPP + PPPoA, which at 1500 + 2 (PPP) + 0 (PPPoA: RFC 2364 VC-MUX) + 8 (CPCS trailer) = 1510 bytes fits in 32 cells. So the real cost of choosing PPPoEoA plus RFC 2684 LLC for 1500-byte IP packets is one additional ATM cell per IP packet, a ratio of 33:32. So for 1500-byte packets, PPPoEoA with LLC is ~3.125% slower than PPPoA or optimal choices of PPPoEoA header options.

For some packet lengths the true additional effective DSL overhead due to choosing PPPoEoA compared with PPPoA will be zero, if the extra header overhead is not enough to require an additional ATM cell at that particular packet length. For example, a 1492-byte packet sent with PPP + PPPoEoA using RFC 2684 LLC plus FCS gives a total ATM payload of 1492 + 44 = 1536 bytes = 32 cells exactly, and the overhead in this special case is no greater than with the header-efficient PPPoA protocol, which would require 1492 + 2 + 0 + 8 = 1502 bytes of ATM payload = 32 cells also. The case where the packet length is 1492 represents the optimum efficiency for PPPoEoA with RFC 2684 LLC in ratio terms, unless even longer packets are allowed.

Using PPPoEoA with the RFC 2684 VC-MUX header option is always more efficient than the LLC option, since the ATM overhead, as mentioned earlier, is only 32 or 36 bytes (without or with the Ethernet FCS option in PPPoEoA, respectively), so that a 1500-byte packet including all overheads of PPP + PPPoEoA using VC-MUX equates to a total of 1500 + 36 = 1536 bytes of ATM payload if the FCS is present = 32 ATM cells exactly, thus saving an entire ATM cell.

With short packets, the longer the header overheads, the greater the likelihood of generating an additional ATM cell. A worst case might be sending three ATM cells instead of two because of a 44-byte header overhead compared with a 10-byte header overhead, so 50% more time taken to transmit the data. For example, a TCP ACK packet over IPv6 is 60 bytes long, and with an overhead of 40 or 44 bytes for PPPoEoA + LLC this requires three 48-byte ATM cells' payloads. As a comparison, PPPoA with its overhead of 10 bytes, so 70 bytes in total, fits into two cells. So the extra cost of choosing PPPoE/LLC over PPPoA here is 50% more data sent. PPPoEoA + VC-MUX would be fine though: with a 32- or 36-byte overhead, the IP packet still fits in two cells.

In all cases the most efficient option for ATM-based ADSL Internet access is to choose PPPoA (RFC 2364) VC-MUX. However, if PPPoEoA is required, then the best choice is always to use VC-MUX (as opposed to LLC) with no Ethernet FCS, giving an ATM payload overhead of 32 bytes = 2 bytes (for PPP) + 6 (for PPPoE) + 14 (Ethernet MAC, no FCS) + 2 (RFC 2684 VC-MUX) + 8 (AAL5 CPCS trailer).
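The cell arithmetic worked through above is easy to mechanize. The following Python sketch (names are illustrative) computes how many 48-byte ATM cells a given IP packet needs under each combination of header options, reproducing the 33-versus-32-cell and 3-versus-2-cell comparisons from the text:

import math

# Per-packet overhead inside the AAL5 payload, in bytes, as derived above:
# PPP + PPPoE + Ethernet MAC (with or without FCS) + RFC 2684 header + AAL5 trailer.
OVERHEAD = {
    "PPPoEoA LLC, FCS":    2 + 6 + 18 + 10 + 8,   # 44
    "PPPoEoA LLC, no FCS": 2 + 6 + 14 + 10 + 8,   # 40
    "PPPoEoA VC-MUX, FCS": 2 + 6 + 18 + 2 + 8,    # 36
    "PPPoEoA VC-MUX":      2 + 6 + 14 + 2 + 8,    # 32
    "PPPoA VC-MUX":        2 + 0 + 8,             # 10
}

def atm_cells(packet_len: int, overhead: int) -> int:
    """Cells required: AAL5 pads the final cell out to a full 48-byte payload."""
    return math.ceil((packet_len + overhead) / 48)

for name, extra in OVERHEAD.items():
    print(f"{name:20} 1500-byte IP: {atm_cells(1500, extra)} cells,"
          f" 60-byte IPv6 ACK: {atm_cells(60, extra)} cells")
# LLC variants: 33 and 3 cells; VC-MUX variants and PPPoA: 32 and 2 cells.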
Unfortunately some DSL services require the use of the wasteful LLC headers with PPPoE and do not allow the more efficient VC-MUX option. In that case, using a reduced packet length, such as enforcing a maximum MTU of 1492, regains efficiency with long packets even with LLC headers; as mentioned earlier, in that case no extra wasteful ATM cell is generated.

Overhead on Ethernet
On an Ethernet LAN, the overhead for PPP + PPPoE is a fixed 2 + 6 = 8 bytes, unless IP fragmentation is produced.

MTU/MRU
When a PPPoE-speaking DSL modem sends or receives Ethernet frames containing PPP + PPPoE payload across the Ethernet link to a router (or to a PPPoE-speaking single PC), PPP + PPPoE contributes an additional overhead of 8 bytes = 2 (PPP) + 6 (PPPoE) included within the payload of each Ethernet frame. This added overhead can mean that a reduced maximum length limit (the so-called ‘MTU’ or ‘MRU’) of 1500 − 8 = 1492 bytes is imposed on (for example) IP packets sent or received, as opposed to the usual 1500-byte Ethernet frame payload length limit that applies to standard Ethernet networks. Some devices support RFC 4638, which allows negotiation for the use of non-standard Ethernet frames with a 1508-byte Ethernet payload, sometimes called ‘baby jumbo frames’, so allowing a full 1500-byte PPPoE payload. This capability is advantageous for many users in cases where companies receiving IP packets have (incorrectly) chosen to block all ICMP responses from exiting their network, a bad practice that prevents path MTU discovery from working correctly and that can cause problems for users accessing such networks if they have an MTU of less than 1500 bytes.

PPPoE-to-PPPoA converting modem
The following diagram shows a scenario where a modem acts as a PPPoE-to-PPPoA protocol converter and the service provider offers a PPPoA service and does not understand PPPoE. There is no PPPoEoA in this protocol chain. This is an optimally efficient design for a separate modem connected to a router by Ethernet. In this alternative technology, PPPoE is merely a means of connecting DSL modems to an Ethernet-only router (again, or to a single host PC); it is not concerned with the mechanism employed by an ISP to offer broadband services. The Draytek Vigor 110, 120 and 130 modems work in this way. When transmitting packets bound for the Internet, the PPPoE-speaking Ethernet router sends Ethernet frames to the (also PPPoE-speaking) DSL modem. The modem extracts PPP frames from within the received PPPoE frames and sends the PPP frames onwards to the DSLAM by encapsulating them according to RFC 2364 (PPPoA), thus converting PPPoE into PPPoA.
{| border="0" cellspacing="3" style="float:center;padding-left:15px" |+ DSL Internet access architecture |- style="vertical-align:bottom; text-align:center; background:#fc9;" | colspan="2"| PC or Gateway | colspan="2"| DSL modem | colspan="2"| DSLAM | colspan="2"| Remote access server || (ISP) |- | style="vertical-align:bottom; text-align:center; background:#eef;"| (IP) | | | | | | | | style="vertical-align:bottom; text-align:center; background:#eef;"| (IP) |- | style="vertical-align:bottom; text-align:center; background:#eef;"| Ethernet | style="vertical-align:bottom; text-align:center; background:#eef;"| PPP | | | | | style="vertical-align:bottom; text-align:center; background:#eef;"| PPP | style="vertical-align:bottom; text-align:center; background:#eef;"| PPP | style="vertical-align:bottom; text-align:center; background:#eef;"| PPP |- | | style="vertical-align:bottom; text-align:center; background:#99f;"| PPPoE | style="vertical-align:bottom; text-align:center; background:#99f;"| PPPoE | style="vertical-align:bottom; text-align:center; background:#99f;"| PPPoA | | | style="vertical-align:bottom; text-align:center; background:#99f;"| PPPoA | style="vertical-align:bottom; text-align:center; background:#eef;"| L2TP | style="vertical-align:bottom; text-align:center; background:#eef;"| L2TP |- | | style="vertical-align:bottom; text-align:center; background:#eef;"| Ethernet | style="vertical-align:bottom; text-align:center; background:#eef;"| Ethernet | style="vertical-align:bottom; text-align:center; background:#eef;"| AAL5 | style="vertical-align:bottom; text-align:center; background:#eef;"| AAL5 | style="vertical-align:bottom; text-align:center; background:#eef;"| backbone | style="vertical-align:bottom; text-align:center; background:#eef;"| backbone | style="vertical-align:bottom; text-align:center; background:#eef;"| IP | style="vertical-align:bottom; text-align:center; background:#eef;"| IP |- | | | | style="vertical-align:bottom; text-align:center; background:#eef;"| ATM | style="vertical-align:bottom; text-align:center; background:#eef;"| ATM |- | | | | style="vertical-align:bottom; text-align:center; background:#eef;"| DSL | style="vertical-align:bottom; text-align:center; background:#eef;"| DSL |} On the diagram, the area shown as ‘backbone’ could also be ATM on older networks, however its architecture is service provider-dependent. On a more detailed, more service-provider specific diagram there would be additional table cells in this area. Quirks Since the point-to-point connection established has a MTU lower than that of standard Ethernet (typically 1492 vs Ethernet's 1500), it can sometimes cause problems when Path MTU Discovery is defeated by poorly configured firewalls. Although higher MTUs are becoming more common in providers' networks, usually the workaround is to use TCP MSS (Maximum Segment Size) "clamping" or "rewrite", whereby the access concentrator rewrites the MSS to ensure TCP peers send smaller datagrams. Although TCP MSS clamping solves the MTU issue for TCP, other protocols such as ICMP and UDP may still be affected. RFC 4638 allows PPPoE devices to negotiate an MTU of greater than 1492 if the underlying Ethernet layer is capable of jumbo frames. 
Some vendors (Cisco and Juniper, for example) use the term PPPoEoE (PPPoE over Ethernet) for PPPoE running directly over Ethernet or other IEEE 802 networks, or over Ethernet bridged over ATM, in order to distinguish it from PPPoEoA (PPPoE over ATM), which is PPPoE running over an ATM virtual circuit using RFC 2684 and SNAP encapsulation of PPPoE. (PPPoEoA is not the same as Point-to-Point Protocol over ATM (PPPoA), which doesn't use SNAP.) According to a Cisco document, "PPPoEoE is a variant of PPPoE where the Layer 2 transport protocol is now Ethernet or 802.1q VLAN instead of ATM. This encapsulation method is generally found in Metro Ethernet or Ethernet digital subscriber line access multiplexer (DSLAM) environments. The common deployment model is that this encapsulation method is typically found in multi-tenant buildings or hotels. By delivering Ethernet to the subscriber, the available bandwidth is much more abundant and the ease of further service delivery is increased."

It is possible to find DSL modems, such as the Draytek Vigor 120, where PPPoE is confined to the Ethernet link between a DSL modem and a partnering router, and the ISP does not speak PPPoE at all (but rather PPPoA).

Post-DSL uses and some alternatives in these contexts
A certain method of using PPPoE in conjunction with GPON (which involves creating a VLAN via OMCI) has been patented by ZTE. PPPoE over GPON is reportedly used by retail service providers such as Internode on Australia's National Broadband Network, Orange France, and the Philippines' Globe Telecom.

RFC 6934, "Applicability of Access Node Control Mechanism to PON based Broadband Networks", which argues for the use of the Access Node Control Protocol in PONs for, among other things, authenticating subscriber access and managing their IP addresses, and the first author of which is a Verizon employee, excludes PPPoE as an acceptable encapsulation for GPON: "The protocol encapsulation on BPON is based on multi-protocol encapsulation over ATM Adaptation Layer 5 (AAL5), defined in [RFC2684]. This covers PPP over Ethernet (PPPoE, defined in [RFC2516]) or IP over Ethernet (IPoE). The protocol encapsulation on GPON is always IPoE."

The 10G-PON (XG-PON) standard (G.987) provides for 802.1X mutual authentication of the ONU and OLT, besides the OMCI method carried forward from G.984. G.987 also adds support for authenticating other customer-premises equipment beyond the ONU (e.g. in an MDU), although this is limited to Ethernet ports, also handled via 802.1X. (The ONU is supposed to snoop EAP-encapsulated RADIUS messages in this scenario and determine whether the authentication was successful.) There is a modicum of support for PPPoE specified in the OMCI standards, but only in terms of the ONU being able to filter and add VLAN tags for traffic based on its encapsulation (and other parameters), which includes PPPoE among the protocols that the ONU must be able to discern. The Broadband Forum's TR-200 "Using EPON in the Context of TR-101" (2011), which also pertains to 10G-EPON, says "The OLT and the multiple-subscriber ONU MUST be able to perform the PPPoE Intermediate Agent function, as specified in Section 3.9.2/TR-101."
A book on Ethernet in the first mile notes that DHCP can be used instead of PPPoE to configure a host for an IP session, although it points out that DHCP is not a complete replacement for PPPoE if some encapsulation is also desired (although VLAN bridges can fulfill this function) and that, furthermore, DHCP does not provide (subscriber) authentication, suggesting that IEEE 802.1X is also needed for a "complete solution" without PPPoE. (This book assumes that PPPoE is leveraged for other features of PPP besides encapsulation, including IPCP for host configuration, and PAP or CHAP for authentication.)

There are security reasons to use PPPoE in a (non-DSL/ATM) shared-medium environment, such as power line communication networks, in order to create separate tunnels for each customer.

PPPoE is widely used on WAN lines, including FTTx. Many FTTx residential gateways provided by ISPs integrate routing functions.

See also
Multiprotocol Encapsulation over ATM
Point-to-Point Protocol daemon
Point-to-Point Tunneling Protocol
Point-to-Point Protocol over ATM (PPPoA)
Point-to-Point Protocol over X (PPPoX)

References

External links
- A Method for Transmitting PPP Over Ethernet (PPPoE)
- Layer 2 Tunneling Protocol (L2TP) Active Discovery Relay for PPP over Ethernet (PPPoE)
- Accommodating a Maximum Transit Unit/Maximum Receive Unit (MTU/MRU) Greater Than 1492 in the Point-to-Point Protocol over Ethernet (PPPoE)
- PPP Over Ethernet (PPPoE) Extensions for Credit Flow and Link Metrics
US Patent 6891825 - Method and system of providing multi-user access to a packet switched network
TR-043 - Protocols at the U Interface for Accessing Data Networks using ATM/DSL, Issue 1.0, August 2001
299690
https://en.wikipedia.org/wiki/Point-to-Point%20Protocol%20over%20ATM
Point-to-Point Protocol over ATM
In computer networking, the Point-to-Point Protocol over ATM (PPPoA) is a layer 2 data-link protocol typically used to connect domestic broadband modems to ISPs via phone lines. It is used mainly with DOCSIS and DSL carriers, by encapsulating PPP frames in ATM AAL5. Point-to-Point Protocol over Asynchronous Transfer Mode (PPPoA) is specified by the Internet Engineering Task Force (IETF) in RFC 2364. It offers standard PPP features such as authentication, encryption, and compression. It also supports the encapsulation types VC-MUX and LLC (see RFC 2364).

If it is used as the connection encapsulation method on an ATM-based network, it can reduce overhead significantly compared with PPPoEoA – by between 0 and ~3.125% for long packets, depending on the packet length and also on the choice of header options in PPPoEoA – see PPPoEoA protocol overheads. This is because it uses short headers, imposing minimal overhead: 2 bytes for PPP plus the 8-byte AAL5 trailer, with the RFC 2364 VC-MUX option adding nothing, for 10 bytes in total. It also avoids the issues that PPPoE suffers from, related to sometimes needing to use an IP MTU of 1492 bytes or less, lower than the standard 1500 bytes.

The use of PPPoA rather than PPPoE is not geographically significant; rather, it varies by the provider's preference.

Configuration
Configuration of PPPoA requires PPP configuration and ATM configuration. These data are generally stored in a cable modem or DSL modem, and may or may not be visible to, or configurable by, an end user.

PPP configuration generally includes user credentials (a user name and password) and is unique to each user.

ATM configuration includes:
Virtual Channel Link (VCL) – Virtual Path Identifier & Virtual Channel Identifier (VPI/VCI), such as 0/32 (analogous to a phone number)
Modulation (type): such as G.dmt
Multiplexing (method): such as VC-MUX or LLC

ATM configuration can either be performed manually, or it may be hard-coded (or pre-set) into the firmware of a DSL modem provided by the user's ISP; it cannot be automatically negotiated.

See also
PPPoE
PPPoX
L2TP
ATM
DSL

Notes

External links
A typical PPPoA architecture diagram (out of date and no longer maintained)
300602
https://en.wikipedia.org/wiki/Internet%20access
Internet access
Internet access is the ability of individuals and organizations to connect to the Internet using computer terminals, computers, and other devices, and to access services such as email and the World Wide Web. Internet access is sold by Internet service providers (ISPs) delivering connectivity at a wide range of data transfer rates via various networking technologies. Many organizations, including a growing number of municipal entities, also provide cost-free wireless access and landlines.

Availability of Internet access was once limited, but has grown rapidly. In 1995, only a small fraction of the world's population had access, with well over half of those living in the United States, and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology, and by 2014, 41 percent of the world's population had access, broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second.

History
The Internet developed from the ARPANET, which was funded by the US government to support projects within the government and at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience only came in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted.

In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users, although more slowly, due to the lower data rates available using dial-up.

An important factor in the rapid rise of Internet access speed has been advances in MOSFET (MOS transistor) technology. The MOSFET, originally invented by Mohamed Atalla and Dawon Kahng in 1959, is the building block of the Internet telecommunications networks. The laser, proposed by Charles H. Townes and Arthur Leonard Schawlow and first demonstrated in 1960, was adopted for MOS light wave systems around 1980, which led to exponential growth of Internet bandwidth. Continuous MOSFET scaling has since led to online bandwidth doubling every 18 months (Edholm's law, which is related to Moore's law), with the bandwidths of telecommunications networks rising from bits per second to terabits per second.

Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access" and so covers a wide range of technologies. The core of these broadband Internet technologies are complementary MOS (CMOS) digital circuits, the speed capabilities of which were extended with innovative design techniques.
Broadband connections are typically made using a computer's built-in Ethernet networking capabilities, or by using a NIC expansion card. Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and it does not interfere with voice use of phone lines. Broadband provides improved access to Internet services such as:
Faster World Wide Web browsing
Faster downloading of documents, photographs, videos, and other large files
Telephony, radio, television, and videoconferencing
Virtual private networks and remote system administration
Online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2005, broadband had grown and dial-up had declined so that the numbers of subscriptions were roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.

The broadband technologies in widest use are digital subscriber line (DSL), ADSL, and cable Internet access. Newer technologies include VDSL and optical fiber extended closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber-to-the-premises and fiber-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology. In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless, satellite and microwave Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not readily available. Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless. Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.

Availability
In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to LANs. Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee-based. Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
A Wi-Fi hotspot need not be limited to a confined location, since multiple hotspots combined can cover a whole campus or park, or even an entire city. Additionally, mobile broadband access allows smart phones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network.

Speed
The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection to 220 (V.42bis) or 320 (V.44) kbit/s. However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.

Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s. A 2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available.

The higher data rate dial-up modems and many broadband services are "asymmetric", supporting much higher data rates for download (toward the user) than for upload (toward the Internet). Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer. Actual end-to-end data rates can be lower due to a number of factors. In late June 2016, Internet connection speeds averaged about 6 Mbit/s globally. Physical link quality can vary with distance, and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used, not just on the first or last link providing Internet access to the end-user.

Network congestion
Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance.
The TCP protocol includes flow-control mechanisms that automatically throttle back the bandwidth being used during periods of network congestion. This is fair in the sense that all users that experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable.

When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality or even charges of censorship, when some types of traffic are severely or completely blocked.

Outages
An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to their small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours.

Technologies
When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks. A computer or other device accessing the Internet would either be connected directly to a modem that communicates with an Internet service provider (ISP) or the modem's Internet connection would be shared via a LAN that provides access in a limited area such as a home, school, computer laboratory, or office building.

Although a connection to a LAN may provide very high data rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past. Ethernet is the name of the IEEE 802.3 standard for physical LAN communication, and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards. Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless antennas called access points.

Many "modems" (cable modems, DSL gateways or optical network terminals (ONTs)) provide the additional functionality to host a LAN, so most Internet access today is through a LAN such as that created by a Wi-Fi router connected to a modem or a combo modem-router, often a very small LAN with just one or two devices attached.
And while LANs are an important form of Internet access, this raises the question of how and at what data rate the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections; in other words, they are how customers' modems (customer-premises equipment) are most often connected to Internet service providers (ISPs).

Dial-up technologies

Dial-up access
Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO), where it is switched to another phone line that connects to another modem at the remote end of the connection. Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet).

Multilink dial-up
Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel. It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking – and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking.

Hardwired broadband access
The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables, in contrast to the wireless broadband described later.

Integrated Services Digital Network
Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.

Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s.

Leased lines
Leased lines are dedicated lines used primarily by ISPs, businesses, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers.
Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.

T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1) and 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic or use all 24 channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 kbit/s and 1.5 Mbit/s. T-carrier lines require special termination equipment that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP. In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.048 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.368 Mbit/s).

Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical), which carries 155.52 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads, each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four, providing OC-12c (622.08 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.

The 1, 10, 40, and 100 gigabit Ethernet (GbE, 10 GbE, 40/100 GbE) IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances up to 100 m and over optical fiber at distances up to 40 km.

Cable Internet access
Cable Internet provides access using a cable modem on hybrid fiber-coaxial wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. In a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end". The cable company then connects to the Internet using a variety of means – usually fiber-optic cable or digital satellite and microwave transmissions. Like DSL, broadband cable provides a continuous connection with an ISP.

Downstream, the direction toward the user, bit rates can be as much as 1000 Mbit/s in some countries, with the use of DOCSIS 3.1. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s. DOCSIS 4.0 promises up to 10 Gbit/s downstream and 6 Gbit/s upstream; however, this technology has yet to be deployed in real-world usage.

Broadband cable access tends to serve fewer business customers because existing television cable networks tend to serve residential buildings, and commercial buildings do not always include wiring for coaxial cable networks. In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers.
Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)
Digital subscriber line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication. These frequency bands are subsequently separated by filters installed at the customer's premises.

DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e., in the direction to the service provider) is lower than that in the downstream direction (i.e., to the customer), hence the designation of asymmetric. With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.

Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1) is a digital subscriber line (DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires and up to 85 Mbit/s down- and upstream on coaxial cable. VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection.

VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters, and performance degrades as distance and loop attenuation increase.

DSL Rings
DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.

Fiber to the home
Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN). These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar in function and architecture to the hybrid fiber-coaxial (HFC) systems used to provide cable Internet access.

Fiber Internet connections to customers are either AON (active optical network) or, more commonly, PON (passive optical network). Examples of fiber-optic Internet access standards are G.984 (GPON, G-PON) and 10G-PON (XG-PON). ISPs may instead use Metro Ethernet for corporate and institutional customers. The use of optical fiber offers much higher data rates over relatively longer distances.
Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, LTE) for final delivery to customers. In 2010, Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses. The project was abandoned by the subsequent LNP government in favour of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country). Power-line Internet Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low-population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s. Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it. Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used, so a repeater must be installed on each transformer. In the U.S. a transformer serves a small cluster of one to a few houses. In Europe, it is more common for a somewhat larger transformer to service a larger cluster of 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city. ATM and Frame Relay Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates. While still widely used, ATM and Frame Relay no longer play the prominent role they once did, owing to the advent of Ethernet over optical fiber, MPLS, VPNs, and broadband services such as cable modem and DSL. Wireless broadband access Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies. Satellite broadband Satellite Internet access provides fixed, portable, and mobile Internet access. Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north. Service can be adversely affected by moisture, rain, and snow (known as rain fade). The system requires a carefully aimed directional antenna. Satellites in geostationary Earth orbit (GEO) operate in a fixed position above the Earth's equator. 
At the speed of light (about 300,000 km/s), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access, which have typical latencies that range from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions. HughesNet, Exede, AT&T and Dish Network have GEO systems. Satellites in low Earth orbit (LEO, below 2,000 km) and medium Earth orbit (MEO, between 2,000 and 35,786 km) are less common, operate at lower altitudes, and are not fixed in their position above the Earth. Because of their lower altitude, more satellites and launch vehicles are needed for worldwide coverage. This makes the required initial investment very large, which initially caused OneWeb and Iridium to declare bankruptcy. However, their lower altitudes allow lower latencies and higher speeds, which make real-time interactive Internet applications more feasible. LEO systems include Globalstar, Starlink, OneWeb and Iridium. The O3b constellation is a medium Earth-orbit system with a latency of 125 ms. COMMStellation is a LEO system that was scheduled for launch in 2015 and expected to have a latency of just 7 ms. Mobile broadband Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers (cellular networks) to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used. New mobile phone technology and infrastructure are introduced periodically, and each generation generally involves a change in the fundamental nature of the service: non-backward-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidth in hertz. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G). Quoted download (to the user) and upload (to the Internet) data rates are peak or maximum rates, and end users will typically experience lower data rates. WiMAX was originally developed to deliver fixed wireless service, with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed. In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage. 5G was designed to be faster and have lower latency than its predecessor, 4G. It can be used for mobile broadband in smartphones or separate modems that emit Wi-Fi or can be connected through USB to a computer, or for fixed wireless. 
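As a back-of-the-envelope check on the satellite latency figures above, the propagation delay to a geostationary satellite follows directly from its altitude (35,786 km above the equator) and the speed of light. A minimal sketch; the switching and routing overhead mentioned in the text is not modelled, which is why the printed totals fall short of the quoted 0.75 to 1.25 seconds.

```python
# Propagation delay on a geostationary (GEO) satellite link.
C_KM_S = 299_792        # speed of light in km/s
GEO_ALT_KM = 35_786     # altitude of a geostationary orbit

one_way = GEO_ALT_KM / C_KM_S     # ground station up to the satellite
bounce = 2 * one_way              # up and back down: the "quarter second"
round_trip = 2 * bounce           # doubled for request plus response

print(f"ground-satellite-ground: {bounce:.3f} s")      # ~0.239 s
print(f"full round trip:         {round_trip:.3f} s")  # ~0.477 s
# Switching and routing delays push the total toward 0.75-1.25 s.
```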
Fixed wireless Fixed wireless internet connections do not use a satellite, nor are they designed to support moving equipment such as smartphones: they rely on customer premises equipment, such as fixed antennas, that cannot be moved over a significant geographical area without losing the signal from the ISP. Microwave wireless broadband or 5G may be used for fixed wireless. WiMAX Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. It enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL". The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates. Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi LAN. WiMAX signals also penetrate building walls much more effectively than Wi-Fi. WiMAX is most often used as a fixed wireless standard. Wireless ISP Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well, such as microwave and WiMAX. Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed by regulations), 802.11 can operate reliably over a distance of many kilometres, although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather and line-of-sight problems. With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages: regulatory bodies usually allow for more power and better directional antennae; there is much more bandwidth to share, allowing both better throughput and improved coexistence; fewer consumer devices operate over 5 GHz than over 2.4 GHz, so fewer interferers are present; and the shorter wavelengths do not propagate as well through walls and other structures, so much less interference leaks outside the homes of consumers. Proprietary technologies like Motorola Canopy and Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a number of companies that provide this service. 
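The distance figures above follow from the radio link budget, whose dominant first-order term is free-space path loss: it grows with the square of both distance and frequency, which is why long WISP links need directional antenna gain and clear line of sight. A minimal sketch using the standard formula (distance in km, frequency in MHz); the example distances are arbitrary.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula, d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for d_km in (0.1, 1.0, 10.0):   # from in-building Wi-Fi range to a WISP link
    loss_24 = fspl_db(d_km, 2400)   # 2.4 GHz band
    loss_58 = fspl_db(d_km, 5800)   # 5.8 GHz band
    print(f"{d_km:>5} km: {loss_24:6.1f} dB @ 2.4 GHz, {loss_58:6.1f} dB @ 5.8 GHz")
# Every tenfold increase in distance adds 20 dB of loss, which antenna
# gain at both ends has to recover.
```

The output also shows the roughly 7.7 dB penalty of the higher band, which in practice is offset by the regulatory and interference advantages listed above.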
Local Multipoint Distribution Service Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz. Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for utilization in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. Distance is typically limited to about 2.4 km (1.5 miles), but links of up to 8 km (5 miles) from the base station are possible in some circumstances. LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards. Hybrid Access Networks In some regions, notably in rural areas, the length of the copper lines makes it difficult for network operators to provide high-bandwidth services. One alternative is to combine a fixed-access network, typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardised an architecture for such Hybrid Access Networks. Non-commercial alternatives for using Internet services Grassroots wireless networking movements Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks. It is usually ordered by the local municipality from commercial WISPs. Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. Where radio spectrum regulation is not community-friendly, where the channels are crowded, or where equipment cannot be afforded by local residents, free-space optical communication can also be deployed in a similar manner for point-to-point transmission in air (rather than in fiber-optic cable). Packet radio Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet. Note that, as per the regulatory rules outlined in the amateur ("ham") radio licence, Internet access and e-mail should be strictly related to the activities of radio amateurs. Sneakernet The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for the data. For those who do not have access to or cannot afford broadband at home, downloading large files and disseminating information is done by transmission through workplace or library networks, taken home and shared with neighbors by sneakernet. The Cuban El Paquete Semanal is an organized example of this. There are various decentralized, delay-tolerant peer-to-peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.). Sneakernets may also be used in tandem with computer network data transfer to increase data security or overall throughput for big data use cases. Innovation continues in the area to this day; for example, AWS has announced its Snowball bulk-transfer appliance, and bulk data processing is also done in a similar fashion by many research institutes and government agencies. Pricing and spending Internet access is limited by the relation between pricing and available resources to spend. 
Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). In Mexico, the poorest 30% of society has an estimated US$35 per year (US$3 per month), and in Brazil the poorest 22% of the population has merely US$9 per year to spend on ICT (US$0.75 per month). From Latin America it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year. This is the amount of ICT spending people consider a basic necessity. Current Internet access prices exceed the available resources by a large margin in many countries. Dial-up users pay the costs for making local or long-distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per-minute or traffic-based charges and connect-time limits set by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some wireless community networks continue the tradition of providing free Internet access. Fixed broadband Internet access is often sold under an "unlimited" or flat-rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per-minute or traffic-based charge. Per-minute and traffic-based charges and traffic caps are common for mobile broadband Internet access. Internet services like Facebook, Wikipedia and Google have built special programs to partner with mobile network operators (MNOs) to zero-rate the cost of their data volumes as a means of providing their services more broadly in developing markets. With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly, and for some ISPs the flat-rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80–90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03. Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth. In response some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps. Others claim that because the marginal cost of extra bandwidth is very small, with 80 to 90 percent of the costs fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user. In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps. In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas. In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned. On August 1, 2012, in Nashville, Tennessee and on October 1, 2012, in Tucson, Arizona, Comcast began tests that impose data caps on area residents. In Nashville, exceeding the 300 Gbyte cap requires a temporary purchase of 50 Gbytes of additional data. 
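To put the flat-rate argument above in concrete terms, the sketch below multiplies the quoted 2011 estimate of roughly $0.03 per gigabyte by a few monthly usage levels; the usage figures are invented for illustration and the real per-gigabyte cost varies by network.

```python
# Marginal traffic cost under the 2011 estimate of ~$0.03 per gigabyte.
COST_PER_GB_USD = 0.03

for monthly_gb in (10, 100, 300, 1000):   # hypothetical usage levels
    cost = monthly_gb * COST_PER_GB_USD
    print(f"{monthly_gb:>5} GB/month -> ~${cost:.2f} marginal traffic cost")
# Even at the 300 GB Comcast test cap mentioned above, the traffic itself
# costs only about $9 to carry, which is the basis of the claim that caps
# are not primarily about the cost of delivering bits.
```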
Digital divide Despite its tremendous growth, Internet access is not distributed equally within or between countries. The digital divide refers to "the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access". The gap between people with Internet access and those without is one of many aspects of the digital divide. Whether someone has access to the Internet can depend greatly on financial status, geographical location, and government policies. "Low-income, rural, and minority populations have received special scrutiny as the technological 'have-nots'." Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011. In North Korea there is relatively little access to the Internet due to the government's fear of the political instability that might accompany the benefits of access to the global Internet. The U.S. trade embargo is a barrier limiting Internet access in Cuba. Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries 74% of households had a computer and 71% had Internet access. The majority of people in developing countries do not have Internet access. About 4 billion people do not have Internet access. When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007). Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the "political, social, economic, educational, and career opportunities" available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world. Growth in number of users Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013. With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia, Africa, Latin America, the Caribbean, and the Middle East. There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available. Bandwidth divide Traditionally the divide has been measured in terms of the existing numbers of subscriptions and digital devices ("have and have-not of subscriptions"). 
Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). Measured this way, the digital divide in kbit/s is not monotonically decreasing, but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, as well as "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole, but diffuses slowly through social networks. During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 3G and fiber optics FTTH). Even so, Internet access in terms of bandwidth was more unequally distributed in 2014 than it was in the mid-1990s. Rural access One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project. Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service. Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option. The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1,000 households had reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy. In New Zealand, a fund has been formed by the government to improve rural broadband and mobile phone coverage. Current proposals include: (a) extending fibre coverage and upgrading copper to support VDSL, (b) focussing on improving the coverage of cellphone technology, or (c) regional wireless. Several countries have started to deploy Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks. Access as a civil or human right The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right. 
Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available or preventing the state from unreasonably restricting an individual's access to information and the Internet:
Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."
Estonia: In 2000, the parliament launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the twenty-first century.
Finland: By July 2010, every person in Finland was to have access to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications, and by 2015 to a 100 Mbit/s connection.
France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly-worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.
Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion, and access to electronically transmitted information.
Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabit per second throughout Spain.
In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights: 1. We, the representatives of the peoples of the world, assembled in Geneva from 10–12 December 2003 for the first phase of the World Summit on the Information Society, declare our common desire and commitment to build a people-centred, inclusive and development-oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life, premised on the purposes and principles of the Charter of the United Nations and respecting fully and upholding the Universal Declaration of Human Rights. 3. 
We reaffirm the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration. We also reaffirm that democracy, sustainable development, and respect for human rights and fundamental freedoms as well as good governance at all levels are interdependent and mutually reinforcing. We further resolve to strengthen the rule of law in international as in national affairs. The WSIS Declaration of Principles makes specific reference to the importance of the right to freedom of expression in the "Information Society" in stating: 4. We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers." A poll of 27,973 adults in 26 countries, including 14,306 Internet users, conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right. 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion. The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in a May 2011 report to the Human Rights Council of the United Nations General Assembly include several that bear on the question of the right to Internet access: 67. Unlike any other medium, the Internet enables individuals to seek, receive and impart information and ideas of all kinds instantaneously and inexpensively across national borders. By vastly expanding the capacity of individuals to enjoy their right to freedom of opinion and expression, which is an "enabler" of other human rights, the Internet boosts economic, social and political development, and contributes to the progress of humankind as a whole. In this regard, the Special Rapporteur encourages other Special Procedures mandate holders to engage on the issue of the Internet with respect to their particular mandates. 78. While blocking and filtering measures deny users access to specific content on the Internet, States have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights. 79. The Special Rapporteur calls upon all States to ensure that Internet access is maintained at all times, including during times of political unrest. 85. 
Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States. Each State should thus develop a concrete and effective policy, in consultation with individuals from all sections of society, including the private sector and relevant Government ministries, to make the Internet widely available, accessible and affordable to all segments of population. Network neutrality Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last-mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn't broken. In April 2017, the newly appointed FCC chairman, Ajit Varadaraj Pai, put forward a proposal to roll back net neutrality protections in the United States. On December 14, 2017, the FCC voted 3–2 in favor of abolishing net neutrality. Natural disasters and access Natural disasters disrupt internet access in profound ways. This is important not only for telecommunication companies that own the networks and the businesses that use them, but also for emergency crews and displaced citizens. The situation is worsened when hospitals or other buildings necessary to disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters could be put to use in planning or recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages. One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable. At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted. Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at "network edges where important emergency organizations such as hospitals and government agencies are mostly located". Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service. The company Cisco has revealed a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted. A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables placed on the ocean floor that provide international internet connection. A sequence of undersea earthquakes cut six out of seven international cables connected to Taiwan and caused a tsunami that wiped out one of its cable landing stations. 
The impact slowed or disabled internet connection for five days within the Asia-Pacific region as well as between the region and the United States and Europe. With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012. AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone. This theory holds as long as human error is not added to the mix. The major storm of June 2012 disabled only the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram. See also
Back-channel, a low bandwidth, or less-than-optimal, transmission channel in the opposite direction to the main channel
Broadband mapping in the United States
Comparison of wireless data standards
Connectivity in a social and cultural sense
Fiber-optic communication
History of the Internet
IP over DVB, Internet access using MPEG data streams over a digital television network
List of countries by number of broadband Internet subscriptions
National broadband plan
Public switched telephone network (PSTN)
Residential gateway
White spaces (radio), a group of technology companies working to deliver broadband Internet access via unused analog television frequencies
External links
European broadband
Corporate vs. Community Internet, AlterNet, June 14, 2005, on the clash between US cities' attempts to expand municipal broadband and corporate attempts to defend their markets
Broadband data, from Google public data
FCC Broadband Map
Types of Broadband Connections, Broadband.gov
Lorenz cipher
The Lorenz SZ40, SZ42a and SZ42b were German rotor stream cipher machines used by the German Army during World War II. They were developed by C. Lorenz AG in Berlin. The model name SZ was derived from Schlüssel-Zusatz, meaning cipher attachment. The instruments implemented a Vernam stream cipher. British cryptanalysts, who referred to encrypted German teleprinter traffic as Fish, dubbed the machine and its traffic Tunny (meaning tunafish) and deduced its logical structure three years before they saw such a machine. The SZ machines were in-line attachments to standard teleprinters. An experimental link using SZ40 machines was started in June 1941. The enhanced SZ42 machines were brought into substantial use from mid-1942 onwards for high-level communications between the German High Command in Wünsdorf close to Berlin, and Army Commands throughout occupied Europe. The more advanced SZ42A came into routine use in February 1943 and the SZ42B in June 1944. Radioteletype (RTTY) rather than land-line circuits was used for this traffic. These non-Morse (NoMo) messages were picked up by Britain's Y-stations at Knockholt in Kent and Denmark Hill in south London, and sent to the Government Code and Cypher School at Bletchley Park (BP). Some were deciphered using hand methods before the process was partially automated, first with Robinson machines and then with the Colossus computers. The deciphered Lorenz messages made one of the most significant contributions to British Ultra military intelligence and to Allied victory in Europe, due to the high-level strategic nature of the information that was gained from Lorenz decrypts. History After the Second World War a group of British and US cryptanalysts entered Germany with the front-line troops to capture the documents, technology and personnel of the various German signal intelligence organizations before these secrets could be destroyed, looted, or captured by the Soviets. They were called the Target Intelligence Committee: TICOM. From the captured German cryptographers Drs Huttenhain and Fricke they learnt of the development of the SZ40 and SZ42a/b. The design was for a machine that could be attached to any teleprinter. The first machine was referred to as the SZ40 (old type), which had ten rotors with fixed cams. It was recognised that the security of this machine was not great. The definitive SZ40 had twelve rotors with movable cams. The rightmost five rotors were called Spaltencäsar but were named the Chi wheels by Bill Tutte. The leftmost five were named Springcäsar, the Psi wheels to Tutte. The middle two Vorgeleger rotors were called Mu or motor wheels by Tutte. The five data bits of each ITA2-coded telegraph character were processed first by the five chi wheels and then further processed by the five psi wheels. The cams on the wheels reversed the value of a bit if in the raised position, but left it unchanged if in the lowered position. Vernam cipher Gilbert Vernam was an AT&T Bell Labs research engineer who, in 1917, invented a cipher system that used the Boolean "exclusive or" (XOR) function, symbolised by ⊕. Its truth table, where 1 represents "true" and 0 represents "false", is: 0 ⊕ 0 = 0; 0 ⊕ 1 = 1; 1 ⊕ 0 = 1; 1 ⊕ 1 = 0. Other names for this function are: not equal (NEQ), modulo 2 addition (without 'carry') and modulo 2 subtraction (without 'borrow'). Vernam's cipher is a symmetric-key algorithm, i.e. 
the same key is used both to encipher plaintext to produce the ciphertext and to decipher ciphertext to yield the original plaintext: plaintext ⊕ key = ciphertext, and: ciphertext ⊕ key = plaintext. This produces the essential reciprocity that allows the same machine with the same settings to be used for both encryption and decryption. Vernam's idea was to use conventional telegraphy practice with a paper tape of the plaintext combined with a paper tape of the key. Each key tape would have been unique (a one-time tape), but generating and distributing such tapes presented considerable practical difficulties. In the 1920s four men in different countries invented rotor cipher machines to produce a key stream to act instead of a tape. The 1940 Lorenz SZ40/42 was one of these. Key stream The logical functioning of the Tunny system was worked out well before the Bletchley Park cryptanalysts saw one of the machines, which only happened in 1945, as Germany was surrendering to the Allies. The SZ machine served as an in-line attachment to a standard Lorenz teleprinter. It was mounted on a metal base. The teleprinter characters consisted of five data bits (or "impulses"), encoded in the International Telegraphy Alphabet No. 2 (ITA2). The machine generated a stream of pseudorandom characters. These formed the key that was combined with the plaintext input characters to form the ciphertext output characters. The combination was by means of the XOR (or modulo 2 addition) process. The key stream consisted of two component parts that were XOR-ed together. These were generated by two sets of five wheels which rotated together. The Bletchley Park cryptanalyst Bill Tutte called these the χ ("chi") wheels and the ψ ("psi") wheels. Each wheel had a series of cams (or "pins") around its circumference. These cams could be set in a raised (active) or lowered (inactive) position. In the raised position they generated a '1', which reversed the value of a bit; in the lowered position they generated a '0', which left the bit unchanged. The number of cams on each wheel equalled the number of impulses needed to cause it to complete a full rotation. These numbers are all co-prime with each other, giving the longest possible time before the pattern repeated, namely the product of the numbers of positions of the wheels. For the set of χ wheels it was 41 × 31 × 29 × 26 × 23 = 22,041,682 and for the ψ wheels it was 43 × 47 × 51 × 53 × 59 = 322,303,017. The number of different ways that all twelve wheels could be set was about 1.6 × 10^19, i.e. 16 billion billion. The set of five χ wheels all moved on one position after each character had been enciphered. The five ψ wheels, however, advanced intermittently. Their movement was controlled by the two μ ("mu") or "motor" wheels in series. The SZ40 μ61 motor wheel stepped every time, but the μ37 motor wheel stepped only if the first motor wheel was a '1'. The ψ wheels then stepped only if the second motor wheel was a '1'. The SZ42A and SZ42B models added additional complexity to this mechanism, known at Bletchley Park as Limitations. Two of the four different limitations involved characteristics of the plaintext and so were autoclaves. The key stream generated by the SZ machines thus had a χ component and a ψ component. Symbolically, the key that was combined with the plaintext for enciphering and with the ciphertext for deciphering can be represented as follows: key = χ-key ⊕ ψ-key. However, to indicate that the ψ component often did not change from character to character, the term extended psi was used, symbolised as: Ψ. 
So enciphering can be shown symbolically as: plaintext ⊕ χ-stream ⊕ ψ-stream = ciphertext and deciphering as: ciphertext ⊕ χ-stream ⊕ ψ-stream = plaintext. Operation Each "Tunny" link had four SZ machines with a transmitting and a receiving teleprinter at each end. For enciphering and deciphering to work, the transmitting and receiving machines had to be set up identically. There were two components to this: setting the patterns of cams on the wheels and rotating the wheels for the start of enciphering a message. The cam settings were changed less frequently before summer 1944. The ψ wheel cams were initially changed only quarterly, but later monthly; the χ wheel cams were changed monthly and the motor wheel patterns daily. From 1 August 1944, all wheel patterns were changed daily. Initially the wheel settings for a message were sent to the receiving end by means of a 12-letter indicator sent un-enciphered, the letters being associated with wheel positions in a book. In October 1942 this was changed to the use of a book of single-use settings in what was known as the QEP book. The last two digits of the QEP book entry were sent for the receiving operator to look up in his copy of the QEP book and set his machine's wheels. Each book contained one hundred or more combinations. Once all the combinations in a QEP book had been used it was replaced by a new one. The message settings should never have been re-used, but on occasion they were, providing a "depth", which could be utilised by a cryptanalyst. As was normal telegraphy practice, messages of any length were keyed into a teleprinter with a paper tape perforator. The typical sequence of operations would be that the sending operator would punch up the message, make contact with the receiving operator, use the EIN / AUS switch on the SZ machine to connect it into the circuit, and then run the tape through the reader. At the receiving end, the operator would similarly connect his SZ machine into the circuit and the output would be printed up on a continuous sticky tape. Because this was the practice, the plaintext did not contain the characters for "carriage return", "line feed" or the null (blank tape, 00000) character. Cryptanalysis British cryptographers at Bletchley Park had deduced the operation of the machine by January 1942 without ever having seen a Lorenz machine, a feat made possible by a mistake made by a German operator. Interception Y station operators accustomed to listening to Morse code transmissions knew Tunny traffic as "new music". Its interception was originally concentrated at the Foreign Office Y Station operated by the Metropolitan Police at Denmark Hill in Camberwell, London. But due to lack of resources at this time (around 1941), it was given a low priority. A new Y Station, Knockholt in Kent, was later constructed specifically to intercept Tunny traffic so that the messages could be efficiently recorded and sent to Bletchley Park. The head of the Y station, Harold Kenworthy, moved to head up Knockholt. He was later promoted to head the Foreign Office Research and Development Establishment (F.O.R.D.E). Code breaking On 30 August 1941, a message of some 4,000 characters was transmitted from Athens to Vienna. However, the message was not received correctly at the other end. The receiving operator then sent an uncoded request back to the sender asking for the message to be retransmitted. This let the codebreakers know what was happening. 
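Why a retransmission on identical settings was so damaging follows directly from the XOR reciprocity above: if two messages are enciphered with the same keystream, XORing the two ciphertexts cancels the key completely. A minimal sketch, with characters modelled as 5-bit ITA2 values and an invented keystream (the real one came from the twelve wheels):

```python
def vernam(chars, keystream):
    """XOR each 5-bit character with the corresponding keystream value."""
    return [c ^ k for c, k in zip(chars, keystream)]

plain1 = [0b10000, 0b00110, 0b01110, 0b00001]  # arbitrary 5-bit characters
plain2 = [0b10000, 0b00111, 0b01110, 0b11000]  # a slightly altered retransmission
key    = [0b10101, 0b01010, 0b11011, 0b00111]  # same key used twice - the error

ct1, ct2 = vernam(plain1, key), vernam(plain2, key)

# Reciprocity: deciphering is the same operation as enciphering.
assert vernam(ct1, key) == plain1

# A "depth": the key cancels out, leaving only the two plaintexts
# combined - which is what could then be teased apart by hand.
depth = [a ^ b for a, b in zip(ct1, ct2)]
assert depth == [p ^ q for p, q in zip(plain1, plain2)]
```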
The sender then retransmitted the message but, critically, did not change the key settings from the original "HQIBPEXEZMUG". This was a forbidden practice; using a different key for every different message is critical to any stream cipher's security. This would not have mattered had the two messages been identical; however, the second time the operator made a number of small alterations to the message, such as using abbreviations, making the second message somewhat shorter. From these two related ciphertexts, known to cryptanalysts as a depth, the veteran cryptanalyst Brigadier John Tiltman in the Research Section teased out the two plaintexts and hence the keystream. But even almost 4,000 characters of key was not enough for the team to figure out how the stream was being generated; it was just too complex and seemingly random. After three months, the Research Section handed the task to mathematician Bill Tutte. He applied a technique that he had been taught in his cryptographic training, of writing out the key by hand and looking for repetitions. Tutte did this with the original teleprinter 5-bit Baudot codes, which led him to his initial breakthrough of recognising a 41-bit repetition. Over the following two months up to January 1942, Tutte and colleagues worked out the complete logical structure of the cipher machine. This remarkable piece of reverse engineering was later described as "one of the greatest intellectual feats of World War II". After this cracking of Tunny, a special team of code breakers was set up under Ralph Tester, most initially transferred from Alan Turing's Hut 8. The team became known as the Testery. It performed the bulk of the subsequent work in breaking Tunny messages, but was aided by machines in the complementary section under Max Newman known as the Newmanry. Decryption machines Several complex machines were built by the British to aid the attack on Tunny. The first was the British Tunny ("Bletchley Park completes epic Tunny machine", The Register, 26 May 2011; accessed May 2011). This machine was designed by Bletchley Park, based on the reverse engineering work done by Tiltman's team in the Testery, to emulate the Lorenz Cipher Machine. When the pin wheel settings were found by the Testery, the Tunny machine was set up and run so that the messages could be printed. A family of machines known as "Robinsons" were built for the Newmanry. These used two paper tapes, along with logic circuitry, to find the settings of the χ pin wheels of the Lorenz machine. The Robinsons had major problems keeping the two paper tapes synchronized and were relatively slow, reading only 2,000 characters per second. The most important machine was the Colossus, of which ten were in use by the war's end, the first becoming operational in December 1943. Although not fully programmable, they were far more efficient than their predecessors, representing advances in electronic digital computers. The Colossus computers were developed and built by Tommy Flowers, of the Dollis Hill Post Office Research Station, using algorithms developed by Bill Tutte and his team of mathematicians. Colossus proved to be efficient and quick against the twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine. Some influential figures had doubts about his proposed design for the decryption machine, and Flowers proceeded with the project while partly funding it himself. Like the later ENIAC of 1946, Colossus did not have a stored program, and was programmed through plugboards and jumper cables. 
It was faster, more reliable and more capable than the Robinsons, thus speeding up the process of finding the Lorenz χ pin wheel settings. Since Colossus generated the putative keys electronically, it only had to read one tape. It did so with an optical reader which, at 5,000 characters per second, was driven much faster than the Robinsons' and meant that the tape travelled at almost 30 miles per hour (48 km/h). This, and the clocking of the electronics from the optically read paper tape sprocket holes, completely eliminated the Robinsons' synchronisation problems. Bletchley Park management, which had been sceptical of Flowers's ability to make a workable device, immediately began pressuring him to construct another. After the end of the war, Colossus machines were dismantled on the orders of Winston Churchill, but GCHQ retained two of them. Testery executives and Tunny codebreakers
Ralph Tester: linguist and head of Testery
Jerry Roberts: shift-leader, linguist and senior codebreaker
Peter Ericsson: shift-leader, linguist and senior codebreaker
Victor Masters: shift-leader
Denis Oswald: linguist and senior codebreaker
Peter Hilton: codebreaker and mathematician
Peter Benenson: codebreaker
Peter Edgerley: codebreaker
John Christie: codebreaker
John Thompson: codebreaker
Roy Jenkins: codebreaker
Shaun Wylie: codebreaker
Tom Colvill: general manager
By the end of the war, the Testery had grown to nine cryptographers and 24 ATS girls (as the women serving that role were then called), with a total staff of 118, organised in three shifts working round the clock. Surviving machines Lorenz cipher machines were built in small numbers; today only a handful survive in museums. In Germany, examples may be seen at the Heinz Nixdorf MuseumsForum, a computer museum in Paderborn, and the Deutsches Museum, a museum of science and technology in Munich. Two further Lorenz machines are displayed at Bletchley Park and The National Museum of Computing in the United Kingdom. Another example is on display at the National Cryptologic Museum in the United States. John Whetter and John Pether, volunteers with The National Museum of Computing, bought a Lorenz teleprinter on eBay for £9.50 that had been retrieved from a garden shed in Southend-on-Sea. It was found to be the World War II military version, was refurbished, and in May 2016 was installed next to the SZ42 machine in the museum's "Tunny" gallery. See also Enigma machine Siemens and Halske T52 Turingery Combined Cipher Machine References Davies, Donald W., The Lorenz Cipher Machine SZ42 (reprinted in Selections from Cryptologia: History, People, and Technology, Artech House, Norwood, 1998). Transcript of a lecture given by Prof. Tutte at the University of Waterloo. Entry for "Tunny" in the GC&CS Cryptographic Dictionary. Further reading Paul Gannon, Colossus: Bletchley Park's Greatest Secret (Atlantic Books, 2006): using recently declassified material and dealing exclusively with the efforts to break into Tunny, it clears up many previous misconceptions about Fish traffic, the Lorenz cipher machine and Colossus, and contains a lengthy section (pages 148–164) about Tunny and the British attack on it. External links Frode Weierud's CryptoCellar: historical documents and publications about the Lorenz Schlüsselzusatz SZ42. 
Retrieved 22 April 2016. Lorenz ciphers and the Colossus. Photographs and description of Tunny. Simplified Lorenz Cipher Toolkit. "Tunny" Machine and Its Solution, Brigadier General John Tiltman, National Security Agency. General Report on Tunny: With Emphasis on Statistical Methods, National Archives UK. General Report on Tunny: With Emphasis on Statistical Methods, Jack Good, Donald Michie, Geoffrey Timms, 1945. Virtual Lorenz.
VIC cipher
The VIC cipher was a pencil and paper cipher used by the Soviet spy Reino Häyhänen, codenamed "VICTOR". If the cipher were to be given a modern technical name, it would be known as a "straddling bipartite monoalphabetic substitution superenciphered by modified double transposition." However, by general classification it is part of the Nihilist family of ciphers. It was arguably the most complex hand-operated cipher ever seen when it was first discovered. The initial analysis done by the American National Security Agency (NSA) in 1953 did not absolutely conclude that it was a hand cipher, but its placement in a hollowed-out 5¢ coin implied it could be decoded using pencil and paper. The VIC cipher remained unbroken until more information about its structure was available. Although certainly not as complex or secure as modern computer-operated stream ciphers or block ciphers, in practice messages protected by it resisted all attempts at cryptanalysis by at least the NSA from its discovery in 1953 until Häyhänen's defection in 1957. A revolutionary leap The VIC cipher can be regarded as the evolutionary pinnacle of the Nihilist cipher family. The VIC cipher has several important integrated components, including mod 10 chain addition, a lagged Fibonacci generator (a recursive formula used to generate a sequence of pseudorandom digits), a straddling checkerboard, and a disrupted double transposition. Until the discovery of VIC, it was generally thought that a double transposition alone was the most complex cipher an agent, as a practical matter, could use as a field cipher. History During World War II, several Soviet spy rings communicated to Moscow Centre using two ciphers which are essentially evolutionary improvements on the basic Nihilist cipher. A very strong version was used by Max Clausen in Richard Sorge's network in Japan, and by Alexander Foote in the Lucy spy ring in Switzerland. A slightly weaker version was used by the Rote Kapelle network. In both versions, the plaintext was first converted to digits by use of a straddling checkerboard rather than a Polybius square. This has the advantage of slightly compressing the plaintext, thus raising its unicity distance and also allowing radio operators to complete their transmissions more quickly and shut down sooner. Shutting down sooner reduces the risk of the operator being found by enemy radio direction finders. Increasing the unicity distance increases strength against statistical attacks. Clausen and Foote both wrote their plaintext in English, and memorized the 8 most frequent letters of English (to fill the top row of the checkerboard) through the mnemonic phrase "a sin to err" (dropping the second "r"). The standard English straddling checkerboard has 28 character slots, and in this cipher the extra two became "full stop" and "numbers shift". Numbers were sent by a numbers shift, followed by the actual plaintext digits in repeated pairs, followed by another shift. Then, similarly to the basic Nihilist, a digital additive was added in, which was called "closing". However, a different additive was used each time, so finally a concealed "indicator group" had to be inserted to indicate what additive was used. Unlike the basic Nihilist, the additive was added by non-carrying addition (digit-wise addition modulo 10), thus producing a more uniform output which does not leak as much information. More importantly, the additive was generated not through a keyword, but by selecting lines at random from almanacs of industrial statistics. 
Such books were deemed dull enough not to arouse suspicion if an agent was searched (particularly as the agents' cover stories were as businessmen), and to have such high entropy density as to provide a very secure additive. Of course the figures from such a book are not actually uniformly distributed (there is an excess of "0" and "1", per Benford's law, and sequential numbers are likely to be somewhat similar), but nevertheless they have much higher entropy density than passphrases and the like; at any rate, in practice they seem never to have been successfully cryptanalysed. The weaker version generated the additive from the text of a novel or similar book (at least one Rote Kapelle member actually used The Good Soldier Schweik). This text was converted to a digital additive using a technique similar to a straddling checkerboard.

The ultimate development along these lines was the VIC cipher, used in the 1950s by Reino Häyhänen. By this time, most Soviet agents were instead using one-time pads. However, despite the theoretical perfection of the one-time pad, in practice such messages were broken, while VIC was not. The one-time system could, however, only be broken when pad pages were re-used due to logistic problems, after which it was no longer truly one-time.

Mechanics overview
The secret key for the encryption is the following:
A short Phrase (e.g. the first line of a song)
A Date (in a 6-digit format)
A Personal Number (unique to the agent; a 1 or 2 digit number)
The encryption was also aided by the adversary not knowing a 5-digit Keygroup which was unique to each message. The Keygroup was not strictly a 'secret' (it was embedded in the clear in the ciphertext), but it was at a location in the ciphertext that was not known to an adversary.
The cipher broadly worked as follows:
Use the secrets above (Phrase, Date, Keygroup and Personal Number) to create a 50-digit block of pseudorandom numbers
Use this block to create the message keys for a straddling checkerboard and two columnar transpositions
Encrypt the plaintext message via the straddling checkerboard
Apply two transpositions to the resultant (intermediary) ciphertext: a 'standard' columnar transposition, then a diagonal columnar transposition
Insert the Keygroup into the ciphertext, at a position determined by the Personal Number

Detailed mechanics
Note: this section tracks the calculations by referring to [Line-X] or similar. This is to align with the notation stated in the CIA archive description.

Pseudorandom block derivation
[Line-A]: Generate a random 5-digit Keygroup
[Line-B]: Write the first 5 digits of the secret Date
[Line-C]: Subtract [Line-B] from [Line-A] by modular arithmetic (digit-by-digit, not 'borrowing' any tens from a neighboring column)
[Line-D]: Write out the first 20 letters from the secret Phrase
[Line-E.1&2]: Sequence (see below) the first and second ten characters separately (to get [Line-E.1] and [Line-E.2] respectively)
[Line-F.1]: Write out the 5 digits from [Line-C], then apply chain addition (see below) to create five more digits
[Line-F.2]: The digit sequence '1234567890' is written out (under [Line-E.2]) as an aid for encoding when creating [Line-H]
[Line-G]: Add [Line-E.1] to [Line-F.1], digit-by-digit by mod-10 arithmetic, i.e. no 'carrying' over tens to the next column
[Line-H]: Encoding (see below) of the digits in [Line-G], with [Line-E.2] as the key
[Line-I]: No [Line-I] is used, presumably to avoid confusion (as 'I' may be misread as a '1' or 'J')
[Line-J]: The sequencing of [Line-H]
[Lines-K,L,M,N,P]: Five 10-digit lines created by chain addition of [Line-H]. The last two non-equal digits are added to the agent's Personal Number to determine the key lengths of the two transpositions. (Lines K-to-P are in effect a key-driven pseudorandom block used for the next stage of encryption)
[Line-O]: No [Line-O] is used, presumably to avoid confusion (as 'O' may be misread as a zero or 'Q')

Message key derivation
[Line-Q]: The first 'a' digits extracted from [Lines-K,L,M,N,P] when transposed via [Line-J] (where 'a' is the first value resulting from the addition of the last non-equal digits in [Line-P] to the Personal Number). These are used to key the columnar transposition.
[Line-R]: The next 'b' digits extracted (after the 'a' digits have been extracted) from [Lines-K,L,M,N,P] when transposed via [Line-J] (where 'b' is the second value resulting from the addition of the last non-equal digits in [Line-P] to the Personal Number). These are used to key the diagonal transposition.
[Line-S]: The sequencing of [Line-P]; this is used as the key to the straddling checkerboard

Example of key generation
Personal Number: 6
Date: 13 Sept 1959 // Luna 2 Moon landing - 13 Sept 1959 ('139195' - truncated to 6 digits)
Phrase: 'Twas the night before Christmas' // from 'A Visit from St. Nicholas' - poem
Keygroup: 72401 // randomly generated

[Line-A]: 72401 // Keygroup
[Line-B]: 13919 // Date - truncated to 5 digits
[Line-C]: 69592 // subtract [Line-B] from [Line-A]
[Line-D]: TWASTHENIG HTBEFORECH // Phrase - truncated to 20 characters
[Line-E]: 8017942653 6013589427 // via sequencing
[Line-F]: 6959254417 1234567890 // from [Line-C] plus chain addition, then '1234567890'
[Line-G]: 4966196060 // add [Line-E.1] to [Line-F.1]
[Line-H]: 3288628787 // encode [Line-G] with [Line-E.2] as the key ([Line-F.2] assists)
[Line-J]: 3178429506 // the sequencing of [Line-H]
[Line-K]: 5064805552 // BLOCK: chain addition of [Line-H] for 50 digits
[Line-L]: 5602850077
[Line-M]: 1620350748
[Line-N]: 7823857125
[Line-P]: 5051328370
The last two non-equal digits are '7' and '0'; added to the Personal Number (6), these mean the permutation keys are 13 and 6 digits long.
[Line-Q]: 0668005552551 // first 13 digits from block
[Line-R]: 758838 // next 6 digits from block
[Line-S]: 5961328470 // sequencing of [Line-P]

Message encryption
Straddling checkerboard
Once the key has been generated, the first stage of actually encrypting the message is to convert it to a series of digits; this is done via a straddling checkerboard. The key (header row) for the checkerboard is based on [Line-S]. A pre-agreed series of common letters is then used on the second row. The example below uses the English mnemonic 'AT ONE SIR'; the Cyrillic mnemonic used by Häyhänen, however, was 'snegopad', the Russian word for snowfall. The remaining cells are filled in with the rest of the alphabet and symbols, in order. An example encoding is below:
MESSAGE: 'Attack at dawn. By dawn I mean 0500. Not 0915 like you did last time.'
59956 96459 66583 38765 88665 83376 02538 00005 55000 00080 87319 80000 99911 15558 06776 42881 86667 66675 49976 0287-
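The checkerboard is compact to implement. The following Python sketch assumes the layout just described: column labels from [Line-S], the 'AT ONE SIR' mnemonic with its two spaces marking the columns that label the lower rows, and an illustrative ordering of the remaining letters plus full stop and a numbers-shift sign. Function names are mine; as written, the sketch reproduces the opening digit groups of the example above.

def build_checkerboard(columns, mnemonic, remainder):
    enc = {}
    gaps = []                        # column labels of the two blank cells
    for col, ch in zip(columns, mnemonic):
        if ch == ' ':
            gaps.append(col)         # these digits label the two lower rows
        else:
            enc[ch] = col            # frequent letters encode as one digit
    for i, ch in enumerate(remainder):
        # remaining characters encode as two digits: row label + column label
        enc[ch] = gaps[i // 10] + columns[i % 10]
    return enc

def encode_text(text, enc):
    return ''.join(enc[ch] for ch in text.upper() if ch in enc)

columns   = '5961328470'             # [Line-S] from the worked example
mnemonic  = 'AT ONE SIR'             # eight frequent letters, two gaps
remainder = 'BCDFGHJKLMPQUVWXYZ./'   # rest of alphabet, full stop, numbers shift
board = build_checkerboard(columns, mnemonic, remainder)
print(encode_text('Attack at dawn.', board))   # 599569645966583387...

(The full cipher would also handle digits via the numbers-shift character, and would then superencipher this output with the two transpositions.)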
Transpositions
The message is transposed via a standard columnar transposition keyed by [Line-Q] above. (Note: if the encoded message length is not a multiple of 5 at this stage, an additional digit is added.) The message is then transposed via a diagonal transposition keyed by [Line-R] above.

Keygroup insertion
The (unencrypted) Keygroup is inserted into the ciphertext 'P' groups from the end, where 'P' is the unused sixth digit of the Date.

Modular addition/subtraction
Modular addition or subtraction, also known as 'false adding/subtracting', in this context (and in many pen and paper ciphers) is digit-by-digit addition and subtraction without 'carrying' or 'borrowing'. For example:
1234 + 6789 = 7913
1234 - 6789 = 5555

Sequencing
Sequencing in this context is ranking the elements of an input from 1 to 10 (where '0' represents 10). It is applied either to letters (using alphabetical order) or to numbers (using numerical value). In the event of equal values, the leftmost value is sequenced first. For example:
LETTERS: The word 'Octopus' is sequenced as '2163475' (i.e. C=1, first 'O'=2, second 'O'=3, ...)
NUMBERS: The number '90210' is sequenced as '34215' (by numerical order; zero is valued at '10' in terms of ordering)

Chain addition
Chain addition is akin to a linear-feedback shift register: starting from a seed number, a stream of digits is generated as output and fed back in as input. Within the VIC cipher, chain addition works by (1) taking the original (seed) number, (2) false-adding its first two digits, and (3) putting the new digit at the end of the chain. This continues, with the pair of digits being added advancing one position at each step. For example, if the seed was '90210', the first 5 iterations are shown below:
90210           // Initial seed value
90210 9         // 9 = 9+0 (first two digits)
90210 92        // 2 = 0+2 (next two...)
90210 923       // 3 = 2+1
90210 9231      // 1 = 1+0
90210 92319     // 9 = 0+9; note how the first '9' generated is being fed back in

Digit encoding
The encoding step replaces each digit in a number (i.e. [Line-G] in the cipher) with the digit of a key sequence (i.e. [Line-E.2]) that sits at the position it names in the 1-10 ordering. By writing the series '1234567890' (shown as [Line-F.2]) underneath [Line-E.2], each value from 0-9 has a key digit above it; simply replace every digit in the number to be encoded with the one above it in the key sequence. For example, with the key sequence '6013589427' written over '1234567890', the digit 9 encodes as '2', 0 as '7', 2 as '0' and 1 as '6'; so the number '90210' would be encoded as '27067'.

Decryption
Decryption of the VIC cipher proceeds as follows:
Extract the Keygroup - by knowledge of the agent's Personal Number, remove the 5 digits of the Keygroup from the ciphertext
Generate the message keys - using knowledge of the various secrets (Phrase, Date, Personal Number, Keygroup), generate the keys in the same manner as in the encryption process
Decrypt the ciphertext - using knowledge of the message keys for the transpositions and the straddling checkerboard, reverse each stage
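The four primitive operations just described are small enough to state in code. A minimal Python sketch follows (function names are my own); the assertions reproduce the worked values from this section, and the same functions regenerate Lines E through J of the key-generation example above.

def false_add(a, b):
    # digit-by-digit addition mod 10, with no carrying
    return ''.join(str((int(x) + int(y)) % 10) for x, y in zip(a, b))

def sequence(s):
    # rank elements 1..10, with '0' counting as 10 and ties resolved left to right
    order = sorted(range(len(s)), key=lambda i: (s[i].replace('0', 'z'), i))
    ranks = [0] * len(s)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank % 10         # rank 10 is written as '0'
    return ''.join(map(str, ranks))

def chain_add(seed, n):
    # extend the seed by n digits; each new digit is the false sum of the
    # next pair along, with generated digits fed back into the stream
    d = [int(c) for c in seed]
    for i in range(n):
        d.append((d[i] + d[i + 1]) % 10)
    return ''.join(map(str, d[len(seed):]))

def encode_digits(number, key):
    # replace each digit d with the d-th element of the key ('0' means 10th)
    return ''.join(key[(int(c) - 1) % 10] for c in number)

assert false_add('1234', '6789') == '7913'
assert sequence('OCTOPUS') == '2163475'
assert sequence('90210') == '34215'
assert chain_add('90210', 5) == '92319'
assert encode_digits('90210', '6013589427') == '27067'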
Cryptanalysis
The cipher is one of the strongest pen and paper ciphers actually used in the real world, and was not broken (in the sense of determining the underlying algorithm) by the NSA at the time. However, with the advent of modern computing and the public disclosure of the algorithm, it would not be considered a strong cipher. It can be observed that the majority of the entropy in the secret key converges to a 10-digit number, [Line-H]. This 10-digit number carries roughly 33 bits of entropy (log2 of 10^10 ≈ 33.2); combined with the last digit of the date (needed to identify where the Keygroup sits), this gives a message-key strength of roughly 37 bits (log2 of 10^11 ≈ 36.5). A key space of that size is subject to a brute-force attack taking less than a day on modern hardware.

See also
Topics in cryptography

References

External links
FBI page on the hollow nickel case with images of the hollow nickel that contained the VIC encrypted message
"The Cipher in a Hollow Nickel"
The VIC Cipher
Straddling Checkerboards - various versions of checkerboards on Cipher Machines and Cryptology
SECOM, a VIC variant with extended checkerboard
"The Rise Of Field Ciphers: straddling checkerboard ciphers" by Greg Goebel 2009

Classical ciphers
Science and technology in the Soviet Union
301240
https://en.wikipedia.org/wiki/RSA%20Security
RSA Security
RSA Security LLC, formerly RSA Security, Inc. and doing business as RSA, is an American computer and network security company with a focus on encryption and encryption standards. RSA was named after the initials of its co-founders, Ron Rivest, Adi Shamir and Leonard Adleman, after whom the RSA public key cryptography algorithm was also named. Among its products is the SecurID authentication token. The BSAFE cryptography libraries were also initially owned by RSA. RSA is known for incorporating backdoors developed by the NSA in its products. It also organizes the annual RSA Conference, an information security conference.

Founded as an independent company in 1982, RSA Security was acquired by EMC Corporation in 2006 for US$2.1 billion and operated as a division within EMC. When EMC was acquired by Dell Technologies in 2016, RSA became part of the Dell Technologies family of brands. On 10 March 2020, Dell Technologies announced that it would be selling RSA Security to a consortium led by Symphony Technology Group (STG), Ontario Teachers' Pension Plan Board (Ontario Teachers') and AlpInvest Partners (AlpInvest) for US$2.1 billion, the same price EMC had paid for it back in 2006.

RSA is based in Bedford, Massachusetts, with regional headquarters in Bracknell (UK) and Singapore, and numerous international offices.

History
Ron Rivest, Adi Shamir and Leonard Adleman, who developed the RSA encryption algorithm in 1977, founded RSA Data Security in 1982. In 1994, RSA campaigned against the Clipper Chip during the so-called Crypto Wars. In 1995, RSA sent a handful of people across the hall to found Digital Certificates International, better known as VeriSign. The company then known as Security Dynamics acquired RSA Data Security in July 1996 and DynaSoft AB in 1997. In January 1997, it proposed the first of the DES Challenges, which led to the first public breaking of a message based on the Data Encryption Standard. In February 2001, it acquired Xcert International, Inc., a privately held company that developed and delivered digital-certificate-based products for securing e-business transactions. In May 2001, it acquired 3-G International, Inc., a privately held company that developed and delivered smart card and biometric authentication products. In August 2001, it acquired Securant Technologies, Inc., a privately held company that produced ClearTrust, an identity management product. In December 2005, it acquired Cyota, a privately held Israeli company specializing in online security and anti-fraud solutions for financial institutions. In April 2006, it acquired PassMark Security. On September 14, 2006, RSA stockholders approved the acquisition of the company by EMC Corporation for $2.1 billion. In 2007, RSA acquired Valyd Software, a Hyderabad-based Indian company specializing in file and data security. In 2009, RSA launched the RSA Share Project. As part of this project, some of the RSA BSAFE libraries were made available for free. To promote the launch, RSA ran a programming competition with a US$10,000 first prize. In 2011, RSA introduced a new CyberCrime Intelligence Service designed to help organizations identify computers, information assets and identities compromised by trojans and other online attacks. In July 2013, RSA acquired Aveksa, a leader in the identity and access governance sector. On September 7, 2016, RSA was acquired by, and became a subsidiary of, Dell EMC Infrastructure Solutions Group through the acquisition of EMC Corporation by Dell Technologies in a cash and stock deal led by Michael Dell.
On February 18, 2020, Dell Technologies announced their intention to sell RSA for $2.075 billion to Symphony Technology Group. In anticipation of the sale of RSA to Symphony Technology Group, Dell Technologies made the strategic decision to retain the BSAFE product line. To that end, RSA transferred BSAFE products (including the Data Protection Manager product) and customer agreements, including maintenance and support, to Dell Technologies on July 1, 2020. On September 1, 2020, Symphony Technology Group (STG) completed its acquisition of RSA from Dell Technologies. RSA became an independent company, one of the world's largest cybersecurity and risk management organizations.

Controversy
SecurID security breach
On March 17, 2011, RSA disclosed an attack on its two-factor authentication products. The attack was similar to the Sykipot attacks, the July 2011 SK Communications hack, and the NightDragon series of attacks. RSA called it an advanced persistent threat. Today, SecurID is more commonly used as a software token rather than as the older physical tokens.

Relationship with NSA
RSA's relationship with the NSA has changed over the years. Reuters' Joseph Menn and cybersecurity analyst Jeffrey Carr have noted that the two once had an adversarial relationship. In its early years, RSA and its leaders were prominent advocates of strong cryptography for public use, while the NSA and the Bush and Clinton administrations sought to prevent its proliferation. In the mid-1990s, RSA and its president Jim Bidzos led a "fierce" public campaign against the Clipper Chip, an encryption chip with a backdoor that would allow the U.S. government to decrypt communications. The Clinton administration pressed telecommunications companies to use the chip in their devices, and relaxed export restrictions on products that used it. (Such restrictions had prevented RSA Security from selling its software abroad.) RSA joined civil libertarians and others in opposing the Clipper Chip by, among other things, distributing posters with a foundering sailing ship and the words "Sink Clipper!" RSA Security also created the DES Challenges to show that the widely used DES encryption was breakable by well-funded entities like the NSA.

The relationship shifted from adversarial to cooperative after Bidzos stepped down as CEO in 1999, according to Victor Chan, who led RSA's engineering department until 2005: "When I joined there were 10 people in the labs, and we were fighting the NSA. It became a very different company later on." For example, RSA was reported to have accepted $10 million from the NSA in 2004 in a deal to use the NSA-designed Dual EC DRBG random number generator in their BSAFE library, despite many indications that Dual_EC_DRBG was both of poor quality and possibly backdoored. RSA Security later released a statement about the Dual_EC_DRBG kleptographic backdoor.

In March 2014, it was reported by Reuters that RSA had also adopted the extended random standard championed by NSA. Later cryptanalysis showed that extended random did not add any security, and it was rejected by the prominent standards group Internet Engineering Task Force. Extended random did, however, make NSA's backdoor for Dual_EC_DRBG tens of thousands of times faster to use for attackers with the key to the Dual_EC_DRBG backdoor (presumably only NSA), because the extended nonces in extended random made part of the internal state of Dual_EC_DRBG easier to guess.
Only RSA Security's Java version was hard to crack without extended random, since the caching of Dual_EC_DRBG output in, for example, RSA Security's C language version already made the internal state quick enough to determine. And indeed, RSA Security only implemented extended random in its Java implementation of Dual_EC_DRBG.

NSA Dual_EC_DRBG backdoor
From 2004 to 2013, RSA shipped security software (the BSAFE toolkit and Data Protection Manager) that included a default cryptographically secure pseudorandom number generator, Dual EC DRBG, that was later suspected to contain a secret National Security Agency kleptographic backdoor. The backdoor could have made data encrypted with these tools much easier to break for the NSA, which would have had the secret private key to the backdoor. Scientifically speaking, the backdoor employs kleptography, and is, essentially, an instance of the Diffie–Hellman kleptographic attack published in 1997 by Adam Young and Moti Yung.

RSA Security employees should have been aware, at least, that Dual_EC_DRBG might contain a backdoor. Three employees were members of the ANSI X9F1 Tool Standards and Guidelines Group, to which Dual_EC_DRBG had been submitted for consideration in the early 2000s. The possibility that the random number generator could contain a backdoor was "first raised in an ANSI X9 meeting", according to John Kelsey, a co-author of the NIST SP 800-90A standard that contains Dual_EC_DRBG. In January 2005, two employees of the cryptography company Certicom, who were also members of the X9F1 group, wrote a patent application that described a backdoor for Dual_EC_DRBG identical to the NSA one. The patent application also described three ways to neutralize the backdoor. Two of these, ensuring that the two arbitrary elliptic curve points P and Q used in Dual_EC_DRBG are independently chosen, and using a smaller output length, were added to the standard as an option, though NSA's backdoored version of P and Q and the large output length remained as the standard's default option. Kelsey said he knew of no implementers who actually generated their own non-backdoored P and Q, and there have been no reports of implementations using the smaller output length. Nevertheless, NIST included Dual_EC_DRBG in its 2006 NIST SP 800-90A standard with the default settings enabling the backdoor, largely at the behest of NSA officials, who had cited RSA Security's early use of the random number generator as an argument for its inclusion. Nor did the standard fix the unrelated (to the backdoor) problem that the CSPRNG was predictable, which Kristian Gjøsteen had pointed out earlier in 2006, and which led Gjøsteen to call Dual_EC_DRBG not cryptographically sound.

ANSI standard group members and Microsoft employees Dan Shumow and Niels Ferguson made a public presentation about the backdoor in 2007. Commenting on Shumow and Ferguson's presentation, prominent security researcher and cryptographer Bruce Schneier called the possible NSA backdoor "rather obvious", and wondered why NSA bothered pushing to have Dual_EC_DRBG included, when the general poor quality and possible backdoor would ensure that nobody would ever use it. Until the Snowden leak, there does not seem to have been a general awareness that RSA Security had made it the default in some of its products in 2004.
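The mechanics of such a kleptographic backdoor can be sketched in a toy model. The Python sketch below is a number-theoretic analogue, not the actual elliptic-curve construction: it stands in for the curve points P and Q with elements of a multiplicative group modulo a small prime, related by a trapdoor exponent d (here P = Q^d). The real generator also truncates its output, which this toy omits, and all names and constants are illustrative.

# Toy analogue of the Dual_EC_DRBG trapdoor, with modular exponentiation
# standing in for elliptic-curve point multiplication.
PRIME = 10007          # tiny prime; the group is the nonzero residues mod PRIME
q = 5                  # public "Q"
d = 1234               # trapdoor known only to whoever chose the constants
p = pow(q, d, PRIME)   # public "P" = Q^d: the hidden relationship

def drbg_step(state):
    # honest generator: next state comes from P, output comes from Q
    return pow(p, state, PRIME), pow(q, state, PRIME)

# An honest user runs the generator...
state = 4242
state, out1 = drbg_step(state)
_, out2 = drbg_step(state)

# ...while an attacker holding d recovers the *next* state from out1 alone,
# because out1^d = Q^(s*d) = P^s, which is exactly the state update.
recovered = pow(out1, d, PRIME)
assert recovered == state
_, predicted = drbg_step(recovered)
assert predicted == out2       # the attacker predicts all future output

In the real generator the attacker sees only a truncated coordinate and must brute-force the missing bits, which is why the extended-random nonces discussed above, by exposing more of the internal state, reportedly made the attack tens of thousands of times faster.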
In September 2013, the New York Times, drawing on the Snowden leaks, revealed that the NSA worked to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program. One of these vulnerabilities, the Times reported, was the Dual_EC_DRBG backdoor. With the renewed focus on Dual_EC_DRBG, it was noted that RSA Security's BSAFE used Dual_EC_DRBG by default, which had not previously been widely known. After the New York Times published its article, RSA Security recommended that users switch away from Dual_EC_DRBG, but denied that they had deliberately inserted a backdoor. RSA Security officials have largely declined to explain why they did not remove the dubious random number generator once the flaws became known, or why they did not implement the simple mitigation that NIST added to the standard to neutralize the suggested, and later verified, backdoor.

On 20 December 2013, Reuters' Joseph Menn reported that NSA secretly paid RSA Security $10 million in 2004 to set Dual_EC_DRBG as the default CSPRNG in BSAFE. The story quoted former RSA Security employees as saying that "no alarms were raised because the deal was handled by business leaders rather than pure technologists". Interviewed by CNET, Schneier called the $10 million deal a bribe. RSA officials responded that they have not "entered into any contract or engaged in any project with the intention of weakening RSA's products." Menn stood by his story, and media analysis noted that RSA's reply was a non-denial denial, which denied only that company officials knew about the backdoor when they agreed to the deal, an assertion Menn's story did not make.

In the wake of the reports, several industry experts cancelled their planned talks at RSA's 2014 RSA Conference. Among them was Mikko Hyppönen, a Finnish researcher with F-Secure, who cited RSA's denial of the alleged $10 million payment by the NSA as suspicious. Hyppönen announced his intention to give his talk, "Governments as Malware Authors", at a conference quickly set up in reaction to the reports: TrustyCon, to be held on the same day and one block away from the RSA Conference. At the 2014 RSA Conference, former RSA Security Executive Chairman Art Coviello defended RSA Security's choice to keep using Dual_EC_DRBG by saying "it became possible that concerns raised in 2007 might have merit" only after NIST acknowledged the problems in 2013.

Products
RSA is best known for its SecurID product, which provides two-factor authentication to hundreds of technologies using hardware tokens that rotate keys on timed intervals, software tokens, and one-time codes. In 2016, RSA re-branded the SecurID platform as RSA SecurID Access. This release added single sign-on capabilities and cloud authentication for resources using SAML 2.0 and other types of federation. The RSA SecurID Suite also contains the RSA Identity Governance and Lifecycle software (formerly Aveksa). The software provides visibility of who has access to what within an organization and manages that access with various capabilities such as access review, request and provisioning.
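For illustration, the rotating one-time-code idea behind such tokens can be sketched with the public HOTP/TOTP construction (RFC 4226/6238). This is not RSA's proprietary SecurID algorithm, and the secret, time step and digit count below are illustrative.

import hashlib, hmac, struct, time

def totp(secret, when=None, step=60, digits=6):
    # token and server derive the same code from a shared secret
    # and the current time window
    counter = int((time.time() if when is None else when) // step)
    mac = hmac.new(secret, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    value = struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(totp(b'shared-secret'))     # changes every 60-second window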
RSA enVision is a security information and event management (SIEM) platform, with a centralised log-management service that claims to "enable organisations to simplify compliance process as well as optimise security-incident management as they occur." On April 4, 2011, EMC purchased NetWitness and added it to the RSA group of products. NetWitness was a packet capture tool aimed at gaining full network visibility to detect security incidents. This tool was re-branded RSA Security Analytics and was a combination of RSA enVision and NetWitness as a SIEM tool that did both log and packet capture.

The RSA Archer GRC platform is software that supports business-level management of governance, risk management, and compliance (GRC). The product was originally developed by Archer Technologies, which EMC acquired in 2010.

See also
Hardware token
RSA Factoring Challenge
RSA Secret-Key Challenge
BSAFE
RSA SecurID
Software token

References

Cryptography organizations
American companies established in 1982
Software companies based in Massachusetts
Software companies established in 1982
Former certificate authorities
Computer security companies
Companies based in Bedford, Massachusetts
1982 establishments in Massachusetts
2020 mergers and acquisitions
Software companies of the United States
Private equity portfolio companies
302188
https://en.wikipedia.org/wiki/Adjoint%20representation
Adjoint representation
In mathematics, the adjoint representation (or adjoint action) of a Lie group G is a way of representing the elements of the group as linear transformations of the group's Lie algebra, considered as a vector space. For example, if G is $\mathrm{GL}(n, \mathbb{R})$, the Lie group of real n-by-n invertible matrices, then the adjoint representation is the group homomorphism that sends an invertible n-by-n matrix $g$ to an endomorphism of the vector space of all linear transformations of $\mathbb{R}^n$ defined by $x \mapsto g x g^{-1}$. For any Lie group, this natural representation is obtained by linearizing (i.e. taking the differential of) the action of G on itself by conjugation. The adjoint representation can be defined for linear algebraic groups over arbitrary fields.

Definition
Let G be a Lie group, and let $\Psi : G \to \operatorname{Aut}(G)$ be the mapping $g \mapsto \Psi_g$, with $\operatorname{Aut}(G)$ the automorphism group of G and $\Psi_g : G \to G$ given by the inner automorphism (conjugation) $\Psi_g(h) = g h g^{-1}$. This $\Psi$ is a Lie group homomorphism. For each g in G, define $\operatorname{Ad}_g$ to be the derivative of $\Psi_g$ at the origin: $\operatorname{Ad}_g = (d\Psi_g)_e : T_eG \to T_eG$, where $d$ is the differential and $\mathfrak{g} = T_eG$ is the tangent space at the origin ($e$ being the identity element of the group G). Since $\Psi_g$ is a Lie group automorphism, $\operatorname{Ad}_g$ is a Lie algebra automorphism; i.e., an invertible linear transformation of $\mathfrak{g}$ to itself that preserves the Lie bracket. Moreover, since $\Psi : g \mapsto \Psi_g$ is a group homomorphism, $\operatorname{Ad} : g \mapsto \operatorname{Ad}_g$ is a group homomorphism too. Hence, the map $\operatorname{Ad} : G \to \operatorname{Aut}(\mathfrak{g})$ is a group representation, called the adjoint representation of G.

If G is an immersed Lie subgroup of the general linear group (called an immersely linear Lie group), then the Lie algebra $\mathfrak{g}$ consists of matrices and the exponential map is the matrix exponential $\exp(X) = e^X$ for matrices X with small operator norms. Thus, for g in G and small X in $\mathfrak{g}$, taking the derivative of $t \mapsto g \exp(tX) g^{-1}$ at t = 0, one gets: $\operatorname{Ad}_g(X) = g X g^{-1}$, where on the right we have the products of matrices. If G is a closed subgroup of the general linear group (that is, G is a matrix Lie group), then this formula is valid for all g in G and all X in $\mathfrak{g}$.

Succinctly, an adjoint representation is an isotropy representation associated to the conjugation action of G around the identity element of G.

Derivative of Ad
One may always pass from a representation of a Lie group G to a representation of its Lie algebra by taking the derivative at the identity. Taking the derivative of the adjoint map $\operatorname{Ad} : G \to \operatorname{Aut}(\mathfrak{g})$ at the identity element gives the adjoint representation of the Lie algebra $\mathfrak{g}$ of G: $\operatorname{ad} = (d\operatorname{Ad})_e : \mathfrak{g} \to \operatorname{Der}(\mathfrak{g})$, where $\operatorname{Der}(\mathfrak{g})$ is the Lie algebra of $\operatorname{Aut}(\mathfrak{g})$, which may be identified with the derivation algebra of $\mathfrak{g}$. One can show that $\operatorname{ad}_x(y) = [x, y]$ for all $x, y \in \mathfrak{g}$, where the right hand side is given (induced) by the Lie bracket of vector fields. Indeed, recall that, viewing $\mathfrak{g}$ as the Lie algebra of left-invariant vector fields on G, the bracket on $\mathfrak{g}$ is given as: $[X, Y]_e = \left.\tfrac{d}{dt}\right|_{t=0} (d\varphi_{-t})(Y_{\varphi_t(e)})$ for left-invariant vector fields X, Y, where $\varphi_t$ denotes the flow generated by X. As it turns out, $\varphi_t = R_{\exp(tX)}$, roughly because both sides satisfy the same ODE defining the flow; here $R_h$ denotes right multiplication by $h \in G$. On the other hand, since $\Psi_g = R_{g^{-1}} \circ L_g$, by the chain rule, $d\Psi_g(Y) = dR_{g^{-1}}(dL_g(Y)) = dR_{g^{-1}}(Y)$, as Y is left-invariant. Hence, $[X, Y]_e = \left.\tfrac{d}{dt}\right|_{t=0} \operatorname{Ad}_{\exp(tX)}(Y_e)$, which is what was needed to show. Thus, $\operatorname{ad}$ coincides with the map defined in the section on the adjoint representation of a Lie algebra below.

Ad and ad are related through the exponential map: specifically, $\operatorname{Ad}_{\exp(x)} = \exp(\operatorname{ad}_x)$ for all x in the Lie algebra. It is a consequence of the general result relating Lie group and Lie algebra homomorphisms via the exponential map.

If G is an immersely linear Lie group, then the above computation simplifies: indeed, as noted earlier, $\operatorname{Ad}_g(Y) = g Y g^{-1}$, and thus with $g = e^{tX}$, $\operatorname{Ad}_{e^{tX}}(Y) = e^{tX} Y e^{-tX}$. Taking the derivative of this at $t = 0$, we have: $\operatorname{ad}_X Y = XY - YX$.
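For matrix groups, the identity $\operatorname{Ad}_{\exp(X)} = \exp(\operatorname{ad}_X)$ is easy to verify numerically. A minimal Python sketch follows (using NumPy and SciPy; the random matrices and the row-major vectorization convention are illustrative choices):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

# Ad_g(Y) = g Y g^{-1} with g = exp(X)
g = expm(X)
ad_g_of_Y = g @ Y @ np.linalg.inv(g)

# ad_X as a linear operator on row-major vectorized matrices:
# ad_X(Y) = XY - YX  corresponds to  (X kron I - I kron X^T) vec(Y)
I = np.eye(n)
adX = np.kron(X, I) - np.kron(I, X.T)
exp_adX_of_Y = expm(adX) @ Y.reshape(-1)

print(np.allclose(ad_g_of_Y.reshape(-1), exp_adX_of_Y))   # True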
The general case can also be deduced from the linear case: indeed, let $G'$ be an immersely linear Lie group having the same Lie algebra as that of G. Then the derivative of Ad at the identity element for G and that for $G'$ coincide; hence, without loss of generality, G can be assumed to be $G'$.

The upper-case/lower-case notation is used extensively in the literature. Thus, for example, a vector $x$ in the algebra $\mathfrak{g}$ generates a vector field $X$ in the group $G$. Similarly, the adjoint map $\operatorname{ad}_x(y) = [x, y]$ of vectors in $\mathfrak{g}$ is homomorphic to the Lie derivative $L_X Y = [X, Y]$ of vector fields on the group $G$ considered as a manifold. Further see the derivative of the exponential map.

Adjoint representation of a Lie algebra
Let $\mathfrak{g}$ be a Lie algebra over some field. Given an element $x$ of a Lie algebra $\mathfrak{g}$, one defines the adjoint action of $x$ on $\mathfrak{g}$ as the map $\operatorname{ad}_x : \mathfrak{g} \to \mathfrak{g}$ with $\operatorname{ad}_x(y) = [x, y]$ for all $y$ in $\mathfrak{g}$. It is called the adjoint endomorphism or adjoint action. ($\operatorname{ad}_x$ is also often denoted as $\operatorname{ad}(x)$.) Since a bracket is bilinear, this determines the linear mapping $\operatorname{ad} : \mathfrak{g} \to \operatorname{End}(\mathfrak{g})$ given by $x \mapsto \operatorname{ad}_x$. Within $\operatorname{End}(\mathfrak{g})$, the bracket is, by definition, given by the commutator of the two operators: $[T, S] = T \circ S - S \circ T$, where $\circ$ denotes composition of linear maps. Using the above definition of the bracket, the Jacobi identity takes the form $\left([\operatorname{ad}_x, \operatorname{ad}_y]\right)(z) = \left(\operatorname{ad}_{[x,y]}\right)(z)$, where $x$, $y$, and $z$ are arbitrary elements of $\mathfrak{g}$. This last identity says that ad is a Lie algebra homomorphism; i.e., a linear mapping that takes brackets to brackets. Hence, ad is a representation of a Lie algebra and is called the adjoint representation of the algebra $\mathfrak{g}$.

If $\mathfrak{g}$ is finite-dimensional, then $\operatorname{End}(\mathfrak{g})$ is isomorphic to $\mathfrak{gl}(\mathfrak{g})$, the Lie algebra of the general linear group of the vector space $\mathfrak{g}$, and if a basis for it is chosen, the composition corresponds to matrix multiplication.

In a more module-theoretic language, the construction says that $\mathfrak{g}$ is a module over itself.

The kernel of ad is the center of $\mathfrak{g}$ (that is just rephrasing the definition). On the other hand, for each element $z$ in $\mathfrak{g}$, the linear mapping $\operatorname{ad}_z$ obeys the Leibniz law: $\operatorname{ad}_z([x, y]) = [\operatorname{ad}_z(x), y] + [x, \operatorname{ad}_z(y)]$ for all $x$ and $y$ in the algebra (a restatement of the Jacobi identity). That is to say, $\operatorname{ad}_z$ is a derivation, and the image of $\mathfrak{g}$ under ad is a subalgebra of $\operatorname{Der}(\mathfrak{g})$, the space of all derivations of $\mathfrak{g}$.

When $\mathfrak{g}$ is the Lie algebra of a Lie group G, ad is the differential of Ad at the identity element of G (see the Derivative of Ad section above).

There is the following formula similar to the Leibniz formula: for scalars $\alpha, \beta$ and Lie algebra elements $x, y, z$:
$(\operatorname{ad}_x - \alpha - \beta)^n [y, z] = \sum_{i=0}^{n} \binom{n}{i} \left[ (\operatorname{ad}_x - \alpha)^i y,\ (\operatorname{ad}_x - \beta)^{n-i} z \right].$

Structure constants
The explicit matrix elements of the adjoint representation are given by the structure constants of the algebra. That is, let $\{e^i\}$ be a set of basis vectors for the algebra, with $[e^i, e^j] = \sum_k {c^{ij}}_k e^k$. Then the matrix elements for $\operatorname{ad}_{e^i}$ are given by ${\left[\operatorname{ad}_{e^i}\right]_k}^j = {c^{ij}}_k$. Thus, for example, the adjoint representation of su(2) is the defining representation of so(3).

Examples
If G is abelian of dimension n, the adjoint representation of G is the trivial n-dimensional representation.
If G is a matrix Lie group (i.e. a closed subgroup of the general linear group), then its Lie algebra is an algebra of n×n matrices with the commutator for a Lie bracket (i.e. a subalgebra of $\mathfrak{gl}_n$). In this case, the adjoint map is given by $\operatorname{Ad}_g(x) = g x g^{-1}$.
If G is SL(2, R) (real 2×2 matrices with determinant 1), the Lie algebra of G consists of real 2×2 matrices with trace 0. The representation is equivalent to that given by the action of G by linear substitution on the space of binary (i.e., two-variable) quadratic forms.
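As a worked illustration of the structure-constant formula above (with my choice of the standard basis), take $H, E, F$ in $\mathfrak{sl}(2, \mathbb{R})$ with brackets $[H, E] = 2E$, $[H, F] = -2F$, $[E, F] = H$. Reading off the coefficients gives, in the ordered basis $(H, E, F)$:

$\operatorname{ad}_H = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{pmatrix}, \qquad \operatorname{ad}_E = \begin{pmatrix} 0 & 0 & 1 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad \operatorname{ad}_F = \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 2 & 0 & 0 \end{pmatrix}.$

A direct matrix computation confirms $[\operatorname{ad}_E, \operatorname{ad}_F] = \operatorname{ad}_H = \operatorname{ad}_{[E,F]}$, an instance of the Jacobi identity in operator form.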
Properties
The maps mentioned in the definition relate as follows: $\Psi : G \to \operatorname{Aut}(G)$ is a Lie group homomorphism and each $\Psi_g : G \to G$ is a Lie group automorphism; $\operatorname{Ad} : G \to \operatorname{Aut}(\mathfrak{g})$ is a Lie group homomorphism and each $\operatorname{Ad}_g : \mathfrak{g} \to \mathfrak{g}$ is a Lie algebra automorphism; $\operatorname{ad} : \mathfrak{g} \to \operatorname{Der}(\mathfrak{g})$ is a Lie algebra homomorphism and each $\operatorname{ad}_x : \mathfrak{g} \to \mathfrak{g}$ is a derivation.

The image of G under the adjoint representation is denoted by $\operatorname{Ad}(G)$. If G is connected, the kernel of the adjoint representation coincides with the kernel of $\Psi$, which is just the center of G. Therefore, the adjoint representation of a connected Lie group G is faithful if and only if G is centerless. More generally, if G is not connected, then the kernel of the adjoint map is the centralizer of the identity component $G_0$ of G. By the first isomorphism theorem we have $\operatorname{Ad}(G) \cong G / C_G(G_0)$.

Given a finite-dimensional real Lie algebra $\mathfrak{g}$, by Lie's third theorem, there is a connected Lie group $\operatorname{Int}(\mathfrak{g})$ whose Lie algebra is the image of the adjoint representation of $\mathfrak{g}$ (i.e., $\operatorname{Lie}(\operatorname{Int}(\mathfrak{g})) = \operatorname{ad}(\mathfrak{g})$). It is called the adjoint group of $\mathfrak{g}$. Now, if $\mathfrak{g}$ is the Lie algebra of a connected Lie group G, then $\operatorname{Int}(\mathfrak{g})$ is the image of the adjoint representation of G: $\operatorname{Int}(\mathfrak{g}) = \operatorname{Ad}(G)$.

Roots of a semisimple Lie group
If G is semisimple, the non-zero weights of the adjoint representation form a root system. (In general, one needs to pass to the complexification of the Lie algebra before proceeding.) To see how this works, consider the case G = SL(n, R). We can take the group of diagonal matrices diag(t_1, ..., t_n) as our maximal torus T. Conjugation by an element of T sends a matrix $(a_{ij})$ to $(t_i t_j^{-1} a_{ij})$. Thus, T acts trivially on the diagonal part of the Lie algebra of G and acts with eigenvalue $t_i t_j^{-1}$ on the off-diagonal entry in position (i, j). The roots of G are the weights $\operatorname{diag}(t_1, \ldots, t_n) \mapsto t_i t_j^{-1}$. This accounts for the standard description of the root system of G = SL(n, R) as the set of vectors of the form $e_i - e_j$.

Example SL(2, R)
When computing the root system for one of the simplest cases of Lie groups, the group SL(2, R) of two-dimensional matrices with determinant 1 consists of the set of matrices of the form $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with a, b, c, d real and ad − bc = 1. A maximal compact connected abelian Lie subgroup, or maximal torus T, is given by the subset of all matrices of the form $\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$ with $0 \le \theta < 2\pi$. The Lie algebra of the maximal torus is the Cartan subalgebra consisting of the matrices $\theta \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. If we conjugate an element of SL(2, R) by an element of the maximal torus, the (complexified) matrices $\begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix}$ and $\begin{pmatrix} 1 & -i \\ -i & -1 \end{pmatrix}$ are 'eigenvectors' of the conjugation operation, with eigenvalues $e^{2i\theta}$ and $e^{-2i\theta}$ respectively. The function Λ which gives $e^{2i\theta}$ is a multiplicative character, or homomorphism from the group's torus to the underlying field. The function λ giving θ is a weight of the Lie algebra, with weight space given by the span of the matrices above. It is satisfying to show the multiplicativity of the character and the linearity of the weight. It can further be proved that the differential of Λ can be used to create a weight. It is also educational to consider the case of SL(3, R).

Variants and analogues
The adjoint representation can also be defined for algebraic groups over any field. The co-adjoint representation is the contragredient representation of the adjoint representation. Alexandre Kirillov observed that the orbit of any vector in a co-adjoint representation is a symplectic manifold. According to the philosophy in representation theory known as the orbit method (see also the Kirillov character formula), the irreducible representations of a Lie group G should be indexed in some way by its co-adjoint orbits. This relationship is closest in the case of nilpotent Lie groups.

Notes

References

Representation theory of Lie groups
Lie groups
304286
https://en.wikipedia.org/wiki/Info-ZIP
Info-ZIP
Info-ZIP is a set of open-source software for handling ZIP archives. It has been in circulation since 1989. It consists of four separately installable packages: the Zip and UnZip command-line utilities, and WiZ and MacZip, which are graphical user interfaces for the archiving programs on Microsoft Windows and the classic Mac OS, respectively. Info-ZIP's Zip and UnZip have been ported to dozens of computing platforms. The UnZip web page describes UnZip as "The Third Most Portable Program in the World", surpassed by Hello World, C-Kermit, and possibly the Linux kernel. The "zip" and "unzip" programs included with most Linux and Unix distributions are Info-ZIP's Zip and UnZip. In addition to the Info-ZIP releases themselves, parts of Info-ZIP, including zlib, have been used in numerous other file archivers and other programs. Many Info-ZIP programmers have also been involved in other projects closely related to the DEFLATE compression algorithm, such as the PNG image format and the zlib software library.

Features
The UnZip package also includes three additional utilities:
fUnZip extracts a file in a ZIP or gzip file directly to output, from archives or other piped input.
UnZipSFX is software to make a ZIP file into an executable self-extracting archive.
ZipInfo outputs, in a variety of formats, information about ZIP files and their contents.
The Zip package includes three additional utilities:
ZipCloak adds or removes password encryption from files in a ZIP archive.
ZipNote allows the modification of comment fields in ZIP archives.
ZipSplit splits a ZIP archive into sections for separate disks or downloads.

History
UnZip
UnZip 1.0 (March 1989) was released by Samuel M. Smith. It was written in Pascal and C; Pascal was abandoned soon after.
UnZip 2.0 (September 1989) was released by Samuel M. Smith. It included support for the "unimploding" compression method (method 6) introduced by PKZIP 1.01. George Sipe created the Unix version.
UnZip 2.0a (December 1989) was released by Carl Mascott and John Cowan.
In spring 1990, Info-ZIP was formed as a mailing list on SIMTEL20, and UnZip 3.0 (May 1990) became the first public release by the Info-ZIP group.
UnZip 4.0 (December 1990) added support for the "central directory" within a .ZIP archive.
UnZip 5.0 (August 1992) introduced support for the DEFLATE compression method (method 8), used in PKZIP 1.93a. Method 8 has become the de facto base standard for ZIP archives.
In 1994 and 1995 Info-ZIP turned a corner, and effectively became the de facto ZIP program on non-MS-DOS systems. A huge number of ports were released in those years, covering numerous minicomputers, mainframes and practically every microcomputer ever developed.
UnZip 5.41 (April 2000) was relicensed under the Info-ZIP License.
UnZip 5.50 (February 2002) added support for Deflate64 (method 9) decompression.
UnZip 6.0 added support for "Zip64" .ZIP archives and bzip2 (method 12) decompression. Support for bzip2-style compression was also in Zip from the 3.0f beta.

Zip
Zip 1.9 (August 1992) introduced support for the DEFLATE compression method (method 8). Method 8 has become the de facto base standard for ZIP archives.
Zip 2.0 (September 1993) brought many portability improvements.
Zip 2.1 (May 1996) added new "UNIX" time info to preserve file times across timezones and OSes.
Zip 2.3 (December 1999) was the first Info-ZIP archiver tool under the new BSD-like Info-ZIP License.
Zip 3.0 (2008-07-07) supports "Zip64" .ZIP archives, more than 65536 files per archive, multi-part archives, bzip2 compression, Unicode (UTF-8) filenames and (partial) comments, and Unix 32-bit UIDs/GIDs.
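Several of the ZIP-format features mentioned in this history (Zip64, archive comments of the kind ZipNote edits, listings of the kind ZipInfo prints) can be exercised from any ZIP library. The following Python sketch uses the standard-library zipfile module as a stand-in for the Info-ZIP command-line tools; the file names are illustrative.

import zipfile

# create an archive; allowZip64 enables the "Zip64" extensions noted above
with zipfile.ZipFile('demo.zip', 'w', compression=zipfile.ZIP_DEFLATED,
                     allowZip64=True) as zf:
    zf.writestr('hello.txt', 'Hello, Info-ZIP!\n')
    zf.comment = b'archive comment, of the kind ZipNote edits'

# list contents, roughly what ZipInfo or `unzip -l` would show
with zipfile.ZipFile('demo.zip') as zf:
    for info in zf.infolist():
        print(info.filename, info.file_size, info.compress_size, info.date_time)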
WiZ
WiZ 4.0 (November 1997) was released by Info-ZIP.
WiZ 5.01 (April 2000) was relicensed under the Info-ZIP License.

MacZip
MacZip 1.05 (July 2000) was released under the Info-ZIP License.
MacZip 1.06 was released in February 2001. It was written by Dirk Hasse.

Forks and patches
As a slowly updated open software package, the Info-ZIP tools have accumulated many patches written by various Linux distributions. In addition, from 2015 to 2019, 14 unzip vulnerabilities were published on the CVE list without version or website updates from Info-ZIP. (Three CVEs from 2014 in oCERT-2014-011 are left out of most statistics; Info-ZIP did provide patches on their now-defunct forum.)
Mark Adler has a set of patches for UnZip 6.0 that detect zip bombs of the overlapping type. This issue has the CVE ID CVE-2019-13232.
The Debian project provides various patches to correct typographical errors and security issues, including the 17 unzip CVEs. It also hardens against format-string injection and other obvious security issues.
To deal with pre-UTF-8 ZIP files created under other code pages, Giovanni Scafora created a patch that hooks unzip up with iconv for encoding conversion. A version of the patch combined with CVE mitigations is provided as a User Package in Arch Linux. The Gentoo project improves upon the hard-coded locales with an external library.
The Fedora project (an upstream of Red Hat Enterprise Linux) applies Adler's patch, most of the Debian patches (or similar ones), as well as extra security patches, such as a stack non-execution patch, to their unzip. The zip patches are similar to Debian's.

Official betas
Some official improvements to Zip and UnZip have remained in beta stage, as Zip 3.1c and UnZip 6.10b from 2015. Among other things, both added support for PPMd8 and LZMA compression in files, support for AES encryption, and iconv-based Unicode improvements (based on unzip-iconv). A newer release candidate, Zip 3.1d, appeared on the official FTP site in 2015, but the SourceForge page was not updated. Partially due to the added compressors, the zipped source size increased from 1.4 MB (3.1c) to 2.9 MB (3.1d).
The antinode.info FTP site seems to host an even more cutting-edge source of Info-ZIP utilities. Individual revisions are organized into folders containing the files differing from the previous revision, and zip archives of the sources are occasionally released. The site provides UnZip 6.10c (rev. 25, 21 Dec 2018) and an unarchived development version of Zip 3.1e from August 2019. The owner of the site, Steven Schweda, a member of the original Info-ZIP team, maintains these versions.

Replacements
FreeBSD has opted to replace the Info-ZIP utilities. It produces a command-line compatible version of unzip based on libarchive, which also supports zipx and AES.

See also
Comparison of file archivers
Comparison of archive formats
List of archive formats

References

External links
Official (legacy) FTP site
Sourceforge
Patch submissions

1989 software
Cross-platform free software
File archivers
Free data compression software
Unix archivers and compression-related utilities
Software using the BSD license