Columns: Unnamed: 0 (int64, 0 to 676k); text (string, lengths 4 to 59.1k); title (string, lengths 1 to 250)
900
ENIAC (; Electronic Numerical Integrator and Computer) was the first programmable, electronic, general-purpose digital computer, completed in 1945. There were other computers that had combinations of these features, but the ENIAC had all of them in one computer. It was Turing-complete and able to solve "a large class of numerical problems" through reprogramming
ENIAC
901
The IBM 305 RAMAC was the first commercial computer that used a moving-head hard disk drive (magnetic disk storage) for secondary storage. The system was publicly announced on September 14, 1956, with test units already installed at the U. S
IBM 305 RAMAC
902
The IBM 650 Magnetic Drum Data-Processing Machine is an early digital computer produced by IBM in the mid-1950s. It was the first mass produced computer in the world. Almost 2,000 systems were produced, the last in 1962, and it was the first computer to make a meaningful profit
IBM 650
903
The IBM 1400 series were second-generation (transistor) mid-range business decimal computers that IBM marketed in the early 1960s. The computers were offered to replace tabulating machines like the IBM 407. The 1400-series machines stored information in magnetic cores as variable-length character strings separated on the left by a special bit, called a "wordmark," and on the right by a "record mark
IBM 1400 series
904
The IBM 1401 is a variable-wordlength decimal computer that was announced by IBM on October 5, 1959. The first member of the highly successful IBM 1400 series, it was aimed at replacing unit record equipment for processing data stored on punched cards and at providing peripheral services for larger computers. The 1401 is considered to be the Ford Model-T of the computer industry, because it was mass-produced and because of its sales volume
IBM 1401
905
The IBM 1410, a member of the IBM 1400 series, was a decimal computer with variable word length that was announced by IBM on September 12, 1960 and marketed as a midrange business computer. It was withdrawn on March 30, 1970. Overview: The 1410 was similar in design to the very popular IBM 1401, but it had one major difference
IBM 1410
906
The IBM 1440 computer was announced by IBM October 11, 1962. This member of the IBM 1400 series was described many years later as "essentially a lower-cost version of the 1401," and programs for the 1440 could easily be adapted to run on the IBM 1401. Despite what IBM described as "special features
IBM 1440
907
The IBM 1620 was announced by IBM on October 21, 1959, and marketed as an inexpensive scientific computer. After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment)
IBM 1620
908
IBM 7070 is a decimal-architecture intermediate data-processing system that was introduced by IBM in 1958. It was part of the IBM 700/7000 series, and was based on discrete transistors rather than the vacuum tubes of the 1950s. It was the company's first transistorized stored-program computer
IBM 7070
911
The IBM Naval Ordnance Research Calculator (NORC) was a one-of-a-kind first-generation (vacuum tube) computer built by IBM for the United States Navy's Bureau of Ordnance. It went into service in December 1954 and was likely the most powerful computer at the time. The Naval Ordnance Research Calculator (NORC), was built at the Watson Scientific Computing Laboratory under the direction of Wallace Eckert
IBM Naval Ordnance Research Calculator
912
The ICT 1301 and its smaller derivative ICT 1300 were early business computers from International Computers and Tabulators. Typical of mid-sized machines of the era, they used core memory, drum storage and punched cards, but they were unusual in that they were based on decimal logic instead of binary. Description: The 1301 was the main machine in the line
ICT 1301
913
The NCR 315 Data Processing System, released in January 1962 by NCR, is a second-generation computer. All printed circuit boards use resistor–transistor logic (RTL) to create the various logic elements. It uses a 12-bit "slab" memory structure built from magnetic-core memory
NCR 315
914
The Singer System Ten was a small-business computer manufactured by the Singer Corporation. The System Ten, introduced in 1970, featured an early form of logical partitioning. The System Ten was a character-oriented computer, using 6-bit BCD characters and decimal arithmetic
Singer System Ten
915
The UNIVAC LARC, short for the Livermore Advanced Research Computer, is a mainframe computer designed to a requirement published by Edward Teller in order to run hydrodynamic simulations for nuclear weapon design. It was one of the earliest supercomputers. LARC supported multiprocessing with two CPUs (called Computers) and an input/output (I/O) Processor (called the Processor)
UNIVAC LARC
916
The UNIVAC Solid State was a magnetic drum-based solid-state computer announced by Sperry Rand in December 1958 as a response to the IBM 650. It was one of the first computers to be (nearly) entirely solid-state, using 700 transistors, and 3000 magnetic amplifiers (FERRACTOR) for primary logic, and 20 vacuum tubes largely for power control. It came in two versions, the Solid State 80 (IBM-style 80-column cards) and the Solid State 90 (Remington-Rand 90-column cards)
UNIVAC Solid State
917
The history of computing hardware covers the developments from early simple devices to aid calculation to modern day computers. The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. Later, computers represented numbers in a continuous form (e
History of computing hardware
918
A digital differential analyzer (DDA), also sometimes called a digital integrating computer, is a digital implementation of a differential analyzer. The integrators in a DDA are implemented as accumulators, with the numeric result converted back to a pulse rate by the overflow of the accumulator. The primary advantages of a DDA over the conventional analog differential analyzer are greater precision of the results and the lack of drift/noise/slip/lash in the calculations
Digital differential analyzer
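To make the accumulator-overflow mechanism described above concrete, here is a minimal Python sketch (not from the source) of a single DDA integrator stage: the integrand is added into a fixed-width accumulator every clock, and each overflow emits one output pulse, so the pulse rate approximates the integral. Register width and values are arbitrary.

```python
# Hypothetical sketch of one DDA integrator stage: the integrand value is
# accumulated every clock tick, and each accumulator overflow emits one
# output pulse, so the pulse rate approximates the integral of the input.

class DDAIntegrator:
    def __init__(self, width_bits=16):
        self.modulus = 1 << width_bits   # accumulator wraps at this value
        self.acc = 0                     # accumulator register
        self.y = 0                       # current integrand value

    def step(self, dy=0):
        """Advance one clock: update integrand by dy, return 1 on overflow."""
        self.y += dy
        self.acc += self.y
        if self.acc >= self.modulus:     # overflow -> one output pulse
            self.acc -= self.modulus
            return 1
        return 0

# Integrate a constant input: pulse rate is proportional to y.
integ = DDAIntegrator(width_bits=8)
integ.y = 64
pulses = sum(integ.step() for _ in range(1000))
print(pulses)  # about 1000 * 64 / 256 = 250 pulses
```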
919
A sense switch, or program switch, is a switch on the front panel of a computer whose state can be tested by conditional branch instructions in software. Most early computers had several sense switches. They were typically used by the operator to set program options
Sense switch
920
A serial computer is a computer typified by bit-serial architecture, i.e., internally operating on one bit or digit for each clock cycle
Serial computer
921
One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators
System console
922
The tyranny of numbers was a problem faced in the 1960s by computer engineers. Engineers were unable to increase the performance of their designs due to the huge number of components involved. In theory, every component needed to be wired to every other component (or at least to many other components), and these connections were typically strung and soldered by hand
Tyranny of numbers
923
An accounting machine, or bookkeeping machine or recording-adder, was generally a calculator and printer combination tailored for a specific commercial activity such as billing, payroll, or ledger. Accounting machines were widespread from the early 1900s to 1980s, but were rendered obsolete by the availability of low-cost computers such as the IBM PC. This type of machine is generally distinct from unit record equipment (some unit record machines were also called accounting machines)
Accounting machine
924
The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. Limited by the technology of the day and by problems in its execution, the device has remained somewhat obscure. The ABC's priority is debated among historians of computer technology, because it was neither programmable, nor Turing-complete
Atanasoff–Berry computer
925
The project to build the M-1 or Automatic Digital Computer (ADC) M-1 (Russian: автоматическая цифровая вычислительная машина (АЦВМ) М-1, romanized: avtomaticheskaya tsifrovaya vychislitel'naya mashina (ATSVM) M-1) was completed at the end of 1951, at the Energetics Institute of the USSR Academy of Sciences. Overview: In charge of the Laboratory of Electrosystems was Isaak Semenovich Brook (or Bruk), who obtained the first domestic patent with the title "Digital Computer with Common Bus" in 1948. Work to build the computer based on Brook's design began in 1950
Automatic Digital Computer M-1
926
The Autonetics RECOMP II was a computer first introduced in 1958. It was made by the Autonetics division of North American Aviation. It was attached to a desk that housed the input/output devices
Autonetics Recomp II
927
Bertie the Brain is one of the first games developed in the early history of video games. It was built in Toronto by Josef Kates for the 1950 Canadian National Exhibition. The four meter (13 foot) tall computer allowed exhibition attendees to play a game of tic-tac-toe against an artificial intelligence
Bertie the Brain
928
The RCA BIZMAC was a vacuum tube computer manufactured by RCA from 1956 to 1962. Although RCA was noted for their pioneering work in transistors, RCA decided to build a vacuum tube computer instead of a transistorized computer. It was the largest vacuum tube computer of its time in 1956, occupying 20,000 sq ft (1,900 m2) of floor space with up to 30,000 tubes, 70,000 diodes, and 35,000 magnetic cores
BIZMAC
929
ICCE I and ICCE II were digital computers built at the Imperial College Department of Mathematics in the post-war period. ICCE I: The first Imperial College Computing Engine, ICCE I, was constructed by Sidney Michaelson, Tony Brooker and Keith Tocher in the Department of Mathematics at Imperial College London in the late 1940s and early 1950s. It was a relay-based machine which gave relatively slow but highly reliable service
Imperial College Computing Engine
930
The Computron was an electron tube designed to perform the parallel addition and multiplication of digital numbers. It was conceived by Richard L. Snyder, Jr
Computron tube
931
DATAR, short for Digital Automated Tracking and Resolving, was a pioneering computerized battlefield information system. DATAR combined the data from all of the sensors in a naval task force into a single "overall view" that was then transmitted back to all of the ships and displayed on plan-position indicators similar to radar displays. Commanders could then see information from everywhere, not just their own ship's sensors
DATAR
932
Decimal computers are computers which can represent numbers and addresses in decimal as well as providing instructions to operate on those numbers and addresses directly in decimal, without conversion to a pure binary representation. Some also had a variable wordlength, which enabled operations on numbers with a large number of digits. Early computers that were exclusively decimal include the ENIAC, IBM NORC, IBM 650, IBM 1620, IBM 7070, UNIVAC Solid State 80
Decimal computer
933
The differential analyser is a mechanical analogue computer designed to solve differential equations by integration, using wheel-and-disc mechanisms to perform the integration. It was one of the first advanced computing devices to be used operationally. The original machines could not add, but then it was noticed that if the two wheels of a rear differential are turned, the drive shaft will compute the average of the left and right wheels
Differential analyser
934
ERMA (Electronic Recording Machine, Accounting) was a computer technology that automated bank bookkeeping and check processing. Developed at the nonprofit research institution SRI International under contract from Bank of America, the project began in 1950 and was publicly revealed in September 1955. Payments experts contend that ERMA "established the foundation for computerized banking, magnetic ink character recognition (MICR), and credit-card processing"
Electronic Recording Machine, Accounting
935
Eucrates was a hybrid teaching and learning analog computer created by Gordon Pask in 1956, in response to a request by the Solartron Electronic Group for a machine to exhibit at the Physical Society Exhibition in London. Its operation was based on simulating the functioning of neurons. The Solartron EUCRATES II was created by C
Eucrates
936
The Eureka, also known as the Latin Verse Machine, is a mid-19th century machine for generating Latin verses, created and exhibited by the Quaker inventor John Clark of Bridgwater. Clark, a cousin of Cyrus Clark, was born at Greinton in Somerset in 1785 and moved to Bridgwater in 1809. There he was first a grocer and later a printer
The Eureka
937
The Monte Carlo trolley, or FERMIAC, was an analog computer invented by physicist Enrico Fermi to aid in his studies of neutron transport. Operation: The FERMIAC employed the Monte Carlo method to model neutron transport in various types of nuclear systems. Given an initial distribution of neutrons, the goal of the process is to develop numerous "neutron genealogies", or models of the behavior of individual neutrons, including each collision, scattering, and fission
FERMIAC
938
FIELDATA (also written as Fieldata) was a pioneering computer project run by the US Army Signal Corps in the late 1950s that intended to create a single standard (as defined in MIL-STD-188A/B/C) for collecting and distributing battlefield information. In this respect it could be thought of as a generalization of the US Air Force's SAGE system that was being created at about the same time. Unlike SAGE, FIELDATA was intended to be much larger in scope, allowing information to be gathered from any number of sources and forms
Fieldata
939
ILLIAC (Illinois Automatic Computer) was a series of supercomputers built at a variety of locations, some at the University of Illinois at Urbana–Champaign. In all, five computers were built in this series between 1951 and 1974. Some more modern projects also use the name
ILLIAC
940
The MADDIDA (Magnetic Drum Digital Differential Analyzer) was a special-purpose digital computer used for solving systems of ordinary differential equations. It was the first computer to represent bits using voltage levels and whose entire logic was specified in Boolean algebra. Invented by Floyd Steele, MADDIDA was developed at Northrop Aircraft Corporation between 1946 and 1949 to be used as a guidance system for the Snark missile
Magnetic Drum Digital Differential Analyzer
941
The Timişoara Polytechnic Institute Electronic Computer (Romanian: Mașina Electronică de Calcul Institutul Politehnic Timişoara), known as MECIPT, is the name used for a family of computers built from 1961 to 1968 at the Polytechnic University of Timișoara in Romania. MECIPT-1 was a first generation computer built by Iosif Kaufmann and Wilhelm Lowenfeld (1956–1962), a team joined in 1961 by Vasile Baltac. This was the second computer built in Romania after Victor Toma built the CIFA-1 in 1957, and the first in a Romanian university
MECIPT
942
Odra was a line of computers manufactured in Wrocław, Poland. The name comes from the Odra river that flows through the city of Wrocław. Overview: The production started in 1959–1960
Odra (computer)
943
The Oslo Analyzer (1938 – 1954) was a mechanical analog differential analyzer, a type of computer, built in Norway from 1938 to 1942. It was the largest computer of its kind in the world when completed. The differential analyzer was based on the same principles as the pioneer machine developed by Vannevar Bush at MIT
Oslo Analyzer
944
The Parametron Computer 1 (PC-1) was a binary, single-address computer developed at Professor Hidetosi Takahasi's Laboratory at the Department of Physics, University of Tokyo, and was one of the first general purpose computers that used parametron components and dual frequency magnetic-core memory. Construction started in September 1957 and was completed on March 26, 1958. The PC-1 was used at Takahasi's Laboratory for research related both to hardware and software and the researchers in the Faculty of Science also used it for scientific computing
PC-1 (computer)
945
PERM (German: Programmierbare (Programmgesteuerte) Elektronische Rechenanlage München, lit. 'Munich Programmable (Program Controlled) Electronic Computing System') is a stored-program-controlled electronic computer, built in Munich under the auspices of Hans Piloty and Robert Sauer between 1952 and 1956. Some jokingly called it Pilotys erstes RechenMonster ('Piloty's first calculating monster')
PERM (computer)
946
The Rice Institute Computer, also known as the Rice Computer or R1, was a 54-bit tagged architecture digital computer built during 1958–1961 (partially operational beginning in 1959) on the campus of Rice University, Houston, Texas, United States. Operating as Rice's primary computer until the middle 1960s, the Rice Institute Computer was decommissioned in 1971. The system initially used vacuum tubes and semiconductor diodes for its logic circuits; some later peripherals were built in solid-state emitter-coupled logic
Rice Institute Computer
947
Designed by Vannevar Bush after he became director of the Carnegie Institution for Science in Washington DC, the Rockefeller Differential Analyzer (RDA) was an all-electronic version of the Differential Analyzer, which Bush had built at the Massachusetts Institute of Technology between 1928 and 1931. The RDA was operational in 1942, a year after the Zuse Z3. It was equipped with 2,000 vacuum tubes, weighed 100 tons, and used 200 miles of wire, 150 motors, and thousands of relays
Rockefeller Differential Analyzer
948
The Saugatuck Gap Filler Annex (ADC ID: P-67C, NORAD ID: Z-67C, Z-34G) is a decommissioned radar installation that once served in the vast Cold War era Semi-Automatic Ground Environment (SAGE) air defense system. Of the hundreds of SAGE radars, Saugatuck's is one of, perhaps, two that remain nearly completely intact. Located immediately across the Kalamazoo River from Saugatuck, Michigan, at the top of Mount Baldhead, a 230-foot dune on the shore of Lake Michigan, the annex was positioned to fill gaps in the coverage of long-range "heavy" radars sited further inland
Saugatuck Gap Filler Radar Annex
949
Setun (Russian: Сетунь) was a computer developed in 1958 at Moscow State University. It was built under the leadership of Sergei Sobolev and Nikolay Brusentsov. It was the most modern ternary computer, using the balanced ternary numeral system and three-valued ternary logic instead of the two-valued binary logic prevalent in other computers
Setun
950
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist and cryptographer known as the "father of information theory". As a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications
Claude Shannon
951
The ERA 1101, later renamed UNIVAC 1101, was a computer system designed and built by Engineering Research Associates (ERA) in the early 1950s and continued to be sold by the Remington Rand corporation after that company later purchased ERA. Its (initial) military model, the ERA Atlas, was the first stored-program computer that was moved from its site of manufacture and successfully installed at a distant site. Remington Rand used the 1101's architecture as the basis for a series of machines into the 1960s
UNIVAC 1101
952
A vacuum-tube computer, now termed a first-generation computer, is a computer that uses vacuum tubes for logic circuitry. Although superseded by second-generation transistorized computers, vacuum-tube computers continued to be built into the 1960s. These computers were mostly one-of-a-kind designs
Vacuum-tube computer
953
The Water Integrator (Russian: Гидравлический интегратор Gidravlicheskiy integrator) was an early analog computer built in the Soviet Union in 1936 by Vladimir Sergeevich Lukyanov. It functioned by careful manipulation of water through a room full of interconnected pipes and pumps. The water level in various chambers (with precision to fractions of a millimeter) represented stored numbers, and the rate of flow between them represented mathematical operations
Water integrator
954
The Wisconsin Integrally Synchronized Computer (WISC) was an early digital computer designed and built at the University of Wisconsin–Madison. Operational in 1954, it was the first digital computer in the state. Pioneering computer designer Gene Amdahl drafted the WISC's design as his PhD thesis
Wisconsin Integrally Synchronized Computer
955
In computer hardware, a word mark or flag is a bit in each memory location on some variable word length computers (e.g., IBM 1401, 1410, 1620) used to mark the end of a word
Word mark (computer hardware)
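A rough Python model (an illustration, not IBM's actual circuitry) of how a word-mark bit delimits a variable-length field: memory is scanned from the addressed low-order position toward lower addresses until a location carrying the word mark is included. The memory layout and field contents below are made up.

```python
# Hypothetical model of variable word length memory: each location holds a
# character plus a word-mark bit; a field is read from a starting address
# toward lower addresses until a location with the word mark set is included.
# (Loosely modeled on the IBM 1401 convention; details simplified.)

memory = [
    ('0', True),   # word mark set: leftmost (high-order) digit of the field
    ('4', False),
    ('2', False),  # address 2 is the rightmost (units) digit
]

def read_field(mem, addr):
    """Collect characters from addr down to the location carrying the word mark."""
    chars = []
    while addr >= 0:
        ch, wordmark = mem[addr]
        chars.append(ch)
        if wordmark:           # word mark terminates the field
            break
        addr -= 1
    return ''.join(reversed(chars))

print(read_field(memory, 2))   # -> "042"
```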
956
The ZEBRA (Zeer Eenvoudige Binaire Reken Automaat, 'Very Simple Binary Automatic Calculator') was one of the first computers to be designed in the Netherlands (the first one was the "ARRA") and one of the first Dutch computers to be commercially available. It was designed by Willem van der Poel of the Netherlands Post, Telegraph and Telephone, and first delivered in 1958. The production run consisted of fifty-five machines, manufactured and marketed by the British company Standard Telephones and Cables, Ltd
ZEBRA (computer)
957
In computer science, transaction processing is information processing that is divided into individual, indivisible operations called transactions. Each transaction must succeed or fail as a complete unit; it can never be only partially complete. For example, when you purchase a book from an online bookstore, you exchange money (in the form of credit) for a book
Transaction processing
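A minimal Python sketch of the all-or-nothing property described above, using an in-memory store with staged writes and commit/rollback; all class and key names are hypothetical, and real systems add logging and durability.

```python
# Minimal sketch of all-or-nothing transaction semantics over an in-memory
# store: changes are staged and either committed as a unit or rolled back.
# (All names here are hypothetical; real systems add durability and logging.)

class Transaction:
    def __init__(self, store):
        self.store = store
        self.staged = {}

    def write(self, key, value):
        self.staged[key] = value          # buffered, not yet visible

    def commit(self):
        self.store.update(self.staged)    # apply every staged write at once
        self.staged.clear()

    def rollback(self):
        self.staged.clear()               # discard all staged writes

accounts = {"alice": 100, "buyer": 30}
tx = Transaction(accounts)
tx.write("buyer", accounts["buyer"] - 25)   # pay for the book
tx.write("alice", accounts["alice"] + 25)   # credit the seller
if tx.staged["buyer"] >= 0:
    tx.commit()       # both updates become visible together
else:
    tx.rollback()     # neither update is applied
print(accounts)       # {'alice': 125, 'buyer': 5}
```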
958
Active redundancy is a design concept that increases operational availability and that reduces operating cost by automating most critical maintenance actions. This concept is related to condition-based maintenance and fault reporting. History: The initial requirement began with military combat systems during World War I
Active redundancy
959
Checkpointing is a technique that provides fault tolerance for computing systems. It basically consists of saving a snapshot of the application's state, so that applications can restart from that point in case of failure. This is particularly important for long running applications that are executed in failure-prone computing systems
Application checkpointing
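A small Python sketch of application-level checkpointing as described above: a long-running loop periodically snapshots its state to disk and, on restart, resumes from the last snapshot. The file name and checkpoint interval are arbitrary choices; production schemes usually also write atomically (e.g. write-then-rename).

```python
# Sketch of application-level checkpointing: a long-running loop periodically
# saves its state to disk and, on restart, resumes from the last snapshot
# instead of from the beginning. (File name and interval are arbitrary.)

import os
import pickle

CHECKPOINT = "state.ckpt"          # hypothetical checkpoint file

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)  # resume from the last snapshot
    return {"i": 0, "total": 0}    # fresh start

def save_state(state):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)      # snapshot the application state

state = load_state()
for i in range(state["i"], 1_000_000):
    state["total"] += i
    state["i"] = i + 1
    if i % 100_000 == 0:
        save_state(state)          # periodic checkpoint
print(state["total"])
```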
960
Avalanche is a decentralized, open-source proof of stake blockchain with smart contract functionality. AVAX is the native cryptocurrency of the platform. History: Avalanche began as a protocol for solving consensus in a network of unreliable machines, where failures may be crash-fault or Byzantine
Avalanche (blockchain platform)
961
A Byzantine fault (also Byzantine generals problem, interactive consistency, source congruency, error avalanche, Byzantine agreement problem, and Byzantine failure) is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable. In a Byzantine fault, a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers
Byzantine fault
962
The Chandra–Toueg consensus algorithm, published by Tushar Deepak Chandra and Sam Toueg in 1996, is an algorithm for solving consensus in a network of unreliable processes equipped with an eventually strong failure detector. The failure detector is an abstract version of timeouts; it signals to each process when other processes may have crashed. An eventually strong failure detector is one that never identifies some specific non-faulty process as having failed after some initial period of confusion, and, at the same time, eventually identifies all faulty processes as failed (where a faulty process is a process which eventually fails or crashes and a non-faulty process never fails)
Chandra–Toueg consensus algorithm
963
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system
Computer cluster
964
In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features: The application can update any replica independently, concurrently and without coordinating with other replicas. An algorithm (itself part of the data type) automatically resolves any inconsistencies that might occur. Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge
Conflict-free replicated data type
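One of the simplest state-based CRDTs, a grow-only counter (G-Counter), illustrates the properties listed above in a few lines of Python: each replica updates only its own slot independently, and merging takes the element-wise maximum, so replicas converge no matter when or how often they merge. This is a generic textbook construction, not tied to any particular library.

```python
# Sketch of a simple state-based CRDT, a grow-only counter (G-Counter):
# each replica increments only its own slot, and merging takes the
# element-wise maximum, so all replicas converge regardless of merge order.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                      # replica_id -> local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, cnt in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), cnt)

a, b = GCounter("a"), GCounter("b")
a.increment(3)          # updates applied independently on each replica
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())   # both print 5: the replicas have converged
```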
965
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database in which order, state machine replication, and atomic broadcasts
Consensus (computer science)
966
In computer main memory, auxiliary storage and computer buses, data redundancy is the existence of data that is additional to the actual data and permits correction of errors in stored or transmitted data. The additional data can simply be a complete copy of the actual data (a type of repetition code), or only select pieces of data that allow detection of errors and reconstruction of lost or damaged data up to a certain level. For example, by including computed check bits, ECC memory is capable of detecting and correcting single-bit errors within each memory word, while RAID 1 combines two hard disk drives (HDDs) into a logical storage unit that allows stored data to survive a complete failure of one drive
Data redundancy
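A tiny Python sketch of redundancy via a computed check bit: an even-parity bit stored alongside the data makes any single flipped bit detectable (though not correctable; the ECC memory entry further down pairs naturally with a correcting code). The data values are arbitrary.

```python
# Tiny sketch of redundancy via a computed check bit: an even-parity bit is
# stored alongside the data, so any single flipped bit is detectable
# (detection only, no correction).

def parity(bits):
    return sum(bits) % 2                 # even parity over the data bits

def store(bits):
    return bits + [parity(bits)]         # append the redundant check bit

def check(word):
    *data, p = word
    return parity(data) == p             # False means a bit was corrupted

word = store([1, 0, 1, 1, 0, 1, 0, 0])
print(check(word))        # True: data is consistent with its check bit
word[3] ^= 1              # a single bit flips in storage
print(check(word))        # False: the error is detected
```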
967
Data synchronization is the process of establishing consistency between source and target data stores, and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization. Data synchronization can also be useful in encryption for synchronizing public key servers
Data synchronization
968
A disk array is a disk storage system which contains multiple disk drives. It is differentiated from a disk enclosure, in that an array has cache memory and advanced functionality, like RAID, deduplication, encryption and virtualization. Components of a disk array include disk array controllers and cache, in the form of both volatile random-access memory and non-volatile flash memory
Disk array
969
A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, thus it is sometimes referred to as RAID controller. It also often provides additional disk cache
Disk array controller
970
In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies
Disk mirroring
971
In reliability engineering, dual modular redundancy (DMR) is when components of a system are duplicated, providing redundancy in case one should fail. It is particularly applied to systems where the duplicated components work in parallel, particularly in fault-tolerant computer systems. A typical example is a complex computer system which has duplicated nodes, so that should one node fail, another is ready to carry on its work
Dual modular redundancy
972
Error correction code memory (ECC memory) is a type of computer data storage that uses an error correction code (ECC) to detect and correct n-bit data corruption which occurs in memory. ECC memory is used in most computers where data corruption cannot be tolerated, like industrial control applications, critical databases, and infrastructural memory caches. Typically, ECC memory maintains a memory system immune to single-bit errors: the data that is read from each word is always the same as the data that had been written to it, even if one of the bits actually stored has been flipped to the wrong state
ECC memory
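To show how check bits can locate and repair a single flipped bit, here is a textbook Hamming(7,4) encoder/corrector in Python. This is only an illustration of the principle; real ECC DIMMs use wider SECDED codes over 64-bit words, and the bit layout below follows the standard classroom convention (parity bits at positions 1, 2 and 4).

```python
# Sketch of the kind of single-error-correcting code ECC memory relies on,
# here a textbook Hamming(7,4) over a 4-bit nibble: three check bits let the
# reader locate and flip any single corrupted bit.

def hamming_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword (positions 1..7)."""
    c = [0] * 8                       # index 0 unused
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]         # parity over positions 1,3,5,7
    c[2] = c[3] ^ c[6] ^ c[7]         # parity over positions 2,3,6,7
    c[4] = c[5] ^ c[6] ^ c[7]         # parity over positions 4,5,6,7
    return c[1:]

def hamming_correct(word):
    """Return (corrected data bits, error position or 0 if none)."""
    c = [0] + list(word)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    pos = s1 + 2 * s2 + 4 * s4        # syndrome names the corrupted position
    if pos:
        c[pos] ^= 1                   # flip the single bad bit
    return [c[3], c[5], c[6], c[7]], pos

word = hamming_encode([1, 0, 1, 1])
word[5] ^= 1                          # flip one stored bit (position 6)
data, pos = hamming_correct(word)
print(data, pos)                      # [1, 0, 1, 1] 6
```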
973
An error-tolerant design (or human-error-tolerant design) is one that does not unduly penalize user or human errors. It is the human equivalent of fault tolerant design that allows equipment to continue functioning in the presence of hardware faults, such as a "limp-in" mode for an automobile electronics unit that would be employed if something like the oxygen sensor failed. Use of forcing functions or behavior-shaping constraints to prevent errors is one technique in error-tolerant design
Error-tolerant design
974
In engineering, a fail-safe is a design feature or practice that, in the event of a specific type of failure, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment or to people. Unlike inherent safety to a particular hazard, a system being "fail-safe" does not mean that failure is impossible or improbable, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. That is, if and when a "fail-safe" system fails, it remains at least as safe as it was before the failure
Fail-safe
975
Failover is switching to a redundant or standby computer server, system, hardware component or network upon the failure or abnormal termination of the previously active application, server, system, hardware component, or network in a computer network. Failover and switchover are essentially the same operation, except that failover is automatic and usually operates without warning, while switchover requires human intervention. Systems designers usually provide failover capability in servers, systems or networks requiring near-continuous availability and a high degree of reliability
Failover
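A client-side sketch in Python of the failover idea described above: requests go to the active server and, on a failure, the caller automatically switches to the standby and retries. The host names and the send() stub are placeholders, not a real API.

```python
# Sketch of automatic failover at the client side: requests go to the primary
# server, and on failure the caller switches to a standby and carries on.
# (The server list and send() stub are placeholders, not a real API.)

servers = ["primary.example", "standby.example"]   # hypothetical hosts
active = 0                                         # index of the active server

def send(host, request):
    if host == "primary.example":
        raise ConnectionError("primary is down")   # simulate a failure
    return f"{host} handled {request}"

def call_with_failover(request):
    global active
    for _ in range(len(servers)):
        try:
            return send(servers[active], request)
        except ConnectionError:
            active = (active + 1) % len(servers)   # fail over to the next server
    raise RuntimeError("all servers failed")

print(call_with_failover("GET /balance"))   # handled by the standby
```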
976
In a distributed computing system, a failure detector is a computer application or a subsystem that is responsible for the detection of node failures or crashes. Failure detectors were first introduced in 1996 by Chandra and Toueg in their paper Unreliable Failure Detectors for Reliable Distributed Systems. The paper depicts the failure detector as a tool to improve consensus (the achievement of reliability) and atomic broadcast (the same sequence of messages) in the distributed system
Failure detector
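A minimal Python sketch of a heartbeat-style failure detector: each monitored process's last heartbeat time is recorded, and any process silent for longer than a timeout is suspected. This models the "abstract timeout" role described above, not the full Chandra–Toueg formalism; the timeout and process names are arbitrary.

```python
# Sketch of a heartbeat-style failure detector: record the last time each
# peer was heard from and suspect any peer whose heartbeat is older than a
# timeout. (Timeout value and process names are arbitrary.)

import time

class HeartbeatFailureDetector:
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_heard = {}                  # process -> last heartbeat time

    def heartbeat(self, process):
        self.last_heard[process] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return {p for p, t in self.last_heard.items()
                if now - t > self.timeout}

fd = HeartbeatFailureDetector(timeout=0.5)
fd.heartbeat("p1")
fd.heartbeat("p2")
time.sleep(0.6)
fd.heartbeat("p2")            # p2 keeps sending heartbeats, p1 goes silent
print(fd.suspected())         # {'p1'}
```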
977
Fencing is the process of isolating a node of a computer cluster or protecting shared resources when a node appears to be malfunctioning. As the number of nodes in a cluster increases, so does the likelihood that one of them may fail at some point. The failed node may have control over shared resources that need to be reclaimed and if the node is acting erratically, the rest of the system needs to be protected
Fencing (computing)
978
FlockDB was an open-source distributed, fault-tolerant graph database for managing wide but shallow network graphs. It was initially used by Twitter to store relationships between users, e. g
FlockDB
979
Gbcast (also known as group broadcast) is a reliable multicast protocol that provides ordered, fault-tolerant (all-or-none) message delivery in a group of receivers within a network of machines that experience crash failure. The protocol is capable of solving Consensus in a network of unreliable processors, and can be used to implement state machine replication. Gbcast can be used in a standalone manner, or can support the virtual synchrony execution model, in which case Gbcast is normally used for group membership management while other, faster, protocols are often favored for routine communication tasks
Gbcast
980
Geoplexing is a computer science term relating to the duplication of computer storage and applications within a server farm over geographically diverse locations for the purpose of fault tolerance. The name comes from a contraction of geographical multiplex. Description: In a geoplex, server clusters are duplicated over one or more geographically separate sites
Geoplexing
981
In quantum information, the gnu code refers to a particular family of quantum error correcting codes, with the special property of being invariant under permutations of the qubits. Given integers g (the gap), n (the occupancy), and m (the length of the code), the two codewords are $|0_{\rm L}\rangle = \sum_{\ell\,\mathrm{even},\,0\leq\ell\leq n} \sqrt{\tbinom{n}{\ell}/2^{n-1}}\,|D_{g\ell}^{m}\rangle$ and $|1_{\rm L}\rangle = \sum_{\ell\,\mathrm{odd},\,0\leq\ell\leq n} \sqrt{\tbinom{n}{\ell}/2^{n-1}}\,|D_{g\ell}^{m}\rangle$, where $|D_{k}^{m}\rangle$ are the Dicke states consisting of a uniform superposition of all weight-k words on m qubits, e. g
Gnu code
982
High-availability clusters (also known as HA clusters, fail-over clusters) are groups of computers that support server applications that can be reliably utilized with a minimum amount of down-time. They operate by using high availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed
High-availability cluster
983
A hot spare or warm spare or hot standby is used as a failover mechanism to provide reliability in system configurations. The hot spare is active and connected as part of a working system. When a key component fails, the hot spare is switched into operation
Hot spare
984
Hot swapping is the replacement or addition of components to a computer system without stopping, shutting down, or rebooting the system; hot plugging describes the addition of components only. Components which have such functionality are said to be hot-swappable or hot-pluggable; likewise, components which do not are cold-swappable or cold-pluggable. Most desktop computer hardware, such as CPUs and memory, are only cold-pluggable
Hot swapping
985
Lockstep systems are fault-tolerant computer systems that run the same set of operations at the same time in parallel. The redundancy (duplication) allows error detection and error correction: the output from lockstep operations can be compared to determine if there has been a fault if there are at least two systems (dual modular redundancy), and the error can be automatically corrected if there are at least three systems (triple modular redundancy), via majority vote. The term "lockstep" originates from army usage, where it refers to synchronized walking, in which marchers walk as closely together as physically practical
Lockstep (computing)
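A Python sketch of the voting step behind triple modular redundancy as used by lockstep systems: three replicas run the same computation and a majority vote masks a single faulty output. The fault injection is artificial, purely to show the vote.

```python
# Sketch of triple modular redundancy: three replicas run the same
# computation on the same input and a voter takes the majority result,
# masking a single faulty replica.

from collections import Counter

def replica(x, faulty=False):
    result = x * x                # the computation all replicas perform
    return result + 1 if faulty else result   # a faulty unit returns garbage

def voter(outputs):
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

outputs = [replica(7), replica(7, faulty=True), replica(7)]
print(outputs)          # [49, 50, 49]
print(voter(outputs))   # 49 -- the single fault is outvoted
```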
986
Log shipping is the process of automating the backup of transaction log files on a primary (production) database server, and then restoring them onto a standby server. This technique is supported by Microsoft SQL Server, 4D Server, MySQL, and PostgreSQL. Similar to replication, the primary purpose of log shipping is to increase database availability by maintaining a backup server that can replace a production server quickly
Log shipping
987
A log-structured filesystem is a file system in which data and metadata are written sequentially to a circular buffer, called a log. The design was first proposed in 1988 by John K. Ousterhout and Fred Douglis and first implemented in 1992 by Ousterhout and Mendel Rosenblum for the Unix-like Sprite distributed operating system
Log-structured file system
988
In the context of virtualization, where a guest simulation of an entire computer is actually merely a software virtual machine (VM) running on a host computer under a hypervisor, migration (also known as teleportation) is the process by which a running virtual machine is moved from one physical host to another, with little or no disruption in service. Subjective effects: Ideally, the process is completely transparent, resulting in no disruption of service (or downtime). In practice, there is always some minor pause in availability, though it may be low enough that only hard real-time systems are affected
Migration (virtualization)
989
In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them. As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently and with no changes visible to the applications, other than perhaps resulting in increased latency
Multipath I/O
990
The Multiple Spanning Tree Protocol (MSTP) and its associated algorithm provide both simple and full connectivity for frames assigned to any given virtual LAN (VLAN) throughout a bridged local area network. MSTP uses bridge protocol data units (BPDUs) to exchange information between spanning-tree compatible devices, to prevent loops in each Multiple Spanning Tree instance (MSTI) and in the common and internal spanning tree (CIST), by selecting active and blocked paths. As in the Spanning Tree Protocol (STP), this is done without the need to manually enable backup links and without the danger of switching loops
Multiple Spanning Tree Protocol
991
N-version programming (NVP), also known as multiversion programming or multiple-version dissimilar software, is a method or process in software engineering where multiple functionally equivalent programs are independently generated from the same initial specifications. The concept of N-version programming was introduced in 1977 by Liming Chen and Algirdas Avizienis with the central conjecture that the "independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program". The aim of NVP is to improve the reliability of software operation by building in fault tolerance or redundancy
N-version programming
992
NonStop is a series of server computers introduced to market in 1976 by Tandem Computers Inc., beginning with the NonStop product line. It was followed by the Tandem Integrity NonStop line of lock-step fault tolerant computers, now defunct (not to be confused with the later and much different Hewlett-Packard Integrity product line extension)
NonStop (server computers)
993
NonStop SQL is a commercial relational database management system that is designed for fault tolerance and scalability, currently offered by Hewlett Packard Enterprise. The latest version is SQL/MX 3.4
NonStop SQL
994
OpenVMS, often referred to as just VMS, is a multi-user, multiprocessing and virtual memory-based operating system. It is designed to support time-sharing, batch processing, transaction processing and workstation applications. Customers using OpenVMS include banks and financial services, hospitals and healthcare, telecommunications operators, network information services, and industrial manufacturers
OpenVMS
995
Operational availability in systems engineering is a measurement of how long a system has been available to use when compared with how long it should have been available to be used. Definition: Operational availability is a management concept that evaluates diagnostic down time, criticality, fault isolation down time, logistics delay down time, and corrective maintenance down time. Any failed item that is not corrected will induce operational failure
Operational availability
996
Parallel Computers, Inc. was an American computer manufacturing company, based in Santa Cruz, California, that made fault-tolerant computer systems based around the Unix operating system and various processors in the Motorola 68000 series. History: The company was founded in 1983 and was premised on the idea of providing a less expensive alternative to existing fault-tolerant solutions, one that would be attractive to smaller businesses
Parallel Computers, Inc.
997
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors. Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures
Paxos (computer science)
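A local, synchronous Python sketch of single-decree Paxos, to make the two phases concrete: a proposer must first obtain promises from a majority of acceptors (phase 1), adopting any value an acceptor has already accepted, and then get a majority to accept the value (phase 2). This is an illustration under simplifying assumptions (no message loss, one value decided); real deployments add leaders, retries and learners.

```python
# Local, synchronous sketch of single-decree Paxos. Illustration only -- real
# Paxos runs over an unreliable network with many subtleties this omits.

class Acceptor:
    def __init__(self):
        self.promised = -1          # highest proposal number promised
        self.accepted_n = -1        # proposal number of accepted value, if any
        self.accepted_v = None

    def prepare(self, n):
        if n > self.promised:
            self.promised = n
            return True, self.accepted_n, self.accepted_v
        return False, None, None    # reject: already promised a higher number

    def accept(self, n, v):
        if n >= self.promised:
            self.promised = n
            self.accepted_n, self.accepted_v = n, v
            return True
        return False

def propose(acceptors, n, value):
    majority = len(acceptors) // 2 + 1

    # Phase 1: collect promises from a majority.
    promises = [a.prepare(n) for a in acceptors]
    granted = [(an, av) for ok, an, av in promises if ok]
    if len(granted) < majority:
        return None
    # If any acceptor already accepted a value, adopt the most recent one.
    prior = [(an, av) for an, av in granted if av is not None]
    if prior:
        value = max(prior)[1]

    # Phase 2: ask the acceptors to accept the chosen value.
    accepts = sum(a.accept(n, value) for a in acceptors)
    return value if accepts >= majority else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, n=1, value="A"))   # 'A' is chosen
print(propose(acceptors, n=2, value="B"))   # a later proposer learns 'A', not 'B'
```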
998
Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is theorised as essential to achieve fault tolerant quantum computing that can reduce the effects of noise on stored quantum information, faulty quantum gates, faulty quantum preparation, and faulty measurements. This would allow algorithms of greater circuit depth
Quantum error correction
999
Raft is a consensus algorithm designed as an alternative to the Paxos family of algorithms. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features. Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions
Raft (algorithm)