The roots of encryption go deep into human history. Encryption has been used for centuries to encode messages, usually to keep government secrets, but also to protect business or trade secrets such as the formula for making silk or pottery. Early encryption was fairly simplistic, relying largely on paper-and-pencil techniques like steganography, transposition and substitution. In the last century, encryption methods advanced at a rapid clip, first by leveraging automation and machinery and then by employing advanced mathematics and powerful computers. While encryption today involves powerful computers, it wasn't always so complicated or ubiquitous.

Early Encryption Methods

It is said that in 700 B.C., the Spartan military used scytales to send secret messages during battle. The sender and the recipient each possessed a wooden rod of the same diameter and length. The sender would tightly wind a piece of parchment or leather around the stick and write a message. The unwound document would be sent to the recipient, who would wind it around his own stick to decode the message. In its unwound state, the message was gibberish.

Julius Caesar created one of the simplest and most recognized encryption techniques: the Caesar cipher. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. Caesar used this method in his private correspondence at a time when many of his enemies could not read and others may have assumed the message was written in a foreign language. It is therefore assumed to have been reasonably secure in the first century B.C., but today a single-alphabet substitution cipher is easily broken and offers essentially zero security.

In the 15th century, Italy's Leon Battista Alberti was the quintessential Renaissance man. Mostly known as an artist, he is also credited as an author, architect, priest, poet, linguist, philosopher and cryptographer. In 1467, Alberti invented the first polyalphabetic substitution cipher. The Alberti Cipher consisted of two metal discs on the same axle, one inside the other, and involved mixed alphabets and variable rotations. It changed the course of encryption: unlike previous ciphers, the Alberti Cipher was impossible to break without knowledge of the method. This was because the frequency distribution of the letters was masked, and frequency analysis – the only known technique for attacking ciphers at the time – was no help.

During his tenure as George Washington's Secretary of State, Thomas Jefferson invented the Jefferson disk, or wheel cipher. The system used a set of wheels or disks, with the letters of the alphabet inscribed on each wheel in random order. Turning the wheels would scramble and unscramble words. Each disk was marked with a unique number, and the hole in the center of each disk allowed them to be stacked on an axle in any order desired. To encrypt and decrypt a message, sender and receiver had to arrange the disks in the same predefined order. With 36 disks, Jefferson's cipher was considered unbreakable at the time.

Encryption and War

Jefferson's disk was independently reinvented in the late 19th century by Commandant Etienne Bazeries and named the Bazeries cylinder. It was used as a U.S. Army field cipher after World War I. But perhaps the most famous wartime encryption machine is Enigma.
Invented by Arthur Scherbius, Enigma was Germany's main cryptographic technology during World War II. The Enigma machine consisted of a basic keyboard, a display that would reveal the ciphertext letter and a scrambling mechanism. Each plaintext letter entered via the keyboard was transcribed to its corresponding ciphertext letter. Enigma was eventually broken, due in large part to the work of Marian Rejewski, a Polish mathematician and code breaker. Before Germany invaded Poland, Rejewski transferred all of his research to the British and the French. The team at Bletchley Park, including Alan Turing, used Rejewski's work to build bombes, electromechanical machines designed specifically to break Enigma. This work is credited as a crucial step toward ending World War II.

Encryption in Today's Computing World

Advances in computing led to even greater advances in encryption. In 1977, the National Bureau of Standards adopted the Data Encryption Standard (DES), which used what was then state-of-the-art 56-bit keys – even the supercomputers of the day could not crack it. In general, the longer the key, the more difficult it is to crack the code, because deciphering an encrypted message by brute force requires the attacker to try every possible key. DES was the standard for encryption for more than 20 years, until 1998, when the Electronic Frontier Foundation broke a DES key. It took 56 hours in 1998, and only 22 hours to accomplish the same feat in 1999. As technology advances, so must the quality of encryption. Once the internet began to see increased commercial use, DES was finally replaced by the Advanced Encryption Standard, or AES, which was selected through a public competition and approved by NIST. AES is still in use today.

But perhaps one of the most notable advances in cryptography since World War II is the introduction of asymmetric key ciphers, also known as public key encryption. Whitfield Diffie and Martin Hellman were pioneers in the field of asymmetric cryptographic techniques. These are algorithms that use a pair of mathematically related keys, each of which decrypts what the other encrypts. By designating one key of the pair as private, and the other as public (often widely available), no secure channel is needed for key exchange. The same key pair can be reused indefinitely – as long as the private key stays secret. Most importantly, in an asymmetric key system the encryption and decryption keys are not identical, which means that, for the first time in history, two people could secure their communications without any prior interaction – ideal for internet transactions. Ronald L. Rivest, Adi Shamir and Leonard M. Adleman were inspired by Diffie and Hellman to create a practical public key system. The result was RSA, which is based on the difficulty of factoring large numbers and remains a common cryptographic technique on the internet today.

Now that encryption is in widespread use, what challenges do we face? The most basic method of attack is brute force, which is why keys keep getting longer and longer – to create more possible solutions and increase the resources required to perform such large computations.
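To make the key-length argument concrete, here is a minimal Python sketch (my own illustration, not code from the original article; the function names are invented for this example) that implements the Caesar cipher described earlier and then brute-forces it. With only 25 possible shifts, the entire key space can be searched instantly, which is exactly why modern ciphers rely on keys long enough that exhaustive search is infeasible.

```python
import string

ALPHABET = string.ascii_uppercase  # 26 letters

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift every letter 'shift' positions through the alphabet; leave other characters alone."""
    out = []
    for ch in plaintext.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)
    return "".join(out)

def brute_force(ciphertext: str):
    """Try all 25 non-trivial shifts -- the whole key space of the cipher."""
    return [(shift, caesar_encrypt(ciphertext, -shift)) for shift in range(1, 26)]

if __name__ == "__main__":
    secret = caesar_encrypt("ATTACK AT DAWN", shift=3)   # a right shift of 3
    print(secret)                       # DWWDFN DW GDZQ
    for shift, guess in brute_force(secret):
        print(shift, guess)             # shift 3 recovers the plaintext
```

A 56-bit DES key has 2^56 possible values and a 128-bit AES key has 2^128, so the same attack that is trivial here becomes the "large computation" the text refers to.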
There are more than a few informed experts who believe that quantum computing may bring forth the ability to break today's codes in the foreseeable future. Some of the industry's brightest minds are working on quantum-resistant encryption so that we can continue to exchange sensitive information privately.

There are also concerns about cost and downtime when deploying encryption schemes. For enterprise-class encryption, you used to have to plan for downtime while tens of thousands of files or a large database were being encrypted. But now you have the option of enterprise encryption without downtime, with Vormetric Live Data Transformation. In fact, a database of any size, or any number of files, can remain in use while undergoing encryption. We call it zero-downtime encryption, and it's an industry game-changer.

And as more and more services move to the cloud, encrypting and securing data is even more critical. More sensitive data is residing in the cloud, and ensuring that data is secure can be a challenging task. However, there are new strategies for cloud data protection such as transparent and application-level encryption. Additional methods can involve tokenization and dynamic data masking. I would be remiss if I didn't add key management to the mix as well. Compliance mandates, data-residency requirements, government regulations and best practices require that enterprises protect and maintain encryption keys in accordance with specific frameworks and laws. Allowing organizations to "bring your own key," also known as BYOK, enables maximum control and trust between the data owner and cloud provider, and is considered a best practice for internal and external compliance controls.

Later this month, Thales will release the results of our annual Global Encryption Study. While I won't give away the findings, I can share that keeping pace with cloud adoption and escalating threats is a major pain point for organizations and business leaders. It is our focus and vision to make protecting your data as transparent and operationally "invisible" as possible. It is a tough mission, but a worthy one. I hope you'll download that report when it becomes available, as I think you'll find the results eye-opening.
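As a closing illustration of the key-management pattern behind BYOK discussed above, the sketch below shows envelope encryption: data is encrypted with a data key, and the data key itself is wrapped with a customer-held master key. This is my own toy example using the Fernet recipe from the Python cryptography package as a stand-in for both keys; it is not a description of Vormetric, Thales or any specific provider's implementation.

```python
# Minimal envelope-encryption sketch (illustrative only): the customer keeps the
# master key, while the provider stores only the wrapped data key and the ciphertext.
from cryptography.fernet import Fernet

# 1. Customer generates and retains the master key ("bring your own key").
master_key = Fernet.generate_key()
master = Fernet(master_key)

# 2. A fresh data key encrypts the actual record.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"customer record: account 42, balance 1,000")

# 3. The data key is stored only in wrapped (encrypted) form.
wrapped_data_key = master.encrypt(data_key)

# 4. Decryption requires the customer-held master key to unwrap the data key first.
recovered_key = master.decrypt(wrapped_data_key)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext.startswith(b"customer record")
```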
Source: https://cpl.thalesgroup.com/2017/04/04/evolution-encryption
Causality is one of the oldest and most important concepts in physics. Even relatively recently, at the beginning of the 20th century, with the invention of special relativity, the concept was in some sense rediscovered. Because in a relativistic framework events can change their temporal order, a great effort was made to preserve causality in the theory. There is a general consensus in the scientific community about this concept: for all scientific theories, even for all the theories yet to come, causality should be preserved. If causal relations are broken, a number of paradoxes and counter-intuitive results arise. You could even go back in time and kill your great-grandfather!

In quantum mechanics, the discovery of entangled states – states with correlations that act instantaneously even if the systems are separated by millions of light years – challenged this concept. The solution for preserving causality was to accept that quantum systems are intrinsically random and that no theory can give a complete description of them.

Very recently, in Reference 1, a paper published in Nature Communications by Ognyan Oreshkov and coworkers from the University of Vienna, the concept of causality itself is discussed. Just by assuming that quantum mechanics is valid only locally, they show that it becomes difficult to talk about 'causal order'. As has been done before to analyze the effects of quantum mechanics, the authors illustrate their result with a thought experiment. The rules of this experiment are:

- There are two parties, Alice and Bob. They are in labs that are far away from each other.
- They both receive one random bit, either 0 or 1.
- They can send information out between their labs.
- They have to guess the bit of the other party. This guess must be made at the same time they send their information out.

Obviously, the experiment should be repeated several times, and the goal is to guess the other party's bit as many times as possible. The 'figure of merit' that measures how well we are performing the game is the joint probability of guessing correctly for Alice and Bob together, a number between 0 and 1.

Let's see what we can do in a classical, and causal, framework. It is clear that the success probability will depend in this case on the time order of the events. If Alice sends her information first, she can use it to tell Bob what her bit was, so Bob will succeed every time. The problem is that Alice has no clue about Bob's bit, so the best she can do is guess at random. The same problem arises if Bob is the first to send information. So, in the best possible scenario, the probability of success is 1 for one of them – the one who acts second – and ½ for the other – the one who acts first. That means the best possible probability in a classical causal framework is ¾.

So, is there any difference in a quantum mechanics framework? Not really: quantum mechanics is also a theory with a definite causal background and has to fulfill the same constraints. But what happens if we slightly modify quantum mechanics to remove the space-time background, making it valid only locally, but not globally? That is the problem analyzed in Ref. 1 by Oreshkov et al. There, the authors considered a similar experiment, in which it is assumed that Alice and Bob can perform any kind of quantum operation in their labs.
In these labs quantum mechanics holds, but there is no assumption of a preexisting background time or global causal structure. In this scenario, which differs from normal quantum mechanics, they show that the limit on the probability of success can be pushed beyond the causal bound. The rules for the non-causal quantum game are:

- Each laboratory is isolated.
- Quantum mechanics can be applied locally in the labs, but there is no assumption about what happens globally.
- There are also no assumptions about the spatio-temporal location of the experiments. That means it is not defined who makes their measurement first.
- They do not need to communicate. This is a necessary restriction here, because without a definite spatio-temporal order it is not defined who acts first and can communicate and who acts second and cannot.

Based on these assumptions, the authors create a new framework, built on local quantum mechanics, for analyzing the possible strategies of Alice and Bob. The results are surprising: they find that a success probability of 0.853 can be reached, higher than the ¾ probability of the best causal scenario – and without any communication between them.

And what does it mean? Is causality broken in this new theory, so we can now communicate with our dead great-grandfather? That would be very interesting for science-fiction writers, but it is not like that. The authors argue in their paper that, as quantum mechanics can be applied locally to Alice and Bob's labs, causality is still preserved. This is due to the noise in the evolution 'backward in time', and it is compatible with the Novikov principle.

So, if causality itself is not broken, why is this result interesting? First, the analysis of new possible frameworks is always useful. In general relativity, for instance, when one imposes only local constraints, new and interesting features arise, such as exotic causal structures. It looks like something similar happens in the quantum regime. Also, these results imply that if quantum mechanics only works locally, new kinds of correlations appear, stronger than the ones that are usual in normal quantum mechanics, like entanglement. Even if these correlations cannot break the causal order, as expected, the potential implications are huge. We should not forget that entanglement leads to interesting applications such as quantum computing, quantum teleportation and cryptography. We cannot know what applications these new correlations may have.

Finally, there is a more important question: are these correlations something real or just a mathematical trick? On this question, the authors mention in the discussion of their paper that these correlations might be found in regimes where current theories are untested, such as those in which both quantum mechanics and general relativity become relevant. So, in my opinion, for the moment this result is purely theoretical, but very interesting in any case. This kind of study, even if it is just theory, usually opens a door to new ways of thinking. New theories and potential applications can also grow from it. Only time will show how useful it will be.
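For concreteness, here is a small Monte Carlo sketch (my own illustration, not code from the paper) of the best classical causal strategy described above: Alice acts first and sends her bit to Bob, Bob guesses perfectly, and Alice can only guess at random. The estimate converges to the causal bound of ¾, while the 0.853 quoted above corresponds to the non-causal quantum bound (2 + √2)/4 reported by Oreshkov et al.

```python
import random
import math

def play_round(rng: random.Random) -> float:
    """One round with the best causal strategy: Alice acts first.

    Alice sends her bit a to Bob, so Bob guesses a and is always right.
    Alice has no information about b, so she guesses at random.
    Returns the average of the two success indicators (the joint figure of merit).
    """
    a, b = rng.randint(0, 1), rng.randint(0, 1)
    bob_guess = a                      # Bob learns a from Alice's message
    alice_guess = rng.randint(0, 1)    # Alice can only guess
    return ((bob_guess == a) + (alice_guess == b)) / 2

def estimate(n_rounds: int = 200_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(play_round(rng) for _ in range(n_rounds)) / n_rounds

if __name__ == "__main__":
    print(f"classical causal strategy : {estimate():.3f}  (bound 3/4 = 0.750)")
    print(f"non-causal quantum bound  : {(2 + math.sqrt(2)) / 4:.3f}")
```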
Source: https://mappingignorance.org/2012/12/04/quantum-correlations-with-no-causal-order/
The most pressing questions facing researchers today require deep and broad knowledge, often spanning multiple disciplines. To tackle these problems, Syracuse University is establishing groups, or "clusters," of scholars from diverse backgrounds dedicated to working on common projects. The clusters were chosen as areas where the University has the potential to find breakthrough or breakout solutions to society's greatest challenges. One field recently added to the research clusters is quantum information science.

Why do we need quantum information science?

To study the smallest particles that exist, scientists from all over the world came together to construct the largest machine ever built. The Large Hadron Collider (LHC) is a 17-mile-long track, built deep underground on the border of France and Switzerland. It uses almost 10,000 magnets, kept at a temperature of about -450 degrees Fahrenheit, and has cost over $4 billion. It has collected and archived 100 petabytes of data—the equivalent amount of HD-quality video would take more than 800 years to watch. Ideally, the results of particle collision experiments at the LHC would be used to confirm results predicted by theories. However, the events at the LHC are so complex that we cannot yet solve the equations governing them precisely, either by hand or on today's computers. Until we can do that, we must simply wait and continue collecting data.

Elsewhere, drug companies are also studying particle interactions, but with the goal of designing molecules for use in medicines. To test the behavior of these molecules, researchers often use computer simulations. But even the most powerful computers cannot simulate the complexity of large molecules, making it necessary to use approximations, or to abandon the simulations altogether and turn to trial-and-error laboratory tests. In any case, finding the optimal molecule for a given task means testing each possible solution, one at a time.

Many of today's problems rely on computers to store and process vast amounts of data to find a single, optimal result. From finding the fastest route between two locations or predicting whether it will rain next week, to understanding the fabric of our universe or discovering life-saving medicines, even the most powerful computers cannot predict solutions exactly, and researchers must run intensive experiments or make approximations to run simulations. Although the speed of computers increases each year, so does our demand. Instead of trying to make our current computers do more of the same, we need a different type of machine altogether.

The idea for such a computer has been around since the mid-20th century but gathered momentum in the 1980s and 1990s. It is called a quantum computer, and the study of how it works is called quantum information science. Quantum computers offer the new approach we need to store and process data, and they happen to be particularly well poised to solve optimization and particle physics problems.

What is quantum information?

Imagine we both close our eyes and I flip a coin. The coin lands, but we keep our eyes closed. We don't know the state of the coin—whether it has heads or tails facing up—but we know that the coin is in one of two possible states. Each of us has a fifty-percent chance of guessing the state correctly, but the coin is and always will be heads-up or tails-up. Now imagine that, instead of a coin, I use an object that has different rules.
If we open our eyes and look at the object, it will be in one of two states, just like the coin. But if we keep our eyes closed, the object will not be sitting there in that same state, just waiting for us to look. Instead, while our eyes are closed, the object is in a different, third kind of state. How do we know that the object is in this third state without being able to see it? Moreover, how do we know that the everyday coin isn't in this third kind of state before we open our eyes? These are the kinds of questions that quantum mechanics elicits and then answers, proving over and over that this different kind of state really does exist, and that it really is different than its classical counterpart.

In a classical computer, like the one on your desk or in your phone or in your car, information is stored in the physical states of objects in the computer. The nature of these objects is that they can, like coins, be in one of two states (heads or tails, 1 or 0, on or off), and nothing in between. In a computer with a hard drive, these objects are little magnets that can each point only north or south, while in a computer with a solid-state drive, these objects are transistors that are either charged or not charged. These objects are called bits. Just as we build a word that carries meaning by putting specific letters in a specific order, a computer constructs a piece of information from specific values of bits in a specific order.

A quantum computer also stores information in the physical states of objects. These objects, called qubits, will also, when measured, each be found to be in either a 1 or 0 state. Unlike a bit, however, the state of a qubit before we measure it is different. It's a third kind of state. Amazingly, this third state is related to the other two, and we know exactly how. The state of the qubit before being measured is a combination of its likelihoods of being found to be a 1 or a 0. The qubit is a new object with which to do computation. While the information contained in a bit is classical (either 1 or 0), a qubit contains quantum information. And because this is an object with new properties, it can potentially solve new problems, or old problems in new ways, using quantum computation techniques that were not accessible before. The new rules a qubit offers for doing computation are strange (Einstein called one of them "spooky"), but if we can understand them and harness them, the power of computers could grow tremendously.

What can quantum information do?

Just the fact that the state of a qubit can be something other than 1 or 0 means that a single qubit can hold an amount of information that a classical computer would need whole sequences of bits to construct. It's true that if we measured the state of the qubit, we would lose all that special information, but it is actually possible to use the qubit for computation without measuring it, while it's still in that third kind of state. This means that problems that require checking and comparing data, like cracking passwords or finding the best route between two places on a map, will become, literally, exponentially easier. In addition to being able to do old computations in new ways, qubits offer the potential to solve problems that are intractable with classical computers. For example, simulating quantum-mechanical systems becomes much more straightforward using objects that are quantum-mechanical themselves.
Particle collisions, like those at the LHC, or molecular interactions for potential pharmaceuticals, are therefore natural candidates for quantum computation.

"We are at a critical juncture in the field of quantum information," says Britton Plourde, a professor of physics in the College of Arts and Sciences. "Applications of quantum computing are already being pursued intensively by many corporate research labs and new startup companies."

Quantum computation's potential applications have also garnered attention from governments around the world. "The Chinese government has invested $11 billion recently to establish a national lab in this area, and in 2019 the U.S. enacted the National Quantum Initiative, which commits $1.2 billion to quantum technology efforts," Plourde says.

For young researchers, the field offers an opportunity to contribute to cutting-edge applications of experimental and theoretical physics, chemistry, engineering and computer science. "Undergraduates involved in this type of research will have many research options if they choose to attend graduate school and career opportunities if they decide to work in industry," says Plourde.

The Quantum Information Science Cluster at Syracuse University will provide undergraduate and graduate students with a program to explore this field, both through classes and research helmed by a diverse group of scholars.
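As a minimal illustration of the bit-versus-qubit distinction described above (my own sketch, not code from Syracuse's program), the snippet below represents a single qubit as a pair of complex amplitudes, checks normalization, and shows how measurement turns the "third kind of state" back into a plain 0 or 1 with the corresponding probabilities.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A classical bit is just 0 or 1.
bit = 1

# A qubit before measurement: two complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# This particular state is an equal superposition of 0 and 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)
assert np.isclose(np.sum(np.abs(qubit) ** 2), 1.0)   # normalization

def measure(state: np.ndarray, shots: int = 10_000) -> np.ndarray:
    """Sample measurement outcomes: 0 with probability |a|^2, 1 with probability |b|^2."""
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(qubit)
print("fraction of 1s:", outcomes.mean())            # ~0.5 for the equal superposition

# A 'dimmer switch' tilted toward 0: the amplitudes are weighted, not just on/off.
tilted = np.array([np.sqrt(0.9), np.sqrt(0.1)], dtype=complex)
print("fraction of 1s:", measure(tilted).mean())     # ~0.1
```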
Source: https://www.syracuse.edu/stories/quantum-information-science/
In a key step toward creating a working quantum computer, Princeton researchers have developed a method that may allow the quick and reliable transfer of quantum information throughout a computing device.

The finding, by a team led by Princeton physicist Jason Petta, could eventually allow engineers to build quantum computers consisting of millions of quantum bits, or qubits. So far, quantum researchers have only been able to manipulate small numbers of qubits, not enough for a practical machine.

"The whole game at this point in quantum computing is trying to build a larger system," said Andrew Houck, an assistant professor of electrical engineering who is part of the research team.

To make the transfer, Petta's team used a stream of microwave photons to analyze a pair of electrons trapped in a tiny cage called a quantum dot. The "spin state" of the electrons – information about how they are spinning – serves as the qubit, a basic unit of information. The microwave stream allows the scientists to read that information.

"We create a cavity with mirrors on both ends – but they don't reflect visible light, they reflect microwave radiation," Petta said. "Then we send microwaves in one end, and we look at the microwaves as they come out the other end. The microwaves are affected by the spin states of the electrons in the cavity, and we can read that change."

In an ordinary sense, the distances involved are very small; the entire apparatus operates over a little more than a centimeter. But on the subatomic scale, they are vast. It is like coordinating the motion of a top spinning on the moon with another on the surface of the earth.

"It's the most amazing thing," said Jake Taylor, a physicist at the National Institute of Standards and Technology and the Joint Quantum Institute at the University of Maryland, who worked on the project with the Princeton team. "You have a single electron almost completely changing the properties of an inch-long electrical system."

For years, teams of scientists have pursued the idea of using quantum mechanics to build a new machine that would revolutionize computing. The goal is not to build a faster or more powerful computer, but to build one that approaches problems in a completely different fashion.

Standard computers store information as classical "bits," which can take on a value of either 0 or 1. These bits allow programmers to create the complex instructions that are the basis for modern computing power. Since Alan Turing took the first steps toward creating a computer at Princeton in 1936, engineers have created vastly more powerful and complex machines, but this basic binary system has remained unchanged.

The power of a quantum computer comes from the strange rules of quantum mechanics, which describe the universe of subatomic particles. Quantum mechanics says that an electron can spin in one direction, representing a 1, or in another direction, representing a 0. But it can also be in something called "superposition," representing all states between 1 and 0. If scientists and engineers can build a working machine that takes advantage of this, they would open up entirely new fields of computing.

"The point of a quantum computer is not that they can do what a normal computer can do but faster; that's not what they are," said Houck. "The quantum computer would allow us to approach problems differently. It would allow us to solve problems that cannot be solved with a normal computer."
Mathematicians are still working on possible uses for a quantum system, but the machines could allow them to accomplish tasks such as factoring currently unfactorable numbers, breaking codes or predicting the behavior of molecules.

One challenge facing scientists is that the spins of electrons, or of any other quantum particles, are incredibly delicate. Any outside influence, whether a wisp of magnetism or a glimpse of light, destabilizes the electrons' spins and introduces errors. Over the years, scientists have developed techniques to observe spin states without disturbing them. (This year's Nobel Prize in physics honored two scientists who first demonstrated the direct observation of quantum particles.) But analyzing small numbers of spins is not enough; millions will be required to make a real quantum processor.

To approach the problem, Petta's team combined techniques from two branches of science: from materials science, they used a structure called a quantum dot to hold and analyze electrons' spins; and from optics, they adopted a microwave channel to transfer the spin information from the dot.

To make the quantum dots, the team isolated a pair of electrons on a small section of material called a "semiconductor nanowire." Basically, that means a wire so thin that it can hold electrons like soda bubbles in a straw. They then created small "cages" along the wire. The cages are set up so that electrons will settle into a particular cage depending on their energy level.

This is how the team reads the spin state: electrons of similar spin will repel, while those of different spins will attract. So the team manipulates the electrons to a certain energy level and then reads their position. If they are in the same cage, they are spinning differently; if they are in different cages, the spins are the same.

The second step is to place this quantum dot inside the microwave channel. This allows the team to transfer the information about the pair's spin state – the qubit.

Petta said the next step is to increase the reliability of the setup for a single electron pair. After that, the team plans to add more quantum dots to create more qubits. Team members are cautiously optimistic. There appear to be no insurmountable problems at this point but, as with any system, increasing complexity could lead to unforeseen difficulties.

"The methods we are using here are scalable, and we would like to use them in a larger system," Petta said. "But to make use of the scaling, it needs to work a little better. The first step is to make better mirrors for the microwave cavity."

The research was reported in the journal Nature on Oct. 18. In addition to Petta, Houck and Taylor, the research team includes associate research scholar Karl Petersson, undergraduate student Louis McFaul, postdoctoral researcher Minkyung Jung and graduate student Michael Schroer of the Princeton physics department.

Support for the research was provided by the National Science Foundation, the Alfred P. Sloan Foundation, the Packard Foundation, the Army Research Office, and the Defense Advanced Research Projects Agency Quantum Entanglement Science and Technology Program.
Source: https://research.princeton.edu/news/breakthrough-offers-new-route-large-scale-quantum-computing
In a previous article, I introduced the recent open-sourcing of quantum computing software by DWave. DWave is the maker of a quantum computer being used and studied by a number of groups, including NASA and Google, and there are other quantum computers in the works too. Although the field is still young, recent progress has been making headlines. If we can make practical quantum computers, they will be very powerful—but to see why requires understanding what makes them different.

In this article, I'll explain the underlying physics that makes quantum computing possible. Quantum computers aren't just a new, faster model of the computer in front of you. They're based on a completely different method of storing information and decision-making. It's like comparing a jet turbine to a propeller: they achieve the same purpose, but the complexity and power are vastly disproportionate.

A Bit About Traditional Computers

Let's begin by reminding ourselves how digital computers work. The basic ingredient is the binary digit, or bit, which may take only the values 0 or 1. In modern computers, bits take the form of tiny electrical switches called transistors. Transistors are in one of two states. When they are switched on, they conduct electrical current. This is the "1" state. When switched off, they are not conducting current. This is the "0" state.

In a physical computer chip, we might find a series of transistors in the following states: on, on, off, on. In binary, the mathematical language of computation, the series becomes 1101. This might appear to be an inadequately crude method of communicating information—how could we possibly convey the rich tapestry of the world using only this black-and-white mold? The first step is recognizing that bits can represent numbers in our traditional counting system. For example, 1101 represents the number 13 and 0110 represents the number 6. In fact, these are the only ways we can represent 13 and 6 using bits, creating a unique translation dictionary between strings of bits and normal numbers. In this way, we can assemble arbitrarily large numbers by stringing together bits. The MacBook Pro uses a 64-bit processor to express every number up to 18,446,744,073,709,551,615. (Check out this video to learn more about how binary works.)

But if computers could merely store numbers, we would not find them very useful. The reason computers have become ubiquitous is that we can use these numbers to further represent many other things. Take shades of gray: simply interpolate between pure black (0) and pure white (255, by convention). Colors can be decomposed into red, green, and blue components, each having their value interpolated up to 255. Logic operations, musical notes, letters in the alphabet, internet pages, online dating profiles and many other types of information may be expressed in the same way. Modern computers use billions of transistors and multiple levels of code to produce high-def video and complex apps, but look closely enough, and the digital world reduces to a simple series of bits.
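To make the bit-to-number mapping just described concrete, here is a short Python sketch (my own illustration, not from the original article) confirming the 1101 = 13 and 0110 = 6 examples, the 64-bit maximum quoted for the MacBook Pro, and the 0-255 encoding used for grayscale and RGB color components.

```python
# Bits are just positional base-2 digits.
assert int("1101", 2) == 13
assert int("0110", 2) == 6
assert format(13, "04b") == "1101"         # and back again

# The largest unsigned value a 64-bit word can hold.
assert 2**64 - 1 == 18_446_744_073_709_551_615

# Shades of gray: one byte interpolates between black (0) and white (255).
black, white, mid_gray = 0, 255, 128

# A color as red, green and blue components, each one byte (here: orange).
r, g, b = 255, 165, 0
packed = (r << 16) | (g << 8) | b           # the same color packed into 24 bits
assert format(packed, "024b") == "111111111010010100000000"
```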
How Quantum Computers Are Different

We need only look in our pocket to see that traditional computers are powerful. But there are some problems they're ill-suited to solve. This is where quantum computers come in. A quantum computer can solve a special set of problems many orders of magnitude faster than traditional computers. What makes quantum computers so much faster? They can perform many calculations at once.

This is possible because the building blocks of quantum computers are not bits and transistors. They are qubits and physical components so small they operate by the rules of quantum physics. These components might literally be elementary particles, such as electrons, suspended in magnetic fields.

This is where the weirdness of quantum physics comes into play. The standard shorthand explanation says traditional bits can be either 1 or 0, whereas according to the rules of quantum physics, qubits can be 1, 0, or both at the same time. This is what truly makes a quantum computer quantum. But let's dig into what that means a bit more.

Let's Take a Quantum Hike

To be clear, quantum computers do not offer more discrete states than a traditional computer—the states are still 1 and 0—but there is no longer an exclusive choice between these states required until the very end of a calculation. This may seem paradoxical—how can something be 1 and 0 simultaneously? And even if this is so, why is a choice required at the end?

To better understand how this is possible, imagine hiking with a magnetic compass. During the day you navigate as you please and as the terrain dictates, glancing at your compass and noting that your direction changes. You might begin walking east, then turn north, spin around to go south, before finally heading northwest. But at the end of each day, you record only whether your encampment is north or south of your departure point that morning. An example log might read "Day 1: North. Day 2: North. Day 3: South. Day 4: North." This two-choice answer belies your more elaborate trajectory containing all the other directions available to the compass. North represents "1" and south represents "0," but of course, there are many other "intermediate" choices which can be expressed.

This is similar to a quantum calculation. During the calculation, a qubit may take any value, but in the final answer only a 1 or 0 is logged. So, the qubit's initial state—the hike's trailhead—is the problem it's trying to solve written in binary. The qubit's final state—the campsite or destination—is its part of the solution, also written in binary. And simplistically, we can think of the qubit's interim state as a combination of 1 and 0, just as the other directions you moved throughout your hike were combinations of north and south. The day's hike around swamps, between hills, and through forests is the quantum calculation—a circuitous route exploring the solution set with a zig northeast, a zag due west, and so on. Eventually, however, each qubit falls into a binary state, and we arrive at our destination.

An Exponential Speed-Up

During a calculation, a qubit pointing in the east direction isn't simply weighted 50 percent north, 50 percent south—it will specifically remember that it was an eastern direction. This preservation of the direction is called coherence, and it is the most important property for quantum computers. Coherence is the property of a qubit to experience the full range of values, and of qubits to share these values with each other. Four coherent qubits could possess values such as "east, northwest, southeast, west," whereas incoherent qubits would possess only values such as "north, north, south, north." Further, each of their values influences the values of their fellow coherent qubits.
Since qubits sharing mixed states is what speeds up computation—this is how they perform multiple calculations at once—it is absolutely essential that the qubits maintain coherence during the calculation. Otherwise, we are just using a simple, slow digital computer that performs only one calculation at a time. A coherent quantum computer thus considers both 0 and 1 simultaneously, performing a calculation for the north as well as the south, but weighting the answer in a way that preserves the direction of the compass. Mathematically, this can be done using imaginary numbers, meaning we don't need to consider east as a direction unique from north or south but only as a strange combination of them.

Increasing coherence time has been a major obstacle to making commercially viable quantum computers. Calculations require at least about 100 nanoseconds, and we have now achieved about 175 nanoseconds. As noted in my last article, this should improve as software improves—the more you can do with a quantum computer, the more resources will pour into the field.

The upshot of all this? Quantum computers offer a massive increase in computing power. A single qubit may concurrently perform two calculations, two qubits may perform four, three qubits eight, and so forth, producing exponentially increasing speed. Just thirty qubits can simultaneously perform more than one billion calculations. Aimed at the right problems and with the right software, the rise of quantum computers may mark a very significant moment in the history of computation.
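A quick way to see where the "thirty qubits, one billion" figure above comes from (my own sketch, not the author's code): the joint state of n qubits is described by 2^n amplitudes, so the amount of information involved in a single quantum step doubles with every qubit added.

```python
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # one qubit in an equal superposition

state = plus
for n in range(2, 31):
    # The joint state of n qubits is the tensor (Kronecker) product of the parts...
    if n <= 20:                   # ...which we only build explicitly while it fits comfortably in memory
        state = np.kron(state, plus)
        assert state.size == 2**n
    # Either way, the number of amplitudes is 2**n.
    if n in (10, 20, 30):
        print(f"{n} qubits -> {2**n:,} amplitudes")

# 30 qubits -> 1,073,741,824 amplitudes: the 'more than one billion' in the text.
```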
Source: https://singularityhub.com/2017/03/30/this-is-what-makes-quantum-computers-powerful-problem-solvers/
Maybe you're reading this piece on your laptop. Maybe it's your smartphone or another mobile device. In either case, your machine works with little pieces of data called bits. Lots of bits. An iPhone X, for instance, has three gigabytes of memory—about 24 billion bits. The reason your machine needs so many bits is that they can only be in one of two positions: zero or one. And to solve a complex problem, like simulating processes in the human body, your machine manipulates information in this form. That's why speed matters, too. You need a machine that can do millions of these computations quickly for it to be useful on a practical, everyday basis.

But what if those bits could be zero and one at the same time, computing multiple possibilities simultaneously? And what if they could influence one another to make more powerful computations possible? A lot fewer bits would yield much greater problem-solving capacity. Exponentially greater, in fact. A machine like that might be able to solve the most complicated problems in the blink of an eye. Welcome to quantum computing.

"Think of a light switch," explains physics PhD student Katherine Van Kirk. "It can either be on or off. The bits in a conventional computer are the same way: one or zero. But in a quantum computer, all of your individual bits—we call them qubits—are more like dimmer switches. Qubits can be somewhere between one and zero. And by interacting, the 'brightness' of one qubit can come to depend on the 'brightness' of another. A machine with these properties might solve complex problems more efficiently than our conventional computers."

As a PhD student at Harvard's Graduate School of Arts and Sciences (GSAS), Van Kirk works with George Vasmer Leverett Professor of Physics Mikhail Lukin to develop a powerful new way of computing—one that operates on the atomic level in the realms of probability and uncertainty. As she does, however, she also keeps her eye on issues of social justice and equity, and the human relevance of technological advances in quantum science. Before the field can transform areas like cryptography, medical science, and machine learning, however, researchers need to understand how to isolate the information they want to get out of the new supercomputers. That's where Van Kirk's research comes in.

Capturing the Full Picture

Van Kirk was an undergraduate at Stanford studying engineering physics when she attended her first class in quantum mechanics. She says that she was both flummoxed and smitten by the concepts she encountered. "When I took quantum mechanics, I fell in love," she remembers. "I was totally baffled by the fact that an electron can be in two states at once. I wanted to learn everything about it."

Van Kirk continued to explore the quantum realm at Stanford before enrolling at the University of Cambridge for her master's degree in applied mathematics. Before she came to Harvard, she worked at IBM, one of the leading developers of quantum computing. She says that one of the challenges with quantum devices is isolating the information you want. Your iPhone may have many times the processing power that the first room-sized computers had, but it works on the same unambiguous principle: zeros and ones.
Quantum computers, however, operate in the realm of an exponential number of interdependent possibilities. As part of the Harvard Quantum Initiative (HQI), Van Kirk works to isolate the relevant information. "A quantum computer doesn't always just give you the answer to your query," she says. "Sometimes it gives you a quantum state. And the answer is encoded in it. You might think 'Easy! To get the answer, I can just take a snapshot of my system.' But in quantum mechanics, the full picture isn't that easy to capture. You need an exponential number of measurements to get the full picture. And that's super expensive! Depending on what you're trying to do, this costliness may force you to lose the advantage of using a quantum computer in the first place."

At the HQI, Van Kirk is trying not only to come up with a more efficient way to extract information from a quantum system, but also to do it with the tools experimentalists currently have on hand. "I'm designing a protocol that uses as few measurement snapshots as possible to extract only the information relevant to the problem we're trying to solve," she says. "The protocol will be a systematic way to take snapshots from just the right 'angles.' And the tools I'm using to develop the protocol are the same ones that the experimentalists have access to in the lab. So, it's exciting because once it's finished, it will be immediately useful."

While Van Kirk cautions that there are still many questions surrounding the application of quantum computing, she says that cryptography—critical for cybersecurity—is an area where it could be transformational. Big numbers are notoriously difficult for even the mightiest supercomputers to factor. The MIT mathematician Peter Shor has developed an algorithm that shows quantum computers could do this work more efficiently than the fastest classical supercomputers.

Another area where quantum could have a big impact is medicine. "Quantum computers might be able to effectively simulate quantum mechanical processes," Van Kirk says. "What that means is that a quantum computer might be able to simulate, for instance, some complicated molecule. That's really hard for a classical computer to do. And if you can simulate a complicated molecule, then you can learn about its characteristics, which might be useful in developing pharmaceuticals."

Science and Bias

As excited as Van Kirk can be about quantum computing, she is also concerned about bias and inequality. At the time she took her first class in quantum mechanics at Stanford, Van Kirk was also running an international non-profit dedicated to empowering women in technology. She worries about how the systems we build can reflect our own biases or even historical inequities. She points to ProPublica's 2016 study of "risk assessment tools" used by the criminal justice system. "This algorithm gives defendants a 'risk score,' which is supposed to capture how likely they are to commit a crime in the future," she says. "While it doesn't explicitly consider race as a factor, the algorithm was more likely to rate black defendants as higher risk than white defendants. The tool is used by law enforcement across the country, and in some states, the judges even see the scores during sentencing."

Biased tools like these are built using machine learning, one of the potential applications of quantum computers. Van Kirk studied how bias might be quantified and mitigated on these machines.
She tested one possible metric for measuring bias on a quantum computer using public data from this risk assessment tool and found that it reflected the model's disparate treatment of Black and white defendants.

Along those lines, Van Kirk avidly studies social science research, hoping to draw a concrete link between real-world problems and the work she does on quantum computing. She says that she wants to advance science and equity too. "I put aside time every week to read about what's being done in fields other than physics," she says. "Recently I've been reading a lot of computer science and molecular biology, but I also enjoy social science articles too. I'm always trying to learn about the problems that exist and ask myself where quantum computers might be able to make an impact."

Van Kirk's mentor Lukin says that her concern for the social good makes her more than an outstanding student: it makes her a remarkable person. "The combination of the depth and the breadth of Katherine's knowledge in subjects ranging from physics to computer science is truly exceptional," he says. "She cares deeply about the implications of her work for both the broader scientific community and for society as a whole, which I find very inspiring. I look forward to her professional development from a talented student to a mature, exceptional young scientist."

Now at the close of her first year at GSAS, Van Kirk says she's not sure where her research will lead her. She is certain about her goal, though. She wants every project she works on to have a positive effect on society. "I always ask myself if I would have a greater impact than I am having now if I was back growing a nonprofit around some issue that I care about," she says. "If my answer to that question is ever 'Yes,' I will switch gears and try something new."
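To put a number on the "exponential number of measurements" Van Kirk describes, here is a back-of-the-envelope sketch (my own, using the standard parameter count for full state tomography as its assumption): completely characterizing an n-qubit state means estimating on the order of 4^n - 1 real parameters, which is exactly the kind of cost that measurement protocols like hers aim to avoid.

```python
# Full state tomography of n qubits: the density matrix is 2**n x 2**n,
# so it is fixed by 4**n - 1 independent real parameters (one Pauli-string
# expectation value each, excluding the identity).
for n in (2, 5, 10, 20, 30):
    dim = 2**n
    params = 4**n - 1
    print(f"{n:2d} qubits: {dim:>13,}-dimensional state, {params:,} parameters to estimate")

# 30 qubits already needs ~10**18 parameters -- hence protocols that extract
# only the information relevant to a specific question, rather than the full picture.
```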
Source: https://gsas.harvard.edu/news/stories/quantum-leap
Back in 2001, an obscure group of theoretical physicists proved a remarkable result. They showed that it was possible to build a quantum computer out of ordinary optical components, such as mirrors, lenses, lasers, and so on. That was something of a surprise. Until then, physicists had thought that quantum computing would only be possible using non-linear crystals and other exotic components that are hard to control.

The prospect of using ordinary bits and pieces has an important consequence. It immediately suggests that more powerful devices can be built simply by adding extra components. This problem of scalability has always plagued other attempts to build quantum computers. The reaction from the theoretical physics community was barely controlled excitement.

But in practice, this approach has never lived up to its early promise. That's because it is hard to build even ordinary optical components into chip-like devices that can be scaled like conventional silicon chips. It is just not possible to manufacture them with the required performance and tolerances.

Today, Benjamin Metcalf at the University of Oxford and a few pals show how they are tackling these problems while aiming for the ultimate goal of scalable quantum computation. These guys have built the first self-contained photonic chip capable of teleportation, one of the fundamental logic operations necessary for quantum computation. The device is a proof-of-principle demonstration that scalable quantum computers of this type are firmly in the crosshairs of experimental physicists. But it also reveals that significant challenges lie ahead.

Quantum teleportation is a standard procedure in quantum optics laboratories all over the world. It aims to transfer the quantum state of an input qubit, Q1, to a target qubit, Q3. The process begins by creating a pair of entangled photons, Q2 and Q3. These share the same quantum existence, so that a measurement on one immediately influences the other.

This measurement is important. Known as a two-qubit Bell-state measurement, it is carried out on both Q1 and Q2 at the same time. Because Q2 is entangled with Q3, this results in the quantum state of Q1 being transferred to Q3. In other words, the quantum state of Q1 is teleported to Q3, which may be in an entirely different part of the universe.

This process is usually carried out using low-intensity laser beams and ordinary components such as mirrors and optical fibers. But the new photonic device shrinks all these components onto a single silicon chip. It has a source of photons, beam splitters, silica waveguides to channel the photons through the device, as well as components for creating and measuring quantum bits, or qubits.

One of the key questions these guys set out to answer is how well each of these components works and how their limitations contribute to the overall performance of the chip. Until now, one problem with this approach has been that it is difficult to create high-quality single photons in chip-based devices. What's more, these photons tend to get absorbed by imperfect beam splitters or scattered in the silica waveguides, dramatically reducing the robustness of the process.

The advance that Metcalf and co have achieved is to dramatically improve the quality of their single-photon sources while characterizing the losses from other optical components, such as beam splitters and waveguides, for the first time.
In doing so, they've demonstrated one of the basic logic operations of quantum computing inside a photonic chip for the first time: the teleportation of a qubit.

The new chip is by no means perfect: it performs with around 89 percent fidelity. One source of errors is the photon source, which is far from ideal. "Whilst the success of this experiment relies on our development of high-quality single photon sources with exceptional heralded state purity and heralding efficiency, the absence of a true on-demand single photon source continues to limit the achievable fidelity," they say.

A more significant source of errors is the non-ideal beam splitters, which by themselves reduce the fidelity of the device to around 90 percent. That's good enough for secure communication. "But it is still below the fidelity of 99% thought to be required for a fault-tolerant quantum computer," admit Metcalf and co.

It is inevitable that beam splitters and waveguides made in this way will deviate from their design parameters. The challenge is to ensure that these deviations are kept to a minimum or corrected by other components in real time. Finally, future photonic chips will need better materials that reduce the loss of photons due to scattering. That becomes particularly important as chips become larger and more complex.

So the scale of the future challenges is clear. If physicists want to build photonic chips capable of carrying out quantum computation, they will need better photon guns, less lossy materials and active components that can measure and correct aberrations in real time. That's a big ask. Large-scale quantum computers are coming, but on this evidence, not in the very near future in photonic form.

Ref: arxiv.org/abs/1409.4267 : Quantum Teleportation on a Photonic Chip
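The teleportation protocol described above (entangle Q2 and Q3, perform a Bell-state measurement on Q1 and Q2, then apply corrections to Q3) can be reproduced exactly in an idealized, noise-free state-vector simulation. The NumPy sketch below is my own illustration of that textbook protocol, not the Oxford group's photonic implementation, and it ignores the loss and fidelity issues the article discusses.

```python
import numpy as np

# Single-qubit gates and the CNOT (control = first qubit of the pair, target = second).
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
zero = np.array([1, 0])

def kron(*ops):
    """Tensor product of vectors/matrices, left to right."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

rng = np.random.default_rng(7)

# Q1 carries an arbitrary (random) input state alpha|0> + beta|1>.
psi_in = rng.normal(size=2) + 1j * rng.normal(size=2)
psi_in /= np.linalg.norm(psi_in)

# Qubit order: Q1, Q2, Q3. Initial state: psi_in (x) |0> (x) |0>.
state = kron(psi_in, zero, zero)

# Entangle Q2 and Q3 into a Bell pair: H on Q2, then CNOT Q2 -> Q3.
state = kron(I2, H, I2) @ state
state = kron(I2, CNOT) @ state

# Bell-state measurement on Q1 and Q2: CNOT Q1 -> Q2, H on Q1, then measure both.
state = kron(CNOT, I2) @ state
state = kron(H, I2, I2) @ state

amps = state.reshape(2, 2, 2)                    # axes: (Q1, Q2, Q3)
probs = np.sum(np.abs(amps) ** 2, axis=2)        # joint outcome probabilities for (Q1, Q2)
m1, m2 = np.unravel_index(rng.choice(4, p=probs.ravel()), (2, 2))

# Q3 collapses to a state that still needs the classical corrections.
q3 = amps[m1, m2, :]
q3 = q3 / np.linalg.norm(q3)
if m2 == 1:
    q3 = X @ q3
if m1 == 1:
    q3 = Z @ q3

fidelity = abs(np.vdot(psi_in, q3)) ** 2
print(f"Bell measurement outcome: {m1}{m2}, teleportation fidelity: {fidelity:.6f}")  # ~1.0
```

In this noiseless model the fidelity is 1 for every measurement outcome; the 89 percent figure in the article comes entirely from the imperfect photon sources and beam splitters that the simulation leaves out.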
Source: https://www.technologyreview.com/2014/09/26/250075/first-quantum-logic-operation-for-an-integrated-photonic-chip/
Experimental physics is littered with freaky effects, often the product of obscure forces moving and changing objects in ways we don't expect, but almost always leading to perfectly understandable conclusions. One notable exception – arguably the most notable, in fact – is the double-slit experiment.

Cut two narrow, parallel slits on an opaque sheet and shine light on them. If the conditions are right, you'll see an interference pattern on the wall behind the sheet – the result, and proof, of the photons' wavelike behaviour. But if you stick a small detector on each of those slits to track the movement of waves through each one, the interference pattern will be replaced by one or two small pricks of light on the wall – the result, and proof, of the photons behaving like particles and moving only through one slit or the other. Taken together, the experiment demonstrates the wave-particle duality of quantum objects (objects whose behaviour is dictated by quantum forces, as opposed to macroscopic objects that are dominated by classical forces).

In the more than two centuries since the first double-slit experiment, in 1801, many groups of scientists have modified it in different ways to elicit different aspects of the duality, and their implications for the study of the nature of reality. Anil Ananthaswamy's 2018 book Through Two Doors At Once wends its way through this history, at each step stopping to identify more and more strange discoveries that have only complicated, instead of simplified, the behaviour of particles like photons.

One especially weird possibility is contained in a thought-experiment called the Elitzur-Vaidman bomb tester. Another is contained in a series of famous thought-experiments that American physicist John Wheeler proposed from the late 1970s. Essentially, he asked if each photon could make a 'decision' about whether it would travel as a particle or as a wave based on the experimental setup in front of it, if this decision happened in a certain time frame, and if an observer could anticipate this decision-making moment and interfere with it.

As bizarre as this sounds, physicists have been able to set up experiments whose results have been consistent with some of Wheeler's hypotheses. For example, say you shine a laser at a beam-splitter. The beam is split in two perpendicular directions; let's call them A and B. A is made to bounce off a mirror by 90° and moves to a point, which we'll call P. B is also turned 90° by a mirror in its path and directed to P. If there is a detector at P, physicists have observed a prick of light – indicating both A and B beams were composed of particles. But if there is another beam-splitter at P, then the combined A and B beams are split once again into two beams – and one of them has shown an interference pattern.

If A and B were composed of particles until they struck the detector or splitter at P, where did the waves come from? Or, according to Wheeler's hypothesis, did the photons travelling as part of A and B anticipate that there would be a splitter instead of a detector at P, and decided to become waves? We don't know. Specifically, there are different interpretations of the experiment's outcomes that try to make sense of what happened, but we don't have objective data that supports one exact possibility, in a classical sense. Wheeler himself concluded that there are no phenomena in the natural universe that are independent of their observations.
That is, until you observe something (quantummy) happening, Wheeler figured it wouldn’t have happened (at least not the way you think it did). But more importantly (for this post), both Wheeler’s ideas and the experiments that physicists used to elucidate wave-particle duality kept the focus on the particle, the observer and the test setup. A new study by scientists in the US may complicate this picture even more: they’ve reported evidence that the source of the particles could also influence how the particles behave in an experiment. Theoretical physicists have anticipated such a finding. For example, one paper published in February 2020 said that when its authors set out to quantify the extent to which a setup would produce an interference pattern or pinpricks of light, they found a simple mathematical relationship between this measure and the purity of the photon source. In the new study, simply put, physicists flashed a specifically tuned laser onto two crystals of lithium niobate. The crystals then emitted two photons each, which the physicists called the ‘signal’ and the ‘idler’. They directed the signal photons from both crystals to an interferometer – a device that splits a beam of light into two and recombines them to produce an interference pattern – to observe the characteristic proof of wave-like behaviour; they also directed the two idler photons to two detectors, to confirm their particle-like behaviour. Each pair of signal and idler photons produced by each crystal would be entangled. Wikipedia: “Quantum entanglement is a physical phenomenon that occurs when a group of particles are generated, interact or share spatial proximity in a way such that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance.” One implication of this relationship is that if we discover, or observe, one of two entangled particles in a certain quantum state, we can determine the state of the other particle without observing it. In their experiment, the physicists effectively mapped source purity to “the likelihood that a particular crystal source will be the one that emits light” (source). That is, by increasing or decreasing the chances of one of the two crystals emitting photons – by adjusting the strength of the incident laser – the physicists could control the value of the source purity they needed to plug into the equation. They found that when one of the crystals became very likely to emit paired photons, the interference pattern became very feeble – i.e. the photons at the interferometer were behaving like particles. The interference pattern was sharpest when both crystals were equally likely to emit paired photons. These results confirmed the (theoretical) findings of the February 2020 paper, but the physicists were able to do one better. The February 2020 paper posited that source purity (µ), interference visibility (V) and ‘particle location distinguishability’ (D) were related thus: V² + D² = µ². The new paper also found that µ² = 1 – E², where E is a measure of the extent of entanglement between an idler photon and the detector detecting it. This is new, and we don’t yet know how other physicists will exploit it to delve even more into the seemingly bottomless pit that is wave-particle duality. 
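To see how those two relations fit together, here is a minimal numerical sketch in Python. It simply plugs illustrative values into the formulas quoted above; the function names and the chosen values of E and D are my own, not the papers’ notation or data.

```python
import math

# Relations quoted above (all quantities lie between 0 and 1):
#   V^2 + D^2 = mu^2   (visibility vs. path distinguishability, bounded by source purity)
#   mu^2 = 1 - E^2     (purity fixed by entanglement with the idler/detector)

def purity_from_entanglement(E: float) -> float:
    """Source purity mu implied by an entanglement measure E."""
    return math.sqrt(1.0 - E ** 2)

def max_visibility(mu: float, D: float) -> float:
    """Largest interference visibility V once the distinguishability D is fixed."""
    return math.sqrt(max(mu ** 2 - D ** 2, 0.0))

for E in (0.0, 0.5, 0.9):
    mu = purity_from_entanglement(E)
    # With perfectly indistinguishable paths (D = 0), the visibility saturates at mu.
    print(f"E = {E:.1f} -> mu = {mu:.3f}, max V = {max_visibility(mu, 0.0):.3f}")
```

As E grows, µ shrinks and so does the largest possible fringe visibility, which is consistent with the fading interference pattern the experimenters reported.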
Equally, the experiment also demonstrates, according to Xiaofeng Qian, one of the authors of the February 2020 paper, that a “quantum particle can behave simultaneously, but partially, as both” wave and particle.
<urn:uuid:a6600561-41aa-412c-8589-59c25e800707>
CC-MAIN-2021-43
https://rootxprivileges.wordpress.com/2021/09/03/part-wave-part-particle-same-time/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00241.warc.gz
en
0.960149
1,484
3.578125
4
A quantum computer doesn’t need to be a single large device but could be built from a network of small parts, new research from the University of Bristol has demonstrated. As a result, building such a computer would be easier to achieve. Many groups of research scientists around the world are trying to build a quantum computer to run algorithms that take advantage of the strange effects of quantum mechanics such as entanglement and superposition. A quantum computer could solve problems in chemistry by simulating many-body quantum systems, or break modern cryptographic schemes by quickly factorising large numbers. Previous research shows that if a quantum algorithm is to offer an exponential speed-up over classical computing, there must be a large entangled state at some point in the computation, and it was widely believed that this translates into requiring a single large device. In a paper published in the Proceedings of the Royal Society A, Dr Steve Brierley of Bristol’s School of Mathematics and colleagues show that, in fact, this is not the case. A network of small quantum computers can implement any quantum algorithm with a small overhead. The key breakthrough was learning how to efficiently move quantum data between the many sites without causing a collision or destroying the delicate superposition needed in the computation. This allows the different sites to communicate with each other during the computation in much the same way a parallel classical computer would do. The authors summarise their results as follows: We provide algorithms for efficiently moving and addressing quantum memory in parallel. These imply that the standard circuit model can be simulated with low overhead by the more realistic model of a distributed quantum computer. As a result, the circuit model can be used by algorithm designers without worrying whether the underlying architecture supports the connectivity of the circuit. In addition, we apply our results to existing memory-intensive quantum algorithms. We present a parallel quantum search algorithm and improve the time-space trade-off for the Element Distinctness and Collision Finding problems. In classical parallel computing, sorting networks provide an elegant solution to the routing problem and simulation of the parallel RAM model. In this paper, we have demonstrated that they can be applied to quantum computing too. The information about the connectivity of a quantum circuit is available before we run the algorithm (at compile time). Using this classical information we have designed an efficient scheme for routing quantum packets. The application of this data-moving algorithm is to distributed quantum computing. We provide an efficient way of mapping arbitrary unconstrained circuits to limited circuits respecting the locality of a graph. Our results already apply to nearest neighbour architectures in the case of a circuit that is highly parallel. The case of emulating a circuit with many concurrent operations on a 1D nearest neighbour machine was covered by Hirata et al. The approach is to use the Insertion/Bubble sort to perform all of the operations in O(N) time-steps, which compares favorably to performing each gate in turn in O(N²) depth. We put this idea in a general framework applying to any (connected) graph. Along the way we are able to prove that up to polylogarithmic factors, this approach is optimal. We have shown how the addition of a few long-range (or flying) qubits dramatically increases the power of a distributed quantum computer. 
Using only O(log N) connections per node enables efficient sorting over the hypercube. A distributed quantum computer with nodes connected according to the hypercube graph would be able to emulate arbitrary quantum circuits with only O(log² N) overhead. One might expect that a quantum computer requires O(N) connections per node so that each qubit can potentially interact with any other qubit. Our result demonstrates that this is not the case: for a small overhead, O(log N) connections suffice. We have presented a new algorithm for accessing quantum memory in parallel. The algorithm is a modification of the data-moving algorithm used in Sections 2 and 3, but where the destinations are quantum data and no longer restricted to form a permutation. The algorithm is extremely efficient; it has an overhead that is scarcely larger than any algorithm capable of accessing even a single entry from memory. Theorem 5 implies that N processors can have unrestricted access to a shared quantum memory. It tells us that the quantum parallel RAM and the circuit models are equivalent up to logarithmic factors. Finally, we demonstrated that the parallel look-up algorithm can be used to optimize existing quantum algorithms. We provided an extension of Grover’s algorithm that efficiently performs multiple simultaneous searches over a physical database, and answered an open problem posed by Grover and Rudolph by demonstrating an improved space-time trade-off for the Element Distinctness problem. It seems likely that this framework for efficient communication in parallel quantum computing will be a useful subroutine in other memory-intensive quantum algorithms, such as triangle finding, or more generally for frameworks such as learning graphs.
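To make the hypercube layout concrete, here is a small Python sketch of the connectivity the paper assumes: in a hypercube, two nodes are linked when their binary labels differ in one bit, so each node needs only log₂(N) links. This illustrates the graph structure only, not the routing or sorting algorithms themselves, and the node count is an arbitrary example.

```python
import math

def hypercube_neighbours(node: int, dim: int) -> list[int]:
    """Neighbours of `node` in a `dim`-dimensional hypercube: flip one bit at a time."""
    return [node ^ (1 << bit) for bit in range(dim)]

N = 64                       # number of sites in the distributed quantum computer
dim = int(math.log2(N))      # hypercube dimension = links per node

# Each node needs only log2(N) connections, not N - 1 for all-to-all coupling:
print(f"{N} nodes, {dim} links per node (vs. {N - 1} for all-to-all)")
print("neighbours of node 0:", hypercube_neighbours(0, dim))
```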
<urn:uuid:9499c22f-363a-45c7-b4bf-494a50b67214>
CC-MAIN-2021-43
https://www.nextbigfuture.com/2013/02/quantum-hypercube-memory-will-enable.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00440.warc.gz
en
0.924567
1,189
3.921875
4
How feasible is it to build a Jupiter brain, a computer the size of a planet? Just in the past few decades, the amount of computational power that’s available to humanity has increased dramatically. Your smartphone is millions of times more powerful than the NASA computers used to send astronauts to the moon on the Apollo 11 mission in 1969. Computers have become integral to our lives, becoming the backbone of our communications, finances, education, art, health care, military, and entertainment. In fact, it would be hard to find an area of our lives that computers didn’t affect. Now imagine that one day we make a computer that’s the size of an entire planet. And we’re not talking Earth, but larger, a megastructure the size of a gas giant like Jupiter. What would be the implications for humans to operate a computer that size, with an absolutely enormous, virtually limitless, amount of computing power? How would our lives change? One certainly begins to conjure up the transformational effects of having so much oomph, from energy generation to space travel and colonization to a fundamental change in the lifespan and abilities of future humans. But while speculation of that sort can easily lead us into the fictional realm, what are the known facts about creating such an impressive computer? How hard would it be?

The limits of a Jupiter brain

Building a Jupiter brain would depend on specific factors that limit the power of any computer, as outlined by the Swedish computational neuroscientist and transhumanist Anders Sandberg in his seminal 1999 paper on the subject. His work, titled “The Physics of Informational Processing Superobjects: Daily Life Among the Jupiter Brains,” focused on the requirements for building such an enormous computer. As Sandberg writes in his paper, the “laws of physics impose constraints on the activities of intelligent beings regardless of their motivations, culture or technology.” Even more specifically, he argues, each civilization is also limited by the physics of information processing. The specific physical constraints Sandberg found in supersizing a computer are the following:

1. Processing and memory density. The elements that constitute a computer and its memory units, all the chips and circuits involved, have a finite size, which is limited by physics. This fact creates “an upper limit” on the processing and memory density of any computing system. In other words, you can’t make computer parts smaller than a certain size; beyond that point, they stop functioning reliably.

2. Processing speed. The speed of information processing or memory retrieval is related to how fast electrical signals can travel through the computer, determined by the “natural timescales of physical processes,” writes Sandberg.

3. Communication delays. If we build a gigantic computer the size of a planet, it will experience delays in communication between its various extended parts due to the speed of light. In fact, the faster its processing speed, the longer the delays might feel “from an internal subjective point of view,” as the scientist describes. If we want fewer delays, the distances in the system need to be as small as possible, or the system must avoid relying on communication over long distances.

4. Energy supply. As you might imagine, an extremely large computing system would be a major power hog. Computation on such a scale would need tremendous amounts of energy and careful management of heat dissipation. 
In fact, looking for the heat emissions from large computing systems is one potential way to scour the sky for advanced alien civilizations. Sandberg suggests some ways to deal with these challenges. While the power and speed of individual processors may have a limit, he argues, the focus must turn to building parallel systems in which all the disparate elements work in unison. He gives the example of the human brain, where “even fairly slow and inefficient elements can produce a very powerful computing system.” The processing limits and the delays in communication may have to be handled by creating a computing system that’s more concentrated and modular. Among other considerations, he also proposes giving “reversible computing” (a theoretical form of computation in which the computational process is, to some extent, time-reversible) a closer look, as it may be possible to achieve this type of computation without having to expend additional energy. It involves no bits being erased and is based on reversible physics. An example of this would be copying a record along with its inverse. Such machines could potentially be built using reversible circuits and logic boards as well as quantum computation, among several other approaches proposed by Sandberg.

Technologies you would need

One of the fun parts of trying to design a Jupiter brain is figuring out the technology that would be necessary to accomplish this mammoth task. Besides the army of self-replicating nanorobot swarms that would be needed to put this immense computer together, Sandberg, in an appendix to his paper, suggests a design for a Jupiter brain that he calls “Zeus.” Zeus would be a sphere 11,184 miles (18,000 kilometers) in diameter, weighing about 1.8 times the mass of Earth. This super-object would be made out of nano diamonds called diamondoids. These would form a network of nodes around a central energy core consisting of quantum dot circuits and molecular storage systems. Another way to organize the nodes and distribute information could be through a cortex “with connections through the interior,” which Sandberg finds most “volume-efficient” and best for cooling. Each node would be a processing element, a memory storage system, or both, meant to act with relative independence. Internal connections between the nodes would be optical, employing fiber optics/waveguides or utilizing “directional signals sent through vacuum.” Around the sphere would be a concentric shield whose function would be to offer protection from radiation and dissipate heat into space via radiators. Zeus would be powered by nuclear fusion reactors dispersed on the outside of that shield. This would make a Jupiter brain particularly distinct from other hypothetical megastructures like a Dyson Sphere or a Matrioshka Brain that Type II civilizations on the Kardashev Scale could theoretically create to harness energy from stars. Where would we get the supplies to make a Jupiter brain? Sandberg proposes gathering the carbon located in gas giant cores, or obtaining it through star lifting, any of several hypothetical processes that would allow Type II civilizations to repurpose stellar matter. If planet-size computers are not enough of a challenge, Sandberg also proposes some information processing solutions that even he termed “exotica,” as they involve developing or purely theoretical technologies. 
Among these are quantum computers, which are not only quantitatively but “qualitatively more powerful than classical computers.” Sandberg also believes they allow for reversible computation and are the “natural choice” when it comes to computing systems on the nanoscale or the even smaller femtoscale. Black holes could potentially be used as processing elements if they do not destroy information, a currently contested notion. If information is released from black holes via Hawking radiation, they could possibly be tapped as information processors, conjectures the scientist. A network of wormholes, theoretical tunnels that connect distant parts of the space-time continuum, is another yet-to-be-proven hypothetical structure that may prove “extremely useful” for information processing and communications. Another philosophical nugget that would be at home in any discussion involving The Matrix also emerged from Sandberg’s paper: as a civilization grows and expands its information processes to the limits of physical laws and technology, it will at some point become “advantageous in terms of flexibility and efficiency for individual beings to exist as software rather than (biological) hardware.” Why is that so? Fewer of the increasingly scarce resources would be required to sustain such a being, which would evolve automatically as code. The limits of this virtual existence are bounded by the computing system it exists in. “As technology advances the being will be extended too,” writes Sandberg. The Swedish philosopher Nick Bostrom wrote a now-famous paper on the Simulation Hypothesis titled “Are we living in a computer simulation?” In it, he estimates that all the brain activity of all the humans who ever lived would amount to somewhere between 10³³ and 10³⁶ operations. By comparison, a planet-sized computer like a Jupiter brain would be able to execute 10⁴² operations per second. It would be able to simulate all of human brain activity ever, all the consciousnesses of all the people who ever lived, “by using less than one millionth of its processing power for one second,” writes Bostrom. Certainly, these technologies and their implications are highly speculative at this point, but visualizing the futuristic gadgetry is one step in making it real eventually, as has happened with other tech developments. If we can imagine it, well, perhaps we can build it.
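As a quick sanity check on Bostrom’s comparison above, the arithmetic can be spelled out in a couple of lines of Python; the two figures are simply the estimates quoted in the text.

```python
# Bostrom's ballpark figures, as quoted above:
human_history_ops = 10**36          # upper estimate of all human brain operations ever
jupiter_brain_ops_per_s = 10**42    # operations per second for a planet-sized computer

fraction_of_one_second = human_history_ops / jupiter_brain_ops_per_s
print(f"Fraction of one second of capacity needed: {fraction_of_one_second:.0e}")
# -> 1e-06, i.e. "less than one millionth of its processing power for one second"
```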
<urn:uuid:d99b2f83-598a-4a48-bcf2-09aaafd5cc1c>
CC-MAIN-2021-43
https://interestingengineering.com/how-to-make-a-jupiter-brain-a-computer-the-size-of-a-planet?utm_source=rss&utm_medium=article&utm_content=12102021
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00401.warc.gz
en
0.944748
1,871
3.765625
4
Computing technology is about to reach a landmark moment in its growth. As the academic and business realms brace for the dawn of the quantum age, there is understandable apprehension about what lies ahead, which raises questions like: what is quantum computing, and what are its applications and implications? One fact is for sure – the world is about to see a new generation of security threats tied to quantum computing. The main danger is the disruption of traditional encryption systems, resulting in all kinds of private information being revealed. Although brute-force attacks can take months or longer to break through high-grade algorithms, quantum attacks could break traditional public-key encryption in a fraction of that time. Even if quantum computers do not arrive for another decade, today's public-key encryption has yet to be proven reliable against mathematical attacks. Furthermore, the threat posed by a potential quantum computer has an immediate impact on data: the “download now, decode afterward” attack vector means that (encrypted) confidential information can be collected now and held until quantum computers emerge to decrypt it offline. However, there are ways to protect your data, collectively referred to as quantum security. Quantum-safe encryption is the best way to maintain the protection of encryption keys and achieve long-term privacy. Quantum computing security would secure data from today's brute-force attacks, guarantee that long-living data is protected from future attacks, and shield high-value data in a post-quantum computing environment.

Dangers Posed by Quantum Computer Hacking

Quantum computers are computing and data-storage devices that make use of quantum mechanical properties. For some tasks this is immensely beneficial, letting them significantly outperform even the most powerful supercomputers. Traditional devices, such as phones and laptops, store data in binary “bits” that can be either 0s or 1s. A quantum bit, or qubit, is the fundamental memory unit of a quantum computer. Qubits are built from physical systems such as the spin of an electron or the polarization angle of a photon. Quantum superposition is the property that allows these systems to be in several configurations at the same time. Quantum entanglement is a phenomenon that binds qubits inextricably together. As a consequence, a set of qubits can represent several values at the same time. A traditional device, for example, can represent any one number between 0 and 255 with eight bits. An eight-qubit quantum machine, however, can represent all numbers between 0 and 255 simultaneously. This is the basis for massively faster processing of certain problems, such as simulating quantum mechanics at the molecular level. Now a vital question arises – what is quantum computing used for? In 2019, Google made headlines by claiming quantum supremacy, reporting that its machine could execute a task that a traditional machine practically could not. IBM has also been reporting on its efforts to build a 1,000-qubit quantum computer by 2023. Below are some of the most common quantum computing applications in practice:
- Artificial Intelligence (AI) and Machine Learning (ML): Speech, image, and handwriting recognition are only a few of the typical AI and ML applications we see daily. This is where quantum computation could help, solving difficult problems in a fraction of the time that conventional computers would need – in some cases thousands of years. 
- Computer-Aided Chemistry: The number of quantum states, even in the tiniest of molecules, is enormous, making them impossible for traditional computer memory to process. Creating a room-temperature superconductor, removing carbon dioxide for a healthier atmosphere, and optimizing the nitrogen-fixation process for ammonia-based fertilizer are only a few of the crucial problems that quantum computing could help solve.
- Drug Development & Design: Researchers expect that quantum computation can be a valuable method for understanding drugs and their effects on humans, saving pharmaceutical companies a great deal of money and time.
- Cryptography & Cybersecurity: Given the number of cyber-attacks that occur regularly worldwide, the online security environment has become very fragile. Quantum computing, combined with machine learning, could help develop new strategies to fight cyber threats.
- Financial Analysis: Companies can increase the accuracy of their solutions and reduce the time needed to produce them by using quantum technology to execute large and complicated calculations.

Quantum computers could, however, also assist hackers in gaining access to our most sensitive information by breaking cryptography that would usually take thousands of years to crack, even with supercomputers. To demonstrate that traditional encryption (RSA + AES) is approaching its limit, Active Cypher designed QUBY, a miniature quantum-emulating computer built by repurposing hardware to run quantum algorithms. With hardware costing $600, QUBY is compact enough to fit in a backpack. Performing a vast superposition of potential outcomes for modern encryption algorithms would require a quantum processor with millions of qubits – and the biggest quantum machine currently available has just 72 qubits. However, quantum emulators can already speed up the breaching of encryption protocols by running sophisticated cracking algorithms on non-quantum hardware. This makes quantum security even more important today. Present defense measures would be vulnerable to new forms of cyber threats enabled by quantum computing, posing a serious problem for complex technical networks such as connected cars or industrial control systems. Cyber-attacks on industrial facilities may result in the theft of trade secrets or disruption of the manufacturing cycle, resulting in colossal economic damage. These aspects also increase the significance of quantum security.

Importance of Quantum Security

Without quantum-safe encryption, any data transferred over public media now and in the future would be susceptible to eavesdropping. Material protected against current attacks could also be stored for later decryption once a functional quantum computer is available, and data could be manipulated without detection, making it difficult to guarantee the credibility and authenticity of transmitted content. From a business, ethical, and legal viewpoint, this would breach existing regulatory standards for data confidentiality and safety. Security specialists are concerned because the protection offered by today's common algorithms is based on one of three difficult mathematical problems: integer factorization, the discrete logarithm, or the elliptic-curve discrete logarithm. These problems could be solved efficiently using Shor's algorithm on a sufficiently powerful quantum computer. 
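To make that last point concrete: Shor's algorithm reduces factoring a modulus N to finding the period of modular exponentiation, and only that period-finding step needs a quantum computer. The toy Python sketch below runs the whole procedure classically for a tiny modulus, so the brute-force `classical_period` function simply stands in for the quantum part; the numbers and function names are illustrative only.

```python
from math import gcd

def classical_period(a: int, N: int) -> int:
    """Brute-force the order r of a modulo N: smallest r with a^r = 1 (mod N).
    This is the step a quantum computer would perform exponentially faster."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_postprocess(N: int, a: int):
    """Classical post-processing of Shor's algorithm for a toy modulus."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g                      # lucky guess already shares a factor
    r = classical_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                           # unlucky choice of a; pick another
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    return p, q

print(shor_postprocess(15, 7))   # -> (3, 5)
```

For a 2048-bit RSA modulus the period-finding loop is hopeless on classical hardware, which is exactly why a large, reliable quantum computer changes the picture.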
Although existing quantum computers lack the computational power to break any real cryptographic algorithm, many cryptographers are developing new algorithms to prepare for the day when quantum computing becomes a threat. Encryption preserves privacy efficiently because encrypting and decrypting data are relatively cheap operations, while cracking an encryption scheme is extremely hard. A brute-force attack on the encryption key is likewise impractical. Since an intruder can't access data without the encryption keys, they will try to steal them; if the keys cannot be obtained by any other means, the attacker falls back on a “brute force” attack, trying every possible key. Quantum-safe security schemes rest on the principle that, without the encryption key, an intruder can neither decrypt the data nor tamper with it undetected. When addressing encryption, it's important to understand that most security schemes employ one of two forms of encryption – asymmetric and symmetric.
- Asymmetric Encryption: This uses a key pair comprising a public key and a private key. Every node has its own pair: the public key can be shared with other nodes, but the private key must be kept confidential. A drawback of asymmetric encryption is that it can use 100 times as many CPU cycles as symmetric encryption. The usual solution is to run an asymmetric session first, establishing an encrypted link over which a secret symmetric key is shared.
- Symmetric Encryption: This uses a single secret key that all communicating parties hold. The question is how to share that key without it being intercepted – and this is where asymmetric encryption comes in, allowing the symmetric key to be exchanged safely.

Discovering, developing, and implementing quantum-safe encryption algorithms has become a priority for academic, technology, and government organizations worldwide. The aim is to develop one or more algorithms that can reliably withstand quantum computation. Quantum-safe encryption relies on mathematical problems that are believed to be intractable for both classical and quantum computers. The present RSA and ECC algorithms are based on algebraic problems involving very large numbers; they are used to derive public and private key pairs in such a way that the private key is never revealed. Notably, symmetric encryption and hash algorithms are largely unaffected by quantum computing (beyond needing larger key and output sizes) and do not require replacement. However, the many schemes that rely on asymmetric encryption to establish keys for symmetric encryption operations are at risk of being broken. Quantum-safe encryption therefore aims to replace the asymmetric algorithms currently used for key exchange and digital signatures.
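To ground the symmetric/asymmetric distinction described above, here is a minimal sketch of the hybrid pattern using the third-party Python `cryptography` package: an RSA key pair protects a freshly generated symmetric key, and the symmetric key protects the actual payload. It uses today's classical algorithms (RSA-OAEP and Fernet/AES), so it illustrates the pattern that post-quantum schemes would eventually slot into, not a quantum-safe construction itself; the message text is made up.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Asymmetric part: the recipient owns a key pair; the public key can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Symmetric part: a random secret key that will do the heavy lifting on the data.
symmetric_key = Fernet.generate_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender wraps (encrypts) the symmetric key with the recipient's public key...
wrapped_key = public_key.encrypt(symmetric_key, oaep)

# ...and encrypts the actual payload with the fast symmetric key.
ciphertext = Fernet(symmetric_key).encrypt(b"quarterly financials, do not leak")

# The recipient unwraps the symmetric key with the private key, then decrypts the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```

It is precisely the key-wrapping step (RSA here) that Shor's algorithm would break, which is why quantum-safe proposals target the asymmetric layer while leaving the symmetric layer largely intact.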
<urn:uuid:b8c260e4-750c-4d0c-bed9-ebf0ddb84821>
CC-MAIN-2021-43
https://develux.com/blog/quantum-security
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00002.warc.gz
en
0.92904
1,793
3.609375
4
Using encryption to keep your sensitive data secure (ITSAP.40.016)

Encryption technologies are used to secure many applications and websites that you use daily. For example, online banking or shopping, email applications, and secure instant messaging use encryption. Encryption technologies secure information while it is in transit (e.g. connecting to a website) and while it is at rest (e.g. stored in encrypted databases). Many up-to-date operating systems, mobile devices, and cloud services offer built-in encryption, but what is encryption? How is it used? And what should you and your organization consider when using it?

What is encryption?

Encryption encodes (or scrambles) information. Encryption protects the confidentiality of information by preventing unauthorized individuals from accessing it. For example, Alice wants to send Bob a message, and she wants to ensure only he can read it. To keep the information confidential and private, she encrypts the message using a secret key. Once encrypted, this message can only be read by someone who has the secret key to decode it. In this case, Bob has the secret key. Eve is intentionally trying to intercept the message and read it. However, the message is encrypted, and even if Eve gets a copy of it, she can’t read it without acquiring the secret key. If an individual accidentally receives a message that includes encrypted information, they will be unable to read the encrypted contents without the key to decrypt the message.

How is encryption used?

Encryption is an important part of cyber security. It is used in a variety of ways to keep data confidential and private, such as in HTTPS websites, secure messaging applications, email services, and virtual private networks. Encryption is used to protect information while it is actively moving from one location to another (i.e. in transit) from sender to receiver. For example, when you connect to your bank’s website using a laptop or a smartphone, the data that is transmitted between your device and the bank’s website is encrypted. Encryption is also used to protect information while it is at rest. For example, when information is stored in an encrypted database, it is stored in an unreadable format. Even if someone gains access to that database, there’s an additional layer of security for the stored information. Encryption is also used to protect personal information that you share with organizations. For example, when you share your personal information (e.g. birthdate, banking or credit card information) with an online retailer, you should make sure they are protecting your information with encryption by using secure browsing. Many cloud service providers offer encryption to protect your data while you are using cloud-based services. These services offer the ability to keep data encrypted when uploading or downloading files, as well as storing the encrypted data to keep it protected while at rest. When properly implemented, encryption is a mechanism that you and your organization can use to keep data private. Encryption is seamlessly integrated into many applications to provide a secure user experience.

How can I use encryption?

Your organization likely already uses encryption for many applications, such as secure browsing and encrypted messaging applications. If you access a website with a padlock icon and HTTPS in front of the web address, the communication (i.e. the data exchanged between your device and the website’s servers) with the website is encrypted. 
To protect your organization’s information and systems, we recommend that you use HTTPS wherever possible. To ensure that users are accessing only HTTPS-supported websites, your organization should implement the web security policy tool HTTP Strict Transport Security (HSTS). HSTS offers additional security by forcing users’ browsers to load HTTPS-supported websites and ignore unsecured websites (e.g. HTTP).

Encrypted messaging applications

Most instant messaging applications offer a level of encryption to protect the confidentiality of your information. In some cases, messages are encrypted between your device and the cloud storage used by the messaging service provider. In other cases, the messages are encrypted from your device to the recipient’s device (i.e. end-to-end encryption). When using end-to-end encryption services, not even the messaging service provider can read your encrypted messages. In deciding which tools to use, you need to consider both the functionality of the service and the security and privacy requirements of your information and activities. For further information, refer to Protect How You Connect. Encryption is just one of many security controls necessary to protect the confidentiality of data.

What else should I consider?

Encryption is integrated into many products that are commonly used by individuals and organizations to run daily operations. When choosing a product that uses encryption, we recommend that you choose a product that is certified through the Common Criteria (CC) and the Cryptographic Module Validation Program (CMVP). The CC and the CMVP list cryptographic modules that conform to Federal Information Processing Standards. Although the CC and the CMVP are used to vet products for federal government use, we recommend that everyone uses these certified products.

The CCCS recommends:
- Evaluate the sensitivity of your information (e.g. personal and proprietary data) to determine where it may be at risk and implement encryption accordingly.
- Choose a vendor that uses standardized encryption algorithms (e.g. CC and CMVP supported modules).
- Review your IT lifecycle management plan and budget to include software and hardware updates for your encryption products.
- Update and patch your systems frequently.
- Prepare and plan for the quantum threat to cyber security. For more information, please see ITSE.00.017 Addressing the Quantum Computing Threat to Cryptography.

Encryption for highly sensitive data

Systems that contain highly sensitive information (e.g. financial, medical, and government institutions) require additional security considerations. Contact us for further guidance on cryptographic solutions for high-sensitivity systems and information: email@example.com.
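As a quick, informal illustration of the HSTS recommendation above, the short Python sketch below fetches a page and prints whatever Strict-Transport-Security policy the server returns. It assumes the third-party requests library is installed, and example.com is only a placeholder domain; a missing header does not by itself prove a site is insecure.

```python
import requests

# Fetch a page over HTTPS and look for the HSTS policy header described above.
response = requests.get("https://example.com", timeout=10)

hsts = response.headers.get("Strict-Transport-Security")
if hsts:
    print("HSTS is enabled:", hsts)   # e.g. "max-age=31536000; includeSubDomains"
else:
    print("No Strict-Transport-Security header returned")
```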
<urn:uuid:4d3969fa-bf88-4470-b823-c6f4a2e80f96>
CC-MAIN-2021-43
https://cyber.gc.ca/en/guidance/using-encryption-keep-your-sensitive-data-secure-itsap40016
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00042.warc.gz
en
0.910774
1,246
3.5625
4
Researchers at IBM have created an elusive molecule by knocking around atoms using a needle-like microscope tip. The flat, triangular fragment of a mesh of carbon atoms, called triangulene [1], is too unstable to be made by conventional chemical synthesis, and could find use in electronics. This isn't the first time that atomic manipulation has been used to create unstable molecules that couldn’t be made conventionally — but this one is especially desirable. “Triangulene is the first molecule that we’ve made that chemists have tried hard, and failed, to make already,” says Leo Gross, who led the IBM team at the firm’s laboratories in Zurich, Switzerland. The creation of triangulene demonstrates a new type of chemical synthesis, says Philip Moriarty, a nanoscientist who specializes in molecular manipulation at the University of Nottingham, UK. In conventional synthesis, chemists react molecules together to build up larger structures. Here, by contrast, atoms on individual molecules were physically manipulated using a microscope. But making molecules one at a time will be useful only in particular situations. And the method is unlikely to work for those with complicated shapes or structures that make it hard to identify or target individual atoms. Triangulene is similar to a fragment of graphene, the atom-thick material in which carbon atoms are joined in a hexagonal mesh. The new molecule is made up of six hexagons of carbon joined along their edges to form a triangle, with hydrogen atoms around the sides (see ‘Radical triangle’). Two of the outer carbon atoms contain unpaired electrons that can’t pair up to make a stable bond. Such a molecule is highly unstable because the unpaired electrons tend to react with anything around them. “As soon as you synthesize it, it will oxidize,” says Niko Pavliček, a member of the IBM team. So far, the closest conventional synthesis has come to making molecules of this sort involves buffering the reactive edges with bulky hydrocarbon appendages [2]. The IBM team turned to a scanning probe microscope, which has a needle-sharp tip that ‘feels’ a material’s shape. The technique is usually used to image molecules, by measuring attractive forces between the tip and sample, or the electric currents that pass between them. The IBM team has demonstrated [3] that, if the tip has a small molecule such as carbon monoxide attached to it, force microscopy can provide images of such high resolution that they resemble the ball-and-stick diagrams of chemistry textbooks. Gross’s team has already shown how the microscope can be used to direct the course of chemical reactions and make unstable 'intermediate' molecules [4]. To produce triangulene, the team began with a precursor molecule called dihydrotriangulene, which lacks the reactive unpaired electrons. The precursors were synthesized by chemists at the University of Warwick in Coventry, UK. The researchers deposited these molecules on a surface — salt, solid xenon and copper are all suitable — and inspected them under the microscope. They then used two successive voltage pulses from the tip, carefully positioned above the molecules, to blast off two hydrogen atoms and create the unpaired electrons. The work is published in Nature Nanotechnology [1]. The team then imaged the products with the microscope, first picking up a carbon monoxide molecule to acquire the high resolution. The images had the shape and symmetry predicted for triangulene. 
Under the high-vacuum, low-temperature conditions of the experiments, the molecules remained stable for as long as the researchers looked. “To my knowledge, this is the first synthesis of unsubstituted triangulene,” says chemist Takeji Takui of Osaka City University in Japan, who has previously synthesized triangulene-type molecules [2]. Moriarty calls the work elegant, but is surprised that triangulene remained stable on a copper surface, where he might have expected it to react with the metal. In one set of experiments, says Pavliček, the molecule was still sitting on the copper four days after the team made it. The researchers also probed triangulene’s magnetic properties. They found that, as they had expected, the two unpaired electrons have aligned spins — the quantum-mechanical property that gives electrons a magnetic orientation. This property could make triangulene useful in electronics, they say. Takui agrees, and foresees applications in quantum computing, quantum information processing and a field known as spintronics, in which devices manipulate electron spins to encode and process information. Making molecules one at a time might not seem very promising, but Gross points out that current quantum computers, such as the Quantum Experience developed at IBM, use only a handful of quantum bits, or qubits, each of which could correspond to a single molecule. Even if you need to make 100 such molecules “by hand”, he says, “it would be worth going through that manual labour”. And although it’s not clear how easily the approach could be applied to molecules that aren’t flat, Gross says that such atom manipulation can be performed for 3D molecules to some extent. Even with triangulene and related graphene-like fragments, “there’s a lot of exciting science still to be done”, says Moriarty. The IBM team “continues to set a high bar for the rest of us”, he adds. This article is reproduced with permission and was first published on February 13, 2017.
<urn:uuid:88d94af1-d494-4a3c-980e-9169e466b048>
CC-MAIN-2021-43
https://www.scientificamerican.com/article/elusive-triangulene-created-for-the-first-time/?utm_campaign=The%20Exponential%20View&utm_medium=email&utm_source=Revue%20newsletter
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00403.warc.gz
en
0.945674
1,177
4
4
In the last six decades, computers have become faster, more compact, and cheaper. However, engineers are running out of room for how small silicon transistors can be made and how fast they can shuttle electricity through devices to form digital ones and zeros. Such a restriction has led Jelena Vuckovic, a Stanford professor of electrical engineering, to turn towards quantum computing, which relies on light rather than electricity. Quantum computers operate by isolating spinning electrons inside a new kind of semiconductor material. When the electron is struck by a laser, it emits one or more quanta (or particles) of light that reveal how the electron is spinning. These spin states replace the ones and zeros of conventional computing. Vuckovic, one of the leading scientists in the field, says that quantum computing is ideal for analyzing biological systems, performing cryptography, or data mining – in short, for any problem with many variables. “When people talk about finding a needle in a haystack, that’s where quantum computing comes in.” According to Marina Radulaski, a postdoctoral fellow in Vuckovic’s laboratory, the problem-solving power of quantum computers arises from the complexity of the laser-electron interactions at the heart of the concept. “With electronics you have zeros and ones. But when the laser hits the electron in a quantum system, it creates many possible spin states, and that greater range of possibilities forms the basis for more complex computing.” Harnessing the interactions between electrons and light is difficult, however. A few of the major technology companies around the world are endeavoring to construct massive quantum computers that depend on materials super-cooled to near absolute zero, the theoretical temperature at which atomic motion nearly stops. Vuckovic’s own two decades of research have focused on one facet of the problem: developing new quantum computer chips that will be the building blocks of future systems. “To fully realize the promise of quantum computing we will have to develop technologies that can operate in normal environments. The materials we are exploring bring us closer toward finding tomorrow’s quantum processor.” The challenge for Vuckovic and her colleagues is to create materials that can trap a single, isolated electron. Working with international collaborators, the team has recently investigated three different ways to do this, one of which allows operation at room temperature, a crucial step if quantum computing is to become a practical tool. For all three approaches, the researchers began with semiconductor crystals, materials with a regular atomic lattice like the girders of a skyscraper. When the lattice is slightly modified, a structure can be created in which the atomic forces exerted by the material trap a spinning electron. “We are trying to develop the basic working unit of a quantum chip, the equivalent of the transistor on a silicon chip.” One method of creating such a laser-electron interaction chamber is through a structure called a quantum dot. In physical terms, the quantum dot is a small amount of indium arsenide enclosed inside a gallium arsenide crystal. The atomic properties of the two materials are known to confine a spinning electron. 
In a recent paper published in the journal Nature Physics, Kevin Fischer, a graduate student in Vuckovic’s laboratory, reported how laser-electron processes can be exploited within such a quantum dot to control the input and output of light. When more laser power is applied to the quantum dot, it can be forced to emit exactly two photons instead of one. The researchers say the quantum dot has practical advantages over other leading quantum computing platforms, but it still requires cryogenic cooling, so it may not be useful for general-purpose computing. It could, however, be applied to building tamper-proof communication networks. In two other papers, Vuckovic took a different approach to electron capture, modifying a single crystal to confine light in so-called color centers. In a recent paper in the journal Nano Letters, Vuckovic and her colleagues examined color centers in diamond. Naturally, the crystalline lattice of diamond is made of carbon atoms. Jingyuan Linda Zhang, a graduate student in Vuckovic’s laboratory, reported how a 16-member research team replaced some of those carbon atoms with silicon atoms. This single modification created color centers that efficiently confined spinning electrons in the diamond lattice. But like the quantum dot, most diamond color center experiments require cryogenic cooling. Although that is an improvement over other techniques that require even more elaborate cooling, Vuckovic aspired to more. She therefore worked with another international team of researchers to study a third material, silicon carbide. Silicon carbide, commonly known as carborundum, is a hard, transparent crystal used to manufacture brake pads, clutch plates, and bulletproof vests. Earlier studies had shown that silicon carbide can be modified to create color centers at room temperature, but these color centers had not been efficient enough to build a quantum chip. Vuckovic and her colleagues removed specific silicon atoms from the silicon carbide lattice to create highly efficient color centers. They also fabricated nanowire structures around the color centers to improve photon extraction. Radulaski was the first author of that study, which was reported in another paper in Nano Letters. According to Radulaski, the net outcome, an efficient color center operating at room temperature in a material familiar to industry, is highly advantageous. “We think we’ve demonstrated a practical approach to making a quantum chip.” But the field is just emerging, and confining electrons is not easy; not even the researchers are certain which technique, or techniques, will win out. “We don’t know yet which approach is best, so we continue to experiment.”
<urn:uuid:f0641683-b220-4c52-8992-0cd79d648f3f>
CC-MAIN-2021-43
https://www.azoquantum.com/News.aspx?newsID=5423
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00645.warc.gz
en
0.929433
1,269
3.984375
4
For a long time, the development of quantum computers was concerned with theoretical and hardware aspects. But as the focus shifts towards programming, software and security issues, the classical computer sciences are coming back into play. Physicists had long nurtured the ambition to build a quantum computer. In the early 1980s, one of the most famous among them, Richard Feynman (1918 -1988), questioned whether it would ever be possible to efficiently compute and simulate quantum physics phenomena using a conventional computer. He argued that digital computers couldn’t compute fast enough to calculate and simulate the quantum effects that typically occur within atoms and molecules and between elementary particles - at least not within a reasonable period of time. Initially, he proposed building a quantum computer based not on digital coding but rather on a direct imitation of quantum systems. His core idea, which continues to inspire the development of quantum computers to this day, was that certain properties of quantum mechanics could be harnessed for computation. Specifically, this would mean taking advantage of two quantum states of particles: superposition and entanglement. The principle of superposition, for example, can be exploited by quantum computers to carry out faster calculations. While digital computers use binary bits that can only take on the states of one or zero, quantum computers use quantum bits, or qubits, to process information. Qubits can be one or zero, and they can also be both one and zero at once, a state we call superposition. This crucial difference enables a huge leap in computing speed for certain computational problems. In future, quantum computers promise to perform ultra-efficient calculations that normal computers cannot solve in a reasonable period of time, a milestone sometimes referred to as quantum supremacy. Although scientists have yet to find conclusive proof of the existence of quantum supremacy, recent technical advances have been impressive. In 2019, Google claimed to have achieved quantum supremacy for a specific computational problem for the first time, having built a quantum computer that required only 200 seconds to solve a problem that would have taken a conventional computer 10,000 years. Encryption could be cracked Right now, quantum computers are too small and error-prone to pose any serious threat to today’s digital computers, which are capable of performing billions of computations per second. Even Google’s quantum computer was only able to prove its supremacy in a single, specific task. Nonetheless, quantum technologies have now reached a stage where their development lies in the hands of more than just physicists. Today, many computer scientists are "quantum curious", according to ETH Computer Science Professor Kenneth Paterson. He conducts research in the field of cryptography and works on ways of securely processing, transferring and storing information. "We’ve been ’quantum aware’ in my area of research ever since quantum computing started to become a bigger issue in cryptography about ten years ago," says Paterson. "As soon as someone builds a quantum computer that is sufficiently large-scale and reliable, the current encryption framework of the internet will cease to be secure, because quantum computing could be used to crack that encryption." 
The encryption and security protocols that run behind the scenes whenever we log on to social media, make an online purchase, use online banking or send an email are all based on integer factorisation and related problems that are vulnerable to Shor’s algorithm. Integer factorisation is the process of breaking down a large composite integer into its prime factors. This requires huge computing power, which is why there is still no algorithm - that is, no calculating procedure - that a digital computer can use to efficiently solve a factorisation problem. Back in 1994, however, mathematician Peter Shor created an algorithm specially designed for quantum computing, which can find the prime factors of composite integers significantly faster than classical algorithms. Shor’s ideas can be used to crack the other forms of public key cryptography in use today. Today’s quantum computers are too small and error-prone to run Shor’s algorithm. In principle, however, it is clear that any quantum computer that is powerful and reliable enough to do so would be able to perform factorisation within a reasonable period of time. The moment this situation occurs, factoring-based cryptography and related techniques currently in widespread use will no longer be secure. Not all of cryptography will be affected, of course; for example, quantum computing won’t seriously affect the security of encryption methods that rely solely on secret-key cryptography. But public-key cryptography - which currently forms the basis for securing over 90 percent of web traffic - will definitely be at risk. According to Paterson, a quantum computer would need millions of quantum bits to crack a security key. Scientists at ETH Zurich are currently running quantum computers with up to 17 qubits. On the development side, researchers are on the brink of reaching a new phase of mid-sized quantum computing systems with 50 to 100 qubits, though these are still susceptible to errors. "But we might see a sudden breakthrough in the power of quantum computers, and it could take at least ten years to modify today’s public key cryptography. That’s why we’re getting ready now," says Paterson. His group has co-developed a new quantum-safe algorithm that is being evaluated in an on-going worldwide competition to select new, quantum-secure algorithms. People sometimes ask Benjamin Bichsel whether he feels his research will have been in vain should large-scale, reliable quantum computers eventually turn out to be unfeasible. "I think that’s the wrong question," he says. "But I do wonder what we’ll do if quantum computers end up working brilliantly and we don’t have a clue how to programme them efficiently!" Bichsel works in the research group led by computer science professor Martin Vechev, whose group developed the first intuitive high-level programming language for quantum computing in 2020. It will take special programming languages to properly exploit the potential of quantum computers. "Quantum programming languages are essential to translate ideas into instructions that can be executed by a quantum computer," wrote Microsoft researchers in 2020 in the science journal Nature. The authors included Bettina Heim and Matthias Troyer, who had previously worked as researchers at the ETH Institute for Theoretical Physics. Today’s quantum programming languages are tied closely to specific hardware. These "hardware description languages" focus on the behaviour of circuits and how to optimise them. 
In contrast, the Silq programming language developed by Martin Vechev’s group abstracts from the technical details. Over a year has passed since Silq was launched; as the first high-level quantum programming language, it has already won acclaim for its elegance and internal coherence. Martin Vechev and his team have also earned praise for their innovative contribution towards reducing errors in quantum computing. In a further article about Silq, Nature explicitly refers to the "uncomputation" feature that enables Silq to automatically reset temporary values "rather than forcing programmers to do this tedious work manually". A computer processes a task in several intermediate steps, creating intermediate results or "temporary values" in the process. In classical computers, these values are erased automatically to free up memory. This task is a lot more complex in the case of quantum computers, however, since the principle of entanglement means that previously calculated values may interact with current ones and jeopardise the calculation process. That makes the ability to automatically clean up temporary values a key part of quantum computing. A holistic view of computing The question of whether Silq can hold its own against the quantum programming languages developed by technology giants Microsoft, IBM and Google - Q#, Qiskit and Cirq, respectively - is still very much up in the air. But, in the meantime, Vechev’s team have also succeeded in transferring automatic uncomputation to Qiskit. "It’s very encouraging to see that we can transfer key Silq concepts to other languages - especially since automatic uncomputation improves the efficiency of quantum computing with Qiskit," says Martin Vechev. In the long run, there will be less of a focus on computer scientists writing languages and software for hardware developed by physicists. Instead, the emphasis will shift to developing programming languages hand in hand with quantum algorithms, quantum hardware, quantum software, quantum applications and workflows. "If we genuinely want to make quantum computing a reality, we will need to make this new approach part of a fully fledged computer system in which multiple components combine to solve specific problems efficiently," says Paterson. This text appeared in the 21/03 issue of the ETH magazine Globe. Kenneth Paterson is Professor of Computer Science at the Institute of Information Security, where he leads the Applied Cryptography Group. Martin Vechev is a professor at the Institute for Programming Languages and Systems and heads the Secure, Reliable, and Intelligent Systems Lab (SRI) research group.
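To make the uncomputation idea described above concrete, here is a tiny sketch in Qiskit (one of the Python-based languages named above) of the manual bookkeeping that Silq automates: a temporary value is computed into an ancilla qubit, used, and then uncomputed by repeating the same gate so the ancilla returns to |0⟩ and no longer entangles with the rest of the computation. The circuit is my own illustration, not an example from the Silq paper.

```python
from qiskit import QuantumCircuit

# q0, q1: inputs; q2: ancilla holding a temporary value; q3: output
qc = QuantumCircuit(4)
qc.ccx(0, 1, 2)   # compute: ancilla <- q0 AND q1 (Toffoli gate)
qc.cx(2, 3)       # use the temporary value: copy it onto the output qubit
qc.ccx(0, 1, 2)   # uncompute: repeat the Toffoli so the ancilla returns to |0>

print(qc.draw())
```

Forgetting the final gate would leave the ancilla entangled with the inputs, which is exactly the kind of silent error that automatic uncomputation is meant to prevent.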
<urn:uuid:b03275fa-dad0-4cad-8949-3206f46bca97>
CC-MAIN-2021-43
https://www.myscience.org/news/2021/computer_scientists_take_on_the_quantum_challenge-2021-ethz
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00047.warc.gz
en
0.938255
1,835
3.546875
4
You may not realize it, but you probably have valuable information stored on your computer. And if you don't keep it safe, others can access and misuse it. This is where encryption comes in handy. Encryption is a way of scrambling information so that it cannot be easily read by anyone except the person with the key to decrypt it. It's a good idea to encrypt any information that is very sensitive or of high value before uploading it online. Here are some ways to encrypt your files using asymmetric encryption:
Encryption is the process of converting information to make it unreadable without some form of authorization. A popular encryption method is asymmetric encryption, which is done using two keys: a public key and a private key. The public key allows anyone to encrypt data they send to you, but your private key is what lets you decrypt the message. This means that as long as your private key remains safe, no one can read messages that are encrypted with your public key without breaking the code. To help you get started with this process, here are some steps for how to encrypt your files using asymmetric encryption.
Data encryption is the process of converting readable information into a form that can't be read by someone without access to a secret key. Data encryption offers protection against privacy and security breaches. It also helps to ensure compliance with regulatory standards. There are many ways to encrypt your files, but there is one type of data encryption which is more complex and harder to crack than others: asymmetric encryption. Asymmetric encryption uses a pair of keys, one public and one private, and allows messages to be encrypted using the public key and decrypted using the private key. This blog post will teach you how asymmetric encryption works and will give you tips on how to use it.
It is a fact that what you don't know about encryption could hurt you. What if your personal data, or your most sensitive information, was stolen? The unfortunate truth is that all data on the internet is vulnerable to being hacked and intercepted. What can you do to protect yourself? There are many security options out there, but one of the best ways to keep your files secure is by using asymmetric encryption. This guide will teach you how to encrypt your files with this method so that no one else can access them without your permission.
Encrypting your files is important for privacy. As a result, you should encrypt any information you would not want people to see and store it in a secure location. For example, if you have confidential data in an Excel document, you can encrypt the file by using asymmetric encryption. This article will teach you how to do this with a few simple steps outlined below:
- Download and install the software McAfee Endpoint Encryption.
- Launch the software.
- On the welcome screen, click Next.
- Select Automatically activate product.
- Click Next.
- Enter your email address and company name (optional).
- Click Activate Now!
One of the most important things you can do to keep your files safe is to encrypt them. This is especially true if you have a laptop or other device that you take with you everywhere. If someone steals your device, they could potentially access all of your files and see personal information like passwords and credit card numbers. Encrypting your files requires a special type of software called an encryption program. Many people use these programs to protect their data and personal items from hackers or thieves.
Here are some of the best encryption programs available for free download today. Data breaches are becoming more and more common, and the eventual arrival of large-scale quantum computing may make some of today's encryption easier to break. Encryption is one way to protect your files from prying eyes. It can also be used to sign important documents securely. The idea is pretty simple: you want to create a message that only the holder of a secret key can read.
There are two styles of encryption: symmetric and asymmetric. Symmetric encryption uses the same key for both encrypting and decrypting messages, meaning that if someone were to get their hands on that key, they'd have access to all your information. Asymmetric encryption uses two keys, a public key and a private key, which means that even if someone obtains your public key, they still cannot decrypt your messages without the private key.
Encryption is the process of encoding a message or information in such a way that only authorized parties can read it. It's also used to secure data and protect sensitive information. This article will show you how to encrypt your files using asymmetric encryption! If you want to keep your files safe, make sure you follow these steps: (1) gather all the files you want to encrypt, (2) encrypt them with the recipient's public key, and (3) have the recipient decrypt them with the matching private key. This article is designed for people who are interested in the technical side of encryption. If you just want a few tips on how to protect your personal photos from prying eyes, skip this article.
If you work with sensitive data, it is important to protect your files from prying eyes. One way to do this is by using asymmetric encryption. Asymmetric encryption uses a pair of keys to encrypt and decrypt files. One key encrypts the data while the other key decrypts it. This means that only authorized people will be able to read the encrypted files and no one else can see them. In this article, we'll take a look at how asymmetric encryption works and how you can use it to encrypt your files in easily understandable steps.
One of the most important tasks a computer user can perform is securing their files. But how do you know what the best way to encrypt your files is? Asymmetric encryption is one of the safest and most secure methods of encrypting your files. It's easy to use, and it's not expensive to set up. This blog post will teach you everything you need to know about asymmetric encryption so that you can keep your data safe.
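None of the passages above show what asymmetric encryption actually looks like in code, so here is a minimal sketch using the widely used Python cryptography package (my own illustration, not taken from the article; the message text and parameters are arbitrary). Note that RSA with OAEP padding can only encrypt short payloads, so real file encryption is usually hybrid: a random symmetric key encrypts the file, and only that key is encrypted with the recipient's public key.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate a key pair: publish the public key, keep the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone with the public key can encrypt a (short) message...
ciphertext = public_key.encrypt(b"a short secret", oaep)

# ...but only the private key can decrypt it.
assert private_key.decrypt(ciphertext, oaep) == b"a short secret"
```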
<urn:uuid:723db28b-3038-4e3c-aac3-8ca884401529>
CC-MAIN-2021-43
https://mhdworld.live/how-to-encrypt-your-files-using-asymmetric-encryption/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00127.warc.gz
en
0.935582
1,250
3.640625
4
In the previous tutorial (https://acsharmablog.com/2018/07/02/quantum-computing-for-newbies/), we saw the evolution of quantum computing and found out that every particle at the subatomic level behaves like a wave. If a single particle can carry all the information encoded in the entire spectrum of a wave, that is amazing, and it seriously looks like quantum computing can give us an exponential speed-up over traditional computing. Now the next set of questions arises:
- How to measure the wave function of a particle
- How to control the wave function of a particle
- How to measure the superimposed wave function of two particles
To understand all this, let's revisit the double-slit experiment once more. As we have seen before, when researchers send one electron through the double-slit experiment, it creates a diffraction pattern on the far wall. But that sounds weird, because light creates a diffraction pattern for a well-understood reason: some light waves are blocked by the wall, but others pass through the two holes simultaneously and, after passing through these two holes, interact with each other. These two light waves either cancel each other out (destructive interference), where the phases of the light waves are opposite, or create a light wave of stronger intensity (constructive interference), where the phases are the same. Because of this, we see bright and dark spots, or striped patterns, on the far wall. So this diffraction pattern was produced by the interference of two light waves. But in the double-slit experiment with electrons, we are sending only one electron at a time, so if it is still creating the diffraction pattern, it means it is passing through both slits simultaneously. That's weird, huh?
Anyway, to measure this weird phenomenon, scientists put detectors on both slits, reasoning that if one electron passes through both slits simultaneously, the detectors should be able to detect it. Now boom!!!!! That didn't happen. As soon as scientists put detectors on both slits, the electron started behaving like any other particle (like a tennis ball): the diffraction or striped pattern disappeared, and instead the electron created the brightest points on the portion of the wall just behind each slit, or in other words, the same phenomenon experienced with tennis balls. Read https://acsharmablog.com/2018/07/02/quantum-computing-for-newbies/ first to understand it better. Very good video on this – Quantum Mechanics: Animation explaining quantum physics
Wow, so when we try to measure the wave function of an electron, its wave function collapses. That's really spooky. But doesn't it make this all the more complicated? Because to know the information carried by a particle, we need to know its wave function, but we can't do that, because the electron exhibits wave-like properties only when we don't measure it. Then how will we utilize this wave-like behavior of the electron for an exponential speed-up over traditional computers?
As mentioned in the previous blog, the wave function of a matter wave is defined by an equation (shown as an image in the original post) in terms of Ψ = Ψ(r,t), where r and t are the position vector and time respectively, and ħ = h/2π, where h is Planck's constant. So basically, to define the wave function of a matter wave we need momentum and position at different time steps. And how do we measure the position of an object? We check it from time to time, and if the object is displaced from position x to position y in time t, then we know its velocity, and once we know its velocity we can calculate its momentum as well.
But at the subatomic level, it's not that easy. The position and momentum of an electron, photon or any other particle at the subatomic level are conjugate variables, which means we can't measure both the position and the momentum of a particle with complete precision. How much inaccuracy there will be in this measurement is described by the Heisenberg uncertainty principle. As per that principle, the product of the uncertainty in position σx and the uncertainty in momentum σp must be at least ħ/2, half of the reduced Planck constant: σx·σp ≥ ħ/2.
But this looks weird: why can't we measure both position and momentum precisely at the subatomic level? We can do it very precisely for bigger objects like trains, cars etc. To understand this, first let's see the very basic step in calculating the position and momentum of bigger objects. To measure the position and momentum of these objects, we need to see them, and how will we see them? We will throw some light on them. That's where the catch is 😊
A light ray contains a multitude of photons, so if we throw light on a single electron to see it clearly, it will displace it, right? It's like trying to measure the position of a small stone by throwing a very big stone at it: the big stone will displace the smaller stone, so if we want to measure the position of the smaller stone correctly, we need to throw a smaller stone, right? In the case of light, the smaller stone means we need to throw light which comprises only one photon. Now, light with one photon means very dim light, and in this light you can't see the position of the electron very clearly: you will get to know that the electron is present in this region, but you won't know exactly where it is. And if you want to measure the position correctly, you have to throw light with a larger number of photons, but this light will displace the electron from its position a little bit, so because of this displacement we won't be able to calculate the momentum correctly.
That's why this uncertainty constraint exists. So basically we can measure the position and momentum of a particle at the subatomic level, but only within this uncertainty threshold, and because the position and momentum are error-prone, the wave function that depends on them will also be error-prone. I hope all of you now have some understanding of the Heisenberg uncertainty principle. To understand the concept visually, please refer to this: https://www.youtube.com/watch?v=qwt6wUUD2QI
I hope by now you have all understood the wave function of an electron, the uncertainty associated with measuring the position and momentum of a subatomic particle, and the biggest thing: if we try to measure an electron, its wave function collapses and it starts behaving like a normal particle. So if we want to use the electron's wave function to store a spectrum of information simultaneously, what do we need?
- We should be able to measure the wave function without collapsing it
- We should know how error-prone this measurement is (because of Heisenberg uncertainty) and what the countermeasures are to correct this error
- And the most important question: how are we going to make a qubit?
In the next tutorial, we will explore possible answers to these striking questions.
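To make the bound σx·σp ≥ ħ/2 concrete, here is a small numerical check (my own sketch, not from the tutorial): build a Gaussian wave packet on a grid, compute its position spread with NumPy, get its momentum spread from the Fourier transform, and compare the product with ħ/2. For a Gaussian packet the bound is met with equality, up to numerical error.

```python
import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant (J*s)
sigma = 1e-10               # chosen packet width: 0.1 nm

# Position-space Gaussian wave packet, normalised on a fine grid
x = np.linspace(-20 * sigma, 20 * sigma, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

prob_x = np.abs(psi)**2 * dx
sigma_x = np.sqrt(np.sum(x**2 * prob_x) - np.sum(x * prob_x)**2)

# Momentum-space distribution via FFT; momentum p = hbar * k
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= prob_k.sum()
p = hbar * k
sigma_p = np.sqrt(np.sum(p**2 * prob_k) - np.sum(p * prob_k)**2)

print(sigma_x * sigma_p >= 0.999 * hbar / 2)   # True: the product is ~hbar/2
```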
<urn:uuid:40a0d4e4-1e80-43e8-8489-5be939ced912>
CC-MAIN-2021-43
https://acsharmablog.com/2019/01/11/hesienberg-uncertainty-principle/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585460.87/warc/CC-MAIN-20211022052742-20211022082742-00367.warc.gz
en
0.878242
1,474
4
4
The Great Questions of Philosophy and Physics 01: Does Physics Make Philosophy Superfluous? Trace the growth of physics from philosophy, as questions about the nature of reality got rigorous answers starting in the Scientific Revolution. Then see how the philosophy of physics was energized by a movement called logical positivism in the early 20th century in response to Einstein’s theory of relativity. Though logical positivism failed, it spurred new philosophical ideas and approaches. 02: Why Mathematics Works So Well with Physics Physics is a mathematical science. But why should manipulating numbers give insight into how the world works? This question was famously posed by physicist Eugene Wigner in his 1960 paper, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Explore proposed answers, including Max Tegmark’s assertion that the world is, in fact, a mathematical system. 03: Can Physics Explain Reality? If the point of physics is to explain reality, then what counts as an explanation? Starting here, Professor Gimbel goes deeper to probe what makes some explanations scientific and whether physics actually explains anything. Along the way, he explores Bertrand Russell’s rejection of the notion of cause, Carl Hempel’s account of explanation, and Nancy Cartwright’s skepticism about scientific truth. 04: The Reality of Einstein’s Space What’s left when you take all the matter and energy out of space? Either something or nothing. Newton believed the former; his rival, Leibniz, believed the latter. Assess arguments for both views, and then see how Einstein was influenced by Leibniz’s relational picture of space to invent his special theory of relativity. Einstein’s further work on relativity led him to a startlingly new conception of space. 05: The Nature of Einstein’s Time Consider the weirdness of time: The laws of physics are time reversible, but we never see time running backwards. Theorists have proposed that the direction of time is connected to the order of the early universe and even that time is an illusion. See how Einstein deepened the mystery with his theory of relativity, which predicts time dilation and the surprising possibility of time travel. 06: The Beginning of Time Professor Gimbel continues his exploration of time by winding back the clock. Was there a beginning to time? Einstein’s initial equations of general relativity predicted a dynamic universe, one that might have expanded from an initial moment. Einstein discarded this idea, but since then evidence has mounted for a “Big Bang.” Is it sensible to ask what caused the Big Bang and what happened before? 07: Are Atoms Real? Compare proof for the reality of atoms with evidence for the existence of Santa Claus. Both are problematic hypotheses! Trace the history of atomic theory and the philosophical resistance to it. End with Bas van Fraassen’s idea of “constructive empiricism,” which holds that successful theories ought only to be empirically adequate since we can never know with certainty what is real. 08: Quantum States: Neither True nor False? Enter the quantum world, where traditional philosophical logic breaks down. First, explore the roots of quantum theory and how scientists gradually uncovered its surpassing strangeness. Clear up the meaning of the Heisenberg uncertainty principle, which is a metaphysical claim, not an epistemological one. Finally, delve into John von Neumann’s revolutionary quantum logic, working out an example. 
09: Waves, Particles, and Quantum Entanglement Quantum mechanics rests on an apparent category mistake: Light can’t be both a wave and a particle, yet that’s what theory and experiments show. Analyze this puzzle from the realist and empiricist points of view. Then explore philosopher Arthur Fine’s “natural ontological attitude,” which reconciles realism and antirealism by demonstrating how they rely on different conceptions of truth. 10: Wanted Dead and Alive: Schrödinger's Cat The most famous paradox of quantum theory is the thought experiment showing that a cat under certain experimental conditions must be both dead and alive. Explore four proposed solutions to this conundrum, known as the measurement problem: the hidden-variable view, the Copenhagen interpretation, the idea that the human mind “collapses” a quantum state, and the many-worlds interpretation. 11: The Dream of Grand Unification After the dust settled from the quantum revolution, physics was left with two fundamental theories: the standard model of particle physics for quantum phenomena and general relativity for gravitational interactions. Follow the quest for a grand unified theory that incorporates both. Armed with Karl Popper’s demarcation criteria, see how unifying ideas such as string theory fall short. 12: The Physics of God The laws of physics have been invoked on both sides of the debate over the existence of God. Professor Gimbel closes the course by tracing the history of this dispute, from Newton’s belief in a Creator to today’s discussion of the “fine-tuning” of nature’s constants and whether God is responsible. Such big questions in physics inevitably bring us back to the roots of physics: philosophy.
<urn:uuid:c08fbd6e-e932-4df1-b223-d6b56a767d63>
CC-MAIN-2021-43
https://www.wondrium.com/the-great-questions-of-philosophy-and-physics?bvrrp=Plus-en_CA/reviews/product/2/60000.htm
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00211.warc.gz
en
0.906438
1,089
3.546875
4
What the heck is a quantum network? Today's supercomputers could one day provide ultra-secure encryption. While it’s currently possible to send encrypted messages with apps like Signal, no system is completely unhackable. But one day, encryption could be much, much harder to crack—thanks to networks that take advantage of quantum mechanics, the esoteric branch of physics that governs the universe at the tiniest scales. You’re almost certainly reading this story on an electronic device that operates, at its most basic level, with bits built from silicon-based transistors. In the non-quantum world, what scientists call the “classical” world, each of those bits holds a single number: a zero or a one. Quantum devices use their own quantum bits, or “qubits” (pronounced like “Q-bits”), which play by the rules of quantum mechanics. That allows qubits to act in weird and wondrous ways. A qubit can, for instance, hold both zero and one at the same time. A quantum network can transmit these curious qubits: for example, photons, which scientists can send through the fiber-optic lines that underpin the classical internet. These networks, still in the experimental stage, serve to link quantum devices together. “Now that quantum computers are really starting to be built, people are really starting to think more seriously about networking them,” says Christoph Simon, a researcher specializing in quantum optics at the University of Calgary. It’s already hard to build a quantum computer, and it’s even harder to make quantum computers bigger. “So one way to scale up the processing power would be to entangle several networked quantum computers to create a single ‘super’ quantum computer,” says Oliver Slattery, a physicist at the National Institute of Standards and Technology. But the original (and best-known) use of quantum networks is to create connections that are—in theory—far more inscrutable than anything on the highly fallible classical internet. These hyper-secure connections take advantage of a principle called quantum entanglement. Simply put, you can create particles that are “entangled.” If you then observe the state of one of them, you’ll affect the state of its entangled partner, no matter how far away that other particle is. You can use that to encrypt information. Suppose you want to send a message to your spy friend in the next city over. The both of you would each receive one of a pair of entangled photons. Measuring those photons’ states would give both you and your colleague a unique key, which you could use to encrypt a message, and which your friend could in turn use to decrypt it. If somebody tried to tap in for the key, that very act would influence the photons, and you’d know. “You can’t eavesdrop and make measurements on the channel without people being able to detect that,” says Nathalie de Leon, a professor of electrical and computer engineering at Princeton University. “Also, you can’t just intercept and copy the information.” You can’t copy a qubit thanks to another quantum quirk called the “no-cloning principle.” But that very principle is also a quantum network’s fatal flaw. If you send a qubit down a line, then it can only go so far before it fades. In the classical internet, you can simply forward that information along. But that won’t fly in the quantum world, because you can’t copy a qubit. As a result, current quantum networks can only send qubits a few kilometers away. That means if you send qubits through fiber right now, you can’t do it at a scale larger than a city. 
“Being able to do anything at longer distances requires fundamentally new technologies,” says de Leon. There are shortcuts, but those aren’t necessarily secure. They’re like relaying your message via middlemen—and you can’t always trust middlemen. It’s also possible to avoid fiber entirely and send a qubit across what researchers call “free space”—literally the open air. It’s like flashing a light from one mountaintop to another. You need to physically see the other side, making it impractical for most cases. And it’s prone to atmospheric interference. But it does work in the vacuum of space. That’s what allowed the Chinese satellite QUESS to “teleport” a qubit from orbit to the ground in 2017. It’s slow and not especially efficient, but the scientists behind QUESS (and the Chinese government) hope that the technology could form the basis for a quantum satellite network. As impressive as the accomplishment is, de Leon says it builds on existing work. “It was a very important demonstration … and I think we do learn a lot as a community,” she says. “But everything that they did, you could have written down ten years ago, fifteen years ago.” Still, that’s where some scientists are turning their attention, building ground stations to receive qubits from space. QUESS soon won’t be alone: Another satellite, QEYSSat, will be stewarded by a number of scientists from Canadian institutions, including Christoph Simon. “We are in the process of determining what’s possible and reasonable,” says Simon. “Frankly, we are thinking about the next [satellite].” So could all these links eventually evolve into a “quantum internet”? After all, today’s classical internet began as a fledgling network of connections spindled between labs and universities. There’s a fair distance to go before that can happen, and more than a few technical conundrums to overcome along the way. Quantum computers need to run at ultra-cold temperatures, for instance, barely above absolute zero. But most fiber-optic cables don’t run at ultra-cold temperatures. So any linkage between the two needs to overcome the temperature difference. But perhaps the biggest challenge is that nobody agrees on what to actually build a quantum network from. Today’s quantum networks largely use relatively simplistic equipment. Moving forward, scientists are trying to build more sophisticated nodes that could use quantum trickery, get around the no-cloning principle, and make longer quantum networks. “We haven’t … identified the thing that’s like the silicon-based transistor,” says de Leon. Some researchers want to read qubits by trapping them in rubidium vapor. Others want to do something similar with a cage of magnets and lasers. De Leon’s group wants to use something (literally) brilliant: diamonds. A type of imperfection in diamonds called the “nitrogen-vacancy center” can act as a sort of quantum memory. “The basic unit is still up for grabs,” says de Leon. Until fundamental issues like these are sorted out, then quantum networks will, for the most part, remain lab-bound. And as curious as quantum networks might be, it’s unlikely they’ll fully replace the Internet anytime soon. “It is almost certain that classical networks will need to run alongside quantum networks to make them usable in a practical sense,” says Slattery.
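The key-distribution idea described above (measure qubits, then keep only the positions where sender and receiver happened to use compatible settings) can be sketched with ordinary code. The toy below is my own illustration of that classical bookkeeping for a BB84-style exchange, assuming an ideal, noiseless channel and no eavesdropper; the actual security comes from the quantum behaviour the article describes, not from anything in this snippet.

```python
import random

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # preparation bases
bob_bases   = [random.choice("+x") for _ in range(n)]   # measurement bases

# If Bob measures in the same basis Alice used, he recovers her bit;
# otherwise his outcome is random and that position is later discarded.
bob_results = [bit if a_basis == b_basis else random.randint(0, 1)
               for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both sides publicly compare bases (not bits) and keep matching positions.
sifted_key = [bit for bit, a_basis, b_basis in zip(bob_results, alice_bases, bob_bases)
              if a_basis == b_basis]
print(f"{len(sifted_key)} shared key bits: {sifted_key}")
```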
<urn:uuid:c9d87d6c-92ec-40d2-8674-29c0e633e5d8>
CC-MAIN-2021-43
https://www.popsci.com/science/what-are-quantum-networks/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00254.warc.gz
en
0.925005
1,581
3.796875
4
Before quantum computing and self-driving cars, a different kind of cutting edge was sweeping the world: metal smithing. To ancient people living over 6,000 years ago, mining raw metal from the Earth and carefully melting it to craft into currency, tools, and even ornate ritualistic objects, was the height of innovation. In a recent study in the Journal of Archaeological Science: Reports, scientists describe an archaeological site that may have been the first place in the world to host this technology's secret sauce — a furnace. Chemical analysis of remnants at an ancient copper-smelting site in Israel points to a two-stage crafting process for metal objects. Not only that, but the site appears to have used copper ore from mines located over 60 miles away. The combined evidence of an elaborate supply network and the specialized, multi-step process is a testament to the importance of this ancient, cutting-edge technology, the researchers say. Melting metal is no easy feat. Lead researcher on the study and professor of archeology at Tel Aviv University Erez Ben-Yosef said in a statement that the reality is an incredibly delicate and precise process requiring serious skill. "It's important to understand that the refining of copper was the high-tech of that period. There was no technology more sophisticated than that in the whole of the ancient world," Ben-Yosef says. "Tossing lumps of ore into a fire will get you nowhere. You need certain knowledge for building special furnaces that can reach very high temperatures while maintaining low levels of oxygen." Older studies suggest people living some 6,000 years ago in what is now the Middle East used clay crucibles — which resemble vases — for smelting copper ore. But when archaeologists excavated this site in 2017, they found evidence of a different kind of technology: a small furnace made of tin and clay. "This provides very early evidence for the use of furnaces in metallurgy and it raises the possibility that the furnace was invented in this region," said Ben-Yosef. Reconstructing history — The archeologists first conducted a chemical analysis on uncovered remnants of the site's metal works using a portable X-ray instrument. After studying 14 crucible and 18 furnace fragments, as well as metallurgy's glass-like byproduct, 'slag,' the team retraced these ancient innovators' steps to imagine what their process would've looked like. In the study, they describe a likely two-step metal-smithing process that began with melting ore in a clay-lined pit furnace, and then scraping it into a smaller crucible to be remelted. Finally, it would be poured into a sand-based mold in the ground to cool and form transportable lumps. The irregularity of these final forms and the lack of other casting remnants led researchers to believe that this site was not constructing objects itself, but instead processing the metal for other communities to use. In addition to the copper found, the team also found recurring signatures of phosphorus, which they think may have come from burnt bones. While there isn't enough evidence to know for sure, the researchers write that it's possible an animal sacrifice was made during the smelting process as a form of organic fuel. The analyses also reveal the site used an ore found more than 60 miles away, in what is now the Jordan Valley. 
In future centuries, these smelting sites and mines would move closer together for practical and economic reasons, but the researchers write that the more ancient, long-distance network uncovered here is further evidence that the process of smelting these metals was highly specialized and safe-guarded by each community — like a secret family recipe, or how a tech company protects its intellectual property with NDAs. "At the beginning of the metallurgical revolution, the secret of metalworking was kept by guilds of experts. All over the world, we see metalworkers' quarters within Chalcolithic settlements, like the neighborhood we found in Beer Sheva," explains Ben-Yosef. A first... or not? — The evidence suggests this Israeli site may be one of the first in the ancient world to begin using a furnace for copper smelting. But the technology may have been invented and used around the same time in neighboring regions, Ben-Yosef says. Nevertheless, the discovery cements a place in history for this community as an ancient, "technological powerhouse," he adds. "[T]here is no doubt that ancient Beer Sheva played an important role in advancing the global metal revolution," he says. Abstract: Recent discoveries at Horvat Beter (Beersheva, Israel) shed new light on the earliest phase of Southern Levantine metallurgy (second half of the 5th millennium BCE). Multiple fragments of furnaces, crucibles and slag were excavated, and found to represent an extensive copper smelting workshop located within a distinct quarter of a settlement. Typological and chemical analyses revealed a two-stage technology (furnace-based primary smelting followed by melting/refining in crucibles), and lead isotope analysis indicated that the ore originated exclusively from Wadi Faynan (MBS Formation), more than 100 km away. These observations strengthen previous suggestions that metallurgy in this region started with furnace-based technology (possibly not locally invented). Furthermore, the absence of any artifact related to the contemporary industry of copper-based alloys indicates a high degree of craft specialization, and together with other regional observations testifies to the important role of metallurgy in the society of the Beer-sheba Valley during this formative time.
<urn:uuid:72f892e7-ef89-4a70-91a2-80f33f84c111>
CC-MAIN-2021-43
https://www.inverse.com/innovation/ancient-tech-powerhouse
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00454.warc.gz
en
0.950547
1,176
3.6875
4
Elliptic Curve Cryptography
Cryptography has been used for many centuries now. One of the most well-known and rather old cryptographic ciphers is the Caesar cipher. There are two main types of encryption - symmetric encryption, which uses one key to both encrypt and decrypt (e.g. AES), and asymmetric encryption, which uses two different keys (e.g. RSA). These are often called a public and a private key, where the private key is not to be disclosed. RSA relies on the hardness of integer factorization, rooted in algebraic number theory, while elliptic curve cryptography (ECC) relies on the hardness of the discrete logarithm problem over elliptic curves. In this post, we will delve deeper into ECC and discuss an application of it, Elliptic-Curve Diffie-Hellman (ECDH).
Elliptic curve cryptography is a choice for public-key cryptography, based on elliptic curves over finite fields. What are elliptic curves? Let K be a field with characteristic not equal to 2 or 3. Let a, b ∈ K with 4a³ + 27b² ≠ 0. Then the elliptic curve over K is represented by the Weierstrass equation
y² = x³ + ax + b    (1)
We are interested in the case K = Fp where p is prime. The set of points of such an elliptic curve over K is the collection of ordered pairs (x,y) with coordinates in K and x, y satisfying the equation (1), plus an additional point called the point at infinity or zero point. Let's have a look at an example of a random elliptic curve in the following Figure.
An elliptic curve E has the crucial property that we can define an addition of two points on the curve such that we obtain a third point that also lies on the curve. This happens in the following way: let P and Q be points on E. To add these two points together, we pass a line through them. By reflecting the third intersection of this line with E, we get the point R = P + Q. The idea behind the described group operation is that the three points P, Q and -R lie on a common line, and points which are reflections of each other over the x-axis are considered to add up to zero (see the previous Figure). EC addition is described very nicely in the original Lenstra paper "Factoring integers with elliptic curves". A very nice tool to experiment with the mathematical properties of elliptic curves can also be found here. With the addition of two points we can define the multiplication kP, with k a positive integer and P a point, obtained by adding P to itself k times; for example, 2P = P + P.
Note that there are a multitude of curves available, and picking one can make a difference. Some of the published curves include Curve25519, Curve448, P-256, P-384, and P-521, with P-256 being the most popular one, followed by Curve25519 (which promises to be faster than P-256). There is also the major difference that Curve25519 (and the rationale behind its selection of parameters) is documented openly, while the parameter selection of P-256, which is published by NIST, is not. This is likely one of the reasons that SSH has adopted Curve25519 as its curve.
But how does elliptic curve cryptography work exactly? Using the operations defined in the previous paragraph, a key exchange method based on elliptic curves can be devised. Specifically, we will take a look at Elliptic-Curve Diffie-Hellman (ECDH) next. Note that, similarly to standard Diffie-Hellman, while ECDH protects against passive attacks such as eavesdropping, it does not protect against active ones such as a man-in-the-middle attack. To devise a secure key exchange, additional measures such as authentication are required. 
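Before walking through the key exchange, here is a toy sketch of the group law just described, written for illustration only (it is not from the article or any real library, and it omits the input validation and constant-time arithmetic a production implementation needs): chord-and-tangent addition on y² = x³ + ax + b over Fp, plus double-and-add scalar multiplication. It requires Python 3.8+ for the modular inverse via pow(x, -1, p).

```python
# Toy elliptic-curve arithmetic over F_p; None stands for the point at infinity.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = point at infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p                        # reflect over the x-axis
    return (x3, y3)

def ec_mul(k, P, a, p):
    """Compute kP by double-and-add."""
    R = None
    while k > 0:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```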
Now, the following steps are needed to securely exchange a key between two parties Alice and Bob (with Cathy being an adversary that eavesdrops) without having a pre-shared key: Alice, Bob and Cathy agree on a public elliptic curve and a public fixed curve point G. Alice picks a private random integer α. From now on, α is her private key. Now Alice computes her public key: that is the curve point A = αG. She publishes her public key. Bob picks a private random integer β. β is his private key. Now Bob computes his public key: his public key is the curve point B = βG. He also publishes his public key. Cathy could do the same.
Now suppose Alice wants to send Bob a message. Alice can simply compute P = αB = α(βG) and use P as the private key for the conversation. Bob can simply compute P = βA = β(αG) and use P as the private key for the conversation. You see that αB = α(βG) = β(αG) = βA, so only Alice and Bob know the private key P for their conversation. Suppose Cathy wants to read the conversation between Alice and Bob. She knows the elliptic curve, the point G, the order of G and the public keys A and B from Alice and Bob. What she does not know are the private keys - she would have to compute the private key P to do so. The Figure below illustrates the steps above and serves as a graphical aid.
The security of this method (against passive attacks) is believed to be adequate for current computers (Haakegard et al.); however, it is also believed that quantum computing could render ECC insecure (Roetteler et al.).
Advantages of ECC
For one, key bit length plays an important role - ECC keys are much smaller than RSA keys when the same security level is required. ECC also arguably offers largely better performance, with ECC-512 (comparable to RSA-15360) being up to 400 times as fast as RSA for both encryption and decryption, as per Lauter. In summary, ECC is a very interesting method - as a matter of fact, GitHub uses ECC keys in its documentation examples (e.g. when generating SSH keys) due to its performance.
*Figures are not representative of a product, and were made by the author.
Lenstra, "Factoring integers with elliptic curves"
Lauter, "The advantages of elliptic curve cryptography for wireless security"
"Faktorisierung großer Zahlen"
Haakegard et al., "The Elliptic Curve Diffie-Hellman (ECDH)"
Roetteler et al., "Quantum resource estimates for computing elliptic curve discrete logarithms"
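The same exchange takes only a few lines with the Python cryptography package; this is a minimal sketch of my own (the curve choice and the HKDF parameters are illustrative assumptions, not prescribed by the article):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Alice and Bob each pick a private scalar; the library derives A = aG and B = bG.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the other's public key.
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared        # both arrive at the same shared secret

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ecdh demo").derive(alice_shared)
```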
<urn:uuid:7913b2bb-c302-413a-be49-e034623f829f>
CC-MAIN-2021-43
https://www.axiros.com/blog/2021/08/19/elliptic-curve-cryptography
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00015.warc.gz
en
0.951525
1,431
4.0625
4
What Is Neuromorphic Computing? There are a number of types and styles of artificial intelligence, but there's a key difference between the branch of programming that looks for interesting solutions to pertinent problems and the branch of science seeking to model and simulate the functions of the human brain. Neuromorphic computing, which includes the production and use of neural networks, deals with proving the efficacy of any concept of how the brain performs its functions -- not just reaching decisions, but memorizing information and even deducing facts. Both literally and practically, "neuromorphic" means "taking the form of the brain." The keyword here is "form," mainly because so much of AI research deals with simulating, or at least mimicking, the function of the brain. The engineering of a neuromorphic device involves the development of components whose functions are analogous to parts of the brain, or at least to what such parts are believed to do. These components are not brain-shaped, of course, yet like the valves of an artificial heart, they do fulfill the roles of their organic counterparts. Some architectures go so far as to model the brain's perceived plasticity (its ability to modify its own form to suit its function) by provisioning new components based on the needs of the tasks they're currently running. The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame. A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called "brittleness" of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities.
Neuromorphic Computing Research Focus The key challenges in neuromorphic research are matching a human's flexibility and ability to learn from unstructured stimuli with the energy efficiency of the human brain. The computational building blocks within neuromorphic computing systems are logically analogous to neurons. Spiking neural networks (SNNs) are a novel model for arranging those elements to emulate natural neural networks that exist in biological brains. Each "neuron" in the SNN can fire independently of the others, and in doing so, it sends pulsed signals to other neurons in the network that directly change the electrical states of those neurons. By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli. While building such a device may inform us about how the mind works, or at least reveal certain ways in which it doesn't, the actual goal of such an endeavor is to produce a machine that can "learn" from its inputs in ways that a digital computer component may not be able to. The payoff could be an entirely new class of machine capable of being "trained" to recognize patterns using far, far fewer inputs than a digital neural network would require. 
"One of the most appealing attributes of these neural networks is their portability to low-power neuromorphic hardware," reads a September 2018 IBM neuromorphic patent application [PDF], "which can be deployed in mobile devices and native sensors that can operate at extremely low power requirements in real-time. Neuromorphic computing demonstrates an unprecedented low-power computation substrate that can be used in many applications." Although Google has been a leader in recent years, of both research and production of hardware called tensor processors (TPU) dedicated specifically to neural network-based applications, the neuromorphic branch is an altogether different beast. Specifically, it's not about the evaluation of any set of data in terms of discrete numeric values, such as scales from 1 to 10, or percentage grades from 0 to 100. Its practitioners have a goal in mind other than to solve an equation, or simply to produce more software. They seek to produce a cognition machine -- one that may lead credence to, if not altogether prove, a rational theory for how the human mind may work. They're not out to capture the king in six moves. They're in this to build mechanisms. The Future Of Neuromorphic Computing At any one time in history, there is a theoretical limit to the processing power of a supercomputer -- a point after which increasing the workload yields no more, or no better, results. That limit has been shoved forward in fits and starts with advances in microprocessors, including by the introduction of GPUs (formerly just graphics processors) and Google's design for TPUs. But there may be a limit to the limit's extension, as Moore's Law only works when physics gives you room to scale smaller. Neuromorphic engineering points to the possibility, if not yet probability, of a massive leap forward in performance, by way of radical alteration of what it means to infer information from data. Like quantum computing, it relies upon a force of nature we don't yet comprehend: In this case, the informational power of noise. If all the research pays off, supercomputers, as we perceive them today, maybe rendered entirely obsolete in a few short years, replaced by servers with synthetic, self-assembling neurons that can be tucked into hallway closets, freeing up the space consumed by mega-scale data centers for, say, solar power generators. Examples of neuromorphic engineering projects Today, there are several academic and commercial experiments underway to produce working, reproducible neuromorphic models, including: - SpiNNaker [pictured above] is a low-grade supercomputer developed by engineers with Germany's Jülich Research Centre's Institute of Neuroscience and Medicine, working with the UK's Advanced Processor Technologies Group at the University of Manchester. Its job is to simulate the functions so-called cortical microcircuits, albeit on a slower time scale than they would presumably function when manufactured. In August 2018, Spinnaker conducted what is believed to be the largest neural network simulation to date, involving about 80,000 neurons connected by some 300 million synapses. - Intel is experimenting with what it describes as a neuromorphic chip architecture, called Loihi (lo · EE · hee). Intel has been reluctant to share images that would reveal elements of Loihi's architecture, though based on what information we do have, Loihi would be producible using a form of the same 14 nm lithography techniques Intel and others employ today. 
First announced in September 2017, and officially premiered the following January at CES 2018 by then-CEO Brian Krzanich, Loihi's microcode includes statements designed specifically for training a neural net. It's designed to implement a spiking neural network (SNN), whose model adds more brain-like characteristics.
- IBM maintains a Neuromorphic Devices and Architectures Project involved with new experiments in analog computation. In a research paper, the IBM team demonstrated how its non-volatile phase-change memory (PCM) accelerated the feedback or backpropagation algorithm associated with neural nets. These researchers are now at work determining whether PCM can be utilized in modeling synthetic synapses, replacing the static RAM-based arrays used in its earlier TrueNorth and NeuroGrid designs (which were not neuromorphic).
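The spiking behaviour these projects model can be illustrated with a toy leaky integrate-and-fire neuron. The sketch below is my own illustration with made-up parameters; it is not the neuron model used by Loihi, SpiNNaker or IBM's designs.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron (illustrative parameters only)
dt, duration = 1e-3, 0.5                  # time step and total time (s)
tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
steps = int(duration / dt)

rng = np.random.default_rng(0)
drive = 1.2 + 0.5 * rng.standard_normal(steps)   # noisy input current (arbitrary units)

v, spike_times = v_rest, []
for t in range(steps):
    # membrane potential leaks toward rest while integrating the input
    v += (dt / tau) * (-(v - v_rest) + drive[t])
    if v >= v_thresh:                     # crossing threshold emits a spike...
        spike_times.append(t * dt)
        v = v_reset                       # ...and the potential resets

print(f"{len(spike_times)} spikes in {duration} s")
```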
<urn:uuid:7a0f1363-20a5-4bb0-9b3b-4c159e3f3b96>
CC-MAIN-2021-43
https://poshpython.com/blogs/tech-blog/what-is-neuromorphic-computing
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00296.warc.gz
en
0.945829
1,536
3.59375
4
The ISS houses many ground-breaking experiments which can only be performed in space, and the ice chest-sized Cold Atom Laboratory (CAL) is officially the ‘coolest’ suite of instruments, studying hyper-cold atoms and exploring physics at the atomic scale. Here, Emma Holling discusses the creation of Bose-Einstein condensate in the Cold Atom Laboratory and the future applications of CAL’s research.
The Cold Atom Laboratory is a facility on the International Space Station (ISS) which utilises the unique microgravity environment in space. In July 2018, scientists used the lab to produce Bose-Einstein condensate in orbit for the first time, and in 2020 the lab was upgraded for further research. As its name suggests, the Cold Atom Laboratory cools atoms down to around 200 nanokelvin by three key processes. First, a magneto-optical trap holds atoms in place, while radiation pressure from lasers slows them down. Six lasers are focussed on the atoms, meaning whichever way the atom shifts, it will have a stream of photons exerting a force to slow it down. While this process cools atoms to a fraction of a degree above absolute zero, for Bose-Einstein condensates to form, further cooling is required. This comes in the form of evaporative cooling, which holds the atoms so they vibrate in place, allowing the highest-energy atoms to be removed – almost like siphoning off the hottest atoms. The final stage is adiabatic expansion, where the remaining atoms are allowed to expand out by reducing the strength of the magnetic field holding them.
Bose-Einstein condensate is an intriguing state of matter where supercooled bosons are in a single quantum state with the same low energy level. The group of bosons acts in a wave-like fashion (owing to the correlation of the particles), with the same wave function meaning the matter waves are coherent; with multiple condensates it is possible to observe interference, generating a clear series of minima and maxima. Perhaps more importantly, quantum effects can be observed on a much larger scale than usual, a vital discovery we will explore later.
[Figure: Interference patterns observed in Bose-Einstein condensate. Credit: Lachmann et al., Nat Commun, 2021. Second figure credit: McGraw-Hill Concise Encyclopedia of Physics]
One quantum effect is quantum tunnelling. Part of the condensate is able to overcome physical barriers, something impossible were the atoms only to obey classical mechanics. Linked to this is the Josephson effect, in which an electric current is able to flow from one part of a superconductor to another through an insulator. The current is able to do this thanks to a ‘weak link’ between the macroscopic quantum objects: a fraction of the condensate is able to tunnel through, whereas a particle in the classical sense could not, because the barrier at the weak link is too high.
While Bose-Einstein condensate was proposed nearly a century ago, it took until 1995 for a Nobel Prize-winning project to form one. Creation can be performed on Earth, but gravity means that the atoms fall out of place almost immediately. However, on the ISS, the microgravity environment allows scientists to observe the condensate for over a second rather than fractions of one, as the freefall of the atoms is indefinitely long, unlike on Earth, where gravity causes the condensate to shift within its formation chamber. Bose-Einstein condensate can also be created in seconds, giving physicists more opportunities to perform repeats and adjust their experiments. 
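For a sense of scale, the ideal Bose gas estimate of the condensation temperature, Tc = (2πħ²/(m·kB))·(n/ζ(3/2))^(2/3), comes out near 100 nK for a typical dilute rubidium-87 cloud. The back-of-envelope check below is my own illustration; the density of 10^19 atoms per cubic metre is an assumed, typical value, not a figure from the article.

```python
import math

hbar = 1.054571817e-34            # reduced Planck constant (J*s)
k_B = 1.380649e-23                # Boltzmann constant (J/K)
m_rb87 = 87 * 1.66053906660e-27   # approximate mass of a rubidium-87 atom (kg)
n = 1e19                          # assumed atom number density (m^-3)
zeta_3_2 = 2.612                  # Riemann zeta(3/2)

# Ideal Bose gas condensation temperature
T_c = (2 * math.pi * hbar**2 / (m_rb87 * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"T_c is roughly {T_c * 1e9:.0f} nK")   # about 90 nK for these numbers
```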
Further research into the quantum effects demonstrated by Bose-Einstein Condensate is vital to the progression of quantum computing, and so too is the development of the Cold Atom Laboratory. Quantum computers use qubits for calculations, and investigations into different systems for their creation are currently underway. One of the biggest challenges is maintaining coherence, as quantum states are very sensitive to their environments. Today, information is stored on computers using macroscopic objects, and while it was initially thought that this would be impossible for quantum computers, owing to quantum effects often disappearing for larger objects, Bose-Einstein condensate may defy this thinking. As quantum effects can be seen on a macro level in Bose Einstein, it may be possible to encode the condensate and then use it in quantum computing to create qubits. The next step is to discover the feasibility of encoding the condensate, and if successful this could open up a world of possibility for technological advancement. To allow for advanced research, in 2019 the laboratory was upgraded to include an interferometer which significantly increases the Cold Atom Lab’s abilities. Astronauts Christina Koch and Jessica Meir removed the old Science Module and upgraded it by connecting 11 fibre optic cables to the new module. The wire cores were thinner than a human hair, and if snapped or scratched, this could have ended the mission. The incredible astronauts completed this precision mission successfully. Now with the interferometer, not only can atoms be supercooled and observed on a microscopic scale, but the waves can be split and recombined, allowing for cutting-edge research into the fundamental physics of the universe. To do this the Bose-Einstein condensate is irradiated, separating the atoms and then allowing them to come back together and superpose. The interference pattern is clearly visible, owing to the coherence of the waves, and with some adjustment there is hope that physicists will be able to measure the effects of gravitational waves to a high level of precision by measuring interesting disruptions to the condensate. About the author Emma Holling is a UK student passionate about helping people see the value of the space industry and their place within it. When she’s not studying for a physics degree, Emma works with schools; produces a variety of online content (including webinars and podcasts); and is an Outreach Ambassador with New Voices in Space. This is the third monthly article in our ‘New Voices in Space’ series authored by young scientists and engineers involved in the space business. The first two articles are Taking quantum into space by Sonali Mohaptra and NASA’s women of inspiration by Mansi Joshi.
<urn:uuid:244f6862-0a10-4ed5-95da-9f26a14394fb>
CC-MAIN-2021-43
https://room.eu.com/article/the-coolest-experiment-on-the-iss
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00017.warc.gz
en
0.932781
1,310
3.765625
4
It is a peculiar thing to see, but more and more commonly terms of art make their way into the mainstream media. It seems that every week a new article about a vulnerability, cyberattack, or data breach makes its way into public discourse. One phrase used to give confidence in a strong encryption scheme is “256-bit encryption”, but what does this mean?
- What is Encryption?
- What is a Key Size?
- How Strong is 256-bit Encryption?
- But what if the hardware gets better?
What is Encryption? Encryption is the practice of taking a message, referred to as “plain-text”, and applying a series of transformations to produce “cipher-text”. This cipher-text is only readable by someone who can reverse this process, turning the cipher-text back into plain-text. Cipher-text is very portable – it can be safely sent via an insecure channel such as the internet without worrying about the contents of the message being intercepted by prying eyes. Broadly speaking, encryption falls into two major buckets: symmetric encryption, and asymmetric encryption. Symmetric encryption refers to an encryption algorithm which relies on both parties being privy to the same encryption key – it is used both to encrypt and decrypt the message. Asymmetric encryption on the other hand utilizes public/private keypairs. In this kind of encryption, both parties have a public key and a private key which are intrinsically, mathematically linked: that which can be encrypted via the public key can be reversed via the private key (and vice versa). Symmetric cryptography is much faster, but requires two parties to have communicated the key in advance via another channel. Asymmetric cryptography is slower, but can be performed without a prior exchange of information. SSL/TLS, the protocol most responsible for securing the internet, uses a mix of symmetric and asymmetric cryptography in order to get the best of both worlds.
What is a “Key Size”? In cryptography, “key size” refers to the length of the secret key used to encrypt and decrypt information. If I asked you to guess a number 1 thru 4 (integers only!), you’d have a 25% chance of getting it right on your first try. If you got to make 4 guesses, you’d have a 100% chance of getting the right answer. In this way, if an attacker tries every possible key, they will eventually land upon the right one. This is referred to as a “brute force attack”. In order for something to be reasonably secure then, trying every possible key must be infeasible with modern hardware. But what is a bit? Instead of using our base-10 number system, computers rely on binary numbers because electrically, they operate on the presence or absence of current. These 1’s and 0’s are referred to as “bits”, and the number of them in your key is what defines your key-size. With a symmetric encryption key 256 bits long (2 to the 256th power possible combinations!), trying them all would take literally millions of years on current hardware.
How Strong is 256-bit Encryption? Given that it would take millions of years to try all possible combinations of an AES 256-bit key, what other attacks exist against modern encryption schemes? DES, for example, was rendered insecure largely because of its small 56-bit key size. As it turns out, not all AES is created equal! AES can be used in several different “modes” of operation, some of which have suffered from rampant implementation flaws over the years. There is nothing preventing something we do not yet know from rendering many implementations of AES insecure in the same manner! Additionally, while infinitesimally unlikely, it is statistically possible for an attacker to guess the right key on the very first try! This is all to say that information security is relative, and it is fluid. AES 256-bit encryption represents the strongest symmetric encryption achievable today, but that is not a guarantee that this won’t change. 
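To see just how far out of reach a brute-force attack is, here is the arithmetic as a quick script (a back-of-envelope sketch of my own; the guess rate of a trillion keys per second is an assumed, generous figure):

```python
# Exhaustive search of a 256-bit key space, back of the envelope
keys = 2 ** 256
guesses_per_second = 1e12            # assume a (very generous) trillion guesses/sec
seconds_per_year = 60 * 60 * 24 * 365

years = keys / guesses_per_second / seconds_per_year
print(f"about {years:.1e} years")    # ~3.7e57 years, far beyond the age of the universe
```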
Additionally, while infinitesimally unlikely, it is statistically possible for an attacker to guess the right key on the very first try! This is all to say that information security is relative, and it is fluid. AES 256-bit encryption represents the strongest symmetric encryption achievable today, but that is not a guarantee that this won’t change. But what if the hardware gets better? It will! over time, hardware improves. Some of you may be familiar with “Moore’s Law”, the tendency for the number of transistors packed into an integrated circuit to double every two years as we more and more clever with our manufacturing capabilities. Much in the same way, the computers of 2021 are staggeringly better than the computers of even as recent as 2005. Should this pace continue, it’s feasible that we eventually need to move away from current encryption methods. Many believe that quantum computing will rapidly accelerate this need. Quantum computers, unlike regular computers, are very good at reversing factorization of large prime numbers (the fact that this is so difficult is the very basis of RSA key generation). Luckily, alternative encryption algorithms already exist. But wait? You might ask, with baited breath. Why not move to those alternatives today? Why not pick keys so obscenely large that we never have to have this conversation again? Ultimately, the longer your key size, the more hardware time is needed to perform the encryption and decryption, and the more power the operation consumes. Ultimately, advances in battery technology have the potential to catapult us forward into uncharted terrain in terms of encryption. As constraints change, and as technology improves, optimizing for security and performance may look very different ten years from now, much like it looked very different ten years ago. The next time you see them mention encryption in the news, try to figure out if they’re talking about symmetric or asymmetric cryptography. (If you’re wondering what length TLS keys are generally considered to be strong, asymmetric cryptography tends to require much larger key sizes such as 2048 or 4096 bit keys!). Try to determine how many attempts it would take to brute-force the key space. Consider what industry is being discussed and the tradeoffs the company must have made to arrive at the chosen key size (a bank is likely to use a larger key size and value future-proofing its security over performance.)
Quantum computers can lead to breakthroughs in a wide variety of subject areas because they offer a computational strength we’ve never seen before. However, not all problems are favorable for a quantum computer. In order to identify which problems make good candidates, it’s important to have an understanding of how a quantum computer solves problems.

While quantum computers can offer an exponential boost in computational power, they can’t be programmed in the same way as a classical computer. The instruction set and algorithms change, and the resulting output is different as well. On a classical computer, the solution is found by checking possibilities one at a time. Depending upon the problem, this can take too long. A quantum computer can explore all possibilities at the same time, but there are a few challenges. Getting the right answer out of the computer isn’t easy, and because the answers are probabilistic, you may need to do extra work to uncover the desired answer.

For example, assume you wanted to page-rank the internet. To do so, the process would require loading every single page as input data. On a classical machine you would create a computation that gives you the page rank of each page, but this takes time and a significant amount of hardware. With a quantum computer, computation is exponentially faster than on classical hardware. But the caveat is that with quantum, your result will typically be the page rank of one page. And then you’d have to load the whole web again to get another, and do it again to get another, and continue until you eventually have the page rank for the entire internet. Because you have to load everything each time, the exponential speedup is lost. This example would not be favorable for quantum computing.

To solve any problem, you’ll have input, computation, and output.

- Input – The data required to run the computation
- Computation – The instructions given to the computer to process the data
- Output – The useful result received from the computation

Instead of returning the entire quantum state, a quantum computer returns one state as the result of a computation. This unique characteristic is why we write the algorithm in such a way that it produces the desired answer with the highest probability. For this reason, problems that require a limited number of output values are more applicable. The amount of input data is also a consideration. As input data increases, either the number of qubits or the amount of work to ‘prepare’ the data grows quickly. Problems with highly compressed input data are much more favorable.

What types of problems are ideal challenges for a quantum computer? Quantum computers are best-suited for solving problems with a limited volume of output, and—ideally—those with a limited amount of input. These restrictions might lead you to assume that the scope of what quantum computers can do is narrow, but the exact opposite is true. Quantum computers provide a level of computational power that allows us to tackle some of the biggest challenges we face. The nuance is in framing problems in a way that makes them solvable. Here are some great examples of how a quantum computer can be used to address some of today’s biggest challenges.

Modelling molecules is a perfect application for quantum computing.
In Richard Feynman’s own words, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

While we have an accurate understanding of organic molecules—those with S and P orbitals—molecules whose orbitals interact with each other are currently beyond our ability to model accurately. Many of the answers we need to address significant issues, such as world hunger and global warming, come by way of understanding these more difficult molecules. Current technology doesn’t allow us to analyze some of the more complex molecules; however, this is an excellent problem for a quantum computer because the input and output are small. There’s a unique approach in quantum computing where, instead of loading the input data, you’re able to encode it into the quantum circuit itself. Modelling molecules is an example of this; the initial positions of the electrons would be the input—also referred to as ‘preparation’—and the final positions of the electrons would be the output.

Modelling materials is essentially in the same problem class as modelling molecules, which means quantum computers are also helpful in identifying new possibilities in material science. The ability to develop high-temperature superconductors is a great example. We currently lose around 15% of the power in the energy grid every year due to the resistance in the wires transporting the electricity. Finding a material that can transmit energy without heating up the wires requires modelling properties of materials, a process very similar to modelling molecules. Again, this precise focus has a minimal amount of input and a highly focused output—both marks of a great candidate for quantum computing. In addition, materials have a regular structure with (mostly) local interactions, making them generally easier to model than chemicals on a quantum computer.

Many cryptosystems are built using math problems too difficult for a classical computer to solve. However, a quantum computer has the computational ability to find solutions to the cryptographic algorithms in use today. Cryptographic problems that use factoring are excellent examples of problems that can be solved with a quantum computer because both the input and output are each a single number. Note that the numbers used in the key are huge, so a significant number of qubits is needed to calculate the result. A quantum computer’s ability to solve cryptographic algorithms is an issue we take extremely seriously at Microsoft, and we are already working on quantum-safe cryptography protocols to replace those which will be vulnerable to quantum attacks.

Machine learning and optimization

In general, quantum computers aren’t challenged by the amount of computation needed. Instead, the challenge is getting a limited number of answers and restricting the size of the inputs. Because of this, machine learning problems often don’t make for a perfect fit because of the large amount of input data. However, optimization problems are a type of machine learning problem that can be a good fit for a quantum computer. Imagine you have a large factory and the goal is to maximize output. To do so, each individual process would need to be optimized on its own, as well as compared against the whole. Here the number of possible configurations of all the processes that need to be considered is exponentially larger than the size of the input data.
With a search space exponentially bigger than the input data, optimization problems are feasible for a quantum computer. Additionally, due to the unique requirements of quantum programming, one of the unexpected benefits of developing quantum algorithms is identifying new methods to solve problems. In many cases, these new methods can be brought back to classical computing, yielding significant improvements. Implementing these new techniques in the cloud is what we refer to as quantum-inspired algorithms.

Quantum computing brings about a paradigm shift in multiple ways: not only will quantum computing provide access to new levels of computational ability, but it will also inspire new ways of thinking. For a quantum computer to solve some of our biggest challenges, we have to understand how to frame the problem. As we look at problems in new ways, this shift can, in turn, bring new ideas to how we approach classical computation as well. With more and more individuals considering problems from different angles, more and more ideas and solutions will result.

Luckily, you don’t have to wait until quantum computers are readily available to begin considering problems in new ways—you can start today by learning quantum development. As you dive into the world of quantum development, you’ll practice your ability to think about problems in new ways, get familiar with programming a quantum computer, and even simulate your work so that you’ll be ready once quantum computers are made available. Get started today with the Microsoft Quantum Development Kit.
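To make the point made earlier in this post concrete (a single run of a quantum computation returns one measured state, not the whole quantum state), here is a purely classical sketch of the measurement statistics using NumPy. The amplitudes are made up for illustration; nothing here simulates a real quantum device.

```python
# Toy illustration: a computation may spread amplitude over many basis states,
# but each run yields one outcome, drawn with probability |amplitude|^2.
import numpy as np

amplitudes = np.array([0.1, 0.7, 0.1, 0.1], dtype=complex)  # invented example state
amplitudes /= np.linalg.norm(amplitudes)                    # normalize the state
probabilities = np.abs(amplitudes) ** 2

rng = np.random.default_rng(0)
outcome = rng.choice(len(amplitudes), p=probabilities)
print(f"measured basis state: |{outcome:02b}>")             # one outcome per run
```

Running it many times reproduces the probability distribution, which is why quantum algorithms are written to concentrate probability on the desired answer.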
The ubiquitous classical digital computer encodes data in bits (a portmanteau of binary and digits) in either a 0 or 1 state. A quantum computer also uses a 0/1 data representation, but its qubits (from quantum and bits) can hold the 0 and 1 states simultaneously in what is known as a superposition – and a quantum computer can also make use of entanglement. For these reasons, quantum computers can potentially solve problems whose complexity is too resource-intensive for classical computation. That being said, quantum computers are very difficult to construct.

Recently, however, scientists at the University of Wisconsin-Madison have fabricated a qubit in a silicon double-quantum dot in which the qubit basis states are the singlet state and the spin-zero triplet state of two electrons. (A double quantum dot links two quantum dots – semiconductor nanostructures that confine the motion of conduction band electrons, valence band holes, or excitons in all three spatial directions.) Moreover, the researchers have for the first time integrated a proximal micromagnet, allowing them to create a large local magnetic field difference between the two sides of the quantum dot – thereby greatly increasing their ability to manipulate the qubit without injecting noise that would induce superposition decoherence.

Prof. Susan Coppersmith and Prof. Mark Eriksson discuss the paper they and their co-authors published in Proceedings of the National Academy of Sciences with Phys.org, noting that the overall goal of the research program is to develop quantum bits for a quantum computer using technology that is similar to that used for current classical computers. “The advantages of this strategy arise for two main reasons,” Coppersmith tells Phys.org. “First, enormous investments have been made to develop large-scale classical electronics, and one hopes that this investment can be leveraged to facilitate scale-up of quantum electronics. Second, the similarity in technology facilitates integration of quantum and classical processors.” Integration is important, Eriksson adds, because a large-scale classical computer will almost certainly be necessary to control the operation of a quantum computer.

An early step towards this goal is to fabricate high-fidelity individual qubits. This paper focuses on the so-called singlet-triplet qubit, which was first fabricated in gallium arsenide (GaAs) devices. “The operation of a singlet-triplet qubit in GaAs is complicated by strong coupling between the electron spins and nuclear spins,” Eriksson explains. “Silicon has much weaker coupling between the electron spins and nuclear spins, and most of the nuclei in silicon have spin zero, so the electron spins in silicon can stay coherent much longer than in GaAs.” In fact, measurements of a singlet-triplet qubit in natural silicon indeed yield much longer coherence times than in GaAs, but because the qubit operations themselves rely on having a magnetic field difference between the dots – a difference that also arises from the nuclei themselves – the qubit operations in that work were much slower than in GaAs.
“Our work shows that using an integrated micromagnet enables faster gate operations by imposing a larger magnetic field difference between the quantum dots,” Coppersmith points out, “and it does so without introducing measurable additional decoherence, which improves the overall performance of the qubit.” Specifically, the paper states that the integrated micromagnet provides a promising path toward fast manipulation in materials with small concentrations of nuclear spins, including both natural silicon (Si) and isotopically enriched 28Si.

“Nuclear spins in GaAs and other materials, such as InSb (indium antimonide), reduce qubit coherence – but this strong coupling also enables fast manipulation,” Eriksson tells Phys.org. “However, if the decoherence effects are reduced by using a material with weaker coupling to nuclear spins, it’s necessary to find another way to create a large magnetic field difference between the quantum dots – and the integrated micromagnet enables this.”

“One big challenge was fabricating a suitable device, that being a double quantum dot in which a micromagnet is incorporated,” Coppersmith continues. Devices with incorporated micromagnets had previously been investigated in GaAs in a slightly different context, but the fabrication procedure in the University of Wisconsin devices differs from that used in the GaAs devices, requiring novel processes to be developed. “A further challenge arose because the micromagnetic field was somewhat different than what was expected based on measurements of cobalt films and our numerical calculations,” notes Eriksson. “Therefore, to perform the experiments we had to use the properties of the qubit itself to figure out what the actual fields on the quantum dots were.” By so doing, the researchers found that the field from the micromagnet depended on the applied uniform field, which enabled them to investigate the qubit properties for two magnitudes of the micromagnet field.

Interestingly, the paper states that the scientists’ fabrication techniques being similar for both quantum dot-based qubits and donor-based qubits in semiconductors suggests that micromagnets should also be applicable to donor-based spin qubits. “The micromagnet in the device that we measured is created by depositing the metal cobalt by Electron Beam Physical Vapor Deposition (EBPVD), onto the top of the sample,” Coppersmith says. “Therefore, applying the technique to other semiconducting qubit architectures in which the qubits are defined by evaporated metal top gates is rather straightforward.” (EBPVD uses an electron beam to bombard a target and convert some of its atoms into a gas, which then precipitate and coat all surfaces in the vacuum chamber.) In practice, however, some of the gates on these devices will be made of non-magnetic materials – typically aluminum or gold – resulting in a small number of cobalt gates.

The researchers also describe the unique characteristics of a large-scale quantum computer based on their approach: once high-quality single qubits and two-qubit gates are achieved, then because the technology is close to that already used in classical electronics and the qubit size (< 1 µm) is small, scaling up to devices with large numbers of qubits could be feasible. This plausible path to large numbers of qubits has sparked significant interest in electrically-gated qubits in semiconductors.
“The next steps in our research are to increase both the magnitude of the field difference between the quantum dots, and the number of qubits by increasing the number of quantum dots,” Coppersmith tells Phys.org. “Both steps are being implemented in new devices that have been designed and are currently being fabricated. We’re also working on other qubit implementations in silicon quantum dots [1, 2], all of which use electrical initialization, manipulation and readout, and therefore have the potential advantages of integrability and scalability.”

Moreover, Eriksson points out that being able to control local magnetic fields in a nanoelectronic device could be very useful for spintronics.

Learn more here: http://www.pnas.org/content/early/2014/07/31/1412230111
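For readers unfamiliar with the singlet-triplet basis mentioned above, the sketch below writes out those two two-electron spin states and checks that they form an orthonormal qubit basis. It is a generic textbook illustration in the |up,down>, |down,up> ordering, not code from the paper.

```python
# Singlet and spin-zero triplet states of two electrons, as a qubit basis.
import numpy as np

up_down = np.array([1, 0], dtype=complex)      # |up, down>
down_up = np.array([0, 1], dtype=complex)      # |down, up>

singlet  = (up_down - down_up) / np.sqrt(2)    # S: total spin 0
triplet0 = (up_down + down_up) / np.sqrt(2)    # T0: spin-zero projection of the triplet

print(abs(np.vdot(singlet, triplet0)))         # ~0: the two basis states are orthogonal
print(np.vdot(singlet, singlet).real)          # 1: normalized
```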
Nanoscience may involve manipulation of the smallest materials, but it could have a large impact on the biggest of all issues, climate change. This month, we look at two new battery systems, both aimed at improving on the slow-charging (and sometimes unsafe) lithium-ion batteries that now power our world. We also examine a new technology to compost plastic that leaves nothing—not even microplastics—behind, then take a peek at a living wall (or roof) printed with living algae. Finally, we finish up with an interesting way to solve a quantum computing problem—by emulating a black hole.

Cheaper aluminum batteries shine

A new aluminum-based battery achieves 10,000 error-free recharging cycles while costing less than the conventional lithium-ion batteries now used in everything from smartphones to electric vehicles. The new battery builds on research by Lynden Archer, a member of the Kavli Institute at Cornell for Nanoscale Science and dean of the school’s engineering department. He is studying low-cost batteries that are safer and more environmentally friendly than lithium-ion, which is slow to charge and prone to catching fire. His team built the new battery’s anode and cathode from two readily available elements, aluminum and carbon. Aluminum is trivalent (having three electrons in its outermost shell vs. one for lithium) and lightweight, so it can store more energy per unit mass than many other metals. While aluminum reactions can cause short circuits by reacting with other battery materials, Archer’s group found they could stop this by depositing aluminum on carbon fibers to form a bond strong enough to prevent it from moving to other parts of the battery.

Solid-state battery moves closer to reality

Factorial Energy has unveiled a 40-ampere-hour solid-state battery cell for electric vehicles and other applications. The startup was co-founded by Héctor Abruña, a member of the Kavli Institute at Cornell, and Cornell chemistry Ph.D. graduate Siyu Huang. The new battery is based on an electrolyte that is stable enough for batteries to run at high voltages and energy densities without bursting into flames or growing lithium dendrites that damage and degrade lithium-metal anodes. The company claims its battery boosts driving range 20 to 50 percent without sacrificing service life. The company recently added two auto industry heavyweights, Joe Taylor, former chairman/CEO of Panasonic North America, and Dieter Zetsche, former chair of Daimler and head of Mercedes-Benz, to its team, and says several global automotive companies are testing the battery.

Truly compostable plastics

The problem with most “compostable” plastics is that they do not break down in composting facilities or landfills. They are so dense, there is no way for microbes to worm their way in to digest the polymers. Instead, they remain intact or turn into microplastics that do not break down any further and pose environmental risks. Now, Ting Xu, a member of the Kavli Energy NanoScience Institute at UC Berkeley, has found a way around the problem. She has developed a technique to “cocoon” enzymes that eat plastic and incorporate them into the plastic material itself. This is no small feat, since shaping those plastics takes place at 338 °F, hot enough to destroy any enzyme. Once incorporated, the enzymes are inactive until exposed to water and moderate heat. It takes just one week to degrade 80 percent of the plastic at room temperature, and less time at higher temperatures.
Xu’s technique is a boon for composters and could lead to new ways to incorporate active biomolecules into materials for sensing, decontamination, and self-healing materials.

3D printing an algae roof

Imagine a roof or wall made of living, photosynthetic materials tough enough to use in real-life settings. That is exactly what Marie-Eve Aubin-Tam, a member of the Kavli Institute of Nanoscience at Delft Technical University, has built by 3D printing living algae into a cellulose matrix. Cellulose is a strong yet flexible material excreted by bacteria that retains its shape even when twisted or crushed. She and her team printed living microalgae onto the cellulose, resulting in a photosynthetic system that can turn carbon dioxide and water into sugar and feed itself for several weeks. One day, she speculates, the living elements of the material could sense and respond to cues in the environment. This work showcases a new way of thinking about materials and could spark new conversations between scientists and designers.

Black holes improve quantum computing

If you have been keeping up with quantum computing, you probably know about qubits. They are quantum bits of information that hold the value of zero, one, or a superposition of zero and one at the same time. With each new qubit entangled, quantum computers gain power and algorithm-crunching ability. Now, meet qutrits. These quantum bits can hold the value of zero, one, two, or a superposition of all three at the same time. That means the size of your computer scales much more quickly, so fewer bits do more work. But qutrits are prone to decoherence (when they fall out of entanglement). Irfan Siddiqi, a member of the Kavli Energy NanoScience Institute, may have found a way around the problem—by storing information as if it were in a black hole, where information is scrambled but not destroyed. It is a fascinating exploration of how to use the most advanced science to do the most advanced engineering.
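A quick sketch of why fewer qutrits do the work of more qubits: the number of distinguishable basis states grows as 2^n for n qubits but 3^n for n qutrits. The loop below simply prints those counts; it is arithmetic, not a simulation of either kind of device.

```python
# State-space size of n qubits (2^n) versus n qutrits (3^n).
for n in (5, 10, 20):
    print(f"n={n:2d}  qubit states: {2**n:>10,}  qutrit states: {3**n:>14,}")
```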
In contrast to the human genome, which consists of 3 billion bases made of 4 nucleic acids organized in a one-dimensional space, the human phenome contains an unknown number of elements with variation and dimensionality only partly understood. The scientific understanding of genes and genomic variation is restricted by a narrow range of methods to assess phenotypes, allowing only certain anatomical and behavioral traits to be recorded, which is often done manually. Phenomics aims to get a more in-depth and unbiased assessment of phenotypic profiles at the whole-organism level.

The field of phenomics recognizes a need for consensus on human- and machine-interpretable language to describe phenotypes. Lack of standardization and computability across phenotype data makes sharing phenotypic data difficult, may result in missed opportunities for discoveries, and means that a large amount of phenotype data is not publicly available. A computable format may involve the use of appropriate ontology terms for representing phenotypic descriptions in text or data sources. In human phenomics, two of the aims are to understand how the environment makes people more or less susceptible to disease and to understand individual reactions to therapies.

In 2003, at the time of completion of the Human Genome Project, the need for more precisely defined phenotypes and high-throughput systems to fully take advantage of genotyping studies was projected, and the creation of an international Human Phenome Project was proposed. The theme for a satellite meeting at the 2012 Annual Meeting of the American Society of Human Genetics in San Francisco by The Human Variome Project was “Getting Ready for the Human Phenome Project”. The meeting was cosponsored by the Human Genome Variation Society. At that meeting it was noted that, in comparison to phenome projects in model organisms such as mouse, rat and zebrafish, which have compiled phenotypic data on the consequences of genetic mutations, similar-scale efforts for humans lagged behind.

In humans, data on genotype-phenotype associations has been generated through genome-wide association studies (GWAS) or linking single nucleotide polymorphisms (SNPs) with disease phenotypes. To reduce the disease-centric bias in these approaches, an effort was made to look for associations in complete medical records and complete genome sequences, called Phenome Wide Association Studies (PheWAS). However, human phenotype data has a strong clinical bias. The introduction of genetic changes in animal model systems allows for unbiased interrogation of genotype-phenotype interactions. The Mouse Phenome Project was the first major effort in a vertebrate model to catalog baseline phenotypic data, which is housed at the Mouse Phenome Database at the Jackson Laboratory, Bar Harbor, ME. The Knockout Mouse Project is a National Institutes of Health (NIH) initiative which aims to generate a resource containing loss-of-function mutations for every gene in the mouse genome correlated with phenotypic data. Most human genes (70%) have a counterpart (ortholog) in zebrafish, which, combined with their short generation time, standard practices for genetic manipulation and suitability for live imaging, makes them cost-effective in biomedical research. The Zebrafish Phenome Project is underway and contributes to knowledge about phenotype-genotype associations and genetic diagnosis of human disease.
The Chemical Phenomics Initiative, based on chemical genetics, is a high-throughput chemical screen for small molecules that modulate early embryonic development in zebrafish, carried out by the Hong lab at Vanderbilt University. Pharmacological targets for these small-molecule developmental modulators are identified and made accessible to the scientific community through the chemotype-phenotype database on the Chemical Phenomics interactive web portal. In zebrafish, whole-body micro-CT scanning has been used for skeletal phenomics studies.

As DNA sequencing became faster and cheaper, new knowledge about normal genetic variation in the human population allowed genetic variants once thought to cause disease to be reclassified as benign. The ranges and commonality of variations in human phenotypes, if better understood, could improve the accuracy and treatment of disease in genetic medicine. The Human Phenotype Ontology helps the sharing of phenotype data by standardising the vocabulary for phenotypes. For some types of phenotypic abnormalities, standardized measurements can be used to define the phenotype. To show a causal link between a genetic variant and an abnormal phenotype, it needs to be shown that the two are found together more often than expected by chance (see the short statistical sketch at the end of this section). Improvements in data about baseline population frequencies of phenotypes are needed for these calculations.

The International Human Phenome Project (Phase I) was launched in Shanghai in March 2018. The project will be led by Fudan University in collaboration with Shanghai Jiao Tong University, Shanghai Institute of Measurement and Testing Technology and Shanghai Institutes for Biological Sciences.

The following projects promote standardization and sharing of phenotypic data related to humans and model organisms:

- The Human Phenotype Ontology (HPO)
- The Human Variome Project
- PhenX Toolkit
- International Mouse Phenotyping Consortium (IMPC)
- National Phenome Centre
- International Phenome Centre Network
- UK Biobank
- The Personal Genome Project
- National Bio Resource Project (NBRP) Rat Phenome
- Inborn Errors of Metabolism Knowledgebase (IEMbase)
- Chemical Phenomics Initiative
- The Phenomics Discovery Initiative (PDi)
- Consortium for Neuropsychiatric Phenomics (CNP)
- Mouse Phenome Database
- The Knockout Mouse Project
- Phenome Wide Association Studies (PheWAS)
- Definiens (Tissue Phenomics Company)
- Plant Ontology
- Gene Ontology Consortium

Plant phenomics is used both to understand how crops respond to environmental changes and for crop improvement. Connections between plant genotype and phenotype were historically investigated by identifying a trait of interest and then using DNA markers and breeding to locate the gene responsible for the trait. Seeking the gene responsible for a phenotype is called a forward genetics approach. Reverse genetics approaches, which mutate or alter genes first to find the phenotypic consequences of specific genetic changes, became more commonly used with the development of mutagens, molecular genetics and bioinformatics. As the price of image data collection has gone down and the capability for computational image processing has increased, plant phenomics researchers are investigating relationships between genotype, phenotype and environment with satellite and drone images. One hurdle is in developing computational methods to extract useful information.
Researchers at Iowa State University are using crowdsourcing for image labeling to train machine learning algorithms. The team used students and Amazon MTurkers for image labeling. Researchers at the University of Saskatchewan developed the open-source software platform Deep Plant Phenomics, which uses deep convolutional neural networks for phenotyping plants. The platform was shown to be effective at leaf counting, mutant classification and age regression in top-down images of plant rosettes.

- National Plant Phenomics Centre (IBERS Gogerddan, Wales, UK)
- PHENOME, the French plant phenomic Infrastructure
- Australian Plant Phenomics Facility
- The European Infrastructure for Multi-scale Plant Phenomics and Simulation (EMPHASIS)
- International Plant Phenotyping Network
- North American Plant Phenotyping Network
- Qubit Phenomics
- Zegami Ltd

Computational approaches are being developed to gather, compare and process phenomics data. Machine learning methods are used for analysing data such as satellite images of plants, medical histology images and words describing medical conditions. For comparison of phenotypes across different organisms, formal ontologies are implemented that are accessible to automated reasoning. Phenotype ontologies are hierarchically-related phenotypic descriptions using controlled vocabulary that allows computation in individuals, populations and across multiple species. Ontologies are being developed in Web Ontology Language (OWL) and OBO Flatfile Format.

Documentaries, videos and podcasts

- Recursion Pharmaceuticals: Recursion Pharmaceuticals is a biotechnology and data science company based in Salt Lake City, Utah, founded in 2013, that combines biology with artificial intelligence for drug discovery. Using human cell models of diseases, Recursion captures microscopic images to build biological datasets and computational techniques identify disease-associated.
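As a small illustration of the association test described earlier (showing that a variant and a phenotype co-occur more often than chance would predict), here is a hedged sketch that computes a one-sided Fisher exact p-value directly from the hypergeometric distribution. The counts are invented for illustration only; real studies need baseline population frequencies and multiple-testing corrections.

```python
# One-sided Fisher exact test for a 2x2 genotype/phenotype table, stdlib only.
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(seeing >= a variant carriers with the phenotype, given fixed margins)."""
    row1, col1, col2, n = a + b, a + c, b + d, a + b + c + d
    total = comb(n, row1)
    return sum(comb(col1, x) * comb(col2, row1 - x)
               for x in range(a, min(row1, col1) + 1)) / total

# Invented counts: 30 of 100 variant carriers show the phenotype,
# versus 10 of 100 non-carriers.
p = fisher_one_sided(30, 70, 10, 90)
print(f"one-sided Fisher exact p = {p:.2e}")
```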
Imagine a technology that could allow hackers to access everybody’s passwords, worldwide, in a matter of minutes; or, with the right adjustments, could create unbreakable encryption and information security. These are just a few potential consequences of breakthroughs in the field of quantum computing, which applies the unique laws of quantum mechanics to developing computers with remarkable capabilities. Other benefits of quantum computing include speed and energy efficiency improvements, as well as increased computational capacity over current computers, potentially unlocking breakthroughs in fields from drug discovery to artificial intelligence, and space exploration to weather forecasting, that were previously too complex for conventional computers.

To illustrate quantum computing, consider the following from Business Insider: “imagine you only have five minutes to find an “X” written on a page of a book in a library of 50 million books. It would be impossible. But if you were in 50 million parallel realities, and in each reality you could look through the pages of a different book, in one of those realities you would find the “X.” In this scenario a regular computer is you running around like a crazy person trying to look through as many books as possible in five minutes. A quantum computer is you split into 50 million yous, casually flipping through one book in each reality”.

Many physicists from Albert Einstein to Carl Sagan have agreed that the principles of quantum physics are so strange that they defy understanding. However, it is precisely these strange properties which are being harnessed to develop the next generation of computing. Quantum computers are based on the physics of the small – the scale of individual electrons. At this scale, nature behaves differently than it does at our “human” scale. Examples include “superposition” (objects existing in multiple states simultaneously) and “entanglement” (intrinsically connected objects regardless of their distance apart), which can be manipulated to perform operations on data. Compared to modern digital computers that fundamentally store data in one of two states – known as “bits” – quantum bits or “qubits” can be in an infinite number of states at once. With another breakthrough in quantum computing announced this week, the quantum computing revolution may be closer than many of us realize.

Several companies have already launched various attempts to capitalize on this field, including Google, IBM and D-Wave [4,5,6]. D-Wave is the first company offering quantum computers, with basic versions having already been used by Google, Lockheed Martin, NASA and others. Founded in British Columbia, Canada in 1999, the company made a big bet on the development and feasibility of quantum computing technology. However, that bet paid off with its first functional quantum computer priced at $10 million, with a number of customers already engaged and further developments on the way [4,5,6].

Business and Organizational Model

D-Wave originally chose to outsource its research to other laboratories by funding research in exchange for rights to intellectual property. After securing the concept and design from 1999 to 2006 (D-Wave holds 100 US patents and over 60 scientific publications), the company embarked on engineering, commercialization and scale. The go-to-market model was based on joint collaboration with strategic customers in specific verticals including defense, web 2.0 and energy.
The company intends to achieve a sustainable model by focusing on long-term growth and building multi-year relationships with customers, which includes professional and maintenance services, and offering multi-year subscription contracts to clients.

- Publicity. Scientists were critical of the early D-Wave computers, arguing that they were not actually quantum machines. Though D-Wave has since disproved these claims, maintaining commercial momentum will require positive publicity to bolster their brand name and avoid any perception of deceit. This is especially important since their product is based on complicated physics which could lend itself to a lack of trust by consumers in the nascent stages of commercialization [6,9].
- Partnerships. D-Wave has already partnered with corporations, laboratories, universities and governments to foster implementation of its product; however, it should further invest in this arena as these partnerships will fuel early adoption of this technology. Without aggressively pursuing these partnerships, the company also risks losing market share to competitors. Moreover, if D-Wave is able to expand its user base, it will foster a sort of competition built around its product; organizations will not want to be left behind with outdated computers [6,9].
- New Research. D-Wave must continue to source capital and continue to innovate, as competitors will be motivated to enter the market with their own breakthroughs in quantum computing.
- Regulation. As a new technology, D-Wave is susceptible to new laws aimed at its technology. If unforeseen repercussions arise from quantum computing, D-Wave will likely be the first organization affected. The company must be forward-thinking and take proactive strategic measures, such as working with regulators and exploring challenges.

1. Dickerson, Kelly, “7 awesome ways quantum computers will change the world.” Business Insider. Web. 18 Nov. 2016. http://www.businessinsider.com/quantum-computers-will-change-the-world-2015-4
2. “Quantum computing 101.” University of Waterloo. Web. 18 Nov. 2016. https://uwaterloo.ca/institute-for-quantum-computing/quantum-computing-101
3. Ranger, Steve, “Researchers claim quantum computing breakthrough, explain it using beer.” ZDNet. Web. 18 Nov. 2016. http://www.zdnet.com/article/researchers-claim-quantum-computing-breakthrough-explain-it-using-beer
4. “Quantum A.I.” Research at Google. Web. 18 Nov. 2016. http://research.google.com/pubs/QuantumAI.html
5. “A New Way of Thinking: The IBM Quantum Experience.” IBM Quantum Computing. Web. 18 Nov. 2016. http://www.research.ibm.com/quantum/
6. D-Wave Systems Inc. Website. Web. 18 Nov. 2016. http://www.dwavesys.com/
7. MacCormack, Alan D., Ajay Agrawal, and Rebecca Henderson. “D-Wave Systems: Building a Quantum Computer.” Harvard Business School Case 604-073, April 2004.
8. “D-Wave Overview.” Web. 18 Nov. 2016. http://www.dwavesys.com/sites/default/files/D-Wave-Investor%20Presentation-Web100814-2.pdf
9. Shah, Agam. “D-Wave will ship a 2,000-qubit quantum computer next year.” PC World. Web. 18 Nov. 2016. http://www.pcworld.com/article/3122452/hardware/d-wave-will-ship-a-2000-qubit-quantum-computer-next-year.html

Photo credit: http://quantumhealthjournal.com/ (Accessed Nov. 18). Image caption: Quantum mechanics helps describe discrete locations and objects as spectra of probabilities, from which novel computing principles can arise. The picture represents probability distributions of electron locations around their atomic nucleus – the building block of quantum computing.
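The library analogy quoted earlier in this post maps loosely onto unstructured search. As a rough, hedged comparison (this is the textbook Grover speedup on a gate-model machine, not a description of D-Wave's annealing approach), the snippet below contrasts the expected number of classical checks with the order-of-magnitude number of Grover queries for 50 million items.

```python
# Unstructured search over N items: ~N/2 classical checks on average,
# versus on the order of (pi/4) * sqrt(N) Grover queries.
import math

N = 50_000_000                      # the "50 million books"
classical = N / 2
grover = math.pi / 4 * math.sqrt(N)
print(f"classical checks ~ {classical:,.0f}")
print(f"Grover queries   ~ {grover:,.0f}")
```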
Towards the end of 2001, IBM made a breakthrough – they proved that three times five is fifteen. Not a big deal, one would say. But they haven’t heard the whole story yet: IBM did it using only seven atoms. That’s right – seven atoms. Not a full-fledged computer with a chip and input mechanism and trillions of atoms making all the components up. Only seven atoms did the trick. That, in short, is Quantum Computing – harnessing the power of nature.

Moore’s Law coming to an end

Moore’s famous law – computing power doubles every eighteen months. So by that logic, computers of 2050 will be able to process data at the speed of human thought – five hundred trillion bytes per second. This is enough to build machines with AI comparable to human intelligence. But there’s only one problem – Moore’s law won’t stand up till 2050. By 2020, we will probably have exhausted it. There are only so many transistors we can fit on a chip. Eventually the distance between them will be too small for us to make any further improvements. So where do we go once Moore’s Law comes to an end? IBM has recently been showing off Graphene chips. Remember that Moore’s Law corresponds to the use of Silicon. Graphene is a special form of Graphite that consists of a single layer of carbon atoms. IBM managed to reach speeds of 100 GHz using it last year.

Graphene chips are out?

But then last week, IBM dismissed the idea of replacing silicon in computers with graphene. They said that because of its small width, graphene does not have a band gap and hence cannot be completely switched off. So while it works well in a laboratory, it is unlikely to replace the transistors in a CPU. While they may complement silicon in hybrid circuits such as RF circuits, hopes of using them to continue Moore’s Law have been well and truly dashed.

Why do we need Quantum Computing?

Everything mentioned above leads us to this – Quantum Computing. When Moore’s law comes to an end, we will no longer be worried about improving silicon. Hopefully, by then we will have moved onto something much better – using atoms for computing. Suppose you want to simulate the behavior of a handful of atoms. With today’s technology, you would need nothing short of a super computer and a couple of days to perform the simulation. Yet in real life, nature can simulate their behavior using just that handful of atoms. That is something we plan to achieve using Quantum Computing.

Quantum Computing will increase computing power not linearly, but exponentially. Why would we need so much power? For everything! Imagine having the power of the current breed of super computers on your desktop. “But will it run Crysis?” Oh! Yes it will! It will run thousands of instances of Crysis at the same time, if you so wish. But desktop implementation is far away. Think of the super computers locked away in laboratories. If we replace the silicon in them, imagine how much better weather prediction will get. So will tracing the motion of stars and planets, predicting future natural disasters, simulation of events like earthquakes, etc. Things that require days to compute today (like DNA sequences) will be completed in a matter of minutes. Just like we didn’t know what computers would be capable of when we first invented them, we don’t know today what we might use quantum computers for in the future. We might use them to create virtual worlds like the Matrix (imagine how you would be playing Second Life then!). We might use them to create accurate simulations. We might use them for mind reading.
We might even use them for teleportation! Because we haven’t truly harnessed the power of Quantum Computing yet, we don’t know what it is capable of.

All the uses are not without potential security issues, however. When IBM computed 3*5 using seven atoms, even the CIA took note. If and when quantum computers become a reality, any code the CIA makes will be cracked in a matter of minutes, making password protection and encryption obsolete. Fortunately, the same computers that can be used to break codes can also be used to make codes. However, this still wouldn’t work unless Quantum Computing made its way into homes. Brute force attacks will become so simple that we will have to completely rethink security. Passwords will no longer work.

This article from October 2009 outlines the current state of technology. While the technology is exciting, it is so complex that we don’t yet fully understand how it works. As a result, we are not sure how to extract the most out of it. However, with a major focus on the area today, scientists are figuring out quantum computing at a much faster pace than ever before. A few days back, an international team of scientists created 10 billion entangled qubit pairs in silicon at once – pairs linked such that changing the state of one instantly affects the state of the other, no matter how separated they are in space.

One of the biggest problems with Quantum Computing, however, is getting the results out. The calculations are performed and stored in qubits. How do you extract the solutions from there? For this, we currently have to go back to our old methods of using silicon, eating up the time saved by using Quantum Computing in the first place. Another huge problem is that they are easily affected by external disturbances. Even cosmic radiation can throw your calculations off track, since it can cause atoms to vibrate strangely. How to isolate quantum computers from noise is a big issue. Isolation is also needed because quantum information has a tendency to leak into the outside environment.

As things stand right now, Quantum Computing is still a far distance into the future. We might be lucky if we get to use them in our lifetime. But no matter how long it takes, one thing’s for sure – Quantum Computing is well on its way. And it is going to change the world forever.
Though the concept of the robot seems to be a modern and a relatively new idea, they have been around for years. The first recording in literature of a possible description of the robot is found in the Iliad in reference to “a three-legged cauldron that had ears for handles”. Later on, in 1900, we were introduced to Tik-Tok in Frank Baum's Wizard of Oz. The word robot was first used in 1920 by the Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots). This would be the first dramatization of a robot under this name. However, robots would come to life and be used for practical purposes in 1962. General Motors was the first company to use a robot for industrial purposes. Since then, robots have been used in many ways. They have come in all shapes and sizes. They have been used in the medical field, the armed forces, and in the space program.

Now as we face the 21st century, technology evolves more. A new kind of robot is being studied and researched. This robot is called the quantum robot. The quantum robot is the idea of combining quantum theory with robot technology. In other words, it is a practical use of the combination of quantum computing and robot technology. Quantum computing involves using quantum systems and quantum states to do computations. A robot is an automated machine that is capable of doing a set of complex tasks. In some applications of robots, the programming used to run the robots may be based on artificial intelligence. Artificial Intelligence is the ability of a computer system to operate in a manner similar to human intelligence. Think of artificial intelligence as if you were training a machine to act like a human.

Essentially, quantum robots are complex quantum systems. They are mobile systems with on-board quantum computers that interact with their environments. Several programs would be involved in the operation of the robot. These programs would be quantum searching algorithms and quantum reinforcement learning algorithms. Quantum reinforcement learning is based on superposition of the quantum state and quantum parallelism. A quantum state is a description of a system in terms of a set of quantum numbers. The four basic quantum numbers represent the energy level, angular momentum, spin, and magnetization. In the superposition of quantum states, the idea is to get one state to look like another.

Let's say I have two dogs. One dog knows how to fetch a bone (energy level), sit up (angular momentum), give a high five (spin), and shake hands (magnetization). Now, let's apply the superposition of quantum states. Since one dog has been trained and given the commands, the other dog must learn to mimic or copy what the first dog did. Each time a command is achieved, reinforcement is given. The reinforcement for the dog would be a bone (or no bone if the command is not achieved). In quantum reinforcement learning, it is slightly different. The idea would be similar to an “If-Then” statement. An example would be: if the quantum state has a certain energy level, then the angular momentum is a certain value. This idea of “If-Then” statements in the quantum world leads to an idea which can be a topic of its own: Quantum Logic.

Quantum parallelism simply means that computations can happen at the same time. This allows for all of the quantum numbers of the quantum system to be measured at the same time. If there are multiple quantum systems, then, by using the concept of parallelism, all systems can be measured at the same time.
Programs used for “quantum searching” are based on quantum random walks. Quantum random walks use probability amplitudes. A probability amplitude allows us to determine that there is more than one possible quantum state. In the classical world, if you type the word “Quantum” into a search engine, you get many results. You may have a tough time finding a needle in a haystack if you use just one word, but if you refine your search, let's say to “Quantum Random Walks”, then it narrows the search. The same principle applies in quantum computing to get more refined results. However, you are not necessarily searching for words; you are finding information that may correlate to a quantum state.

What would be the advantages of the Quantum Robot over the Robot?

Quantum robots are more intricate in examining their environments and doing tasks as they apply quantum effects. Because of the complexity in quantum computing, the expectations of the quantum robots would be that they are faster, more accurate, and are able to multitask better than the standard robot. The quantum robots may be able one day to give us better medical diagnoses and better data interpretation in other research fields such as defense research. In medicine, they may be able to detect pathological changes in the body by being injected through the bloodstream. In the space program, they may be able to examine the delicate environments on other planets. In the military, they may be able to detect changes in the magnetic and electric fields. They may be able to help us detect early warnings of disasters more efficiently.
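To give a flavour of the quantum random walks mentioned above, here is a small NumPy sketch of a one-dimensional Hadamard walk. It is a generic textbook construction, not tied to any specific quantum-robot design; the point it shows is that the walker's probability distribution spreads roughly linearly with the number of steps, far faster than the square-root spread of a classical random walk.

```python
# Discrete-time Hadamard quantum walk on a line versus a classical random walk.
import numpy as np

T = 50                                   # number of steps
n = 2 * T + 1                            # enough sites that the walker never hits an edge
amp = np.zeros((n, 2), dtype=complex)    # amp[x, c]: amplitude at site x with coin state c
amp[T, 0] = 1 / np.sqrt(2)               # symmetric initial coin state (|0> + i|1>)/sqrt(2)
amp[T, 1] = 1j / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin flip"

for _ in range(T):
    amp = amp @ H.T                      # toss the coin at every site
    shifted = np.zeros_like(amp)
    shifted[:-1, 0] = amp[1:, 0]         # coin 0 moves one site to the left
    shifted[1:, 1] = amp[:-1, 1]         # coin 1 moves one site to the right
    amp = shifted

prob = (np.abs(amp) ** 2).sum(axis=1)    # measurement probabilities over positions
positions = np.arange(n) - T
spread_quantum = np.sqrt((prob * positions**2).sum())
print(f"quantum walk spread after {T} steps: {spread_quantum:.1f}")
print(f"classical random walk spread:        {np.sqrt(T):.1f}")
```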
First, some background:

- Quantum computers use qubits instead of bits. Classical computers use electrical or optical pulses that represent zeros and ones while quantum computers typically use subatomic particles such as electrons or photons. Different quantum computers use different strategies to create and manage qubits.
- Quantum computers harness the principles of quantum mechanics such as superpositions (where qubits can represent different combinations of 0 and 1 simultaneously) and entanglement (where the state of one qubit instantaneously affects another) to perform tasks faster than classical computers.
- While quantum computers will have applications in many fields from materials science to pharmaceuticals research, they will probably not totally replace classical computers (Giles).
- Currently, complex mathematical formulas are used to encrypt and decrypt data.
- Symmetric cryptography uses the same key for both encryption and decryption. Asymmetric, or public-key, cryptography uses two mathematically linked keys, “one shared publicly to let people encrypt messages for the key pair’s owner, and the other stored privately by the owner to decrypt messages.” (Denning)
- While symmetric cryptography is much faster and is thus used for communications and stored data, public-key cryptography is used for exchanging symmetric keys and digital authentication. Because almost all internet applications use a combination of the two, everything needs to be secure. (Denning)
- Quantum computers could break symmetric cryptography by simply trying all possible keys. While they would be much faster than classical computers and thus be able to realistically break keys, making keys longer would be an easy solution.
- Quantum computers pose a great threat to public-key cryptography.
- “The algorithms for public-key encryption that are popular today—which are called RSA, Diffie-Hellman, and elliptic curve—all make it possible to start with a public key and mathematically compute the private key without trying all the possibilities.” (Denning)
- Public-key cryptography is currently uncrackable when very long key pairs are used. Both classical and quantum computers don’t have the ability to factor large enough numbers or perform advanced math quickly enough to crack them. However, in the future, a sufficiently advanced quantum computer could easily break public-key encryption. (Denning)
- There are options for new secure methods: in 2016 the U.S. National Institute of Standards and Technology evaluated 69 potential new methods for post-quantum cryptography, which has since been reduced to 26. Unfortunately, it will likely be years before any draft standards are published. (Giles)
- Supersingular isogeny key exchange
- Lattice-based cryptography is “relatively simple, efficient, and highly parallelizable.” Although lattice-based systems have been proven secure in difficult scenarios, it is difficult to say for sure how secure they are. (Chen)
- Code-based cryptography includes all cryptosystems, symmetric or asymmetric, whose security relies, partially or totally, on the hardness of decoding in a linear error correcting code. (Sendrier)
- Multivariate polynomial cryptography is “based on the difficulty of solving systems of multivariate polynomials over finite fields.” (Chen)
- Hash-based signatures use hash functions.
Although there are drawbacks, “their security, even against quantum attacks, is well understood.” (Chen)

- There are many other options being explored (Chen).

This all seems rather dire (and complicated). What should the response be?

What U.S. Governments and Corporations Should Do:

Game Theory shows that allied governments and corporations developing quantum technologies should collaborate. For example, Sara Bjerg Moller, a professor of International Relations, writes that one of NATO’s goals should be countering China (Moller). One good way to achieve this goal would be to work together to make sure China does not develop a sufficiently advanced quantum computer first. Another example of the importance of collaboration is U.S. corporations. Although Google, IBM, Microsoft, and others are all competing, it is in all of the corporations’ best interest to make sure a malicious group does not get there first, so that customers’ data is not compromised. The idea of collaboration to implement the post-quantum cryptography system is also really important because everyone will benefit from security. Sadly, governments and corporations being what they are, collaboration is unlikely.

What Researchers Should Do:

Unfortunately, game theory does not tell us whether, although picking the best and most efficient post-quantum cryptography technique is important, the highest priority should instead be implementing a workable system quickly. One of the interesting aspects of game theory is it does not always have an answer.

What You Can Do:

A lot of these ideas aren’t in the public consciousness yet. Learn more! Talk to people you know! Ask your representatives and governments what they are doing to prepare. If this interests you, both cybersecurity and quantum computing are quickly growing fields that will need more researchers! My works cited page is a great place to find more resources. I welcome feedback, thoughts, and questions in the comments! Still Interested? Check out works cited for more info.
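To make the hash-based approach from the list above less abstract, here is a toy Lamport one-time signature in plain Python. It is a classroom sketch only (one-time use, no encoding details, not a standardized scheme such as XMSS or SPHINCS+), but it shows how a signature's security can rest on nothing more than a hash function.

```python
# Toy Lamport one-time signature built from SHA-256.
import hashlib, os

def keygen():
    # 256 pairs of random secrets; the public key is the hash of each secret.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest select which secrets to reveal.
    digest = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify(pk, b"post-quantum hello", sig))   # True
print(verify(pk, b"tampered message", sig))     # False (with overwhelming probability)
```

Each key pair must be used only once; revealing two signatures under the same key leaks enough secrets for forgeries, which is why practical hash-based schemes layer many one-time keys into a tree.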
Quantum dots are tiny particles or nanocrystals of a semiconducting material with diameters in the range of 2-10 nanometers (10-50 atoms). They were first discovered in 1980.[1] They display unique electronic properties, intermediate between those of bulk semiconductors and discrete molecules, that are partly the result of the unusually high surface-to-volume ratios for these particles.[2-4] The most apparent result of this is fluorescence, wherein the nanocrystals can produce distinctive colors determined by the size of the particles.

Due to their small size, the electrons in these particles are confined in a small space (quantum box), and when the radius of the semiconductor nanocrystal is smaller than the exciton Bohr radius (the exciton Bohr radius is the average distance between the electron in the conduction band and the hole it leaves behind in the valence band), there is quantization of the energy levels according to Pauli’s exclusion principle (Figure 1).[5,6] The discrete, quantized energy levels of these quantum particles relate them more closely to atoms than bulk materials and have resulted in them being nicknamed 'artificial atoms'. Generally, as the size of the crystal decreases, the difference in energy between the highest valence band and the lowest conduction band increases. More energy is then needed to excite the dot, and concurrently, more energy is released when the crystal returns to its ground state, resulting in a color shift from red to blue in the emitted light. As a result of this phenomenon, these nanomaterials can emit any color of light from the same material simply by changing the dot size. Additionally, because of the high level of control possible over the size of the nanocrystals produced, these semiconducting structures can be tuned during manufacturing to emit any color of light.[7] Quantum dots can be classified into different types based on their composition and structure.

Figure 1. Splitting of energy levels in quantum dots due to the quantum confinement effect; the semiconductor band gap increases with decreasing size of the nanocrystal.

These nanodots can be single-component materials with uniform internal compositions, such as chalcogenides (selenides, sulfides or tellurides) of metals like cadmium, lead or zinc, for example, CdTe (Product No. 777951) or PbS (Product No. 747017). The photo- and electroluminescence properties of core-type nanocrystals can be fine-tuned by simply changing the crystallite size.

The luminescent properties of quantum dots arise from recombination of electron-hole pairs (exciton decay) through radiative pathways. However, the exciton decay can also occur through nonradiative methods, reducing the fluorescence quantum yield. One of the methods used to improve the efficiency and brightness of semiconductor nanocrystals is growing shells of another, higher band gap semiconducting material around them. These particles with small regions of one material embedded in another with a wider band gap are known as core-shell quantum dots (CSQDs) or core-shell semiconducting nanocrystals (CSSNCs). For example, quantum dots with CdSe in the core and ZnS in the shell (Product Nos. 748056, 790192) available from Sigma-Aldrich Materials Science exhibit greater than 50% quantum yield. Coating quantum dots with shells improves quantum yield by passivating nonradiative recombination sites and also makes them more robust to processing conditions for various applications.
This method has been widely explored as a way to adjust the photophysical properties of quantum dots.8-10 The ability to tune optical and electronic properties by changing the crystallite size has become a hallmark of quantum dots. However, tuning the properties by changing the crystallite size could cause problems in many applications with size restrictions. Multicomponent dots offer an alternative method to tune properties without changing crystallite size. Alloyed semiconductor nanodots with both homogeneous and gradient internal structures allow tuning of the optical and electronic properties by merely changing the composition and internal structure without changing the crystallite size. For example, alloyed quantum dots of composition CdSxSe1-x/ZnS with a 6 nm diameter emit light of different wavelengths simply by changing the composition (Product Nos. 753742, 753793) (Figure 2). Alloyed semiconductor quantum dots, formed by alloying together two semiconductors with different band gap energies, exhibit interesting properties distinct not only from the properties of their bulk counterparts but also from those of their parent semiconductors. Thus, alloyed nanocrystals possess novel and additional composition-tunable properties aside from the properties that emerge due to quantum confinement effects.11 Figure 2. Photoluminescence of alloyed CdSxSe1-x/ZnS quantum dots of 6 nm diameter. The material emits different colors of light as the composition is tuned. The unique size- and composition-tunable electronic properties of these very small semiconducting quantum dots make them very appealing for a variety of applications and new technologies.12 Quantum dots are particularly significant for optical applications owing to their bright, pure colors, their ability to emit a rainbow of colors, and their high efficiencies, long lifetimes and high extinction coefficients. Examples include LEDs and solid state lighting, displays and photovoltaics.7,13,14 Being zero dimensional, quantum dots have a sharper density of states than higher-dimensional structures. Their small size also means that electrons do not have to travel as far as in larger particles, so electronic devices can operate faster. Examples of applications taking advantage of these unique electronic properties include transistors, solar cells, ultrafast all-optical switches and logic gates, and quantum computing, among many others.13-15 The small size of the dots allows them to go anywhere in the body, making them suitable for biomedical applications such as medical imaging and biosensing. At present, fluorescence-based biosensors depend on organic dyes with broad spectral widths and short lifetimes, which limits them to tagging agents with a small number of colors. Quantum dots, on the other hand, can emit across the whole spectrum, are brighter and degrade little over time, making them superior to the traditional organic dyes used in biomedical applications.16
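As a companion to the composition-tuning discussion above, here is a hedged sketch of how an alloy band gap is often interpolated between its parent binaries with a bowing term. The CdS and CdSe gaps and the bowing parameter are illustrative literature-style values I am assuming, and quantum-confinement shifts from the finite dot size are ignored.

```python
# Rough sketch of composition tuning in an alloyed CdS_x Se_(1-x) nanocrystal.
# Parent band gaps and the bowing parameter are assumed approximate values,
# not data from the article; size-dependent confinement shifts are ignored.
H_C_EV_NM = 1239.84          # hc in eV*nm, for converting a gap to a wavelength

E_CDS_EV = 2.42              # approximate bulk band gap of CdS (assumed)
E_CDSE_EV = 1.74             # approximate bulk band gap of CdSe (assumed)
BOWING_B = 0.3               # assumed bowing parameter for the alloy

def alloy_gap_ev(x: float) -> float:
    """Quadratic (Vegard-plus-bowing) interpolation for CdS_x Se_(1-x)."""
    return x * E_CDS_EV + (1 - x) * E_CDSE_EV - BOWING_B * x * (1 - x)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    gap = alloy_gap_ev(x)
    print(f"x = {x:4.2f}:  gap ~ {gap:4.2f} eV  ->  emission ~ {H_C_EV_NM / gap:3.0f} nm")
```

The knob here is composition rather than size: sweeping x walks the estimated emission from red toward green at a fixed crystallite diameter, which is the behavior the alloyed 6 nm dots above are designed to exploit.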
<urn:uuid:a530a190-1d8c-4c37-9d93-d7a2444edbb0>
CC-MAIN-2021-43
https://www.sigmaaldrich.com/ES/en/technical-documents/technical-article/materials-science-and-engineering/biosensors-and-imaging/quantum-dots
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00111.warc.gz
en
0.906732
1,310
4.0625
4
Hold it right there: how (and why) to stop light in its tracks We are taught in school that the speed of light is a universal constant. Yet we also know that light travels more slowly through materials such as water and glass. Recently, we have even discovered that light can actually be made to stand completely still. In fact, it was first done a long time ago ... in a galaxy far, far away. In a scene from the latest Star Wars film, Kylo Ren stops a blaster pulse using The Force. The pulse is frozen, shimmering in mid-air. More recently, for our paper published in Nature Physics this week, we stopped a pulse of laser light using a rather different method, by trapping it in a cloud of cold rubidium atoms. Rubidium and other similar atoms have been used previously to slow down and store light and even to trap it. These systems all work by absorbing and re-emitting laser light from the atoms in a controlled way. But we found a new way to trap light, by using the light to write a particular "shape" into the atoms. When the light was re-emitted, it became trapped in the atoms. It turned out that once we had picked the right directions and frequencies for our lasers, the experiment was pretty straightforward. The hard part was figuring out the right frequencies and directions! Why do this? We are interested in trapping light because our ultimate goal is to make individual light particles, or photons, interact with one another. By interacting directly, the photons will become entangled. By scaling this up to many interactions, involving many photons, we could theoretically create the intricate states of information necessary for powerful quantum computing. Unfortunately, photons interact incredibly weakly with each other, but they can interact more strongly if they can be confined in a particular material long enough to enhance the interaction to a more useful level. In fact, these sorts of interactions have recently been demonstrated by multiple research groups around the world, often by using atom clouds to confine the light. But, as I'll explain below, our new stationary light system may have advantages when it comes to getting photons to interact. Quantum computing is an exciting and rapidly evolving field of research, and our team is part of the Australian Research Council's Centre for Quantum Computation and Communication Technology. There are many different potential platforms for quantum computing. For example, the centre's UNSW team has demonstrated quantum computing operations using phosphorus atoms embedded in silicon chips. But our group mainly studies light, not least because it is very likely that light will play some role in quantum computers. It offers a convenient way to send quantum information within or between computers because, unlike atoms or electric currents, it is not vulnerable to stray magnetic or electric fields. It may even be possible to perform quantum computation using light, and this is the idea that motivates our research into stationary light. Our team has been able to store and retrieve pulses of light in the same system. We have also been able to show that quantum information encoded in these light pulses is preserved - meaning that it can form the basis of computing memory. However, this is not sufficient to generate the sort of interaction we want, because the light is entirely absorbed into the atoms and it can no longer interact. Instead, we need to trap light in the memory, not just store it. 
While researching how to trap light in the atomic memory, I discovered using a computer simulation that a particular kind of shape written into the atomic memory would produce stationary light. By retrieving the light in two directions at once, the light could actually be trapped in the memory. All the light being re-emitted throughout the memory would destructively interfere at the ends of the memory and not escape. The simulations also predicted other interesting behaviour: if the wrong shape was written, some light would escape, but the memory would rapidly evolve to a shape where the light is trapped. This could be useful for stationary light by making it more robust, but it may also be useful for other optical processing. We were able to demonstrate all of this behaviour experimentally using our atomic memory. Unlike Kylo Ren's frozen blaster pulse, it was not possible to see the stationary light directly (to see something, photons have to travel from the object to your eyes, and these photons were not going anywhere). Instead, the fact the behaviour of the system matched our predictions so precisely confirmed that the light was indeed stationary. Light has previously been trapped in a similar system. What makes our system new and interesting is that we believe it is the most convincing demonstration so far, but also that the behaviour of our stationary light is radically different. We believe that this new behaviour, where the light travels more freely through the memory, could allow for stronger nonlinear interactions. This experiment is only a single step on the long road to optical quantum computing. The next step will be to prove that photons can indeed interact with one another within our system. Looking much further down the road, we hope this will give rise to a device that can use some of our discoveries, among many others, to generate the intricate states of many entangled photons necessary for an optical quantum computer.
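The destructive-interference picture described above can be illustrated with a toy standing-wave calculation: two equal counter-propagating fields sum to 2 E0 cos(kz) cos(wt), which vanishes at the nodes at all times. This is only a cartoon of the interference idea, not a model of the atomic memory itself.

```python
# Toy illustration of the interference idea behind stationary light: two
# counter-propagating fields of equal amplitude form a standing wave,
# E(z, t) = 2*E0*cos(k z)*cos(w t), whose nodes carry no field at any time.
import numpy as np

E0 = 1.0
wavelength = 1.0
k = 2 * np.pi / wavelength

z = np.linspace(0.0, 2.0, 9)             # positions in units of the wavelength
for t_phase in (0.0, np.pi / 3, 2 * np.pi / 3):
    forward = E0 * np.cos(k * z - t_phase)
    backward = E0 * np.cos(k * z + t_phase)
    total = forward + backward            # equals 2*E0*cos(k z)*cos(t_phase)
    print(f"wt = {t_phase:4.2f}:", np.round(total, 3))

# Positions where cos(k z) = 0 (z = 0.25, 0.75, ...) stay at zero for all
# times: light re-emitted with the right spatial phase cannot escape there.
```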
<urn:uuid:fe9033d0-a39e-4430-a12c-84c413574de1>
CC-MAIN-2021-43
http://www.catchnews.com/science-technology/hold-it-right-there-how-and-why-to-stop-light-in-its-tracks-1475155691.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00233.warc.gz
en
0.954023
1,054
3.671875
4
Nothing is perfect, not even the indifferent, calculating bits that make up the basis of computers. But for the first time, a collaborative team that includes University of Maryland (UMD) scientists has shown that an assembly of quantum computing pieces can be better than the shakiest individual components used to build it. The research team that includes Christopher Monroe, a fellow of the Joint Quantum Institute and a College Park Professor of Physics, together with other UMD researchers and colleagues from Duke University, shared how they took this revolutionary step toward dependable, practical quantum computers in a paper published recently in the journal Nature. In their experiment, the team joined together several qubits — the quantum version of the bits that encode data in standard computers as zeros and ones — to work together as one unit. This “logical qubit” is built on a quantum error correction code that can spot and rectify an error that happens in one of the 13 individual qubits that make up the logical qubit “team.” Furthermore, the design of the logical qubit is fault-tolerant — that is, it can contain errors to diminish their negative effects. The demonstration strengthens the great potential of quantum computers, which are expected to be capable of functioning beyond the reach of standard or “classical” computers, in part because qubits are much more versatile than standard bits and are not restricted to merely being zero or one. Yet quantum faults have long hindered the effort to expand these futuristic technologies to greater levels of power; unlike the transistors that encode data in standard computer chips, a qubit is vulnerable to errors from minute environmental disturbances, like a temperature change or vibration, that ruin its quantum state. A group of qubits that function in unison can help surpass such limits, however, said Monroe. Monroe is also co-founder and chief scientist at IonQ, a quantum computing company in College Park that is built partly on the technology he created as a UMD scientist. Qubits composed of identical atomic ions are natively very clean by themselves. However, at some point, when many qubits and operations are required, errors must be reduced further, and it is simpler to add more qubits and encode information differently. The beauty of error correction codes for atomic ions is they can be very efficient and can be flexibly switched on through software controls. Christopher Monroe, Fellow, Joint Quantum Institute For the first time, a logical qubit has been demonstrated to be more dependable than the most error-prone step necessary to make it. The experiment showed that the team could verify that it correctly formed the logical qubit in a preferred quantum state 99.4% of the time, compared to the estimated 98.9% success rate of the six quantum processes (known as quantum operations) that they employed to create it. That may not seem like a significant difference, but it is a vital step in the quest to design much bigger quantum computers. If the six quantum operations were assembly line workers, each concentrated on one task, the joint error rate of the workers would result in the line only creating beneficial products 93.6% of the time, much lower than the 99.4% efficacy rate when the “workers” cooperate to reduce the chance of quantum errors compounding and tarnishing the outcome. 
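The assembly-line comparison above is just the compounding of per-operation success probabilities, and the quoted figures can be checked in a couple of lines (the numbers below are the ones reported in the article).

```python
# Reproducing the "assembly line" arithmetic quoted above: if each of the six
# operations succeeds 98.9% of the time and errors simply compound, the chain
# succeeds 0.989**6 of the time; compare with the 99.4% success rate reported
# for the error-corrected logical qubit.
per_operation_success = 0.989
n_operations = 6
logical_qubit_success = 0.994

uncorrected_chain = per_operation_success ** n_operations
print(f"Uncorrected chain of {n_operations} operations: {uncorrected_chain:.1%}")  # ~93.6%
print(f"Reported logical-qubit success rate:   {logical_qubit_success:.1%}")
```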
Although it may appear wasteful to employ so many separate qubits and steps simply to make something work as a single qubit, the distinctive computational functions of quantum computers could make logical qubits a minor cost to bear. If quantum computers can be made reliable, they will be robust tools capable of computations projected to transform sectors including security, healthcare and finance. The results were realized using Monroe’s ion-trap system at UMD, which employs up to 32 individual charged atoms — ions — that are cooled with lasers and suspended over electrodes on a chip. The ions can then be employed as qubits through additional laser tweaks. We have 32 laser beams. And the atoms are like ducks in a row; each with its own fully controllable laser beam. I think of it like the atoms form a linear string and we're plucking it like a guitar string. We're plucking it with lasers that we turn on and off in a programmable way. And that's the computer; that's our central processing unit. Christopher Monroe, Fellow, Joint Quantum Institute By effectively developing a fault-tolerant logical qubit with this system, the scientists have demonstrated that meticulous, creative designs have the potential to unleash quantum computing from the limitation of the unavoidable errors of the existing state of the art. What's amazing about fault tolerance, is it's a recipe for how to take small unreliable parts and turn them into a very reliable device. And fault-tolerant quantum error correction will enable us to make very reliable quantum computers from faulty quantum parts. Kenneth Brown, Study Co-Author and Professor of Electrical and Computer Engineering, Duke University Besides Monroe and Brown, the paper’s co-authors are JQI graduate student Laird Egan; JQI graduate students Andrew Risinger, Daiwei Zhu, and Debopriyo Biswas; JQI research scientist Marko Cetina; Duke postdoctoral researchers Crystal Noel and Michael Newman; Duke University physics graduate student Dripto M. Debroy; and Georgia Institute of Technology graduate student Muyuan Li.
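The 13-qubit code used in the experiment is far richer than anything that fits here, but the principle Brown describes, building a reliable whole from unreliable parts, can be illustrated with the much simpler classical 3-bit repetition code: copy one bit onto three noisy bits and decode by majority vote. This is a toy analogue chosen for illustration, not the scheme from the paper.

```python
# Toy illustration (classical, not the 13-qubit code from the experiment):
# protect one logical bit by copying it onto three noisy bits and decoding by
# majority vote. With per-bit flip probability p, the logical error rate is
# 3*p**2*(1-p) + p**3, which beats p itself whenever p < 0.5.
import random

def noisy_copy(bit: int, p_flip: float) -> int:
    """Return the bit, flipped with probability p_flip."""
    return bit ^ 1 if random.random() < p_flip else bit

def majority(bits: list) -> int:
    return 1 if sum(bits) >= 2 else 0

def logical_error_rate(p_flip: float, trials: int = 200_000) -> float:
    errors = 0
    for _ in range(trials):
        logical = random.randint(0, 1)
        physical = [noisy_copy(logical, p_flip) for _ in range(3)]
        if majority(physical) != logical:
            errors += 1
    return errors / trials

for p in (0.01, 0.05, 0.1):
    simulated = logical_error_rate(p)
    predicted = 3 * p**2 * (1 - p) + p**3
    print(f"p = {p:4.2f}: single bit fails {p:.3f}, "
          f"encoded bit fails ~{simulated:.4f} (theory {predicted:.4f})")
```

For small per-bit error rates the encoded failure rate scales roughly as 3p^2, which is why redundancy pays off; the same intuition, in quantum form, motivates the fault-tolerant logical qubit above.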
<urn:uuid:8e083127-6187-40ec-918b-0e01c9757f2e>
CC-MAIN-2021-43
https://www.azoquantum.com/News.aspx?newsID=8456
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00114.warc.gz
en
0.937864
1,105
3.6875
4
Supercomputers have a high level of computing performance compared to a general-purpose computer. In this post, we cover all the details of supercomputers, such as history, performance and applications. We will also see the top 3 supercomputers and the National Supercomputing Mission. What is a supercomputer? - A computer with a high level of computing performance compared to a general-purpose computer, with performance measured in FLOPS (floating point operations per second). - Great speed and great memory are the two prerequisites of a supercomputer. - The performance is generally evaluated in petaflops (1 followed by 15 zeros). - Memory is on average around 250,000 times that of the normal computers we use on a daily basis. - Housed in large clean rooms with high air flow to permit cooling. - Used to solve problems that are too complex and huge for standard computers. History of Supercomputers in the World - Most of the computers on the market today are smarter and faster than the very first supercomputers and, if the history of innovation repeats itself, today’s supercomputers will turn into tomorrow’s ordinary computers. - The first supercomputer was built for the United States Department of Defense by Seymour Cray at Control Data Corporation (CDC) in 1957. - The CDC 1604 was one of the first computers to replace vacuum tubes with transistors. - In 1964, Cray’s CDC 6600 replaced Stretch as the fastest computer on Earth, with 3 million floating-point operations per second (FLOPS). - The term supercomputer was coined to describe the CDC 6600. - Earlier supercomputers had very few processors, but as the technology evolved and vector processing gave way to parallel processing, the number of processors multiplied manifold, resulting in the ultrafast supercomputers of the current decade. History of Supercomputers in India - As the saying goes, “necessity is the mother of invention”: India started its journey towards supercomputers because it was denied the import of Cray supercomputers from the United States of America due to an arms embargo imposed on India after its nuclear tests in the 1970s. - The US was of the opinion that India might use them for military rather than civilian purposes, since supercomputers came under the dual-use technology group. - The ideation phase started in the 1980s. - The first indigenous supercomputer, PARAM 8000, was developed in 1991 by the Centre for Development of Advanced Computing. - Russian assistance in the development was paramount. - PARAM 8000 was replicated and installed at ICAD Moscow in 1991 under Russian collaboration. - In 2007, India held a spot in the top 10 for supercomputer speed. - As of July 2016, India had 9 supercomputers with speeds in the top 500, but none in the top 10. How powerful are supercomputers compared to ordinary computers? - The performance of ordinary computers is generally quoted in MIPS (million instructions per second). - MIPS refers to the fundamental programming commands (read, write, store, and so on) the processor can manage. - Therefore, computers are compared based on the number of MIPS they can handle, which is typically reflected in the processor speed rated in gigahertz. - Supercomputers are rated a different way because they deal with scientific calculations. - They are measured according to how many floating point operations per second (FLOPS) they can do (a short worked comparison appears at the end of this article). 
- Since supercomputers were first developed, their performance has been measured in successively greater numbers of FLOPS. World’s top 3 supercomputers - Sunway TaihuLight – developed in China, with a computing power of 93 petaflop/s. - The Tianhe-2 (MilkyWay-2) – from China. This supercomputer is capable of 33.8 petaflop/s. - Titan – from the US. Its computing capacity is 17.5 petaflop/s. What is next-generation supercomputing? - Optical computing: calculations at close to the speed of light, using optical devices and connections in place of transistors. The latest developments in this field have already taken place, with optical equivalents of transistors being switched using photons rather than electrons. Since photons travel at the speed of light, calculations may be done at near-light speed. - DNA computing: calculations by recombining DNA in a parallel environment. Numerous possibilities are tried at the same time; the optimal solution will be “the strongest to survive.” - Quantum computing: not yet in practical use, with only proofs of concept done so far, but think of it as certain calculations being completed almost in the blink of an eye. What are the Applications of a Supercomputer? - Academic research: for observing and simulating phenomena that are too big, too small, too fast, or too slow to observe in laboratories. For example, astrophysicists use supercomputers as “time machines” to explore the past and the future of our universe. Another important area is quantum mechanics. - Weather and climate modeling, to forecast with better accuracy by analyzing multiple factors and their interrelationships. - Drug discovery: for example, information on how a protein folds leads to the discovery of new drugs. - Monsoon forecasting using dynamic models. - Big data mining, to strengthen and better mobilize the Digital India mission. - Oil and gas exploration, thereby helping ensure India’s energy security. - Airplane and spacecraft aerodynamics research and development, leading to better safety standards and smoother connectivity, and thereby easier transportation. - Simulation of nuclear fission and fusion processes, yielding better nuclear infrastructure models and contributing to the nation’s energy security. - Molecular dynamics: supercomputer simulations allow scientists to dock two molecules together to study their interaction, which may lead to the development of innovative materials for future-generation technologies. - In 1994, a supercomputer was used to alert scientists about the collision of a comet with Jupiter, giving them time to prepare to observe and record the event, analysis that is useful for predicting future comet collisions with the Earth. What are the initiatives taken by the Government of India? - In the 12th five-year plan, the Government of India (GOI) committed that $2.5bn would be sanctioned for research in the supercomputing field. - In 2015, the GOI approved a 7-year supercomputing programme known as the National Supercomputing Mission, which aims to create a cluster of 73 supercomputers connecting various academic and research institutions across India, with a $730mn investment. Some facts for Prelims - There are no exaflop (beyond petaflop) supercomputers in the world yet, and the first such machine is expected around 2019-20. - India is also preparing to launch its own exaflop supercomputer by 2020. 
- China’s Sunway TaihuLight is the fastest supercomputer (93 Pflops), and China has more supercomputers than the USA as of July 2016. Possible Sample Questions for Mains - What are supercomputers? What is their status in India? How do they help in the development of India and the world? - Supercomputers have more strategic significance than scientific. Illustrate. Sample Questions for Prelims Question: With reference to supercomputers, petaflops are related to? - A – The latest model of supercomputer developed by China. - B – The latest model of supercomputer developed by the USA. - C – The performance of supercomputers. - D – Floppy disks, which are used on normal desktop computers. Answer: (Option C) The performance of supercomputers. Learning Zone: The performance is generally evaluated in petaflops (1 followed by 15 zeros), and some supercomputers may even perform quadrillions of FLOPS. Article by: Nishant Raj. The author is an IIT Kharagpur alumnus.
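To make the MIPS/FLOPS comparison in the article above concrete, here is the short worked comparison referred to earlier. The desktop figure is a round number assumed for a typical consumer machine, not a value from the article; the 93 petaflop/s value for Sunway TaihuLight is the one quoted above.

```python
# Back-of-the-envelope FLOPS comparison. The desktop figure is an assumed
# round number for a typical consumer CPU; the 93 petaflop/s figure for
# Sunway TaihuLight is the one quoted in the article.
PETA = 1e15

machines = {
    "typical desktop (assumed)": 100e9,   # ~100 gigaflop/s, illustrative only
    "Sunway TaihuLight": 93 * PETA,       # 93 petaflop/s
}

workload_flops = 1e18                     # an arbitrary example workload

for name, flops in machines.items():
    seconds = workload_flops / flops
    print(f"{name:>28}: {flops:9.2e} FLOPS -> {seconds:12.1f} s "
          f"({seconds / 86400:8.4f} days) for 1e18 operations")
```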
<urn:uuid:45940c6c-4f10-4bf3-be16-d941644a453d>
CC-MAIN-2021-43
https://www.clearias.com/supercomputers/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585837.82/warc/CC-MAIN-20211024015104-20211024045104-00315.warc.gz
en
0.934922
1,692
3.703125
4
Quantum photonic interconnect and entanglement distribution between two integrated Si photonic chips. Credit: arXiv:1508.03214 [quant-ph] For modern electronic devices to work, there must be some channels for the different parts to use to convey information between them—such channels are usually either wires carrying electricity or fibers carrying photons and are called interconnects. But as researchers shrink down the parts, the interconnects more and more represent a bottleneck. Worse, as scientists conduct research into creating a truly quantum computer, the problem of creating interconnects for them has become a serious issue. Now, in this new effort, the research team is claiming to have found a solution—one where a separate entanglement stage is used to preserve the original entanglement needed as part of normal operations—demonstrating a way to connect two photonic chips. To allow for interconnection, the researchers ran two sources of photons along one of the chips, on channels that overlapped—when the photons met in the overlap area, they became entangled, and that entanglement was then carried along different paths in the chip. They next ran the photons through a device that converted that path entanglement into a whole new type of entanglement, one involving polarization, which also caused the creation of new entangled photons. Those newly entangled polarized photons were then passed into an optical fiber that ran between the two chips. The whole process was then reversed in the second chip, where the polarized photons were converted back to path-entangled photons, which then behaved exactly like the photons in the first chip. The team conducted multiple different types of tests to prove that entanglement was preserved throughout the interconnection process. The team acknowledges that the process is still too inefficient to be implemented in real devices, but believes further refinement will lead to a usable solution. But they have shown that it is possible to interconnect quantum devices, which should come as a relief to those working on building a quantum computer. More information: Quantum Photonic Interconnect, arXiv:1508.03214 [quant-ph] arxiv.org/abs/1508.03214 Abstract: Integrated photonics has enabled much progress towards quantum technologies. However, many applications, e.g. quantum communication, sensing, and distributed and cloud quantum computing, require coherent photonic interconnection between separate sub-systems, with high-fidelity distribution and manipulation of entanglement between multiple devices being one of the most stringent requirements of the interconnected system. Coherently interconnecting separate chips is challenging due to the fragility of these quantum states and the demanding challenges of transmitting photons in at least two media within a single coherent system. Here, we report a quantum photonic interconnect demonstrating high-fidelity entanglement distribution and manipulation between two separate chips, implemented using state-of-the-art silicon photonics. 
Entangled states are generated and manipulated on-chip, and distributed between the chips by interconverting between path-encoding and polarisation-encoding. We use integrated state analysers to confirm a Bell-type violation of S=2.638+-0.039 between two chips. With improvements in loss, this quantum interconnect will provide new levels of flexible systems and architectures for quantum technologies.via Arxiv Blog (Phys.org)—An international team of researchers has found a way to interconnect two quantum devices, allowing photons to move between the two, all while preserving entanglement. In their paper they have uploaded to the preprint server arXiv, the team describes their process and their hopes for tweaking it to make it more efficient. Citation: A way has been found to interconnect quantum devices including preserving entanglement (2015, August 21) retrieved 18 August 2019 from https://phys.org/news/2015-08-interconnect-quantum-devices-entanglement.html Kolkata: Following the direction of Chief Minister Mamata Banerjee, the state Transport department is going to start a state-wide drive to check the fitness of public transport vehicles.According to a senior official of the state Transport department, checking of goods vehicles is carried out round-the-year.”Now, we are going to initiate a process to carry out periodical fitness checking of public transport vehicles,” the official said, adding that initially emphasis will be given on checking fitness of buses and mini-buses that ply on different routes. It has been stated that there are two reasons behind starting fitness check of public transport vehicles. First of all, it will help in reducing the number of road accidents and will also ensure that commuters get better service, particularly as the bus fares have recently gone up. It may be mentioned that the Chief Minister has called for necessary steps to ensure periodical fitness checking of vehicles to avoid accidents. Also Read – Rain batters Kolkata, cripples normal lifeSince stress will be initially given on checking the fitness of buses, the official said that they will be checking whether a passenger is facing any problem while travelling in the vehicles. Citing some examples, the official added that they will be checking the condition of tyres being used in the public transport vehicles.It may be mentioned that use of good quality tyres in buses is mandatory to check accidents. According to experts, the chances of skidding goes up when sudden brake is applied if the condition of tyre is poor or if the vehicle is fitted with rethreaded tyres. Mainly in monsoon, many accidents take place due to the use of rethreaded tyres, as vehicles do not stop despite applying brakes. Also Read – Speeding Jaguar crashes into Mercedes car in Kolkata, 2 pedestrians killedBesides checking the quality of tyres, the officials will also check the overall condition of a bus to find out whether water drips from its ceiling and causes inconvenience to passengers. They will also check the condition of seats.”The checking will be carried out at different points and it will be done following the pick and choose method. 
Necessary steps will be taken if they find that the vehicles are not maintained properly,” the official said, adding that their endeavour is to ensure better amenities for passengers.The bus owners, in whose vehicles any fault is found, will be directed to repair the same, before plying them again.This comes at the time when the state government has taken up the state-wide Safe Drive Save Life campaign to bring down the rate of accidents. In what comes as a rare tributary endeavor of Khadi and Village Industries Commission (KVIC) to Rashtrapita on his 150th Birth Anniversary Year, Vice President Muppavarapu Venkaiah Naidu will unveil Mahatma Gandhi’s ‘Grand Wall Mural’ at New Delhi Municipal Corporation (NDMC) Main Building on January 31, 2019.This 150 square meters clay mural is made of ‘Kulhads’ from the hands of 150 village potters across India, who assembled to make it at Morbi in Gujarat. Also Read – Add new books to your shelfEnthused with the classical efforts shown by the artisans, KVIC Chairman Vinai Kumar Saxena said that KVIC had decided to mark the occasion with practical display of Gandhian thoughts on village industries, at a time when the whole nation is celebrating Gandhiji’s 150th birth anniversary in different styles. “With the co-operation of these 150 highly-skilled potters from all over the country, we have made this wall mural, using their kulhads,” he said, adding, “They had brought clay of their respective regions which has been mixed to produce the kulhads for the mural. Kulhads of 75 mm diameter and 90 mm height were made on electric pottery wheels given by KVIC.” “The potters have finished the kulhads as per tradition and design requirements. Altogether 5,000 kulhads were produced, of which the best 3,870 kulhads – making them all-weather proof by baking at high temperatures and glazing – were used in the final design.” Ahmedabad-based designer terracotta and ceramic art studio, Clay Club Innovations, had designed the artwork. Expressing his happiness on this tributary endeavour of KVIC, the Minister of State for MSME Giriraj Singh, who will be the Guest of Honour on this occasion, said, “It will really be a proud moment for the nation when KVIC’s grand mural will prominently be displayed in the heart of the national Capital – showcasing the combined ‘sweaty’ efforts of village potters across the nation.”
<urn:uuid:70cf685d-2bee-4b56-be55-9c1f58ac760b>
CC-MAIN-2021-43
https://ylcjj.org.cn/hendrikatag/%E4%B8%8A%E6%B5%B7%E4%B8%9D%E8%A2%9C%E4%BC%9A%E6%89%8024%E5%B0%8F%E6%97%B6
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00117.warc.gz
en
0.947334
1,870
3.703125
4
The year 2019 was full of surprises, wonders, and even setbacks. From Greta Thunberg championing climate activism to quantum supremacy claims by Google, we saw massive movements and moments in this field. But one that stood out was the first-ever portrait of a black hole. Scientific community around the world rejoiced at the stunning image of this enigmatic phenomenon, and most of the scientific outlets have named it the best achievement in science of the year. This is undoubtedly a great end to a decade. A decade that brought us unprecedented advancements in science and technology and has put us onto a path of even greater scientific enlightenment. The image of the black hole united the scientific community as everyone sat together, talked and marveled at the beauty and complexity of this universe. Before this image, many theories were given about the black hole’s actual shape. Researchers constructed various images, but due to lack of ample data, those were not quite right. This conundrum was also explored in Hollywood movies, and Interstellar even managed to come a bit close with its depiction of a black hole. Nevertheless, we now have an accurate visual treat of what a black hole really looks like. Achieving the feat Bringing the image to life was not an easy task. Even in this age of technology and modern equipment, it took years to process the data by a big team of talented scientists. Some theories presented earlier called for recording the shadows that surround the glowing area of a black hole. For that, a vast network of telescopes was created around the world that ultimately came to be known as the Event Horizon Telescope or the EHT. To be able to visualize distant images in a high resolution and detail, a telescope should have a large aperture or diameter. This way, more light is gathered and can be used in image construction. A technique called Very Long Baseline Interferometry (VLBI) was further honed by scientists and was used to create a network of telescopes that can aim at an object of interest at the same time. This network can then act together as one big telescope. To locate spacecraft and missions in outer space, and to capture photos of various things in the universe, this technique is preferably used. EHT’s aperture is substantial and is equal to the distance between the two stations at the South Pole and in Spain. This was cleverly arranged as the resulting setup ended up being almost the same length as the diameter of the Earth. The arrangement and spacing of the telescopes are also crucial in image resolution, and the farther they are, the better the quality gets. For taking the image of the black hole, the team of scientists decided to test the VLBI technique and computer programs and algorithms on two targets, each with its complexities and wonders. One of these was Sagittarius A* – the closest supermassive black hole to our planet. Located at the very center of our galaxy at a distance of 26,000 light-years away, this appears to be the biggest in size when seen from the Earth. But its existence in the Milky Way also posed a problem for scientists, who figured that they would have to clean out all the background noise and pollution in the data, and a complicated process was needed to filter it all out. Still, it offered an exciting opportunity to the researchers who ultimately chose it as a target, despite such issues. The other target was the black hole M87*, which is located in the center of the galaxy Messier 87 at 53 million light-years away. 
It is massive, and to get an idea of its size, the fact should be noted that it contains a whopping 6.5 billion solar masses! It was also an exciting and intriguing choice for the researchers as it is an active black hole meaning that matter is continually falling in and out of it. The particles also jet out of M87* at very high velocities (almost at the speed of light). Being that far away was yet another challenge in taking its picture. Katie Bouman, the computer scientist with the Event Horizon Telescope team who became the star and another highlight of this feat, very aptly described it as similar to taking the photo of an orange on the surface of the Moon. Originally the EHT had eight locations around the world but in the later years, more telescopes were added to help analyze and refine the data. For the collection of the data, there was a need for having suitable weather for telescope viewing at each location and it took almost ten days to observe it all. Valid calibration and synchronization of the telescope was an essential task which ultimately enabled EHT to have a resolution that was 4,000 times better than that of the Hubble Space Telescope. A considerable amount of data was obtained by the team, which was then transported to the primary location where it could be studied easily with high internet speed. It was in this central area, where the scientists managed to combine the data using various programs and algorithms and developed the first-ever image of the silhouette of the event horizon of M87*. The other target’s image is also in the process of being developed. NASA also contributed to this strenuous task, and several spacecraft were used to observe the black hole with varying wavelengths of light. The genius minds behind the scenes The team who made this wonder possible from impossible also deserves immense appreciation for their hard work. These researchers were recently honored with the Breakthrough Prize in Fundamental Physics for their efforts. The team was led by Shep Doeleman at the Harvard-Smithsonian Center for Astrophysics. He told in an interview that “For many years, I would tell people that we were going to image a black hole, and they would say, ‘Well, we’ll believe it when we see it.’ But when you finally come with robust evidence, when you make a breakthrough like this, then you have the satisfaction of really giving birth to a new field.” As mentioned above, another scientist that almost became a household name in the field of science was Katie Bouman, who garnered worldwide attention for working on the algorithm that helped to make the final image of the black hole. She became an inspiration for many people, especially women working in STEM. She started working on the algorithm as a graduate student at the Massachusetts Institute of Technology or MIT. In a caption to her Facebook post, she wrote, “Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed.” She was hailed and appreciated around the world for her groundbreaking work along with her team. Paving the way for scientific glory Taking this image is no ordinary achievement. It is a big step in unraveling the mysteries of the universe. It can help us to test predicted theories and make observations about spacetime and celestial objects that have staggered humans since almost the beginning of the time. 
From working out and filling the gaps in Einstein’s theory of relativity to improving Hawking’s views on quantum mechanics, such data and knowledge are essential tools for working these questions out. Einstein’s theory of general relativity had not really been tested for black holes and other similarly extreme objects. This project offers a more precise calculation of the mass of a black hole. The radius of M87*’s event horizon was accurately measured, and a method of mass estimation was validated. General relativity equations can be used to provide an estimate of the size and shape of a black hole, which calls for it to be roughly circular, in contrast to other theories. The developed image showed that it indeed has a circular silhouette, thus supporting the theory. This data provided information about formation and behavior, and some elements, such as the ejection of particles at nearly the speed of light, are now offering new research directions for scientists. As the EHT continues to provide more data, new questions can be answered, and studies can be done at an accelerated pace. Other areas can also benefit from it, and it has successfully ignited the fuel of passion and curiosity about the universe that has enabled scientists and researchers to come this far and will continue to take us to infinity and beyond! Note: The asterisk (*) is used to denote a black hole.
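The "orange on the Moon" analogy and the EHT resolution quoted above come down to diffraction-limited angular resolution, roughly the observing wavelength divided by the baseline. The sketch below uses assumed round numbers (a 1.3 mm observing wavelength and an Earth-diameter baseline, typical published values rather than figures from this article) together with the mass and distance of M87* given above.

```python
# Back-of-the-envelope angular resolution of an Earth-sized interferometer.
# The observing wavelength and the shadow-size factor are assumed typical
# values; the 53 million light-year distance and 6.5 billion solar masses
# for M87* are the figures quoted in the article.
import math

G = 6.674e-11               # gravitational constant (SI)
C = 2.998e8                 # speed of light (m/s)
M_SUN = 1.989e30            # solar mass (kg)
LY = 9.461e15               # light-year (m)

wavelength = 1.3e-3         # EHT observing wavelength (m), assumed
baseline = 1.27e7           # roughly the Earth's diameter (m)
theta_eht = wavelength / baseline              # diffraction-limited resolution (rad)

mass = 6.5e9 * M_SUN
distance = 53e6 * LY
r_schwarzschild = 2 * G * mass / C**2
shadow_diameter = 2 * 2.6 * r_schwarzschild    # shadow is ~5.2 Rs across (GR estimate)
theta_shadow = shadow_diameter / distance

to_microarcsec = math.degrees(1) * 3600e6      # radians -> microarcseconds
print(f"EHT resolution ~ {theta_eht * to_microarcsec:5.1f} microarcseconds")
print(f"M87* shadow    ~ {theta_shadow * to_microarcsec:5.1f} microarcseconds")
```

Both numbers land in the tens of microarcseconds, which is why an Earth-sized baseline was needed to resolve the shadow at all.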
<urn:uuid:efe9ea26-f476-4b15-b355-18104a577c17>
CC-MAIN-2021-43
https://scientiamag.org/2019-blessed-us-with-the-first-ever-image-of-a-black-hole-finally/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00479.warc.gz
en
0.974047
1,673
3.78125
4
- Eclipses, Equinoxes, and Solstices and Earth Perihelion and Aphelion - Space Exploration - Human spaceflight launches and returns, 2011 The study of graphene—a two-dimensional lattice of carbon atoms on an insulating substrate—produced results that may lead to a new generation of electronic devices, since electrons can travel in graphene 100 times faster than in silicon. Yanqing Wu and co-workers at the IBM Thomas J. Watson Research Center, Yorktown Heights, N.Y., studied graphene transistors that had cut-off frequencies as high as 155 GHz and that, unlike conventional devices, worked well at temperatures as low as 4.3 K (−268.9 °C, or −451.9 °F). Ming Liu and colleagues at the NSF Nanoscale Science and Engineering Center, Berkeley, Calif., demonstrated a high-speed broadband electro-optical modulator with high efficiency and an active device area of only 25 μm2. Such a device could lead to new designs of optical communications on chips. Vinay Gupta and colleagues from the National Physical Laboratory, New Delhi, made luminescent graphene quantum dots blended with organic polymers for use in solar cells and light-emitting diodes, which could offer better performance at lower cost than other polymer-based organic materials. By combining graphene with extremely small metal wires called plasmonic nanostructures, T.J. Echtermeyer, of the University of Cambridge, and co-workers made graphene-based photodetectors that were 20 times more efficient than those made in previous experiments. Other two-dimensional systems were studied. A.F. Santander-Syro’s group at Université de Paris-Sud, Orsay, France, showed that there was a two-dimensional electron gas at the surface of the material SrTiO3. One possible way for future computers to store information would be to encode data in the spin of electrons; such a computer has been called “spintronic.” Kuntal Roy and colleagues at Virginia Commonwealth University made a great step to producing a spintronic device by making a small spintronic switch in which very small amounts of energy would cause a piezoelectric material to move and thus change the spins of electrons in a thin magnetic layer. Devices using such switches could be powered by only very slight movements. Two new types of laser appeared in 2011. Yao Xiao and colleagues at the department of optical instrumentation, Zhejiang University, Hangzhou, China, reported lasing action at 738 nm (nanometres), using a folded wire 200 nm in diameter. The configuration made possible a tunable single-mode nanowire laser. Malte Gather and Seok-Hyun Yun at Harvard Medical School created a “living laser” by using biological material. Green fluorescent protein that had been inserted into human embryo kidney cells was used in a tiny optical cavity to produce laser light. This technique could be used to study processes in a living cell. In a different region of the electromagnetic spectrum, J.R. Hird, C.G. Camara, and S.J. Putterman at the department of physics and astronomy, University of California, Los Angeles, investigated the triboelectric effect, in which electric currents are generated by friction. When the team pulled apart silicon and a metal-coated epoxy, a current generated by the friction was found to produce a beam of X-rays. This method could lead to a new generation of simple and cheap sources for X-ray imaging. Lasers and optical devices for high-speed communications and information processing were being studied in many laboratories, with an emphasis on efficiency and reproducibility. 
Bryan Ellis and co-workers at Stanford University developed an electrically pumped quantum dot laser that produced continuous wave operation with the lowest current threshold yet observed. Matthew T. Rakher and colleagues at National Institute of Standards and Technology, Gaithersburg, Md., devised a system for simultaneous wavelength translation and amplitude modulation for single photons, using the “blending” in a crystal of photons from two separate laser sources. Georgios Ctistis and colleagues at the University of Twente, Enschede, Neth., built a switch that changed state in just one-trillionth of a second (10−12 s). Quantum information systems involve photons that are “entangled”—perfectly correlated over long distances. For storage and transmission of such photons, practical quantum memories are required for storing and recalling quantum states on demand with high efficiency and low noise. For transmission occurring over long distances, memory repeaters are required for receiving input data and retransmitting. In 2011 a number of groups demonstrated designs for such devices. M. Hosseini and colleagues at the Australian National University in Canberra reconstructed quantum states that had been stored in the ground states of rubidium vapour with up to 98% fidelity. Christoph Clausen and co-workers at the University of Geneva demonstrated entanglement between a photon and a physical system. One photon from an entangled pair was stored in a Nd:Y2SiO5 crystal and then later released, but it still retained its entanglement with the unstored photon. Holger P. Specht and co-workers at the Max Planck Institute for Quantum Optics, Garching, demonstrated a system in which a quantum bit, or qubit (a photon whose polarization states contain information), was absorbed by a single rubidium atom trapped inside an optical cavity. The rubidium atom later emitted a photon containing the original polarized information. Thus, the rubidium atom served as a quantum computer memory. In a very different approach, Christian Ospelkaus of Leibniz University, Hannover, Ger., and colleagues used a waveguide integrated on a microchip to produce the first microwave quantum gate—that is, a logic gate for a quantum computer. Two ions were trapped just above the chip’s surface. Multiple pulses of microwave radiation entangled the two ions, which acted as a quantum gate. N. Timoney and colleagues at the University of Siegen, Ger., trapped individual ions and applied microwave pulses to them to decouple them from outside noise and thus make an undisturbed quantum processor. Such developments could aid the production of large ion-trap quantum computers in the foreseeable future.
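Several of the memory results above are quoted as fidelities. For pure states this is just the squared overlap F = |<psi|phi>|^2 between the input and retrieved states; the tiny sketch below uses arbitrary illustrative states, not data from any of the experiments mentioned.

```python
# Minimal sketch of the "fidelity" figure of merit quoted for the quantum
# memories above: for pure single-qubit states, F = |<psi|phi>|^2. The states
# below are arbitrary illustrative choices, not data from the experiments.
import numpy as np

def fidelity(psi: np.ndarray, phi: np.ndarray) -> float:
    """Fidelity between two normalized pure states."""
    return float(abs(np.vdot(psi, phi)) ** 2)

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)

# A stored |+> state retrieved with a small unwanted phase rotation eps:
for eps in (0.0, 0.1, 0.3):
    retrieved = np.array([1.0, np.exp(1j * eps)]) / np.sqrt(2)
    print(f"phase error {eps:0.1f} rad -> fidelity {fidelity(ket_plus, retrieved):.4f}")

print("orthogonal states:", fidelity(ket0, np.array([0.0, 1.0])))   # 0.0
```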
<urn:uuid:55f505af-7ffa-416d-90e4-bb882f056ecf>
CC-MAIN-2014-52
http://www.britannica.com/EBchecked/topic/1812357/Physical-Sciences-Year-In-Review-2011/302765/Condensed-State
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447546544.57/warc/CC-MAIN-20141224185906-00059-ip-10-231-17-201.ec2.internal.warc.gz
en
0.931152
1,324
3.828125
4
August 15, 2000 -- At a technical conference today at Stanford University, IBM-Almaden researcher Isaac Chuang described his team's experiments that demonstrated the world's most advanced quantum computer and the tremendous potential such devices have to solve problems that conventional computers cannot handle. "Quantum computing begins where Moore's Law ends -- about the year 2020, when circuit features are predicted to be the size of atoms and molecules," says Isaac L. Chuang, who led the team of scientists from IBM Research, Stanford University and the University of Calgary. "Indeed, the basic elements of quantum computers are atoms and molecules." Quantum computers get their power by taking advantage of certain quantum physics properties of atoms or nuclei that allow them to work together as quantum bits, or "qubits," to be the computer's processor and memory. By interacting with each other while being isolated from the external environment, theorists have predicted -- and this new result confirms -- that qubits could perform certain calculations exponentially faster than conventional computers. The new quantum computer contains five qubits -- five fluorine atoms within a molecule specially designed so the fluorine nuclei's "spins" can interact with each other as qubits, be programmed by radiofrequency pulses and be detected by nuclear magnetic resonance instruments similar to those commonly used in hospitals and chemistry labs. Using the molecule, Chuang's team solved in one step a mathematical problem for which conventional computers require repeated cycles. The problem is called "order-finding" -- finding the period of a particular function -- which is typical of many basic mathematical problems that underlie important applications such as cryptography. While the potential for quantum computing is huge and recent progress is encouraging, the challenges remain daunting. IBM's five-qubit quantum computer is a research instrument. Commercial quantum computers are still many years away, since they must have at least several dozen qubits before difficult real-world problems can be solved. "This result gives us a great deal of confidence in understanding how quantum computing can evolve into a future technology," Chuang says. "It reinforces the growing realization that quantum computers may someday be able to live up to their potential of solving in remarkably short times problems that are so complex that the most powerful supercomputers can't calculate the answers even if they worked on them for millions of years." Chuang says the first applications are likely to be as a co-processor for specific functions, such as database lookup and finding the solution to a difficult mathematical problem. Accelerating word processing or Web surfing would not be well-suited to a quantum computer's capabilities. Chuang presented his team's latest result today at Stanford University at the Hot Chips 2000 conference, which is organized by the Institute of Electrical and Electronics Engineers' (IEEE) Computer Society. His co-authors are Gregory Breyta and Costantino S. Yannoni of IBM-Almaden, Stanford University graduate students Lieven M.K .Vandersypen and Matthias Steffen, and theoretical computer scientist Richard Cleve of the University of Calgary. The team has also submitted a technical report of their experiment to the scientific journal, Physical Review Letters. 
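The order-finding problem mentioned above, finding the period of a function, has a simple classical brute-force analogue for modular exponentiation, the case relevant to factoring and cryptography. The sketch below uses small arbitrary numbers chosen for illustration, not the instance run on the five-qubit machine, and of course gains none of the quantum speed-up.

```python
# Classical brute-force illustration of "order finding": the smallest r > 0
# with a**r = 1 (mod n). The quantum experiment described above finds such a
# period far more efficiently in principle; a and n below are arbitrary small
# examples, not the values used by the IBM team.
from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a modulo n (requires gcd(a, n) == 1)."""
    assert gcd(a, n) == 1, "order is only defined when a and n are coprime"
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

for a, n in [(2, 15), (7, 15), (4, 21)]:
    r = order(a, n)
    print(f"order of {a} mod {n} is {r}   (check: {a}^{r} mod {n} = {pow(a, r, n)})")
```

In Shor's factoring algorithm this period is the expensive quantity; once r is known, classical post-processing with greatest common divisors extracts the factors.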
History of Quantum Computing When quantum computers were first proposed in the 1970s and 1980s (by theorists such as the late Richard Feynman of the California Institute of Technology, Pasadena, Calif.; Paul Benioff of Argonne National Laboratory in Illinois; David Deutsch of Oxford University in England; and Charles Bennett of IBM's T.J. Watson Research Center, Yorktown Heights, N.Y.), many scientists doubted that they could ever be made practical. But in 1994, Peter Shor of AT&T Research described a specific quantum algorithm for factoring large numbers exponentially faster than conventional computers -- fast enough to break the security of many public-key cryptosystems. Shor's algorithm opened the doors to much more effort aimed at realizing the quantum computers' potential. Significant progress has been made by numerous research groups around the world. Chuang is currently among the world's leading quantum computing experimentalists. He also led the teams that demonstrated the world's first 2-qubit quantum computer (in 1998 at the University of California, Berkeley) and 3-qubit quantum computer (in 1999 at IBM-Almaden). The order-finding result announced today is the most complex algorithm yet to be demonstrated by a quantum computer. Note: Earlier this year, scientists at Los Alamos National Laboratory announced they had achieved quantum coherence in a seven-qubit molecule. While this is a necessary condition for achieving a quantum computer, they have not yet used the molecule as a seven-qubit quantum computer to solve a problem or to implement a quantum algorithm. How a Quantum Computer Works A quantum particle, such as an electron or atomic nucleus, can exist in two states at the same time -- say, with its spin in the up and down states. This constitutes a quantum bit, or qubit. When the spin is up, the atom can be read as a 1, and the spin down can be read as a 0. This corresponds with the digital 1s and 0s that make up the language of traditional computers. The spin of an atom being up or down is analogous to a transistor being on or off; both represent data in terms of 1s and 0s. Qubits differ from traditional digital computer bits, however, because an atom or nucleus can be in a state of "superposition," representing simultaneously both 0 and 1 and everything in between. Moreover, without interference from the external environment, the spins can be "entangled" in such a way that effectively wires together a quantum computer's qubits. Two entangled atoms act in concert with each other -- when one is in the up position, the other is guaranteed to be in the down position. The combination of superposition and entanglement permits a quantum computer to have enormous power, allowing it to perform calculations in a massively parallel, non-linear manner exponentially faster than a conventional computer. For certain types of calculations -- such as complex algorithms for cryptography or searching -- a quantum computer can perform billions of calculations in a single step. So, instead of solving the problem by adding all the numbers in order, a quantum computer would add all the numbers at the same time. To input and read the data in a quantum computer, Chuang's team uses a nuclear magnetic resonance machine, which uses a giant magnet and is similar to the medical devices commonly used to image human soft tissues. 
A tiny test-tube filled with the special molecule is placed inside the machine, and the scientists use radio-frequency pulses as software to alter the atomic spins in the particular way that enables the nuclei to perform calculations.
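The superposition-and-entanglement description in the "How a Quantum Computer Works" section above can be made concrete with a tiny state-vector calculation: a Hadamard gate puts one qubit into an equal superposition, and a CNOT then entangles it with a second qubit. This is a generic textbook illustration, not a simulation of the NMR experiment.

```python
# State-vector sketch of superposition + entanglement (generic textbook example,
# not a model of the 5-qubit NMR experiment described above).
import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)        # |00>

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard: |0> -> (|0>+|1>)/sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)        # flips qubit 2 if qubit 1 is |1>

state = CNOT @ np.kron(H, I) @ ket00                  # Bell state (|00> + |11>)/sqrt(2)
print("amplitudes:", np.round(state, 3))

probabilities = np.abs(state) ** 2                    # 50% |00>, 50% |11>, never |01>/|10>
for label, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{label}>) = {p:.2f}")
```

Measuring either qubit gives 0 or 1 at random, but the two outcomes always agree, which is the "acting in concert" behavior described above.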
<urn:uuid:c77c39ff-dc95-4e37-b32d-19475f4b4c45>
CC-MAIN-2014-52
http://www.sciencedaily.com/releases/2000/08/000817081121.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765678.46/warc/CC-MAIN-20141217075245-00066-ip-10-231-17-201.ec2.internal.warc.gz
en
0.934168
1,373
3.65625
4
In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction. Behind this, one finds the fundamental notion that a physical property may be "quantized," referred to as "the hypothesis of quantization". This means that the magnitude can take on only certain discrete values. A photon is a single quantum of light, and is referred to as a "light quantum". The energy of an electron bound to an atom is quantized, which results in the stability of atoms, and hence of matter in general. As incorporated into the theory of quantum mechanics, this is regarded by physicists as part of the fundamental framework for understanding and describing nature at the smallest length-scales. Etymology and discovery The word "quantum" comes from the Latin "quantus", meaning "how much". "Quanta", short for "quanta of electricity" (electrons) was used in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in the area of electricity. However, the word quantum in general was well known before 1900. It was often used by physicians, such as in the term quantum satis. Both Helmholtz and Julius von Mayer were physicians as well as physicists. Helmholtz used "quantum" with reference to heat in his article on Mayer's work, and indeed, the word "quantum" can be found in the formulation of the first law of thermodynamics by Mayer in his letter dated July 24, 1841. Max Planck used "quanta" to mean "quanta of matter and electricity", gas, and heat. In 1905, in response to Planck's work and the experimental work of Lenard (who explained his results by using the term "quanta of electricity"), Albert Einstein suggested that radiation existed in spatially localized packets which he called "quanta of light" ("Lightquanta"). The concept of quantization of radiation was discovered in 1900 by Max Planck, who had been trying to understand the emission of radiation from heated objects, known as black-body radiation. By assuming that energy can only be absorbed or released in tiny, differential, discrete packets he called "bundles" or "energy elements", Planck accounted for the fact that certain objects change colour when heated. On December 14, 1900, Planck reported his revolutionary findings to the German Physical Society, and introduced the idea of quantization for the first time as a part of his research on black-body radiation. As a result of his experiments, Planck deduced the numerical value of h, known as the Planck constant, and could also report a more precise value for the Avogadro–Loschmidt number, the number of real molecules in a mole and the unit of electrical charge, to the German Physical Society. After his theory was validated, Planck was awarded the Nobel Prize in Physics in 1918 for his discovery. Beyond electromagnetic radiation While quantization was first discovered in electromagnetic radiation, it describes a fundamental aspect of energy not just restricted to photons. In the attempt to bring experiment into agreement with theory, Max Planck postulated that electromagnetic energy is absorbed or emitted in discrete packets, or quanta. 
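Planck's "energy elements" described above come in multiples of hf. As a worked example with assumed everyday numbers (a 1 mW green source, not anything discussed in the article), the sketch below computes the size of one quantum and how many quanta per second such a source emits.

```python
# The quantization Planck introduced above: radiation of frequency f is
# exchanged in packets of energy E = h*f. The 1 mW green source is an assumed
# everyday example, not something from the article.
H = 6.626e-34                        # Planck constant (J*s)
C = 2.998e8                          # speed of light (m/s)

wavelength = 532e-9                  # green light (m), assumed example
frequency = C / wavelength
photon_energy = H * frequency        # one quantum of this light, in joules

power = 1e-3                         # 1 mW source (assumption)
photons_per_second = power / photon_energy

print(f"frequency         ~ {frequency:.3e} Hz")
print(f"one quantum       ~ {photon_energy:.3e} J")
print(f"quanta per second ~ {photons_per_second:.3e}")
```

The enormous number of quanta per second is why the granularity of light goes unnoticed in everyday life, even though, as the article explains, the exchange is discrete at bottom.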
See also:
- Elementary particle
- Introduction to quantum mechanics
- Magnetic flux quantum
- Photon polarization
- Quantal analysis
- Quantization (physics)
- Quantum cellular automata
- Quantum channel
- Quantum coherence
- Quantum chromodynamics
- Quantum computer
- Quantum cryptography
- Quantum dot
- Quantum electronics
- Quantum entanglement
- Quantum immortality
- Quantum lithography
- Quantum mechanics
- Quantum number
- Quantum sensor
- Quantum state
- Subatomic particle

References:
- Wiener, N. (1966). Differential Space, Quantum Systems, and Prediction. Cambridge: The Massachusetts Institute of Technology Press.
- E. Cobham Brewer (1810–1897). Dictionary of Phrase and Fable. 1898.
- E. Helmholtz. Robert Mayer's Priorität (in German).
- Herrmann, A. Weltreich der Physik. GNT-Verlag (1991) (in German).
- Planck, M. (1901). "Ueber die Elementarquanta der Materie und der Elektricität". Annalen der Physik (in German) 309 (3): 564–566. doi:10.1002/andp.19013090311.
- Planck, M. (1883). "Ueber das thermodynamische Gleichgewicht von Gasgemengen". Annalen der Physik (in German) 255 (6): 358. doi:10.1002/andp.18832550612.
- Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik (in German) 17 (6): 132–148. doi:10.1002/andp.19053220607. A partial English translation is available from Wikisource.
- Planck, M. (1901). "Ueber das Gesetz der Energieverteilung im Normalspectrum" (On the Law of Distribution of Energy in the Normal Spectrum). Annalen der Physik 309 (3): 553. doi:10.1002/andp.19013090310.
- Brown, T., LeMay, H., Bursten, B. (2008). Chemistry: The Central Science. Upper Saddle River, NJ: Pearson Education. ISBN 0-13-600617-5.
- Klein, Martin J. (1961). "Max Planck and the beginnings of the quantum theory". Archive for History of Exact Sciences 1 (5): 459. doi:10.1007/BF00327765.
- Melville, K. (2005, February 11). Real-World Quantum Effects Demonstrated.
- Tippens. Modern Applied Physics, third edition. McGraw-Hill.
- Hoffmann, B. The Strange Story of the Quantum. Pelican, 1963.
- Lucretius. On the Nature of the Universe, transl. from the Latin by R. E. Latham. Penguin Books Ltd., Harmondsworth, 1951.
- Mehra, J. and Rechenberg, H. The Historical Development of Quantum Theory, Vol. 1, Part 1. Springer-Verlag New York Inc., New York, 1982.
- Planck, M. A Survey of Physical Theory, transl. by R. Jones and D. H. Williams. Methuen & Co., Ltd., London, 1925 (Dover editions 1960 and 1993), including the Nobel lecture.
- Brooks, Rodney (2011). Fields of Color: The Theory That Escaped Einstein. Allegra Print & Imaging.
<urn:uuid:41b2f68a-193f-4a03-acad-fd8604f1d895>
CC-MAIN-2014-52
http://en.wikipedia.org/wiki/Quantum
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767453.104/warc/CC-MAIN-20141217075247-00130-ip-10-231-17-201.ec2.internal.warc.gz
en
0.830419
1,556
3.734375
4
Astronomers affiliated with the Supernova Legacy Survey (SNLS) have discovered two of the brightest and most distant supernovae ever recorded, 10 billion light-years away and a hundred times more luminous than a normal supernova. These newly discovered supernovae are especially puzzling because the mechanism that powers most of them cannot explain their extreme luminosity.

Noble gas molecules have been detected in space for the first time in the Crab Nebula, a supernova remnant, by astronomers at Univ. College London. Led by Prof. Mike Barlow, the team used ESA's Herschel Space Observatory to observe the Crab Nebula in far infrared light. Their measurements of regions of cold gas and dust led them to the serendipitous discovery of the chemical fingerprint of argon hydride ions.

An atmospheric peculiarity the Earth shares with Jupiter, Saturn, Uranus and Neptune is likely common to billions of planets, Univ. of Washington astronomers have found, and knowing that may help in the search for potentially habitable worlds. The paper uses basic physics to show why this happens, and suggests that tropopauses are probably common to billions of thick-atmosphere planets and moons throughout the galaxy.

Jupiter's moon Europa features an intricate network of cracks in its icy surface. This unusual pattern is particularly pronounced around the equator. Scientists performing modeling studies on the potential marine currents below this ice layer have discovered that, near Europa's equator, warmer water rises from deep within the moon.

A massive impact on the moon about 4 billion years ago left a 2,500-mile crater, among the largest known craters in the solar system. Smaller subsequent impacts left craters within that crater. Comparing the spectra of light reflected from the peaks of those craters may yield clues to the composition of the moon's lower crust and mantle—and would have implications for models of how the moon formed.

NASA's Curiosity rover has uncovered signs of an ancient freshwater lake on Mars that may have teemed with tiny organisms for tens of millions of years, far longer than scientists had imagined, new research suggests. The watering hole near the Martian equator existed about 3.5 billion years ago. Scientists say it was neither salty nor acidic, and contained nutrients—a perfect spot to support microbes.

A research team has discovered a natural particle accelerator of interstellar scale. By analyzing data from NASA's Van Allen probes, physicists have been able to measure and identify the "smoking gun" of a planetary scale process that accelerates particles to speeds close to the speed of light within the Van Allen radiation belt.

Using the powerful eye of NASA's Hubble Space Telescope, two teams of scientists have found faint signatures of water in the atmospheres of five distant planets. The presence of atmospheric water was reported previously on a few exoplanets orbiting stars beyond our solar system, but this is the first study to conclusively measure and compare the profiles and intensities of these signatures on multiple worlds.

Quantum entanglement, a perplexing phenomenon of quantum mechanics that Albert Einstein once referred to as "spooky action at a distance," could be even spookier than Einstein perceived. A team of physicists believe the phenomenon might be intrinsically linked with wormholes, hypothetical features of space-time that in popular science fiction can provide a much-faster-than-light shortcut from one part of the universe to another.

NASA said Monday that the Hubble Space Telescope is the best bet for figuring out whether Comet ISON disintegrated during its brush with the sun last week. A pair of solar observatories saw something emerge from around the sun following ISON's close approach on Thanksgiving Day. But scientists don't yet know whether the spot of light was merely the comet's shattered remains or what's left of its icy nucleus.

Research has shed new light on the properties of neutron stars, super dense stars that form when a large star explodes and collapses into itself. Writing in Nature, the team describes a newly discovered process that happens within the star's crust, located just below the surface. Until now, scientists thought that nuclear reactions within the crust contributed to the heating of the star's surface.

Comet ISON will be only about 1 million miles away from the sun's super-hot surface during its close encounter on Thanksgiving. On Monday, it looked like it was about to die even before it got there. On Tuesday, it appeared healthy again. Will it meet a fiery death (or survive) when it whips around the sun on Thursday? Scientists haven't seen a comet behave this way before.

In April, a bright flash of light burst from near the constellation Leo. Originating billions of light years away, this explosion of light, called a gamma ray burst, has now been confirmed as the brightest gamma ray burst ever observed. Astronomers around the world were able to view the blast in unprecedented detail and observe several aspects of the event. The data could lead to a rewrite of standard theories on how gamma ray bursts work.

For months, all eyes in the sky have pointed at the comet that's zooming toward a blisteringly close encounter with the sun. The moment of truth comes Thursday, Thanksgiving Day. The sun-grazing Comet ISON, now thought to be less than a mile wide, will either fry and shatter, victim of the sun's incredible power, or endure and quite possibly put on one fabulous celestial show. Talk about an astronomical cliffhanger.

In our universe there are particle accelerators 40 million times more powerful than the Large Hadron Collider at CERN. Scientists don't know what these cosmic accelerators are or where they are located, but new results being reported from IceCube, the neutrino observatory buried at the South Pole, may show the way. These new results should also erase any doubts as to IceCube's ability to deliver on its promise.

Orbiting telescopes got the fireworks show of a lifetime last spring when they spotted what is known as a gamma ray burst in a far-off galaxy. It's not an unusual occurrence, but this one set records. Had it been closer, Earth would have been toast. But because this blast was 3.7 billion light-years away, mankind was spared.

For nearly as long as astronomers have been able to observe asteroids, a question has gone unanswered: Why do the surfaces of most asteroids appear redder than meteorites—the remnants of asteroids that have crashed to Earth? Scientists have now found that Mars, not Earth, shakes up some near-Earth asteroids.

The first solids to form in the solar system contain unusual isotopic signatures that show a nearby supernova injected material within ~100,000 years of their formation. That supernova, caused from the cataclysmic death of a star, could have even triggered the birth of the sun.

The Hubble Space Telescope has discovered a six-tailed asteroid in the asteroid belt between the orbits of Mars and Jupiter. Scientists say they've never seen anything like it. Incredibly, the comet-like tails change shape as the asteroid sheds dust. The streams have occurred over several months.

A pioneering technology called an atom interferometer promises to detect tiny perturbations in the curvature of space-time. With its potential picometer-level sensitivity, the instrument may one day detect what so far has remained imperceptible: gravitational waves or ripples in spacetime caused when massive celestial objects move and disrupt the space around them.

A Russian rocket soared into the cosmos Thursday carrying the Sochi Olympic torch and three astronauts to the International Space Station ahead of the first-ever spacewalk for the symbol of peace. The unlit torch for the 2014 Winter Olympics in the Russian city of Sochi is to be taken on a spacewalk Saturday, then return to Earth on Monday (late Sunday EST) with three departing space station astronauts.

A rare, recently discovered microbe that survives on very little to eat has been found in two places on Earth: spacecraft clean rooms in Florida and South America. Some other microbes have been discovered in a spacecraft clean room and found nowhere else, but none previously had been found in two different clean rooms and nowhere else.

Leslie Rosenberg and his colleagues are about to go hunting. Their quarry: a theorized-but-never-seen elementary particle called an axion. The search will be conducted with a recently retooled, extremely sensitive detector that is currently in a testing and shakeout phase at the University of Washington's Center for Experimental Nuclear Physics and Astrophysics.

Over billions of years, small black holes can slowly grow into the supermassive variety by taking on mass from their surroundings and by merging with other black holes. But this slow process can't explain the problem of supermassive black holes existing in the early universe. New findings may help to test a model that solves this problem.

Space is vast, but it may not be so lonely after all: A study finds the Milky Way is teeming with billions of planets that are about the size of Earth, orbit stars just like our sun, and are not too hot or cold for life. For the first time, NASA scientists have calculated, not estimated, what percent of stars that are just like our sun have planets similar to Earth: 22%, with a margin of error of plus or minus 8 percentage points.
<urn:uuid:fa66cb20-9e3f-48d4-b080-bb538d6069e6>
CC-MAIN-2014-52
http://www.rdmag.com/topics/general-sciences/astrophysics?items_per_page=25&page=5
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777454.142/warc/CC-MAIN-20141217075257-00017-ip-10-231-17-201.ec2.internal.warc.gz
en
0.942458
1,920
3.6875
4
Ultimate slow motion

[Image caption: Media Lab postdoc Andreas Velten, left, and Associate Professor Ramesh Raskar with the experimental setup they used to produce slow-motion video of light scattering through a plastic bottle. Photo: M. Scott Brauer]

Massachusetts Institute of Technology (MIT) researchers have created a new imaging system that can acquire visual data at a rate of one trillion exposures per second. That's fast enough to produce a slow-motion video of a burst of light traveling the length of a 1-L bottle, bouncing off the cap and reflecting back to the bottle's bottom. Media Lab postdoc Andreas Velten, one of the system's developers, calls it the "ultimate" in slow motion: "There's nothing in the universe that looks fast to this camera," he says.

The system relies on a recent technology called a streak camera, deployed in a totally unexpected way. The aperture of the streak camera is a narrow slit. Particles of light—photons—enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones.

The image produced by the camera is thus two-dimensional, but only one of the dimensions—the one corresponding to the direction of the slit—is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a 1D slice of space. The camera was intended for use in experiments where light passes through or is emitted by a chemical sample. Since chemists are chiefly interested in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant. But it's a serious drawback in a video camera.

To produce their super-slow-mo videos, Velten, Media Lab Associate Professor Ramesh Raskar and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, must perform the same experiment—such as passing a light pulse through a bottle—over and over, continually repositioning the streak camera to gradually build up a two-dimensional image. Synchronizing the camera and the laser that generates the pulse, so that the timing of every exposure is the same, requires a battery of sophisticated optical equipment and exquisite mechanical control. It takes only a nanosecond for light to scatter through a bottle, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system "the world's slowest fastest camera."

Doing the math

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the 1D positions of photons against their times of arrival. Raskar, Velten, and other members of Raskar's Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential 2D images.

The streak camera and the laser that generates the light pulses—both cutting-edge devices with a cumulative price tag of $250,000—were provided by Bawendi, a pioneer in research on quantum dots: tiny, light-emitting clusters of semiconductor particles that have potential applications in quantum computing, video-display technology, biological imaging, solar cells, and a host of other areas.
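A minimal sketch of the stitching step described above, under assumed array names and shapes; the researchers' actual reconstruction algorithms are more sophisticated, so treat this only as an illustration of how 1D streak records become 2D frames:

```python
# Hypothetical illustration of assembling per-slit-position streak images into
# time-resolved 2D frames; the dimensions and data layout are assumptions, not
# the MIT group's actual data format.
import numpy as np

n_positions, n_time_bins, n_slit_pixels = 60, 128, 96   # made-up dimensions
# streak_data[p, t, x]: photon counts at scan position p, time bin t, slit pixel x
streak_data = np.random.poisson(2.0, size=(n_positions, n_time_bins, n_slit_pixels))

# Re-order the axes so that each time bin becomes one 2D frame (position x slit pixel).
frames = np.transpose(streak_data, (1, 0, 2))   # shape: (n_time_bins, n_positions, n_slit_pixels)

# frames[t] is now a 2D snapshot of the scene at time bin t; playing the frames
# in order gives the slow-motion video of the light pulse.
print(frames.shape)
```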
The trillion-frame-per-second imaging system, which the researchers have presented both at the Optical Society's Computational Optical Sensing and Imaging conference and at Siggraph, is a spinoff of another Camera Culture project, a camera that can see around corners. That camera works by bouncing light off a reflective surface—say, the wall opposite a doorway—and measuring the time it takes different photons to return. But while both systems use ultrashort bursts of laser light and streak cameras, the arrangement of their other optical components and their reconstruction algorithms are tailored to their disparate tasks.

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can't record events that aren't exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters—or bounces around as it strikes different surfaces—is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues—"like ultrasound with light," as Raskar puts it.

As a longtime camera researcher, Raskar also sees a potential application in the development of better camera flashes. "An ultimate dream is, how do you create studio-like lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas, and sport lights, and so on?" asks Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences. "With our ultrafast imaging, we can actually analyze how the photons are traveling through the world. And then we can recreate a new photo by creating the illusion that the photons started somewhere else."

"It's very interesting work. I am very impressed," says Nils Abramson, a professor of applied holography at Sweden's Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

But as Abramson points out, his technique requires so-called coherent light, meaning that the troughs and crests of the light waves that produce the image have to line up with each other. "If you happen to destroy the coherence when the light is passing through different objects, then it doesn't work," Abramson says. "So I think it's much better if you can use ordinary light, which Ramesh does."

Indeed, Velten says, "As photons bounce around in the scene or inside objects, they lose coherence. Only an incoherent detection method like ours can see those photons." And those photons, Velten says, could let researchers "learn more about the material properties of the objects, about what is under their surface and about the layout of the scene. Because we can see those photons, we could use them to look inside objects—for example, for medical imaging, or to identify materials."

"I'm surprised that the method I've been using has not been more popular," Abramson adds. "I've felt rather alone. I'm very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself."
<urn:uuid:72987e19-78fd-4b9d-b186-f60373344f6f>
CC-MAIN-2014-52
http://www.rdmag.com/news/2011/12/ultimate-slow-motion
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770324.129/warc/CC-MAIN-20141217075250-00044-ip-10-231-17-201.ec2.internal.warc.gz
en
0.935444
1,402
3.546875
4
Quantum dice debut
Technology Research News

Researchers have overcome a major obstacle to generating random numbers on quantum computers by limiting the possibilities in the otherwise unlimited randomness of a set of quantum particles.

Random numbers play a key role in classical computing by providing an element of chance in games and simulations, a reliable method for encrypting messages, and a means of accurately sampling huge amounts of data. Researchers from the Massachusetts Institute of Technology and the National Atomic Energy Commission in Argentina have shown that short sequences of random operations -- randomly shifting laser pulses or magnetic fields -- acting on a string of quantum bits can, in effect, generate random configurations of qubits.

Being able to generate random numbers in quantum computing could make quantum computers easier to build by countering the noise that eventually destroys qubits, which represent the 1s and 0s of computer information. Quantum computers promise to be fantastically fast at solving certain types of large problems, including the mathematics that underpins today's encryption schemes. Quantum random numbers could also be useful for increasing the efficiency of quantum secret-sharing schemes, quantum encryption and various forms of quantum communications.

Qubits can represent not only 1 and 0 but any number in between; a string of 100 qubits can represent every possible 100-digit binary number, and a single set of operations can search every possible answer to a problem at once. This gives quantum computers their power, but also poses a problem for generating random numbers: the nearly infinite number of possible qubit configurations theoretically requires an impossibly large number of operations.

In the quantum world, no outcome is certain, and in most aspects of quantum computing, the goal is to reduce the uncertainty in order to get a definite answer to a problem. The researchers' scheme, however, aims for uncertainty. It limits the possible outcomes without making them predictable. The scheme generates quantum states in such a way that the probabilities of the limited set of outcomes are as evenly distributed over the nearly infinite range of possible outcomes as quantum theory allows, said Joseph Emerson, one of the MIT researchers, who is now a fellow at the Perimeter Institute for Theoretical Physics in Canada. "These pseudo-random transformations are a practical substitute for truly... random transformations," he said.

The number of operations required to represent a truly random configuration increases exponentially with the number of qubits in the configuration. For example, if the quantum equivalent of generating random numbers takes 2^2, or four, operations for two qubits, 15 qubits would require 2^15, or 32,768, operations.

The researchers' pseudo-random number method could be used to help build quantum computers by providing a practical way to estimate imperfections or errors in quantum processors, said Emerson. "This is addressing a very big problem -- imperfections such as decoherence and inadequate control of the coherence between the qubits are the main limiting factors in the creation of large-scale quantum computers," he said. A quantum particle decoheres, or is knocked out of its quantum state, when it interacts with energy from the environment in the form of light, heat, electricity or magnetism. Researchers are looking for ways to fend off decoherence for as long as possible in order to make qubits last long enough to be useful.
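The article describes generating pseudo-random qubit configurations with short sequences of random operations. Below is a minimal state-vector sketch of that idea in plain NumPy; the experiment itself used NMR pulse sequences on three carbon-13 spins, so the gate set here is purely an illustrative assumption:

```python
# A sketch (not the researchers' protocol) of building a pseudo-random state by
# applying a short sequence of random single-qubit rotations and entangling gates.
import numpy as np

def random_single_qubit_gate(rng):
    """Draw a Haar-random 2x2 unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # rescale column phases so the result is Haar-distributed

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to the `target` qubit of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

def cnot(control, target, n_qubits):
    """Build the permutation matrix of a CNOT on the given control/target qubits."""
    dim = 2 ** n_qubits
    op = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n_qubits - 1 - q) for q, b in enumerate(bits))
        op[out, basis] = 1
    return op

rng = np.random.default_rng(0)
n, depth = 3, 4                      # three qubits, as in the NMR prototype; depth is arbitrary
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                       # start in |000>
for _ in range(depth):
    for q in range(n):               # a random rotation on every qubit
        state = apply_gate(state, random_single_qubit_gate(rng), q, n)
    for q in range(n - 1):           # entangle neighbouring qubits
        state = cnot(q, q + 1, n) @ state
print(np.abs(state) ** 2)            # measurement probabilities spread across all eight outcomes
```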
A way to estimate decoherence would allow researchers to assess the strength and type of environmental noise limiting the precision of a given quantum device, said Emerson. Random quantum operations can be used as control operations that, when subjected to the noise affecting a prototype quantum computer, will generate a response that depends only on the noise, he said. This way the noise can be characterized with many fewer measurements than existing methods, which are dependent on the interactions of the qubits and so require a number of measurements that increases exponentially with the number of qubits, he said.

In addition to helping build quantum computers, random operators would be useful for quantum communications tasks like encryption, said Emerson. "The idea is to randomize a specific configuration of qubits containing the message, and then transmit this randomized state," he said. In this case, if each bit that makes up the message is encrypted, or changed randomly, it is not possible for an eavesdropper to find any type of pattern that may lead to cracking the message.

The researchers tested the method on a three-qubit prototype liquid nuclear magnetic resonance (NMR) quantum computer. The computer consists of a liquid sample containing the amino acid alanine, a molecule that contains three carbon-13 atoms. The qubits are the atoms' spins, which are analogous to a top spinning clockwise or counterclockwise. The two directions, spin up and spin down, can be used to represent 1 and 0. The qubits are controlled by magnetic fields generated by the nuclear magnetic resonance spectrometer.

Being able to diagnose faulty quantum computer components in a way that is independent of the number of qubits is very important, said Daniel Lidar, an assistant professor of theoretical chemical physics at the University of Toronto. "For this reason alone I suspect random [operators] will find widespread applications as quantum computer benchmarking becomes an experimental reality," he said. It is also likely that future quantum algorithms will make increasing use of pseudo-random operators, said Lidar.

The researchers are working on making the random-number-generation system more precise, said Emerson. "Right now one can only estimate very coarse properties of the noise, such as [its] overall strength," he said. "I would like to devise methods to get a much more detailed analysis of the noise operators." Complete noise-estimation experiments could be implemented in rudimentary quantum computers within the next few years, said Emerson. Researchers generally agree that practical quantum computers are a decade or two away.

Emerson's research colleagues were Yaakov S. Weinstein, Marcos Saraceno, Seth Lloyd, and David G. Cory. The work appeared in the December 19, 2003 issue of Science. The research was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA) and the Cambridge-MIT Institute.

Timeline: 2 years, 10-20 years
Funding: Government; University
TRN Categories: Quantum Computing and Communications; Physics
Story Type: News
Related Elements: Technical paper, "Pseudo-Random Unitary Operators for Quantum Information Processing," Science, December 19, 2003
<urn:uuid:8e910fa7-657e-4cf6-8056-fab02f717829>
CC-MAIN-2014-52
http://www.trnmag.com/Stories/2004/011404/Quantum_dice_debut_011404.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775517.52/warc/CC-MAIN-20141217075255-00133-ip-10-231-17-201.ec2.internal.warc.gz
en
0.884067
1,483
3.859375
4
Recent developments bring quantum computers closer to implementation
8 January 2011

Richard Feynman first seriously posed the question of designing computers based on quantum mechanics in a paper published in 1982. The most recent research into this field comes from a team from the Delft University of Technology and Eindhoven University of Technology, both in the Netherlands. In a paper recently published in the scientific journal Nature, a new technique to manipulate the fundamental building blocks of quantum computers was examined.

Inspired by basic questions about the nature of light, quantum mechanics is the study of the most fundamental particles of matter. The most advanced quantum computing application currently imagined involves utilizing the inherent link that two particles, such as electrons, can form on the most elementary level to perform specified calculations.

Current computers are based on a binary digit, called a bit. The information stored is held in two distinct states, generally referred to as 0 and 1. The basic unit of a quantum computer is called a qubit. The value of a qubit is generally based on the intrinsic spin of an electron, which can point in one of two opposite directions. Unlike a classical bit, which is always one value or the other, a qubit initially holds both of these values at once. Only when acted upon will the qubit take on a single value, and it will do so following the probabilistic laws governing quantum mechanics.

A bit has a distinct disadvantage compared to a qubit. While 1000 bits could deliver about 1000 pieces of information at a time, 1000 qubits could deliver approximately 2^1000 (or about 10^301) pieces of information simultaneously. This number is so large that it is incomprehensibly larger than the number of grains of rice it would take to fill up the Solar System.

While exponentially more powerful than classical computers, quantum computers have also proven to be exponentially more difficult to build. Quantum computers revolve around the manipulation of individual quantum particles. While dealing with 1000 bits is easy for modern-day technology, working with 1000 qubits is incredibly hard. The quantum mechanical nature of qubits can cause unwanted interaction with their physical surroundings, destabilizing the entire system like a line of dominoes falling one after the other. This issue has been the reason that quantum computers have yet to exceed their classical counterparts.

Physics simulations were the original goal for quantum computers, and the impetus for Richard Feynman to write his 1982 paper. Feynman wanted to look at the possibility of a computer being able to fully simulate physical events, not just approximate them. Quantum mechanical systems were the particular focus. Unlike the pseudo-random sampling available on classical computers, the probabilistic nature of qubit states lets a quantum system be represented faithfully. This lets large systems of quantum particles be studied, which is utterly beyond the capabilities of classical computing.

Success at building a quantum computer would also be the most stringent test of quantum mechanics ever devised. If quantum computers can be built to outstrip classical computers, it would be the most powerful vindication of quantum theory yet. On the flip side, a demonstration of a fundamental reason that quantum computers cannot be built would require a serious re-thinking of much of physics.
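A quick sanity check of the 2^1000 figure quoted above (plain Python arithmetic, not part of the original article):

```python
# 2**1000 written out in decimal: Python's arbitrary-precision integers make the
# digit count easy to verify.
n_states = 2 ** 1000
print(len(str(n_states)))   # 302 digits, i.e. on the order of 10**301
```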
A consequence of this technology would be immediately felt in the field of electronic security. Most secure communications and information storage revolve around a technique called RSA encryption. The process involves multiplying two prime numbers, such as 5 and 3, and using the product, 15, to encode data. The power of this approach comes from choosing prime numbers so large that the product is hundreds of digits long. Classical computers simply cannot factor such a large number in any reasonable timescale. The original information, encrypted based on the two original numbers, is thus safeguarded. (A toy numeric illustration of this RSA scheme appears at the end of this article.)

In contrast, quantum computers would be able to take advantage of a procedure known as Shor's algorithm. The essence of the algorithm is that, using a quantum computer, even extremely large numbers could be factored in a matter of moments. This would give the user of a quantum computer the ability to break into bank accounts, read private email, and decipher computer passwords at a whim.

The specific advances of the new research involve the direct manipulation of electrons through electric fields. Previous experiments used magnetic fields, which do not have the precision necessary to form large numbers of qubits into a functioning system. The precision granted by using electric fields shows potential in keeping large numbers of qubits coherent long enough to perform calculations.

What should be noted is that quantum computers will not solve all problems posed in computer science. If they ever reach fruition, their highest use will be simulating quantum physics. They will not be adept at proving mathematical theorems, nor will they discover new physics. Those would still be the province of human beings.

A major drawback of quantum computers is the issue of usage. Much of the research done on the subject is funded by private institutions and nation states. Any private institution with a quantum computer would be able to break into a competitor's systems in seconds, giving it an unparalleled edge in development. Even more disturbing, a government with a quantum computer could access secrets held by other nations with ease. Its apparatus for electronic spying, both foreign and domestic, would be without rival.

Quantum computing is an exercise in contradictions. The technical difficulties make it difficult to achieve, and the social and political consequences give pause over whether and how fast to move forward with this effort. On the other hand, humanity's knowledge of physics itself is in many ways bound up in showing whether such tools are possible. We owe it to ourselves to find out.

- R. P. Feynman, "Simulating Physics with Computers" (1982).
- S. Nadj-Perge, S. M. Frolov, E. P. A. M. Bakkers and L. P. Kouwenhoven, "Spin-orbit qubit in a semiconductor nanowire," Nature (2010).
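As promised above, here is a toy numeric illustration of the textbook RSA scheme the article describes, using its tiny primes 3 and 5; real keys use primes hundreds of digits long, which is exactly what Shor's algorithm threatens:

```python
# Minimal textbook RSA with the article's toy primes; for illustration only.
from math import gcd

p, q = 3, 5
n = p * q                     # public modulus (15)
phi = (p - 1) * (q - 1)       # Euler's totient (8)
e = 3                         # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi (Python 3.8+)

message = 7                   # must be smaller than n for this toy example
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)  # 13 7 -- recovering the message requires knowing p and q, i.e. factoring n
```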
<urn:uuid:30da248e-7301-4940-99e8-c06b2507ee45>
CC-MAIN-2014-52
http://www.wsws.org/en/articles/2011/01/quan-j08.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554191.27/warc/CC-MAIN-20141224185914-00065-ip-10-231-17-201.ec2.internal.warc.gz
en
0.940877
1,200
3.953125
4
By Jason Palmer, Science and technology reporter, BBC News

[Image caption: The "quantum resonator" can be seen with the naked eye.]

Researchers have created a "quantum state" in the largest object yet. Such states, in which an object is effectively in two places at once, have until now only been accomplished with single particles, atoms and molecules. In this experiment, published in the journal Nature, scientists produced a quantum state in an object billions of times larger than in previous tests. The team says the result could have significant implications in quantum computing.

One of the pillars of quantum mechanics is the idea that objects absorb and emit energy in tiny discrete packets known as quanta. This can be seen in a piece of coloured glass, which absorbs a certain colour of light. That light is made up of photons - packets of light energy - and the glass atoms absorb only photons with the quanta (or amount) of energy that corresponds to that colour. What we see through the glass is the light that has not been absorbed.

At the atomic level, quantum mechanics predicts - and experiments demonstrate - a number of surprising effects beyond that, however. If all the energy that an atom gets from the jostling atoms in its environment is removed by cooling it to phenomenally low temperatures, it can reach its "quantum ground state" - no more energy can be removed. If just one quantum of energy is then carefully put back in a certain way, the atom can be said to be in two states at the same time: a superposition of states. Although only one quantum of energy is put in, any measurements will show either zero or one quanta; strictly, the atom has both.

Down to ground

These superpositions of states have long been predicted to be useful for a pursuit known as quantum computing; if used in place of the zeroes and ones of digital computing, a quantum computer would be vastly more powerful.

[Image caption: Similar approaches could lead to the quantum ground state of a virus.]

However, creating these states in anything bigger than single atoms and molecules has proven difficult, because the larger an object is, the more tricky it becomes to isolate it from its environment and put it in its ground state. "There is this question of where the dividing line is between the quantum world and the classical world we know," said Andrew Cleland of the University of California, Santa Barbara. "We know perfectly well that things are not in two places at the same time in our everyday experience, but this fundamental theory of physics says that they can be," he told BBC News.

Now, Professor Cleland and his team have moved that dividing line, using an object just big enough to be seen with the naked eye. They used a tiny piece of what is known as a piezoelectric material, which expands and contracts when an electrical current is run through it. A current applied at a certain frequency causes it to expand and contract regularly and, just like a violin string, the material has a frequency at which it is inclined to vibrate. They connected this resonator to an electric circuit that the team has been developing for three years. This can be tuned to put in just one quantum of electrical energy. They cooled the whole apparatus down to a thousandth of a degree above absolute zero and confirmed that their resonator was in its quantum ground state.
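For orientation, a standard back-of-the-envelope criterion (the article does not state the resonator's frequency, so the numbers below are illustrative rather than the team's): the resonator behaves quantum mechanically only when the thermal energy is much smaller than one vibrational quantum,

\[
k_B T \ll \hbar\omega = h f \quad\Longrightarrow\quad T \ll \frac{h f}{k_B} \approx 48\ \mathrm{mK} \times \frac{f}{1\ \mathrm{GHz}},
\]

which is why a gigahertz-scale mechanical resonator has to be cooled to millikelvin temperatures before a single quantum of energy can be resolved.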
The researchers designed the system so that they could "pump in" just one quantum of electrical energy at a time and see the oscillator begin to vibrate as it converted that quantum into one quantum of vibrational energy. As it vibrated, the team showed that the resonator was in one of the slippery superpositions of states, with both one and zero quanta of energy.

Sensors and sensibility

The result is a huge push toward answering the question of whether quantum mechanical effects simply disappear in objects beyond a certain size. "As far as mechanical objects are concerned, the dividing line was at around 60 atoms," Professor Cleland said. "With this experiment, we've shown that the dividing line can be pushed up all the way to about a trillion atoms." The ability to create these superpositions of states and to read them out using the same circuit that created them would make for a quantum-based memory storage system - the heart of a potential quantum computer.

[Image caption: Previously, the largest quantum state was achieved in a buckyball.]

Markus Aspelmeyer of the University of Vienna believes that the mechanical oscillator approach will, in time, prove its worth in the business of quantum computing. "What they've shown here is a mechanical oscillator as a completely new quantum system, and I personally think it's a really important one," he told BBC News. "It means that you can now utilise mechanical resonators in quantum experiments and that opens a completely new perspective, in particular for quantum information science."

Although these tiny resonators could be made in huge arrays using techniques that are standard in the computer industry, Professor Cleland says that using different systems based on photons instead of vibrations would most likely perform better in any eventual computers. But, he said, the devices might be used in reverse, to detect the tiniest of vibrations that are created when light interacts with matter or when chemical reactions take place.

In either case, these devices have added to the debate about quantum mechanics and whether its surprising and, as Albert Einstein famously put it, "spooky" effects play a role in the everyday objects around us. "I don't think there is a limit, that there will be a certain size where quantum mechanics starts to break down," Dr Aspelmeyer said. "The larger we go, it becomes increasingly difficult and we will bump into more and more practical limitations. So the only reason that things could break down is that we run out of money."
<urn:uuid:356ec801-1d4f-42cf-942b-d9f4f0c85028>
CC-MAIN-2014-52
http://news.bbc.co.uk/2/hi/8570836.stm
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770633.72/warc/CC-MAIN-20141217075250-00110-ip-10-231-17-201.ec2.internal.warc.gz
en
0.959516
1,209
3.765625
4
How close are we to the quantum computational revolution?

Quantum computers promise drastic speedups for tackling the most complex mathematical problems. Nonetheless, current precursors of quantum computers cannot be scaled efficiently to reasonably sized systems. Now, researchers have realized a new setup that can be scaled more easily than ever before.

[Image caption: The heart of the quantum computer. This is where the ions are physically stored and processed, surrounded by lasers, electronics and vacuum systems. Tiny trap segments located at the end of this bar confine and control the ions. Quantum information processing and cooling are done by shining laser beams onto the ions. Credit: J. Jost/NIST.]

Imagine an engineer having to work with materials that are constantly changing: iron morphing into wax, wood draining off as water, or cement disintegrating into ashes. Luckily, in classical physics this is not a very common problem — making life relatively easy for classical engineers. In quantum physics, however, things are different: even the slightest uncontrolled interaction can potentially turn quantum objects into classical ones. As a consequence, it is extremely difficult to build large setups that utilize the quantum nature of matter, such as scalable quantum computers. Now, researchers led by David Wineland at the National Institute of Standards and Technology (NIST) in Boulder (Colorado, USA) have been able to realize a scalable setup for quantum computation.

Classical computers have increased their computational power enormously over the last decades — so why are scientists interested in the possibility of quantum computers? On the one hand, advances in microelectronics have depended largely on continued miniaturization, and engineers are now starting to reach the fundamental quantum-physical limit that will make it impossible to further miniaturize classical technology. On the other hand, classical computers are intrinsically sequential: they execute a list of instructions one after the other. Their limits become evident with certain tasks that do not naturally conform to a sequential solution, such as factorizing large numbers, sorting long lists, or simulating complex systems. Today's high-power computers, therefore, use multiple processors which share the workload like mechanics building different parts of an apparatus. Even this parallel approach, however, remains substantially sequential and can only cope with a few small subsystems at any one given time.

[Image caption: Quantum computation schematic. (1) Qubits are prepared or read out individually, in spatially different zones. (2) Two qubits are brought together in order to perform a two-qubit operation. (3, 4) Qubits can be transported to and from other trap segments, allowing for the experimental realization of complex quantum algorithms involving several interacting qubits. Credit: NIST.]

Quantum computers would be able to perform intrinsically parallel, collective algorithms. All the parts of the quantum system can be directly connected and made to simultaneously respond, thereby potentially solving complex problems a lot faster. Being so deeply connected, however, is extremely delicate, since it also implies that any uncontrolled interactions may break the quantum parallelism needed for quantum computation.
Of course, if a quantum particle loses its quantumness — a process called decoherence — it does not usually disappear; it simply turns itself — and possibly even part of the system it is connected to — into a classical object that can no longer be used for quantum computation.

Today, most setups for quantum computation use about 5-10 qubits (quantum bits). This is not yet enough to outperform classical computers. For ground-breaking results, far more qubits — between 50 and several thousand — are expected to be necessary. Therefore, it is important to find ways to scale up a quantum computing device to a large number of qubits. Scalability here refers to how easily the setup can be extended to a larger number of qubits. At present, miniaturization — scaling down the physical size — is not yet a major concern, since current error rates are still too high for performing large quantum computations reliably anyway.

Quantum algorithms can be implemented with operations acting on at most two qubits at a time. Therefore, given enough qubits and small error rates, sequential treatment of one, or at most two, qubits is enough to implement intrinsically parallel quantum algorithms — nature itself will take care of the rest. The most complex physical, mathematical and engineering problems could thus be tackled in a completely new way, possibly revolutionizing science and technology. One consequence, for example, would be that we would no longer be able to trust current cryptographic protocols used for credit cards and internet security. At the same time, much safer technologies would be implemented exploiting related quantum information technologies, such as quantum key distribution.

There are currently many different approaches to quantum computation. Physically, qubits can be realized using, for example, neutral atoms, ions or even superconducting materials. The NIST setup, in particular, uses an array of radio-frequency traps, each able to cope with a small number of ions. The setup was divided into regions for storing qubits, regions for performing quantum operations and regions for transporting the qubits. The big problem with this approach is the heating of the ions during transport, because hot ions are likely to emit photons, thereby altering their internal state and thus the qubit.

NIST researchers have found a way to inhibit the adverse effects of heating during transport, thereby securing the scalability of their experiment. "Our trick was to use two types of ions in our experiments," Jonathan Home explains, "one for carrying the quantum information and one as a cooling agent." Home performed the NIST experiment and points out how important the controlled interplay between the ions is. "Beryllium ions have a favorable atomic level structure for storing our quantum information, which renders them essentially insensitive to remaining external magnetic fields," Home continues, "and Magnesium can be cooled efficiently without disturbing the Beryllium ions."

Beryllium-magnesium pairs were used for their experiments. During transport the ions heated up; once at the target location, however, they could be cooled back down to the low temperatures necessary for performing quantum operations, using dedicated lasers. Achieving this with single quantum bits, as well as with pairs, constitutes the proof that all the operations necessary for a quantum computer could be achieved using this same, scalable setup.
"Today, quantum computers are probably at the stage at which conventional computers were in the first half of the 20th century," says Renato Renner, head of the Quantum Information Theory group at the Swiss Federal Institute of Technology in Zurich (ETH). Ion trap technology, according to Renner, is a very promising way for studying quantum computers, even though he expects further technological and scientific breakthroughs — similar to a transistor replacing tubes for classical computation — to be necessary before real quantum computers will be built. "Nonetheless," Renner insists, "being able to perform all the relevant steps, including transport, in the same scalable experiment is a great achievement." "The usefulness of current setups, as both Home and Renner agree, is in exploring quantum," Renner adds, "even quantum simulators, the precursors of general purpose quantum computers, might soon be useful for studying open problems in quantum physics, for example." "Quantum information," Home concludes, "has been a very fast-paced topic over the last couple of years and it is simply extremely fascinating to see how we advance in our concepts in mathematics, our understanding of physics, and our possibilities in engineering. It is truly exciting to see how all of this evolves!" AN is currently working on his PhD on disordered ultracold quantum systems at ICFO - The Institute of Photonic Sciences in Barcelona (Spain). Jonathan P. Home, David Hanneke, John D. Jost, Jason M. Amini, Dietrich Leibfried & David J. Wineland, Complete Methods Set for Scalable Ion Trap Quantum Information Processing, Science (2009) 325, 1227-1230 (link). How to compute with qubits. (Video) Animation of the NIST experiment on sustained quantum information processing. Credit: Jonathan Home/NIST.
<urn:uuid:aa579d42-ff58-4fc3-8af8-324e11b43c59>
CC-MAIN-2014-52
http://www.opfocus.org/index.php?topic=story&v=7&s=6
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768977.107/warc/CC-MAIN-20141217075248-00154-ip-10-231-17-201.ec2.internal.warc.gz
en
0.936192
1,641
3.890625
4
Physicists at the National Institute of Standards and Technology have designed and built a novel electromagnetic trap for ions that could be easily mass produced to potentially make quantum computers large enough for practical use. The new trap, described in the June 30 issue of Physical Review Letters, may help scientists surmount what is currently the most significant barrier to building a working quantum computer—scaling up components and processes that have been successfully demonstrated individually.

Quantum computers would exploit the unusual behavior of the smallest particles of matter and light. Their theoretical ability to perform vast numbers of operations simultaneously has the potential to solve certain problems, such as breaking data encryption codes or searching large databases, far faster than conventional computers. Ions (electrically charged atoms) are promising candidates for use as quantum bits (qubits) in quantum computers. The NIST team, one of 18 research groups worldwide experimenting with ion qubits, previously has demonstrated at a rudimentary level all the basic building blocks for a quantum computer, including key processes such as error correction, and also has proposed a large-scale architecture.

The new NIST trap is the first functional ion trap in which all electrodes are arranged in one horizontal layer, a "chip-like" geometry that is much easier to manufacture than previous ion traps with two or three layers of electrodes. The new trap, which has gold electrodes that confine ions about 40 micrometers above the electrodes, was constructed using standard microfabrication techniques. NIST scientists report that their single-layer device can trap a dozen magnesium ions without generating too much heat from electrode voltage fluctuations—also an important factor, because heating has limited the prospects for previous small traps. Microscale traps are desirable because the smaller the trap, the faster the future computer. Work is continuing at NIST and at collaborating industrial and federal labs to build single-layer traps with more complex structures in which perhaps 10 to 15 ions eventually could be manipulated with lasers to carry out logic operations.

Quantum Information Research at NIST: Goals and Vision

America's future prosperity and security may rely in part on the exotic properties of some of the smallest particles in nature. Research on quantum information (QI) seeks to control and exploit these properties for scientific and societal benefits. This remarkable field combines physics, information science, and mathematics in an effort to design nanotechnologies that may accomplish feats considered impossible with today's technology. QI researchers are already generating "unbreakable" codes for ultra-secure encryption. They may someday build quantum computers that can solve problems in seconds that today's best supercomputers could not solve in years. QI has the potential to expand and strengthen the U.S. economy and security in the 21st century just as transistors and lasers did in the 20th century. Nations around the world are investing heavily in QI research in recognition of the economic and security implications.

A significant part of the U.S. effort is based at the National Institute of Standards and Technology (NIST), which has the largest internal QI research program of any federal agency. NIST laboratories routinely develop the measurement and standards infrastructure needed to promote innovation in emerging fields that may transform the future.
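For a sense of the "searching large databases" speedup mentioned at the top of this article, here is the standard textbook scaling for unstructured search (Grover's algorithm); the numbers are illustrative and not figures from the NIST work:

```python
# Rough scaling comparison for unstructured search: a classical scan needs about
# N/2 queries on average, while Grover's algorithm needs about (pi/4)*sqrt(N).
import math

N = 1_000_000                                       # hypothetical database size
print(N // 2, round(math.pi / 4 * math.sqrt(N)))    # 500000 vs ~785 queries
```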
Few fields need this support as much as QI, which involves entirely new concepts of information processing as well as complex hardware for precision control of individual atoms, very small quantities of light, and electrical currents 1 billion times weaker than those in light bulbs. As the nation’s measurement experts, NIST researchers long have had world-class capabilities in precision measurement and control of atoms, light, and other quantum systems. NIST, therefore, has the world-class skills and facilities needed to advance QI through technology demonstrations, development of new methods and tests for evaluating QI system components, and related scientific discoveries. NIST first became involved in quantum information science in the early 1990s when physicist David Wineland and colleagues realized that engineering of exotic quantum states could lead to a significantly more precise atomic clock. A few years later, Wineland demonstrated the first quantum logic operation, a pioneering step toward a future quantum computer using ions (electrically charged atoms) to process information. In 1999, the NIST Physics Laboratory launched a broader Quantum Information Program, joined shortly thereafter by NIST’s Information Technology Laboratory and Electronics and Electrical Engineering Laboratory. This interdisciplinary program, featuring strong collaborations among physicists, electrical engineers, mathematicians, and computer scientists, has established NIST as one of the premier QI programs in the world. Participants include Wineland, a NIST Fellow and Presidential Rank Award winner; physicist William D. Phillips, a 1997 Nobel Prize winner in physics; mathematician Emanuel Knill, a leading QI theorist; and physicist Sae Woo Nam, winner of a Presidential Early Career Award for Scientists and Engineers. A total of nine technical divisions within three different laboratories at NIST’s Gaithersburg and Boulder campuses are involved. NIST’s work in ion-trap quantum computing is widely recognized as one of the most advanced QI efforts in the world. Scientists building the NIST quantum communications testbed set a record in 2004 for the fastest system for distributing quantum cryptographic “keys,” codes for encrypting messages that, due to the peculiarities of quantum physics, cannot be intercepted without detection. Other NIST research with single photon sources and detectors, and computing with neutral atoms and “artificial atoms,” are also among the leading efforts worldwide. For instance, prospects for practical quantum communications have been improved by NIST’s recent demonstration of a device that detects single photons with 88 percent efficiency, a QI record. There is strong synergy between NIST’s core mission work on measurement and standards and the QI research program. For instance, NIST scientists gained much of their expertise in quantum systems from decades of work developing atomic clocks. NIST’s ultra-precise atomic fountain clock—the world’s most accurate device for measuring time—is based on the precise manipulation and measurement of two quantum energy levels in the cesium atom. This clock would neither gain nor lose one second in 60 million years (as of March 2005), an accuracy level that is continually being improved. NIST quantum computing research is producing new techniques that may lead to even more accurate atomic clocks in the future. Ultimately, NIST measurements, tests, and technologies for quantum information science are helping U.S. 
industry develop new information technologies in an effort to ensure U.S. technological leadership and strengthen national security. The United States may have the lead in this field for now—based in part on NIST's contributions—but competition from Europe, Japan, Australia, and developing countries such as China is strong and growing.

Citation: S. Seidelin, J. Chiaverini, R. Reichle, J.J. Bollinger, D. Leibfried, J. Britton, J.H. Wesenberg, R.B. Blakestad, R.J. Epstein, D.B. Hume, W.M. Itano, J.D. Jost, C. Langer, R. Ozeri, N. Shiga, and D.J. Wineland. 2006. A microfabricated surface-electrode ion trap for scalable quantum information processing. Physical Review Letters. June 30.
Quantum scheme lightens load

Technology Research News

Two years ago, scientists proved it possible to build a quantum computer from simple optical equipment commonly found in university classrooms and laboratories. Now researchers at Johns Hopkins University have refined the approach, reducing the amount of equipment linear optical quantum computers would need by about two orders of magnitude.

Quantum computers use the weird nature of particles like atoms, electrons and photons to perform many computations in parallel. If a big enough quantum computer could be built, it would far outstrip classical computers for solving certain problems like cracking secret codes. So far, however, only the most rudimentary quantum prototypes have been constructed.

The Johns Hopkins plan shows that equipment like mirrors, half mirrors and phase shifters could be used to make practical, photon-based quantum computers, said James Franson, principal staff at the Johns Hopkins University Applied Physics Laboratory and a research professor in the university's electrical and computer engineering department. "Our approach may make it more feasible to develop a full-scale quantum computer," he said.

Controlling single photons using linear optics equipment is simpler than manipulating individual or small numbers of atoms or electrons, which are the basic units of most other quantum computing schemes. Capturing and manipulating atoms and electrons involves precisely tuned lasers or magnetic fields, or carefully constructed microscopic devices. It's also much harder to transport isolated atoms and electrons than it is to move photons.

"An optical approach to quantum computing would have a number of potential advantages, including the ability to connect different devices using optical fibers in analogy with the wires of a conventional computer," said Franson.

Linear optical quantum computers, like ordinary electronic computers, would use circuits that link simple logic devices in intricate patterns that make the output from one device the input to the next. The 1s and 0s of linear optical quantum computing would be represented by properties of photons like horizontal versus vertical polarization rather than the presence or absence of a current of electrons.

The potential power of any type of quantum computer comes from its ability to examine all possible solutions to a problem at once rather than having to check one at a time. This is possible because when a particle like a photon is isolated from its environment it is in the weird quantum state of superposition, meaning it can be horizontally and vertically polarized at once, and so can represent a mix of 1 and 0. This allows a string of photons in superposition to represent every combination of 1s and 0s at the same time so that a quantum computer could process all the numbers that represent possible solutions to a problem using one set of operations on the single string of photons.

Linear optical devices perform quantum logic operations by altering photons according to probabilities. Half mirrors, or beam splitters, for example, can direct photons along one of two paths, with an even chance for each.

The challenge of linear optical quantum computing is to pass the correct result of a quantum logic operation from one device to the next without directly observing the states of the photons that represent the results, because this would change the states and therefore destroy the information the photons contain.
The trick is to put additional photons through the logic operation at the same time. These additional, ancilla photons trigger the optical circuitry that passes along the output of the logic operation when the result of the operation is correct. The ancilla photons are absorbed in photon detectors in the circuitry, but the output photons are preserved and passed on.

The key advance in the Johns Hopkins researchers' approach is that it uses fewer ancilla photons by entangling input and ancilla photons in a way that minimizes the probability of errors, said Franson. When two or more particles in superposition come into contact with each other, they can become entangled, meaning one or more of their properties change in lockstep even if the particles are separated. Fewer ancilla photons means fewer pieces of equipment are needed.

"Using the current error correction techniques, our high-fidelity approach should reduce the [equipment] required by roughly two orders of magnitude," said Franson. The amount of equipment required to generate the entangled ancilla state and the probability of an error "both increase rapidly with increasing numbers of ancilla photons," he said.

The original linear optical quantum computing scheme had an average error rate of 2/n, while the researchers' refined scheme has an average error rate of 4/n². Here n represents the number of ancilla photons. This translates to error rates of 20 percent versus 4 percent for 10 ancilla photons, and 2 percent versus 0.04 percent for 100 ancilla photons.

This gives the Johns Hopkins scheme a practical error rate with far fewer ancilla photons, said Franson. Quantum error correction will require error rates on the order of 0.1 to 0.01 percent, he said. "That range of errors could be achieved with 100 ancilla in our case, but that would require 5,000 ancilla in the original... method."

Because the scheme requires fewer mirrors and beam splitters to manipulate the smaller number of ancilla photons, it makes it more likely that a practical linear optical quantum computer could be built, said Jonathan Dowling, supervisor of the quantum computing technologies group at NASA's Jet Propulsion Laboratory. The researchers' method "seems to be a substantial improvement over the original scheme," he said.

Devices enabled by this new approach will be used in quantum communications systems before they are used in full-blown quantum computers, said Dowling. With experience gained from making quantum communications devices, the researchers' approach will eventually lead to "a practical, compact, all-optical quantum computer," he said.

Dowling's group has developed a plan for a quantum repeater, a device necessary to boost quantum communications over long distances, that is based in part on the researchers' linear optical quantum logic, said Dowling.

The researchers have shown that the overhead needed to achieve a given fidelity for linear optical quantum logic gates can be significantly improved, said Emanuel Knill, a mathematician at Los Alamos National Laboratory and one of the scientists who developed the concept of linear optical quantum computing.

The Johns Hopkins researchers' approach does not address logical qubits, however, said Knill. Logical qubits are encoded from two or more physical qubits, and this makes them more resistant to errors. "My preference is to use logical qubits," said Knill.
"If one wishes to use physical, not logical, qubits, then the authors' approach would help significantly," Quantum repeaters could be developed in five years, said Franson. "Full-scale quantum computers would be much more difficult and would probably require 15 to 20 years in the most optimistic scenario," he said. The researchers are working on making photon-based logic gates and memory devices, and single-photon sources, said Franson. "These are the basic building blocks of a linear optics approach to quantum computing," he Franson's research colleagues were Michelle Donegan, Michael Fitch, Bryan Jacobs, and Todd Pittman. They published the research in the September 23, 2002 issue of the journal Physical Review Letters. The research was funded by the Office of Naval Research (ONR), the Army Research Office (ARO), the National Security Agency (NSA) and the Department of Defense (DOD) Independent Research and Development Program (IR&D). Timeline: 5 years, 15-20 years TRN Categories: Physics Quantum Computing and Communications Story Type: News Related Elements: Technical paper, "High-Fidelity Quantum Logic Operations Using Linear Optical Elements," Physical Review Letters, September 23, 2002 Chemists brew tiny wires Voiceprints make crypto Stamp corrals tiny bits Net devices arranged Quantum scheme lightens Research News Roundup Research Watch blog View from the High Ground Q&A How It Works News | Blog Buy an ad link
Quantum cryptography is often touted as the ultimate in information security, but that doesn't make it immune to successful attack. A recent publication in IEEE Transactions on Information Theory details how the very process of ensuring security can be used by evildoers to send fake messages on a network. As all good cryptography researchers do, the authors also include a method for defeating the attack.

The security provided by a quantum system relies on the fundamental laws of nature rather than the inability of computers to factor large numbers efficiently. The sender, traditionally called Alice, encodes information in the quantum states of, for instance, light. The recipient, imaginatively referred to as Bob, measures the quantum state. That measurement depends on what is called the basis and, if Bob and Alice don't have the same basis, Bob will not receive the same information that Alice sent. This feature is used to generate a secret key that can then be used to send information over more public channels.

Generating a key

The key generation process looks like this. Alice takes a random string of ones and zeros and encodes them in the quantum states of light. In doing so, she doesn't use the same basis every time, but rather flips randomly between two different basis sets. Bob also flips his basis sets and records the bit values that he receives. He then transmits his basis flips to Alice and she sends her basis flips to Bob. In those cases where, at random, the two happened to use the same basis, the bit values encoded by Alice are used as the key.

An eavesdropper (who, amazingly enough, is always called Eve) can obtain all the publicly sent information and still not obtain the secret key. If she attempts to measure the quantum bits, they will be modified, meaning that Alice and Bob will see errors even in the bits where their bases were the same.

One vulnerability of this system is the man-in-the-middle attack, where Eve plays the role of Alice for Bob and Bob for Alice. Every security system fails at this point because sometimes you have to trust that Alice really is Alice. One way to try to ensure the security of the exchange is to begin communications using a small, shared key. This key is then expanded using the quantum cryptographic system. Part of the expanded key is set aside so it can act as the shared key that initiates the next session. The remainder is used to encode messages sent in the current session. Assuming Eve has no knowledge of the starting key, the system is secure.

But what if Eve knows some of the key already? Well, then problems can arise. Eve can grab the full key provided certain conditions are met: first, she has to be able to capture the quantum and classical information sent by Alice before Bob sees it. Second, she has to be able to modify the information in the quantum channel—a modification that may not necessarily be detectable, since it does not require measuring the quantum state—though I am not certain that this is truly practical. If these conditions are met, then Eve may be able to obtain the key for this session and, by extension, all future sessions.

Probabilities and coincidences

The explanation for how this works is a little technical but it involves probabilities. The key is generated from coincidences in two sets of random numbers, meaning that any number within a bit range is equally probable.
However, if Eve has part of the key, it can be used to break up the distribution of possible numbers, making some of them much more probable while completely eliminating others. Eve can then modify the information in the quantum channel to make just a few numbers within the distribution much more probable. Since Eve has not measured the information in the quantum channel, and the information in the classical channel is public, Alice and Bob remain unaware of Eve. At this point, Eve can simply try out the few remaining possible keys on various messages until she achieves success. Since sessions using the same key will last for a long time, Eve can be sure to get some of the good sauce from Alice and Bob.

So, what can Alice and Bob do about this? There are several solutions, which mainly involve making sure that Eve cannot delay transmissions in the quantum channel long enough to be able to modify it after receiving the classical information. What the authors propose is similar, but offers a guarantee that the message was not delayed.

In their scheme, Alice sends a random string of ones and zeros on the quantum channel. Bob selects a bunch of bits from the message at random and sends them back to Alice using the quantum channel. Alice evaluates the bits and adds them to the bit string generated by the basis flips. This is then sent to Bob, who replies by sending his basis flips, and the key is generated. Now Eve cannot modify Alice's message before sending it on to Bob because she does not have the basis state string required to modify the message.

So what does this all mean? It means that a security protocol that is designed to counter a threat that does not yet exist (quantum computing) is slightly more secure than it was yesterday.

IEEE Transactions on Information Theory, 2008, DOI: 10.1109/TIT.2008.917697
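For readers who want to see the sifting step from the key-generation description above in action, here is a toy, noise-free simulation in Python (an illustrative sketch only; a real system encodes the bits in quantum states of light and must also handle loss, noise, and eavesdropping checks).

```python
import secrets

# Idealized run with no eavesdropper; bases: 0 = rectilinear, 1 = diagonal.
N = 20
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# If Bob happens to measure in Alice's basis he recovers her bit;
# otherwise his result is random.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# The bases (not the bits) are compared publicly; only matching positions are kept.
sifted_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

assert sifted_alice == sifted_bob        # holds on this ideal channel
print(f"Kept {len(sifted_alice)} of {N} bits:", sifted_alice)
```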
Pairs of qubits

Pairs of qubits are much, much more than the sum of their parts. Classical bits only become marginally more interesting when paired—it literally only makes the difference between counting to two and counting to four. Pairs of quantum bits, on the other hand, can be used to create entanglement, a phenomenon so... well, disturbing that one of the most controversial arguments in 20th century physics revolved around whether it could exist at all.

Before talking about the strange things that can be done using pairs of qubits, let's talk about the things that can't. Like copying qubits. The most basic operation one can perform using classical bits is to copy the value of one bit into another bit. Simple, right? Not really. When we want to copy a single classical bit, we really perform two operations in sequence:

- Measure both bits.
- If they don't match, flip the second one.

Uh-oh. Not only can a single qubit take on a whole sphere full of values, it can only be measured along a single axis at a time. Not only that, but measuring it changes its state from whatever it was before the measurement to whatever state the measurement produced. That's a problem. In fact, it can be proven that even in principle it's not possible to copy an unknown qubit's state. You can move it—that's called quantum teleportation—but, just like in Star Trek, teleportation just moves the state from one place to another. It doesn't make a copy.

Why, then, is classical copying allowed? Classical bits act exactly like quantum bits that never leave a single axis on the sphere. If an unknown qubit is constrained to a single axis (as it is after a measurement along that axis), the classical recipe of measuring and flipping will work fine. But it only works for that one axis. This leads us to another broken classical assumption:

Classical Theory: All information can be perfectly copied.
Quantum Theory: Only the results of a measurement can be copied.

OK, no copying. Strange—definitely strange—but that doesn't sound worthy of a 75-year argument which had Nobel laureates on both teams. Let's get to the good stuff. Let's get to entanglement.

At the heart of entanglement is the concept of correlation, or how the results of measurements relate to one another. Specifically, it's about whether the results of two measurements are the same (correlation) or different (anti-correlation). This sounds too easy. For states that are on the surface of the sphere, the two measurements will correlate if the states fall on the same axis as the measurement. Right? Time to eliminate another fundamental assumption about information.

Classical Theory: The state of multiple bits is defined by the states of all of the individual bits.
Quantum Theory: The whole is greater than the sum of its parts.

Many, many two-qubit states cannot be completely described by the state of the first qubit and the state of the second qubit. We call the states which are just a combination of the individual qubit states separable; we call all other states entangled, because they exhibit extra correlations that simple, single-qubit descriptions miss.

Consider the "singlet state," an example of an entangled two-qubit state. A singlet state has two defining characteristics:

- Any single-qubit measurement performed on one half of the singlet state will give a totally random result.
- Any time the same single-qubit measurement is performed on both qubits in a singlet state, the two measurements will give opposite results.

The first characteristic sounds like a pair of single qubit states plotted at the origin, the point that divides every measurement axis in half. The second characteristic, that of perfect anti-correlation, is an entirely new phenomenon. This second "rule of singlet states" means that, if horizontal/vertical measurements are made on the two qubits, one qubit will always be measured as H and one will always be measured as V. Which is which will be completely random.

If you've read the last paragraph carefully, this should seem very, very strange. Even impossible. Imagine if someone showed you a pair of coins, claiming that when both were flipped at the same time, one would always come up heads and one would always come up tails, but that which was which would be totally random. What if they claimed that this trick would work instantly, even if the coins were on opposite sides of the Universe?

You would probably say that's impossible. Albert Einstein did. In 1935, in one of the most famous scientific papers of all time, Einstein, Podolsky, and Rosen argued that because quantum mechanics allowed exactly this type of strange action at a distance, it must not be complete. Some part of the theory had to be missing. In effect, they claimed that some extra information (called hidden variables) was programmed into the coins—although they seem random, they really only show correlation because of hidden instructions which tell the coins which way to flip. After all, dice seem random, but if you know precisely how a die is rolling, you can predict its outcome.

This assumption—that, in principle, the outcome of any experiment is predictable—is called realism. The EPR paper coupled this assumption with another basic assumption, locality, which states that events that are very far away can't affect nearby outcomes (unless there's enough time for a signal to travel between the two events). They showed that as long as local realism is true, quantum mechanics can't be the whole story.

For nearly 30 years, the EPR paradox went unresolved. Finally, in 1964, John Bell proposed an experiment which could directly measure the paradox and, if performed, disprove local realism. He proposed creating a stream of identical singlet states and, for each state, separating the first qubit from the second. In separate locations, each qubit would be randomly subjected to one of two measurements. If the results exhibited too much of the right types of correlation and anti-correlation—as defined by John Bell's equations—it would prove that a locally realistic universe could not exist.

Over the past three decades, this experiment has been performed in many different settings using many different types of particles. The Bell experiment has most commonly been performed using polarized photons and the following procedure:

- Create many copies of a singlet state (i.e., many pairs of entangled photons).
- Send the first photon in every pair through a polarizer. Randomly choose, for every photon, whether to orient this polarizer at 90 degrees (V) or at 45 degrees (D). These two quantum measurements correspond to the measurements (the red arrows) shown on the first sphere in Figure 8.
- In the same way, send the second photon in every pair through a polarizer. Randomly choose, for every photon, whether to orient this polarizer at 22.5 degrees or at 67.5 degrees (corresponding to the red arrows on the second sphere in Figure 8).
- Count the number of times the measurement results matched (exhibited correlation) and the number of times they didn't.

When this experiment is performed, the results are incredibly surprising. To illustrate why the results are so surprising, I will describe an equivalent implementation of the Bell experiment which, to the best of my knowledge, has never been performed: I call it "The Nemesis Experiment".

To perform this experiment, we're going to need 1000 pairs of people to play the part of singlet states. Remember that the singlet state is the permanently anti-correlated entangled state, and so we can't use just any pairs of people. We need arch-enemies.
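Before moving on, it is worth seeing how strong the quantum correlations in this experiment are predicted to be. The short sketch below computes the standard quantum-mechanical prediction for the polarizer angles listed above, assuming the textbook singlet correlation E(a, b) = -cos(2(a - b)) and a CHSH-style combination of the four correlations (the exact form of "Bell's equations" is my assumption here, since the article does not spell it out). Any locally realistic model keeps the result at or below 2, while the singlet state reaches about 2.83.

```python
import math

# Quantum prediction for the singlet polarization correlation between
# polarizers at angles a and b (degrees): E(a, b) = -cos(2*(a - b)).
def E(a, b):
    return -math.cos(math.radians(2 * (a - b)))

A = (90.0, 45.0)      # first photon:  V (90 deg) or D (45 deg)
B = (22.5, 67.5)      # second photon: 22.5 deg or 67.5 deg

# CHSH combination: three correlations added, one subtracted; try every
# placement of the minus sign and keep the largest magnitude.
terms = [E(a, b) for a in A for b in B]
S = max(abs(sum(terms) - 2 * t) for t in terms)

print(f"|S| = {S:.3f}  (any locally realistic model obeys |S| <= 2)")
```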
BRIAN HOYLE

A supercomputer is a powerful computer that possesses the capacity to store and process far more information than is possible using a conventional personal computer.

An illustrative comparison can be made between the hard drive capacity of a personal computer and a supercomputer. Hard drive capacity is measured in terms of gigabytes. A gigabyte is one billion bytes. A byte is a unit of data that is eight binary digits (i.e., 0's and 1's) long; this is enough data to represent a number, letter, or a typographic symbol. Premium personal computers have a hard drive that is capable of storing on the order of 30 gigabytes of information. In contrast, a supercomputer has a capacity of 200 to 300 gigabytes or more.

Another useful comparison between supercomputers and personal computers is in the number of processors in each machine. A processor is the circuitry responsible for handling the instructions that drive a computer. Personal computers have a single processor. The largest supercomputers have thousands of processors. This enormous computation power makes supercomputers capable of handling large amounts of data and processing information extremely quickly.

For example, in April 2002, a Japanese supercomputer that contains 5,104 processors established a calculation speed record of 35,600 gigaflops (a gigaflop is one billion mathematical calculations per second). This exceeded the old record that was held by the ASCI White-Pacific supercomputer located at the Lawrence Livermore National Laboratory in Livermore, California. The Livermore supercomputer, which is equipped with over 7,000 processors, achieves 7,226 gigaflops.

These speeds are a far cry from the first successful supercomputer, the CDC 6600, which was designed by Seymour Cray (founder of the Cray Corporation) in 1964. His computer had a speed of 9 megaflops, thousands of times slower than the present day versions. Still, at that time, the CDC 6600 was an impressive advance in computer technology.

Beginning around 1995, another approach to designing supercomputers appeared. In grid computing, thousands of individual computers are networked together, even via the Internet. The combined computational power can exceed that of the all-in-one supercomputer at far less cost. In the grid approach, a problem can be broken down into components, and the components can be parceled out to the various computers. As the component problems are solved, the solutions are pieced back together mathematically to generate the overall solution.

The phenomenally fast calculation speeds of the present day supercomputers essentially correspond to "real time," meaning an event can be monitored or analyzed as it occurs. For example, a detailed weather map, which would take a personal computer several days to compile, can be compiled on a supercomputer in just a few minutes.

Supercomputers like the Japanese version are built to model events such as climate change, global warming, and earthquake patterns. Increasingly, however, supercomputers are being used for security purposes such as the analysis of electronic transmissions (i.e., email, faxes, telephone calls) for codes. For example, a network of supercomputers and satellites that is called Echelon is used to monitor electronic communications in the United States, Canada, United Kingdom, Australia, and New Zealand. The stated purpose of Echelon is to combat terrorism and organized crime activities.

The next generation of supercomputers is under development.
Three particularly promising technologies are being explored.

The first of these is optical computing. Light is used instead of electrons to carry information. Light moves much faster than an electron can, therefore the speed of transmission is greater.

The second technology is known as DNA computing. Here, calculations are performed by recombining strands of DNA in different sequences. The sequence(s) that are favored and persist represent the optimal solution. Solutions to problems can be deduced even before the problem has actually appeared.

The third technology is called quantum computing. Properties of atoms or nuclei, designated as quantum bits, or qubits, would be the computer's processor and memory. A quantum computer would be capable of doing a computation by working on many aspects of the problem at the same time, on many different numbers at once, then using these partial results to arrive at a single answer. For example, deciphering the correct code from a 400-digit number would take a supercomputer millions of years. However, a quantum computer that is about the size of a teacup could do the job in about a year.

FURTHER READING:

Stork, David G. (ed.) and Arthur C. Clarke. HAL's Legacy: 2001's Computer Dream and Reality. Boston: MIT Press, 1998.

Cray Corporation. "What Is a Supercomputer?" Supercomputing. 2002. <http://www.cray.com/supercomputing> (15 December 2002).

The History of Computing Foundation. "Introduction to Supercomputers." Supercomputers. October 13, 2002. <http://www.thocp.net/hardware/supercomputers.htm> (15 December 2002).
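Returning to the grid-computing idea described in this entry: the split-compute-combine pattern is easy to sketch in a few lines of Python, with local worker processes standing in for the networked computers of a real grid (an illustration only, not how production grid systems are built).

```python
from multiprocessing import Pool

# Split a big job into independent components, farm them out to workers,
# then piece the partial solutions back together.
def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    N = 10_000_000
    chunks = [(i, min(i + 1_000_000, N)) for i in range(0, N, 1_000_000)]
    with Pool() as pool:
        pieces = pool.map(partial_sum, chunks)    # parcel out the components
    total = sum(pieces)                           # recombine the solutions
    print(total == sum(i * i for i in range(N)))  # True
```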
Where would we be without singlet states? Almost all molecules in nature—and in our bodies—exist as singlet states, which arise when two particles with a spin of 1/2 combine into an eigenstate with zero spin. Their most common occurrence is in stable atomic or molecular orbitals. Their combination into a spinless state frees the pair from angular/magnetic momenta, leading to a particularly stable diamagnetic combination. Because they don’t have a net magnetic moment, singlets are long-lived states. This is a property with important practical implications for nuclear magnetic resonance (NMR): although singlets are not directly measurable in NMR, their long lifetime can be exploited to enhance the sensitivity of NMR experiments and extend the range of dynamic phenomena that NMR can probe [in either its spectroscopic (NMR) or imaging (MRI) modes].

A sine qua non condition for creating such singlet states is that their constituents be identical—or at least so we believed. But now, a study in Physical Review Letters by Meike Emondts from RWTH Aachen University, Germany, and co-workers demonstrates a singletlike state made of two different spin-1/2 particles: a hydrogen (¹H) nucleus and a bonded carbon-13 (¹³C) counterpart. Evidence for the singlet nature of the state they construct is given by the long lifetime of this state, which exceeds the lifetime of either atom by a factor of 3. The demonstration of heteronuclear singlets thus challenges theoretical preconceptions and dramatically extends the range of systems that can be potentially probed via enhanced forms of singlet-state-based NMR.

The creation of long-lived singlets has been previously reported for ions and electrons but required complex manipulations. Nuclear spins, which interact more weakly with the environment, can offer a more sheltered “playground” for studying unusual spin states. Singlet nuclear states have, in fact, long been known and manipulated, starting with the discovery of the spin isomer of the H₂ molecule known as parahydrogen. This isomer can be prepared by cooling H₂ below its characteristic rotational temperature (88 kelvin), and it is characterized by having its two protons in the antisymmetric combination. In this singlet state, the molecule has a total nuclear spin of zero.

Although magnetically silent and thus invisible to NMR, parahydrogen can enhance the sensitivity of NMR and in vivo MRI. This is due to the fact that the singlet symmetry can be broken via a chemical reaction known as hydrogenation, whereby the two hydrogens bond to two inequivalent atoms, converting the pair’s perfect (but magnetically silent) antialignment into an effective spin polarization. This, in turn, leads to a pronounced difference in the spin populations of each hydrogen site—a so-called hyperpolarized state—enabling their resulting magnetic fields to be detected by NMR/MRI experiments with a dramatically enhanced sensitivity.

Unfortunately, there are few options for manipulating parahydrogen: other than chemical processes that break its symmetry, there are no ways of “talking” to it. Over the last decade, however, research has demonstrated that one could create an entangled singlet state out of a pair of homonuclear spins that are chemically slightly different. This can be done in a number of ways. The most intuitive option is to apply a series of so-called refocusing magnetic pulses that erase the spins’ chemical shifts (i.e., their different precession frequencies in an applied magnetic field).
Such a pulse sequence can thus “turn off” the inequivalence of protons in the two different molecules. With proper magnetic manipulations, the pair of spins can be combined to create a singlet state. This long-lived state will be conserved while the pulse sequence is executed, but by stopping the refocusing pulses, it can be later transformed back into an observable that is detectable by NMR.

Now, Emondts et al. take this concept a step further and demonstrate that such toggling is also feasible when dealing with heteronuclear spins, i.e., pairs composed of nuclei as different as ¹H and ¹³C. Key to generating a “heteronuclear singlet” is erasing the difference between the magnetic couplings that the spins will naturally present when inserted in a magnetic field. These differences will be stronger than in the homonuclear all-¹H case. They will not be determined by chemical effects but rather by the different nuclear gyromagnetic ratios of ¹H and ¹³C; i.e., by the isotope-specific constants defining the species’ NMR precession frequencies in a magnetic field.

To cut the Gordian Knot presented by this intrinsic nuclear difference, Emondts et al.’s solution is as drastic as it is simple: make the magnetic field as close to zero as possible. The authors achieved this by exploiting tricks from zero-field NMR, a “shuttling”-based technique first developed in the 1980s. To achieve the heteronuclear singlet state, this meant reducing the difference between the ¹H and ¹³C Larmor frequencies from the usual 10 to 100 megahertz values, to only 3.2 hertz. This difference is much smaller than other interactions to which the spins may be subjected, including the mutual ¹H-¹³C coupling (a form of spin-spin coupling mediated by the electrons in the molecular bonds connecting the two spins). At zero magnetic field, this coupling becomes the dominant term in the spin Hamiltonian, and a suitable shuttling of the system will thus make the spin pair evolve into one of its allowed states, one of which is a singletlike state involving the two spins. Hence a paradoxical “heteronuclear singlet” state can be originated (see Fig. 1).

Emondts et al. confirmed that they reached this state by dividing their experiment into three phases: (i) a high-field stage that polarizes the spin system, (ii) a shuttling into zero field where the original high-field eigenstates adiabatically transition into the triplet and singlet spin manifolds, and (iii) a final return to high-field conditions in order to probe the fate of the singlet and triplet populations.

The group saw both expected and unexpected features upon performing the same experiment under a variety of conditions. Among the expected results was the generation of a long-lived state, which exceeded the polarized lifetimes of each constituent by about a factor of 3; this is the landmark characteristic of a spin singlet eigenstate. Among the experiment’s most unusual features was the possibility of creating spin coherence between the triplet and singlet manifolds, whose decay is slower than the actual lifetimes of each one of the states that it involves. In spectroscopic jargon one could describe this as the creation of a subspace where the effective spin’s T2 is longer than its T1.

From a practical perspective, this new singlet-spin entity could find applications in hyperpolarized NMR and MRI and in the development of more sensitive NMR probes. The latter could be used to investigate slow biomolecular dynamics or the diffusive behavior of molecules in tissues.
But at a more fundamental level, it is likely that the most remarkable impact will be the inspiration that this initial NMR observation may provide for the wider world of quantum coherent control in coupled two-level systems.

References

- M. Emondts, M. P. Ledbetter, S. Pustelny, T. Theis, B. Patton, J. W. Blanchard, M. C. Butler, D. Budker, and A. Pines, “Long-Lived Heteronuclear Spin-Singlet States in Liquids at a Zero Magnetic Field,” Phys. Rev. Lett. 112, 077601 (2014).
- D. Kielpinski, V. Meyer, M. A. Rowe, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, “A Decoherence-Free Quantum Memory Using Trapped Ions,” Science 291, 1013 (2001); C. Langer et al., “Long-Lived Qubit Memory Using Atomic Ions,” Phys. Rev. Lett. 95, 060502 (2005); S. Kotler, N. Akerman, N. Navon, Y. Glickman, and R. Ozeri, “Measurement of the Magnetic Interaction between Two Electrons,” arXiv:1312.4881 (2013).
- C. R. Bowers and D. P. Weitekamp, “Transformation of Symmetrization Order to Nuclear-Spin Magnetization by Chemical Reaction and Nuclear Magnetic Resonance,” Phys. Rev. Lett. 57, 2645 (1986); R. W. Adams et al., “Reversible Interactions with para-Hydrogen Enhance NMR Sensitivity by Polarization Transfer,” Science 323, 1708 (2009).
- Malcolm H. Levitt, “Singlet Nuclear Magnetic Resonance,” Ann. Rev. Phys. Chem. 63, 89 (2012).
- D. B. Zax, A. Bielecki, K. W. Zilm, and A. Pines, “Heteronuclear Zero-Field NMR,” Chem. Phys. Lett. 106, 550 (1984).
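As a rough numerical illustration of the “zero field” condition discussed above (a sketch using standard tabulated gyromagnetic ratios, not numbers from the paper itself): for the ¹H-¹³C Larmor-frequency difference to shrink to 3.2 Hz, the applied field has to be of order a tenth of a microtesla, hundreds of times weaker than the Earth’s magnetic field.

```python
# Standard gyromagnetic ratios (gamma / 2*pi), in MHz per tesla.
GAMMA_H_MHZ_PER_T   = 42.577   # 1H
GAMMA_C13_MHZ_PER_T = 10.708   # 13C

target_difference_hz = 3.2     # figure quoted in the article
difference_per_tesla_hz = (GAMMA_H_MHZ_PER_T - GAMMA_C13_MHZ_PER_T) * 1e6

B_tesla = target_difference_hz / difference_per_tesla_hz
print(f"Required field: {B_tesla:.2e} T (~{B_tesla * 1e6:.2f} microtesla)")
# ~1.0e-07 T, i.e. roughly 0.1 microtesla -- far below the Earth's ~50 microtesla field.
```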
How the Cell Exploits Genetic Code Degeneracy

In the context of making and analyzing codes, the term "degeneracy" refers to having excess codes that produce the same message. A non-degenerate code, like Morse code, is one for one: each code is unique, producing one and only one output. The genetic code, by contrast, is many-to-one in some cases. For instance, six different codons can produce the amino acid leucine. This would be like having six combinations of dots and dashes to produce the letter A in a "degenerate" version of Morse code. Other amino acids can be coded by 4, 3 or 2 codons, while two (methionine and tryptophan) each have only one unique code.

Why is this? One reason is that there are 20 standard amino acids used in living organisms, but 64 possible combinations of codons (4 letters in triplets, 4³ = 64). This mismatch creates the degeneracy, but also allows for multiple codons to code for the same amino acid; these are called cognates. Is there a reason for this degeneracy other than happenstance?

In a recent paper in PNAS, a trio of researchers from Harvard and the University of Chicago experimented with E. coli bacteria to study the effects of multiple codons under environmental stress. Here is their explanation of the degeneracy in the genetic code:

The genetic code governing protein synthesis is a highly degenerate system because 18 of the 20 amino acids have multiple synonymous codons and 10 of the 20 amino acids are aminoacylated (charged) onto multiple tRNA [transfer RNA] isoacceptors. (Emphasis added.)

To study the effects of the environment on the code, they first created a library of 29 versions of yellow fluorescent protein genes (yfp) using different cognate forms of the codons for leucine, arginine and serine (each with 6 cognates), proline (4 cognates), isoleucine (3), glutamine (2) and phenylalanine (2). Under normal conditions, with amino acids plentiful, each of the cognate codes in their gene library produced the same amount of YFP protein. But then they created a supply-and-demand crisis by "starving" the cells of the amino acids, one at a time. What they found was a case of "degeneracy lifting."

Degenerate states that are indistinguishable under normal conditions can exhibit distinct properties under the action of external perturbations. This effect, called degeneracy lifting, allows degenerate systems to exhibit a wide range of behaviors, depending on the environmental context.

This implies that degenerate systems provide a way to encode extra information; indeed, quantum computing and steganography exploit this capacity. It should be noted that the genetic code is not the only degenerate system in nature:

Degeneracy, the occurrence of distinct states that share a common function, is a ubiquitous property of physical and biological systems. Examples of degenerate systems include atomic spectra, condensed matter, the nervous system, and the genetic code. Degeneracy in physical systems is often associated with underlying symmetries and in biological systems with error minimization, evolvability, and robustness against perturbations.

The question becomes: does E. coli "lift" the degeneracy of the genetic code under stress, and thereby encode environmental information in the extra space?
Yes, they found:

Our study suggests that organisms can exploit degeneracy lifting as a general strategy to adapt protein synthesis to their environment.

In a clever series of experiments, they found that the cells divide the cognates into a hierarchy: those that are robust with regard to perturbations, and those that are sensitive. The robust cognates have no effect on protein synthesis levels, whereas the sensitive ones show up to a 100-fold reduction in synthesis rate.

These results are independent of tRNA supply and codon usage. Rather, competition among tRNA isoacceptors for aminoacylation underlies the robustness of protein synthesis. Remarkably, the hierarchy established using the synthetic library also explains the measured robustness of synthesis for endogenous proteins in E. coli. We further found that the same hierarchy is reflected in the fitness cost of synonymous mutations in amino acid biosynthesis genes and in the transcriptional control of σ-factor genes.

The team's results imply a "strategy" to exploit degeneracy to survive environmental stress. When tRNA isoacceptors are in short supply, the ribosome pauses and sends feedback that reduces transcription. Other effects include messenger-RNA cleavage and translation recoding -- functions induced by the environmental stress to regulate protein supply.

The authors had nothing to say about how evolution produced these regulatory effects that promote robustness in a varying environment. Instead, in concluding, they concentrated on functional design:

Here, we have investigated the effect of a specific environmental perturbation associated with amino acid limitation in the bacterium E. coli. However, this type of perturbation plays a crucial role in the life cycle of other bacteria such as Myxococcus xanthus and Bacillus subtilis that undergo differentiation cued by amino acid limitation. Protein synthesis during such differentiation events might also be regulated by degeneracy lifting of the genetic code. Moreover, degeneracy lifting could be important during protein synthesis in eukaryotes, where clinically important conditions such as neoplastic transformation and drug treatment are often accompanied by a reduction in amino acid supply. Therefore, lifting the degeneracy of the genetic code might emerge as a general strategy for biological systems to expand their repertoire of responses to environmental perturbations.

Feedback, regulation, robustness: When there turns out to be method in what appeared to be madness, it's reasonable to draw an inference to intelligent design.
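The codon bookkeeping quoted earlier in this entry is easy to verify. The sketch below (a quick check added here, not part of the original post) uses the standard degeneracy counts of the canonical genetic code to confirm that 18 of the 20 amino acids have multiple synonymous codons and that those counts, plus the three stop codons, account for all 4³ = 64 triplets.

```python
# Standard number of codons per amino acid in the canonical genetic code.
codons_per_aa = {
    "Leu": 6, "Ser": 6, "Arg": 6,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Ile": 3,
    "Phe": 2, "Tyr": 2, "His": 2, "Gln": 2, "Asn": 2,
    "Lys": 2, "Asp": 2, "Glu": 2, "Cys": 2,
    "Met": 1, "Trp": 1,
}
STOP_CODONS = 3  # UAA, UAG, UGA

assert len(codons_per_aa) == 20                            # 20 standard amino acids
assert sum(codons_per_aa.values()) + STOP_CODONS == 4 ** 3 # 64 possible triplets
multi = sum(1 for n in codons_per_aa.values() if n > 1)
print(f"{multi} of 20 amino acids have multiple synonymous codons")  # 18, as quoted
```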
Researchers have uncovered "smoking-gun" evidence to confirm the workings of an emerging class of materials that could make possible "spintronic" devices and practical quantum computers far more powerful than today's technologies. The materials are called topological insulators.

For more than 50 years, scientists have debated what turns particular oxide insulators, in which...

Research from North Carolina State Univ. shows that a type of modified titania, or titanium...

Research led by Penn State Univ. and Cornell Univ. physicists is studying "spin torque" in devices that combine a standard magnetic material with a new material known as a topological insulator. The new insulator, which is made of bismuth selenide and operates at room temperature, overcomes one of the key challenges to developing a spintronics technology based on spin-orbit coupling.

Together with teams from Finland and Japan, physicists from the Univ. of Basel in Switzerland were able to place 20 single bromine atoms on a fully insulated surface at room temperature to form the smallest “Swiss cross” ever created. The effort is a breakthrough because the fabrication of artificial structures on an insulator at room temperature is difficult. It is the largest number of atomic manipulations ever achieved at room temperature.

Vanadium dioxide is called a "wacky oxide" because it transitions between a conducting metal and an insulating semiconductor with the addition of heat or electrical current. A device created by Penn State engineers uses a thin film of vanadium oxide on a titanium dioxide substrate to create an oscillating switch that could form the basis of a computational device that uses a fraction of the energy necessary for today’s computers.

Materials that can be used for thermoelectric devices have been known for decades. But, until now, there has been no good explanation for why just a few materials work well for these applications, while most others do not. Now researchers say they have finally found a theoretical explanation for the differences, which could lead to the discovery of new, improved thermoelectric materials.

Manganites show great promise as “go-to” materials for future electronic devices because of their ability to instantly switch from an electrical insulator to a conductor under a wide variety of external stimuli, including magnetic fields, photo-excitations and vibrational excitations. This ultra-fast switching arises from the different ways electrons and electron-spins in a manganite may organize or re-organize in response to such stimuli.

Topological insulators are considered a very promising material class for the development of future electronic devices because they are insulators inside but conductors at the surface. A research team in Germany has discovered how light can be used to alter the physical properties of the electrons in these materials by using it to alter electron spin at the surface.

Topological insulators have been of great interest to physicists in recent years because of unusual properties that may provide insights into quantum physics. But most analysis of such materials has had to rely on highly simplified models. Now, a team of researchers at Massachusetts Institute of Technology has performed a more detailed analysis that hints at the existence of six new kinds of topological insulators.
By applying pressure to a semiconductor, researchers have been able to transform it into a “topological insulator” (TI), an intriguing state of matter in which a material’s interior is insulating but its surfaces or edges are conducting with unique electrical properties. This is the first time that researchers have used pressure to gradually “tune” a material into the TI state.

A single layer of tin atoms could be the world’s first material to conduct electricity with 100% efficiency at the temperatures at which computer chips operate, according to a team of theoretical physicists led by researchers from SLAC National Accelerator Laboratory and Stanford Univ.

An international team of scientists has discovered a new type of quantum material whose lopsided behavior may lend itself to creating novel electronics. The material is called bismuth tellurochloride, or BiTeCl. It belongs to a class of materials called topological insulators that conduct electrical current with perfect efficiency on their surfaces, but not through their middles.

Researchers at Massachusetts Institute of Technology have succeeded in producing and measuring a coupling of photons and electrons on the surface of an unusual type of material called a topological insulator. This type of coupling had been predicted by theorists, but never observed.

A theoretical study conducted by scientists at Japan’s National Institute of Materials Science reveals the possibility of developing a quantum material to transport zero-resistance edge current above room temperature. This capability, allowed by large spin-orbit coupling, will depend on the construction of a new class of topological materials that the researchers have designed.

An international collaboration at Lawrence Berkeley National Laboratory’s Advanced Light Source has induced high-temperature superconductivity in a topological insulator, an important step on the road to fault-tolerant quantum computing.

When scientists found electrical current flowing where it shouldn’t be—at the place where two insulating materials meet—it set off a frenzy of research that turned up more weird properties and the hope of creating a new class of electronics. Now scientists have mapped those currents in microscopic detail and found another surprise: Rather than flowing uniformly, the currents are stronger in some places than others.

Researchers not only confirmed several theoretical predictions about topological crystalline insulators (TCIs), but made a significant experimental leap forward that revealed even more details about the crystal structure and electronic behavior of these newly identified materials. The findings reveal the unexpected level of control TCIs can have over electrons by creating mass.

Researchers from the RIKEN Center for Life Science Technologies and Chiba Univ. have developed a high-temperature superconducting wire with an ultrathin polyimide coating only 4 micrometers thick, more than 10 times thinner than the conventional insulation used for high-temperature superconducting wires. The breakthrough should help the development of more compact superconducting coils for medical and scientific devices.

It is well known to scientists that the three common phases of water (ice, liquid and vapor) can exist stably together only at a particular temperature and pressure, called the triple point.
Scientists now have made the first-ever accurate determination of a solid-state triple point in a substance called vanadium dioxide, which is known for switching rapidly from an electrical insulator to a conductor.

New research shows that a class of materials being eyed for the next generation of computers behaves asymmetrically at the sub-atomic level. This research is a key step toward understanding the topological insulators that may have the potential to be the building blocks of a super-fast quantum computer that could run on almost no electricity.

A team of theoretical physicists at the U.S. Naval Research Laboratory and Boston College has identified cubic boron arsenide as a material with an extraordinarily high thermal conductivity and the potential to transfer heat more effectively from electronic devices than diamond, the best-known thermal conductor to date.

Researchers have made the first direct images of electrical currents flowing along the edges of a topological insulator. In these strange solid-state materials, currents flow only along the edges of a sample while avoiding the interior. Using an exquisitely sensitive detector they built, the team was able to sense the weak magnetic fields generated by the edge currents and tell exactly where the currents were flowing.

By means of special metamaterials, light and sound can be passed around objects. Researchers have now succeeded in demonstrating that the same materials can also be used to specifically influence the propagation of heat. They have built a structured plate of copper and silicon that conducts heat around a central area without the edge being affected.

Researchers from Dresden have discovered a new material that conducts electric currents without loss of power over its edges and remains an insulator in its interior. The material is made out of bismuth cubes packed in a honeycomb motif that is known from the graphene structure. As opposed to graphene, the new material exhibits its peculiar electrical property at room temperature, giving it promise for applications in nanoelectronics.

Electrons flowing swiftly across the surface of topological insulators are "spin polarized," their spin and momentum locked. This new way to control electron distribution in spintronic devices makes TIs a hot topic in materials science. Now scientists have discovered more surprises: contrary to assumptions, the spin polarization of photoemitted electrons from a topological insulator is wholly determined in three dimensions by the polarization of the incident light beam.

Unlike conventional electrical insulators, which do not conduct electricity, topological insulators have the unique property of conducting electricity on their surface, while acting as an insulator inside. In a step toward understanding and exploiting an exotic form of matter that has been sparking excitement for potential applications in a new genre of supercomputers, scientists are reporting the first identification of a naturally occurring topological insulator that was retrieved from an abandoned gold mine in the Czech Republic.

University of Utah engineers demonstrated it is feasible to build the first organic materials that conduct electricity on their edges, but act as an insulator inside. These materials, called organic topological insulators, could shuttle information at the speed of light in quantum computers and other high-speed electronic devices.
Method measures quantum quirk

Technology Research News

Quantum entanglement, which Einstein once dismissed as impossible, is a physical resource that could transform information processing. It is key to producing phenomenally powerful quantum computers, and is the critical component of the most secure form of quantum cryptography. Until now, however, researchers have had no way to measure entanglement directly, but have had to rely on indirect measurements or mathematical estimates.

Researchers from the Technical University of Gdansk in Poland and the University of Cambridge in England have come up with a scheme for measuring entanglement that could give scientists the means to judge the purity of the primary resource used in quantum information processing.

The scheme could mark the beginning of quantum metrology -- the science of quantum measurement, said Artur Ekert, a professor of quantum physics at the University of Cambridge. "Efficient tests for quantum entanglement will be important in all applications where quantum entanglement is used," he said.

Entanglement links physical properties, such as polarization or momentum, of two or more atoms or subatomic particles. It is part of numerous schemes for secure communication, precise frequency standards, and atomic clocks.

When an atom or subatomic particle is isolated from its environment, it enters into the weird state of superposition, meaning it is in some mixture of all possible states. For example, a photon can be polarized in one of two opposite directions. In superposition, however, the photon is polarized in some mixture of both directions at the same time.

When two or more particles in superposition come into contact with each other, they can become entangled. A common example is photons that have their polarizations entangled. When one of the photons is knocked out of superposition to become, say, vertically polarized, the other photon leaves superposition at the same instant and also becomes vertically polarized, regardless of the distance between them.

Existing methods of checking for entanglement involve either indirect measurements, which are inefficient and leave many entangled states undetected, or a mathematical estimation, Ekert said. The researchers' method is similar to the mathematical approach, but works on the particles directly rather than on a mathematical representation of them. "We have managed to find a physical operation that mimics the mathematical one," said Ekert.

Quantum operations alter particles that are used as quantum bits, or qubits, to represent the 1s and 0s of computing in quantum information systems. One way to carry out a quantum operation is to use a laser beam to rotate an atom held in a magnetic trap so that its orientation flips from a position representing a 1 to a position representing a 0. The basic logic of quantum computing is made up of many series of these quantum operations.

The researchers' entanglement-detection method could be included in several proposed architectures for quantum computers, including ion traps, which hold individual atoms in magnetic fields, and quantum dots, which trap individual electrons in microscopic specks of semiconductor material, according to Ekert.

The research is excellent; it is an original idea about how to detect entanglement in an efficient way, said Vlatko Vedral, a lecturer of physics at Imperial College and the University of Oxford in England. "One of the most fundamental issues in quantum information theory is whether two systems are entangled or not," he said.
Scientists have had a good theoretical understanding of how to detect entanglement, but these methods are not practical in the physical world because they involve physical impossibilities like reversing time, he said. The researchers have come up with a practical method of testing for entanglement, said Vedral. The basic idea is to mix a bit of noise into the operation so there will always be a physically possible result, he said. "It turns out that this mixing can be performed in an efficient way," he added.

Entanglement is crucial for quantum communications, said Vedral. "Some forms of quantum cryptography depend critically on the presence of entanglement and cannot be implemented without it," he said.

It's not yet clear how useful being able to measure entanglement will be for quantum computing because researchers do not know if there is a direct link between the amount of entanglement and the speed of quantum computers, Vedral said. "Everything indicates that entanglement is an important ingredient, but how much of it is enough to be clearly better than any classical computer remains an open question," he said.

The method could be used in practical applications in two to five years, said Ekert. It is likely to be used first in quantum cryptography and frequency standards, he said.

Ekert's research colleague was Paweł Horodecki of the Technical University of Gdansk in Poland. They published the research in the September 16, 2002 issue of Physical Review Letters. The research was funded by the Polish Committee for Scientific Research, the European Commission, Elsag SpA, the Engineering and Physical Sciences Research Council and the Royal Society of London.

Timeline: 2-5 years, 20 years
Funding: Government, Corporate
TRN Categories: Physics; Quantum Computing and Communications
Story Type: News
Related Elements: Technical paper, "Method for Direct Detection of Quantum Entanglement," Physical Review Letters, September 16, 2002
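The article does not spell out which "mathematical" entanglement test the physical operation mimics. For orientation, here is a sketch of one standard paper-and-pencil criterion for two qubits, the Peres-Horodecki partial-transpose test (chosen here purely as an illustration of a mathematical, rather than physical, detection method): a two-qubit state is entangled exactly when the partial transpose of its density matrix has a negative eigenvalue.

```python
import numpy as np

def partial_transpose(rho):
    """Transpose the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho):
    """For two qubits: entangled iff the partial transpose has a negative eigenvalue."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() < -1e-12

# Maximally entangled Bell state (|00> + |11>)/sqrt(2).
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = np.outer(phi, phi.conj())

# A separable (unentangled) mixture of |00><00| and |11><11|.
separable = np.diag([0.5, 0.0, 0.0, 0.5]).astype(complex)

print(is_entangled(bell))       # True  (minimum eigenvalue -0.5)
print(is_entangled(separable))  # False (all eigenvalues non-negative)
```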
<urn:uuid:e92b0cc1-3dd4-43c7-a783-3291a750f601>
CC-MAIN-2014-52
http://www.trnmag.com/Stories/2002/111302/Method_measures_quantum_quirk_111302.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767453.104/warc/CC-MAIN-20141217075247-00155-ip-10-231-17-201.ec2.internal.warc.gz
en
0.90891
1,222
3.6875
4
Quantum teleportation, or entanglement-assisted teleportation, is a technique used to transfer quantum information from one quantum system to another. It does not transport the system itself, nor does it allow communication of information at superluminal (faster than light) speed. Neither does it concern rearranging the particles of a macroscopic object to copy the form of another object. Its distinguishing feature is that it can transmit the information present in a quantum superposition, useful for quantum communication and computation. More precisely, quantum teleportation is a quantum protocol by which a qubit a (the basic unit of quantum information) can be transmitted exactly (in principle) from one location to another. The prerequisites are a conventional communication channel capable of transmitting two classical bits (i.e. one of four states), and an entangled pair (b,c) of qubits, with b at the origin and c at the destination. (So whereas b and c are intimately related, a is entirely independent of them other than being initially colocated with b.) The protocol has three steps: measure a and b jointly to yield two classical bits; transmit the two bits to the other end of the channel (the only potentially time-consuming step, due to speed-of-light considerations); and use the two bits to select one of four ways of recovering c. The upshot of this protocol is to permute the original arrangement ((a,b),c) to ((b′,c′),a), that is, a moves to where c was and the previously separated qubits of the Bell pair turn into a new Bell pair (b′,c′) at the origin. Suppose Alice has a qubit in some arbitrary quantum state |ψ>. (A qubit may be represented as a superposition of two basis states, labeled |0> and |1>.) Assume that this quantum state is not known to Alice and she would like to send this state to Bob. Ostensibly, Alice has three options: physically transport the qubit to Bob, copy the state and broadcast it, or measure the qubit and send Bob the classical result. Option 1 (physical transport) is highly undesirable because quantum states are fragile and any perturbation en route would corrupt the state. Option 2 (copying) is forbidden by the no-broadcast theorem. Option 3 (classical teleportation) has also been formally shown to be impossible. (See the no teleportation theorem.) This is another way to say that quantum information cannot be measured reliably. Thus, Alice seems to face an impossible problem. A solution was discovered by Bennett, et al. The components of a maximally entangled two-qubit state are distributed to Alice and Bob. The protocol then involves Alice and Bob interacting locally with the qubit(s) in their possession and Alice sending two classical bits to Bob. In the end, the qubit in Bob's possession will be in the desired state. Assume that Alice and Bob share an entangled pair of qubits AB. That is, Alice has one half, A, and Bob has the other half, B. Let C denote the qubit Alice wishes to transmit to Bob. Alice applies a unitary operation on the qubits AC and measures the result to obtain two classical bits. In this process, the two qubits are destroyed. Bob's qubit, B, now contains information about C; however, the information is somewhat randomized. More specifically, Bob's qubit B is in one of four states uniformly chosen at random and Bob cannot obtain any information about C from his qubit. Alice provides her two measured classical bits, which indicate which of the four states Bob possesses. Bob applies a unitary transformation which depends on the classical bits he obtains from Alice, transforming his qubit into an identical re-creation of the qubit C. Suppose Alice has a qubit that she wants to teleport to Bob.
This qubit can be written generally as: Alice takes one of the particles in the pair, and Bob keeps the other one. The subscripts A and B in the entangled state refer to Alice's or Bob's particle. We will assume that Alice and Bob share the entangled state . So, Alice has two particles (C, the one she wants to teleport, and A, one of the entangled pair), and Bob has one particle, B. In the total system, the state of these three particles is given by Alice will then make a partial measurement in the Bell basis on the two qubits in her possession. To make the result of her measurement clear, we will rewrite the two qubits of Alice in the Bell basis via the following general identities (these can be easily verified): The three particle state shown above thus becomes the following four-term superposition: Notice all we have done so far is a change of basis on Alice's part of the system. No operation has been performed and the three particles are still in the same state. The actual teleportation starts when Alice measures her two qubits in the Bell basis. Given the above expression, evidently the result of her (local) measurement is that the three-particle state would collapse to one of the following four states (with equal probability of obtaining each): Alice's two particles are now entangled to each other, in one of the four Bell states. The entanglement originally shared between Alice's and Bob's is now broken. Bob's particle takes on one of the four superposition states shown above. Note how Bob's qubit is now in a state that resembles the state to be teleported. The four possible states for Bob's qubit are unitary images of the state to be teleported. The crucial step, the local measurement done by Alice on the Bell basis, is done. It is clear how to proceed further. Alice now has complete knowledge of the state of the three particles; the result of her Bell measurement tells her which of the four states the system is in. She simply has to send her results to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained. After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state : to recover the state. to his qubit. Teleportation is therefore achieved. Experimentally, the projective measurement done by Alice may be achieved via a series of laser pulses directed at the two particles. In the literature, one might find alternative, but completely equivalent, descriptions of the teleportation protocol given above. Namely, the unitary transformation that is the change of basis (from the standard product basis into the Bell basis) can also be implemented by quantum gates. Direct calculation shows that this gate is given by Entanglement can be applied not just to pure states, but also mixed states, or even the undefined state of an entangled particle. The so-called entanglement swapping is a simple and illustrative example. If Alice has a particle which is entangled with a particle owned by Bob, and Bob teleports it to Carol, then afterwards, Alice's particle is entangled with Carol's. A more symmetric way to describe the situation is the following: Alice has one particle, Bob two, and Carol one. 
Alice's particle and Bob's first particle are entangled, and so are Bob's second and Carol's particle: ___ / \ Alice-:-:-:-:-:-Bob1 -:- Bob2-:-:-:-:-:-Carol \___/ Now, if Bob performs a projective measurement on his two particles in the Bell state basis and communicates the results to Carol, as per the teleportation scheme described above, the state of Bob's first particle can be teleported to Carol's. Although Alice and Carol never interacted with each other, their particles are now entangled. One can imagine how the teleportation scheme given above might be extended to N-state particles, i.e. particles whose states lie in the N dimensional Hilbert space. The combined system of the three particles now has a N3 dimensional state space. To teleport, Alice makes a partial measurement on the two particles in her possession in some entangled basis on the N2 dimensional subsystem. This measurement has N2 equally probable outcomes, which are then communicated to Bob classically. Bob recovers the desired state by sending his particle through an appropriate unitary gate. A general teleportation scheme can be described as follows. Three quantum systems are involved. System 1 is the (unknown) state ρ to be teleported by Alice. Systems 2 and 3 are in a maximally entangled state ω that are distributed to Alice and Bob, respectively. The total system is then in the state where Tr12 is the partial trace operation with respect systems 1 and 2, and denotes the composition of maps. This describes the channel in the Schrödinger picture. Taking adjoint maps in the Heisenberg picture, the success condition becomes for all observable O on Bob's system. The tensor factor in is while that of is . The proposed channel Φ can be described more explicitly. To begin teleportation, Alice performs a local measurement on the two subsystems (1 and 2) in her possession. Assume the local measurement have effects If the measurement registers the i-th outcome, the overall state collapses to The tensor factor in is while that of is . Bob then applies a corresponding local operation Ψi on system 3. On the combined system, this is described by where Id is the identity map on the composite system . Therefore the channel Φ is defined by Notice Φ satisfies the definition of LOCC. As stated above, the teleportation is said to be successful if, for all observable O on Bob's system, the equality holds. The left hand side of the equation is: where Ψi* is the adjoint of Ψi in the Heisenberg picture. Assuming all objects are finite dimensional, this becomes The success criterion for teleportation has the expression
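Because the equations in the derivation above arrive stripped out, here is a small state-vector simulation of the single-qubit protocol it describes. It assumes the shared pair is (|00> + |11>)/sqrt(2) and uses the standard correction table (identity, Z, X, ZX); the qubit ordering and the random test state are my own choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- single-qubit gates ---
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# random normalized unknown state |psi> = alpha|0> + beta|1>
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# shared Bell pair (|00> + |11>)/sqrt(2) on qubits (A, B)
bell_pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# full 3-qubit state, qubit order (C, A, B); amplitude index = 4c + 2a + b
state = np.kron(psi, bell_pair).reshape(2, 2, 2)   # axes: (c, a, b)

# Bell basis on Alice's qubits (C, A), paired with Bob's correction for each outcome
s = 1 / np.sqrt(2)
bell_basis = {
    "Phi+": (np.array([[s, 0], [0, s]]),  I),       # Bob applies identity
    "Phi-": (np.array([[s, 0], [0, -s]]), Z),       # Bob applies Z
    "Psi+": (np.array([[0, s], [s, 0]]),  X),       # Bob applies X
    "Psi-": (np.array([[0, s], [-s, 0]]), Z @ X),   # Bob applies X, then Z
}

for name, (bm, correction) in bell_basis.items():
    # project Alice's two qubits onto this Bell state; Bob's qubit is what remains
    bob = np.einsum("ca,cab->b", bm.conj(), state)
    prob = np.vdot(bob, bob).real
    bob_corrected = correction @ (bob / np.sqrt(prob))
    fidelity = abs(np.vdot(psi, bob_corrected)) ** 2
    print(f"{name}: probability = {prob:.2f}, fidelity after correction = {fidelity:.3f}")
```

Each of the four outcomes occurs with probability 1/4, and after the indicated correction Bob's qubit matches |psi> with fidelity 1, which is the content of the protocol.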
<urn:uuid:c476a611-1368-4569-8057-4b91a2ac9c3b>
CC-MAIN-2014-52
http://www.thefullwiki.org/Quantum_teleportation
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776996.17/warc/CC-MAIN-20141217075256-00101-ip-10-231-17-201.ec2.internal.warc.gz
en
0.93462
2,029
4.03125
4
When atoms of Calcium are brought to a high energy state and are allowed to return to a lower energy state while in a fixed position, each atom emits two photons which fly off in opposite directions. If the spin of the photons is measured along the same axis (vertical, horizontal, front to back or any other axis) then the two photons have opposite direction spins (clockwise and anticlockwise). According to quantum theory, subatomic particles have latent spins in all axes, but once the spin is measured along one axis it is never a fractional one, and after measurement along one axis it is not possible to detect or measure the particle's spin direction along the other axes. Many physicists have done this experiment and measured the spin of the particles at various distances. The farthest distance apart between the particles was eleven kilometers. Einstein could not accept that two particles separated in space could influence the behavior or properties of each other. This would have violated his theory of special relativity, as one particle would be communicating with the other faster than the speed of light. He called it spooky action at a distance and remarked that God does not play dice. He claimed that the two particles were programmed at birth to have opposite spins. John Bell devised a theorem, and proposed an experiment, that could only be carried out decades later when more sophisticated technology and instruments became available. These instruments provided the ability to measure the direction of spin of particles along the vertical, horizontal and front-to-back axes (or any other axis). In experiments where the spin direction was measured along the same axis at any angle from the vertical, horizontal or front to back, the experimenter always got opposite results for the two particles. Bell's theorem was simple. The spin direction can be either clockwise or anti-clockwise. If one assumes that the photon on one side is programmed, say clockwise for vertical, anti-clockwise for horizontal and clockwise for front to back, its program would be CAC. In that case the opposite side photon would have to be programmed ACA. If the experiment was randomized so that two independent experimenters measured the photon spin in any one of the three directions, the results would be CA, CC, CA for the vertical axis on one photon and the vertical, horizontal and front-to-back axes on the second. The next set, for the horizontal axis on the first photon and vertical, horizontal and front to back on the second photon, would be AA, AC, AA respectively. For the front-to-back axis on the first photon and the three axes in the same order on the second photon, the results would be CA, CC, CA respectively. Of the nine results, you will notice that four are the same (AA or CC) and five are different. Thus a series of random tests should give opposite results more than 50% of the time. Yet when such experiments are actually carried out, the two results come out opposite only 50% of the time, never more. This proves that the two photons are not pre-programmed and do not have a fixed spin until measured, but if measured along the same axis they are always opposite. Once the spin on one particle is measured the second entangled particle takes on the opposite spin along the same axis. This raises strange questions. Was the choice of switching off or on the detector in part one of my article pre-ordained? If so then there is no free will. If the particles are entangled at birth and yet free to choose their spin until measurement, what is reality?
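The counting argument above can be checked numerically. The sketch below assumes three measurement axes 120 degrees apart and uses the spin-singlet rule that outcomes are opposite with probability cos²(θ/2) for axes separated by angle θ (the photon-polarization version differs only in the angles); the pre-programmed model is the CAC/ACA scheme described in the text. Both the axis geometry and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000

# Three measurement axes 120 degrees apart (assumed geometry).
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])

# --- Pre-programmed (local hidden-variable) model: each pair carries a fixed
# program such as CAC for one particle and the complement ACA for its partner.
program = rng.integers(0, 2, size=(n_trials, 3))       # particle 1's three answers
a_choice = rng.integers(0, 3, size=n_trials)            # experimenter 1's axis
b_choice = rng.integers(0, 3, size=n_trials)            # experimenter 2's axis
r1 = program[np.arange(n_trials), a_choice]
r2 = 1 - program[np.arange(n_trials), b_choice]          # partner is pre-set opposite
lhv_opposite = np.mean(r1 != r2)

# --- Quantum singlet prediction: outcomes opposite with prob cos^2(theta/2).
theta = angles[a_choice] - angles[b_choice]
p_opposite = np.cos(theta / 2) ** 2
qm_opposite = np.mean(rng.random(n_trials) < p_opposite)

print(f"pre-programmed model: opposite in {lhv_opposite:.3f} of runs (at least 5/9 for any fixed program)")
print(f"quantum singlet:      opposite in {qm_opposite:.3f} of runs (exactly 1/2)")
```

The pre-programmed model always gives opposite results in more than half the runs, while the quantum prediction sits at exactly one half, which is the discrepancy the passage describes.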
By the way, despite our Indian habit of entangling all inventions with our scriptures and philosophy, quantum physics has nothing to do with the dance of Shiva, Buddhism or Taoism. If you still believe the opposite, you should nominate me for the Nobel Prize in physics, as I have found two large particles (human beings) entangled at a distance of 9000 miles. They are the president of the US (Bush or Obama) and Manmohan Singh. As soon as a mere thought passes through the president's mind, Manmohan Singh immediately and spontaneously does the same thing, not the opposite. - Iran, despite not being in violation of the NPT, should be referred to the IAEA. Manmohan Singh: "Yes Sahib." - India's Petroleum minister Aiyar should be fired for promoting the IPI gas pipeline. Manmohan Singh: "Jee Huzoor." - India must sign the Civil Nuclear Deal without any guarantees of supply or promise of enrichment technology, and agree to intrusive inspections of its nuclear facilities. Manmohan Singh: "Jo Hukoom, or I will fall on my sword." - Pakistan is a victim of terror, not its source. Manmohan Singh in Havana: "India and Pakistan are friends and both are victims of terror." - Obama thinks India must resume talks with Pakistan to get the US out of the Afghan fire. Manmohan Singh (at the Egypt NAM meeting): "Yes Bwana. What Pakistan does to us in Kashmir is like what we do to it in Baluchistan. India Pakistan Bhai Bhai!" - Fire Kamal Nath from the trade portfolio. He is unwilling to resuscitate the Doha round and agree to our unfair trade practices. Manmohan Singh: "OK Boss, done." - We have the right to inspect any arms that you buy from us at exorbitant prices, and you need our permission to use them even after you have fully paid for them. We have the right to use your ports and airfields, and you have to supply us what we need during war, if necessary. So sign the End User Arms Agreement now. Manmohan Singh bows his head and agrees. - You will not get any nuclear enrichment technology, as we have changed the terms of the civilian nuclear treaty at the recent G8 meeting. Manmohan Singh: "Your wish is my thought. I will give up my nation's agreements without restriction (with Russia) to be entangled with you as your subordinate. Your foreign policy is mine. I know you are in debt up to your gazoos to China and will not help India if China attacks us, but I will abandon old friends for the entanglement with you." One psychologist friend told me my diagnosis is wrong and the above is not quantum entanglement; maybe Manmohan Singh has no free will and he is a Zombie (look up the psychological definition of Zombie). Another friend, who is a comedian, said maybe Manmohan Singh is playing the part of a ventriloquist's dummy. Simultaneously, all three of us said: "You have got to hand it to Sonia. She is like the nuclear strong force, which increases with distance, unlike gravity, electromagnetism and the weak force, so she can throw her voice to large distances and alter it to sound like that of a man."
<urn:uuid:959478f0-0e91-415d-b86f-2ed8e5cf2ac7>
CC-MAIN-2014-52
http://www.boloji.com/index.cfm?md=Content&sd=Articles&ArticleID=7866
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768044.102/warc/CC-MAIN-20141217075248-00103-ip-10-231-17-201.ec2.internal.warc.gz
en
0.949003
1,374
3.984375
4
But it's a little more complex than this. We also have quantum mechanics to contend with. The spin of an electron is a vector. But we find that when we measure one of the components of this vector, this value is quantised and can only take values +hbar/2 and -hbar/2, where hbar is the reduced Planck constant. We choose units where hbar is 1, so the z-component of the spin is always measured to be +1/2 or -1/2. If we write these two states as |+> and |-> then because we are dealing with quantum mechanics, the z-component of the spin can be represented by the linear combination a|+>+b|->. This corresponds to a state in which there is a probability |a|² of measuring +1/2 and a probability |b|² of measuring -1/2. This is what might have been written as a.*return (1/2)+b.*return (-1/2) in my earlier Haskell code. But that's just one component of the spin. What about the x- and y-components? Amazingly the state a|+>+b|-> tells us everything we can possibly know about the spin of an electron and we'll call it a spin state. Suppose we have an electron in the state ψ = a|+>+b|->. What happens if we measure the y-component of its spin? One way to answer that question is to rotate the electron through π/2 so that its y-axis is rotated to align with the z-axis and then measure the z-component of its spin. In order to do that we need to know how to rotate spin states. The rule for rotation through θ about the x-axis is this (in a suitable coordinate frame): |+> → cos(θ/2)|+>-sin(θ/2)|-> |-> → sin(θ/2)|+>+cos(θ/2)|-> Note how choosing θ=0 gives the identity, as expected. Note also that θ=π maps a|+>+b|-> to b|+>-a|-> so that the probabilities of measuring +1/2 and -1/2 are simply swapped, exactly what you'd expect for turning a state upside down. But there's something else that you should notice - there's an ambiguity. A rotation through 2π should give the same as a rotation through 0 and yet setting θ=2π in that transformation maps a state ψ to -ψ. Now |a|² = |-a|² so the probability of observing spin up or spin down is unaffected. But as I've been showing over previous posts, flipping a sign in a state can make a big difference as soon as you start performing interference experiments. The same goes for any angle: if I rotate through π should I use θ=π or θ = 3π? So can the transformation I've given make sense? The transformation does make sense if you consider that in any physical process that rotates an electron the transformation will evolve continuously over time. Electrons don't just instantly rotate. In other words, if a rotation is applied to an electron then it will follow a path in SO(3), not just be an instantaneous application of an element of SO(3). And that allows us to resolve the ambiguity: the rotations of electrons are described by the double cover of SO(3) known as SU(2). So a rotation through 360 degrees doesn't return you to the identity although a 720 degree rotation does. The transformation I gave above is completely unambiguous if you continuously rotate an electron around the x-axis tracking a continuous value of θ, after all, the double cover is basically just the set of continuous paths from the identity in SO(3) (with homotopic paths considered equivalent). And that's the bizarre fact: electron rotations aren't described by SO(3), they're described by SU(2). In particular, rotating an electron through 360 degrees does not return it to its original state, but a rotation through 720 degrees does!
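A quick numerical check of the θ/2 rule quoted above. This is a minimal sketch: the matrix is simply the stated transformation written in the |+>, |-> basis, and the choice of angles to test is mine.

```python
import numpy as np

def spin_rotation(theta):
    """Rotation of a spin-1/2 state through angle theta about the x-axis,
    using the transformation rule quoted above (note the theta/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, s], [-s, c]])

for turns in (1, 2):                      # 360-degree and 720-degree rotations
    theta = 2 * np.pi * turns
    print(f"{360 * turns} degrees:\n{np.round(spin_rotation(theta), 10)}")

# A 360-degree rotation multiplies every spin state by -1 (|a|^2 is unchanged,
# but the sign shows up in interference experiments); 720 degrees is the identity.
```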
In a sense, like Dirac's belt, electrons can remember something about the path they took to get where they are, in particular they remember how many twists there were in the path. What does this mean experimentally? The first thing to note is that this is true not just for electrons but any spin-1/2 fermion. This includes protons and neutrons. The stuff I've been talking about manifests itself in a number of ways. In particular, the spin of a particle affects how a magnetic field acts on it. For example, spin-up and spin-down particles can be separated into distinct beams using Stern-Gerlach apparatus. Also, the spin of particles precesses in a magnetic field and this is used on a regular basis in NMR. These two facts allow us to easily manipulate and measure the spin of fermions. In other words, the fact that fermions remember how many twists there are in their rotations isn't just some esoteric nonsense, it's now engineering and the theory is tested repeatedly all over the world. Every familiar object is invariant under rotations through 360 degrees. So the fact that electrons need to be rotated through 720 degrees to return them to their original state seems like one of the most bizarre facts about the universe I know of. And yet many books that introduce spin just slip in this fact in a routine way as if it were no different to any other. The fact that the biggest connected cover of SO(3) is the double cover puts a big constraint on the kinds of weird effects like this that can happen. We can have a 360 degree rotation multiply by -1, but not by i, because a 720 degree rotation absolutely has to return us to where we started from. But suppose the universe were 2-dimensional. If you remember what I said about SO(2) you may notice that no such constraints apply because SO(2) has an infinite cover. There is a group in which all of the rotations through 360n degrees are distinct for distinct n. This means that a physical system could have its state multiplied by any factor (of modulus 1) when rotated through 360 degrees. Particles that behave this way are called anyons. But we live in a 3D universe so we don't expect any fundamental particles to have this property. However, in quantum mechanics any kind of 'excitation' of a physical system is quantised and can be thought of as a type of particle. These are known as quasiparticles. For example, just as light is made of photons, sound is also quantised as phonons. In the right kind of solid state medium, especially those that arise from some kind of 2D lattice, it seems quite plausible that anyons might arise. Anyonic quasiparticles of this kind appear in the so-called fractional quantum Hall effect. Anyons might one day play an important role in quantum computing via topological quantum computation.
<urn:uuid:01835a26-f203-45da-a33b-8af442206f36>
CC-MAIN-2014-52
http://blog.sigfpe.com/2007/04/curious-rotational-memory-of-electron.html?showComment=1176611940000
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768980.24/warc/CC-MAIN-20141217075248-00002-ip-10-231-17-201.ec2.internal.warc.gz
en
0.955202
1,475
4.09375
4
Are we alone? 1. We have strong evidence that our solar system is not the only one; we know there are many other Suns with planets orbiting them. Improved telescopes and detectors have led to the detection of dozens of new planetary systems within the past decade, including several systems containing multiple planets. One giant leap for bug-kind 2. Some organisms can survive in space without any kind of protective enclosure. In a European Space Agency experiment conducted in 2005, two species of lichen were carried aboard a Russian Soyuz rocket and exposed to the space environment for nearly 15 days. They were then resealed in a capsule and returned to Earth, where they were found in exactly the same shape as before the flight. The lichen survived exposure to the vacuum of space as well as the glaring ultraviolet radiation of the Sun. Hot real estate 3. Organisms have been found living happily in scalding water with temperatures as high as 235 degrees F. More than 50 heat-loving microorganisms, or hyperthermophiles, have been found thriving at very high temperatures in such locations as hot springs in Wyoming's Yellowstone National Park and on the walls of deep-sea hydrothermal vents. Some of these species multiply best at 221 degrees F, and can reproduce at up to 235 degrees F. Has E.T. already phoned home? 4. We now have evidence that some form of life exists beyond Earth, at least in primitive form. While many scientists speculate that extraterrestrial life exists, so far there is no conclusive evidence to prove it. Future missions to Mars, the Jovian moon Europa and future space telescopes such as the Terrestrial Planet Finder will search for definitive answers to this ageless question. To infinity, and beyond! 5. We currently have the technology necessary to send astronauts to another star system within a reasonable timespan. The only problem is that such a mission would be overwhelmingly expensive. Even the unmanned Voyager spacecraft, which left our solar system years ago at a breathtaking 37,000 miles per hour, would take 76,000 years to reach the nearest star. Because the distances involved are so vast, interstellar travel to another star within a practical timescale would require, among other things, the ability to move a vehicle at or near the speed of light. This is beyond the reach of today's spacecraft -- regardless of funding. Fellowship of the rings 6. All of the gas giant planets in our solar system (Jupiter, Saturn, Uranus and Neptune) have rings. Saturn's rings are the most pronounced and visible, but they aren't the only ones. May the force be with you 7. In the "Star Wars" films, the Imperial TIE Fighters are propelled by ion engines (TIE stands for Twin Ion Engine). While these spacecraft are fictional, real ion engines power some of today's spacecraft. Ion propulsion has long been a staple of science fiction novels, but in recent years it has been successfully tested on a number of unmanned spacecraft, most notably NASA's Deep Space 1. Launched in 1998, Deep Space 1 rendezvoused with a distant asteroid and then with a comet, proving that ion propulsion could be used for interplanetary travel. A question of gravity 8. There is no gravity in deep space. If this were true, the moon would float away from the Earth, and our entire solar system would drift apart. While it's true that gravity gets weaker with distance, it can never be escaped completely, no matter how far you travel in space.
Astronauts appear to experience "zero-gravity" because they are in continuous free-fall around the Earth. 9. The basic premise of teleportation -- made famous in TV's "Star Trek" -- is theoretically sound. In fact, scientists have already "teleported" the quantum state of individual atoms from one location to another. As early as the late 1990s, scientists proved they could teleport data using photons, but the photons were absorbed by whatever surface they struck. More recently, physicists at the University of Innsbruck in Austria and at the National Institute of Standards and Technology in Boulder, Colorado, for the first time teleported individual atoms using the principle of quantum entanglement. Experts say this technology eventually could enable the invention of superfast "quantum computers." But the bad news, at least for sci-fi fans, is that experts don't foresee being able to teleport people in this manner. Good day, Suns-shine 10. Tatooine, Luke Skywalker's home planet in the "Star Wars" films, has two Suns -- what astronomers would call a binary star system. Scientists have discovered recently that planets really can form within such systems. Double-stars, or binary systems, are common in our Milky Way galaxy. Among the more than 100 new planets discovered in recent years, some have been found in binary systems, including 16 Cygni B and 55 Cancri A. (But so far, no one has found a habitable planet like Luke Skywalker's Tatooine.)
<urn:uuid:f6b10c4d-a299-40d9-a878-121b4c2f613a>
CC-MAIN-2014-52
http://www.nasa.gov/multimedia/mmgallery/fact_fiction_nonflash.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770432.4/warc/CC-MAIN-20141217075250-00019-ip-10-231-17-201.ec2.internal.warc.gz
en
0.937163
1,059
3.953125
4
The latest news from academia, regulators research labs and other things of interest Posted: Dec 23, 2013 Graphene can host exotic new quantum electronic states at its edges (Nanowerk News) Graphene has become an all-purpose wonder material, spurring armies of researchers to explore new possibilities for this two-dimensional lattice of pure carbon. But new research at MIT has found additional potential for the material by uncovering unexpected features that show up under some extreme conditions — features that could render graphene suitable for exotic uses such as quantum computing. On a piece of graphene (the horizontal surface with a hexagonal pattern of carbon atoms), in a strong magnetic field, electrons can move only along the edges, and are blocked from moving in the interior. In addition, only electrons with one direction of spin can move in only one direction along the edges (indicated by the blue arrows), while electrons with the opposite spin are blocked (as shown by the red arrows). Under typical conditions, sheets of graphene behave as normal conductors: Apply a voltage, and current flows throughout the two-dimensional flake. If you turn on a magnetic field perpendicular to the graphene flake, however, the behavior changes: Current flows only along the edge, while the bulk remains insulating. Moreover, this current flows only in one direction — clockwise or counterclockwise, depending on the orientation of the magnetic field — in a phenomenon known as the quantum Hall effect. In the new work, the researchers found that if they applied a second powerful magnetic field — this time in the same plane as the graphene flake — the material’s behavior changes yet again: Electrons can move around the conducting edge in either direction, with electrons that have one kind of spin moving clockwise while those with the opposite spin move counterclockwise. “We created an unusual kind of conductor along the edge,” says Young, a Pappalardo Postdoctoral Fellow in MIT’s physics department and the paper’s lead author, “virtually a one-dimensional wire.” The segregation of electrons according to spin is “a normal feature of topological insulators,” he says, “but graphene is not normally a topological insulator. We’re getting the same effect in a very different material system.” What’s more, by varying the magnetic field, “we can turn these edge states on and off,” Young says. That switching capability means that, in principle, “we can make circuits and transistors out of these,” he says, which has not been realized before in conventional topological insulators. There is another benefit of this spin selectivity, Young says: It prevents a phenomenon called “backscattering,” which could disrupt the motion of the electrons. As a result, imperfections that would ordinarily ruin the electronic properties of the material have little effect. “Even if the edges are ‘dirty,’ electrons are transmitted along this edge nearly perfectly,” he says. Jarillo-Herrero, the Mitsui Career Development Associate Professor of Physics at MIT, says the behavior seen in these graphene flakes was predicted, but never seen before. This work, he says, is the first time such spin-selective behavior has been demonstrated in a single sheet of graphene, and also the first time anyone has demonstrated the ability “to transition between these two regimes.” That could ultimately lead to a novel way of making a kind of quantum computer, Jarillo-Herrero says, something that researchers have tried to do, without success, for decades. 
But because of the extreme conditions required, Young says, “this would be a very specialized machine” used only for high-priority computational tasks, such as in national laboratories. Ashoori, a professor of physics, points out that the newly discovered edge states have a number of surprising properties. For example, although gold is an exceptionally good electrical conductor, when dabs of gold are added to the edge of the graphene flakes, they cause the electrical resistance to increase. The gold dabs allow the electrons to backscatter into the oppositely traveling state by mixing the electron spins; the more gold is added, the more the resistance goes up. This research represents “a new direction” in topological insulators, Young says. “We don’t really know what it might lead to, but it opens our thinking about the kind of electrical devices we can make.” The experiments required the use of a magnetic field with a strength of 35 tesla — “about 10 times more than in an MRI machine,” Jarillo-Herrero says — and a temperature of just 0.3 degrees Celsius above absolute zero. However, the team is already pursuing ways of observing a similar effect at magnetic fields of just one tesla — similar to a strong kitchen magnet — and at higher temperatures. Philip Kim, a professor of physics at Columbia University who was not involved in this work, says, “The authors here have beautifully demonstrated excellent quantization of the conductance,” as predicted by theory. He adds, “This is very nice work that may connect topological insulator physics to the physics of graphene with interactions. This work is a good example how the two most popular topics in condensed matter physics are connected each other.” Source: By David L. Chandler, MIT If you liked this article, please give it a quick review on reddit or StumbleUpon. Thanks! Check out these other trending stories on Nanowerk:
<urn:uuid:381e880a-86cf-44b5-89a2-920197973d35>
CC-MAIN-2014-52
http://www.nanowerk.com/nanotechnology-news/newsid=33809.php
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772972.2/warc/CC-MAIN-20141217075252-00036-ip-10-231-17-201.ec2.internal.warc.gz
en
0.94076
1,168
3.53125
4
IBM researchers have built a prototype optical chip that can transfer a terabit of data per second, using an innovative design requiring 48 tiny holes drilled into a standard CMOS chip, facilitating the movement of light. Much faster and more power-efficient than today's optics, the so-called "Holey Optochip" technology could enhance the power of supercomputers. Optical chips, which move data with light instead of electrons, are commonly used for interconnects in today's supercomputers and can be found in IBM systems such as Power 775 and Blue Gene. Optical technology is favored over electrical for transmitting high-bandwidth data over longer distances, which is why it's used for telecommunications networks, said IBM Optical Links Group manager Clint Schow. As speed and efficiency improve, optical technology has become more viable in smaller settings. "I think the number one supercomputer ten years ago had no optics in it whatsoever, and now you're seeing large scale deployments, mostly for rack-to-rack interconnects within supercomputers," Schow told Ars. "It's making its way deeper into the system and getting closer and closer to the actual processor." With the Holey Optochip, Schow said "our target is the bandwidth that interconnects different processors in the system—not the processor talking to its memory, but a processor talking to another processor in a large parallel system." The Holey Optochip uses 4.7 watts in delivering nearly one trillion bits per second, enough to download 500 HD movies. At 5.2 mm by 5.8 mm, it's about one-eighth the size of a dime. IBM built the chip using standard parts so it can make its way to market relatively quickly. "The heart of the chip is a single CMOS, plain-Jane unmodified process chip," Schow said. "That base chip has all the electronic circuit functions to complete the optical link. So it's got drivers that modulate vertical cavity lasers and receiver circuits that convert photocurrent from a detector into a usable electrical signal." Drilling holes into the chip lets IBM use industry-standard, 850-nanometer vertical cavity surface emitting lasers (VCSEL), and photodiode arrays, both soldered on to the chip. The holes allow optical access through the back of the chip to the transmitter and receiver channels, making it more compact. "You need the holes because if you have the silicon substrate the chip is made out of, the light can't go through it," Schow said. "You need to make a hole to let the light pass through." An IBM spokesperson further explains that "the optical devices are directly soldered to the front of the CMOS IC (integrated circuit) and the emission/detection of the optical signals is pointed toward the back of the chip. The holes are etched through the chip, one under each laser and detector to allow the optical signals to pass through the chip itself." A standard optical chip today includes 12 channels (the links between transmitters and receivers), each moving 10 Gigabits per second, he said. The IBM Holey Optochip has 48 channels, each moving 20 gigabits per second, for a total of 960 gigabits, just below a terabit. IBM is unveiling the prototype chip today at the Optical Fiber Communication Conference in Los Angeles, calling it "the first parallel optical transceiver to transfer one trillion bits of information per second." "That's four times as many channels running twice as fast, and the power efficiency is better by at least a factor of four," Schow said. 
The whole chip uses more power than current ones, but transmits much more data, resulting in better efficiency as measured by watts per bit. The speed of each channel itself isn't breaking any records, given that IBM built the prototype chips using standard components. Schow noted that "there's development now that will push channel data rates to 25 gigabits per second in the near future." What's impressive about the Holey Optochip is the design, allowing optimization of density, power, and bandwidth all in one little package. "You can go really fast if you don't care about power, and you can be really power-efficient if you don't care about speed," Schow said. Getting both facets right can bring an order-of-magnitude improvement to overall performance, he said. This is the second generation of the holey prototype -- the first produced speeds of 300 gigabits per second in 2010. Back in 2007, Ars reported on a previous, 160Gbps optical networking chip from Big Blue. Although IBM itself won't be mass-producing the chips, Schow said they could become commercially available within a year or two. Price points could be in the $100 to $200 range, he speculated. "We're in a group within IBM Research, looking at communications technologies we'll need for future computers, particularly for crunching big data, and analytics applications when you have to have tons of bandwidth in the system," he said. "Our mission is to prototype technologies and show what's possible, to drive the industry to commercial solutions that we can then procure and put into our systems." IBM researchers also recently made a breakthrough in quantum computing, which could eventually lead to computers exponentially more powerful than today's, as our friends at Wired reported.
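For reference, the headline numbers quoted above follow from a few lines of arithmetic. The channel count, per-channel rate and power draw are taken directly from the article; the joules-per-bit framing is my own.

```python
# Back-of-the-envelope check of the Holey Optochip figures quoted in the article.
channels, rate_gbps = 48, 20          # 48 channels at 20 Gb/s each
power_w = 4.7                          # total power reported for the transceiver

aggregate_gbps = channels * rate_gbps
energy_pj_per_bit = power_w / (aggregate_gbps * 1e9) * 1e12

print(f"aggregate bandwidth: {aggregate_gbps} Gb/s (~{aggregate_gbps / 1000:.2f} Tb/s)")
print(f"energy per bit: {energy_pj_per_bit:.1f} pJ/bit")
```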
<urn:uuid:d46cc16c-d789-4800-a9f9-368938301ca5>
CC-MAIN-2014-52
http://arstechnica.com/business/2012/03/holey-chip-ibm-drills-holes-into-optical-chip-for-terabit-per-second-speed/
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447557824.148/warc/CC-MAIN-20141224185917-00098-ip-10-231-17-201.ec2.internal.warc.gz
en
0.949048
1,106
3.796875
4
USC Scientists Contribute to a Breakthrough in Quantum Computing Scientists have taken the next major step toward quantum computing, which will use quantum mechanics to revolutionize the way information is processed. Quantum computers will capitalize on the mind-bending properties of quantum particles to perform complex calculations that are impossible for today’s traditional computers. Using high-magnetic fields, Susumu Takahashi, assistant professor of chemistry in USC Dornsife, and his colleagues managed to suppress decoherence, one of the key stumbling blocks in quantum computing. “High-magnetic fields reduce the level of the noises in the surroundings so they can constrain the decoherence very efficiently,” Takahashi said. Decoherence has been described as a “quantum bug” that destroys fundamental properties that quantum computers would rely on. The research will appear in the online version of Nature magazine today. Quantum computing uses quantum bits, or qubits, to encode information in the form of ones and zeros. Unlike a traditional computer that uses traditional bits, a quantum computer takes advantage of the seemingly impossible fact that qubits can exist in multiple states at the same time, which is called “superposition.” While a bit can represent either a one or a zero, a qubit can represent a one and a zero at the same time due to superposition. This allows for simultaneous processing of calculations in a truly parallel system, skyrocketing computing ability. Though the concepts underpinning quantum computing are not new, problems such as decoherence have hindered the construction of a fully functioning quantum computer. Think of decoherence as a form of noise or interference, knocking a quantum particle out of superposition — robbing it of that special property that makes it so useful. If a quantum computer relies on a quantum particle’s ability to be both here and there, then decoherence is the frustrating phenomenon that causes a quantum particle to be either here or there. University of British Columbia researchers calculated all sources of decoherence in their experiment as a function of temperature, magnetic field and by nuclear isotopic concentrations, and suggested the optimum condition to operate qubits, reducing decoherence by approximately 1,000 times. In Takahashi’s experiments, qubits were predicted to last about 500 microseconds at the optimum condition — ages, relatively speaking. Decoherence in qubit systems falls into two general categories. One is an intrinsic decoherence caused by constituents in the qubit system, and the other is an extrinsic decoherence caused by imperfections of the system — impurities and defects, for example. In their study, Takahashi and his colleagues investigated single crystals of molecular magnets. Because of their purity, molecular magnets eliminate the extrinsic decoherence, allowing researchers to calculate intrinsic decoherence precisely. “For the first time, we’ve been able to predict and control all the environmental decoherence mechanisms in a very complex system — in this case a large magnetic molecule,” said Phil Stamp, University of British Columbia professor of physics and astronomy and director of the Pacific Institute of Theoretical Physics. Using crystalline molecular magnets allowed researchers to build qubits out of an immense quantity of quantum particles rather than a single quantum object — the way most proto-quantum computers are built at the moment. 
“This will obviously increase signals from the qubit drastically so the detection of the qubit in the molecular magnets is much easier,” said Takahashi, who conducted his research as a project scientist in the Institute of Terahertz Science and Technology and the Department of Physics at the University of California, Santa Barbara. Takahashi has been at USC Dornsife since 2010. Research for the article was performed in collaboration with Phil Stamp and Igor Tupitsyn of the University of British Columbia, Johan van Tol of Florida State University, and Chris Beedle and David Hendrickson of the University of California, San Diego. The work was supported by the National Science Foundation, the W. M. Keck Foundation, the Pacific Institute of Theoretical Physics at the University of British Columbia, by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research and the USC start-up funds. Related News Items - Income Boosts Health of Elderly December 22, 2014 - Thompson Hailed as Innovator December 16, 2014 - Achieving Accountability December 16, 2014 - Diagnosis Success December 4, 2014 - As Young as You Feel November 20, 2014 - Fan Your Feathers November 19, 2014 - Recognition for Pratt’s Work November 18, 2014 - Small in Stature, Big on Health November 13, 2014 - The Search for a Wild Weed November 10, 2014 - Diplomatic Chess Game November 6, 2014 - Mechanics of String Theory November 6, 2014 - Collaboration in 3-D October 28, 2014 - Michelson Center for Convergent Bioscience Ushers in New Era October 23, 2014 - USC Dornsife Recruits Renowned Leaders in Molecular Research October 23, 2014 - Golgi Your Brain October 20, 2014 - Big Boost for the Bench October 9, 2014 - In Their Own Words October 8, 2014 - Sugar Linked to Memory Woes October 7, 2014 - Chemists Dispel Long-held Notion September 26, 2014 - Getting All Sides September 23, 2014
<urn:uuid:7dd3f4e4-be2f-4ff0-a163-8e2c05dd18a9>
CC-MAIN-2014-52
http://dornsife.usc.edu/news/stories/984/usc-scientists-contribute-to-a-breakthrough-in-quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777454.142/warc/CC-MAIN-20141217075257-00055-ip-10-231-17-201.ec2.internal.warc.gz
en
0.914116
1,140
3.703125
4
Quantum eraser experiment In quantum mechanics, the quantum eraser experiment is an interferometer experiment that demonstrates several fundamental aspects of quantum mechanics, including quantum entanglement and complementarity. The double-slit quantum eraser experiment described in this article has three stages: - First, the experimenter reproduces the interference pattern of Young's double-slit experiment by shining photons at the double-slit interferometer and checking for an interference pattern at the detection screen. - Next, the experimenter marks through which slit each photon went, without disturbing its wavefunction, and demonstrates that thereafter the interference pattern is destroyed. This stage indicates that it is the existence of the "which-path" information that causes the destruction of the interference pattern. - Third, the "which-path" information is "erased," whereupon the interference pattern is recovered. (Rather than removing or reversing any changes introduced into the photon or its path, these experiments typically produce another change that obscures the markings earlier produced.) The quantum eraser experiment described in this article is a variation of Thomas Young's classic double-slit experiment. It establishes that when action is taken to determine which slit a photon has passed through, the photon cannot interfere with itself. When a stream of photons is marked in this way, then the interference fringes characteristic of the Young experiment will not be seen. The experiment described in this article is capable of creating situations in which a photon that has been "marked" to reveal through which slit it has passed can later be "unmarked." A photon that has been "marked" cannot interfere with itself and will not produce fringe patterns, but a photon that has been "marked" and then "unmarked" can thereafter interfere with itself and will cooperate in producing the fringes characteristic of Young's experiment. This experiment involves an apparatus with two main sections. After two entangled photons are created, each is directed into its own section of the apparatus. It then becomes clear that anything done to learn the path of the entangled partner of the photon being examined in the double-slit part of the apparatus will influence the second photon, and vice-versa. The advantage of manipulating the entangled partners of the photons in the double-slit part of the experimental apparatus is that experimenters can destroy or restore the interference pattern in the latter without changing anything in that part of the apparatus. Experimenters do so by manipulating the entangled photon, and they can do so before or after its partner has passed through the slits and other elements of experimental apparatus between the photon emitter and the detection screen. So, under conditions where the double-slit part of the experiment has been set up to prevent the appearance of interference phenomena (because there is definitive "which path" information present), the quantum eraser can be used to effectively erase that information. In doing so, the experimenter restores interference without altering the double-slit part of the experimental apparatus. A variation of this experiment, delayed choice quantum eraser, allows the decision whether to measure or destroy the "which path" information to be delayed until after the entangled particle partner (the one going through the slits) has either interfered with itself or not. 
Doing so appears to have the bizarre effect of causing the outcome of an event after the event has already occurred. In other words, something that happens at time t apparently reaches back to some time t - 1 and acts as a determining causal factor at that earlier time. First, a photon is shot through a specialized nonlinear optical device: a beta barium borate (BBO) crystal. This crystal converts the single photon into two entangled photons of lower frequency, a process known as spontaneous parametric down-conversion (SPDC). These entangled photons follow separate paths. One photon goes directly to a detector, while the second photon passes through the double-slit mask to a second detector. Both detectors are connected to a coincidence circuit, ensuring that only entangled photon pairs are counted. A stepper motor moves the second detector to scan across the target area, producing an intensity map. This configuration yields the familiar interference pattern. Next, a circular polarizer is placed in front of each slit in the double-slit mask, producing clockwise circular polarization in light passing through one slit, and counter-clockwise circular polarization in the other slit (see Figure 1). This polarization is measured at the detector, thus "marking" the photons and destroying the interference pattern (see Fresnel–Arago laws). Finally, a linear polarizer is introduced in the path of the first photon of the entangled pair, giving this photon a diagonal polarization (see Figure 2). Entanglement ensures a complementary diagonal polarization in its partner, which passes through the double-slit mask. This alters the effect of the circular polarizers: each will produce a mix of clockwise and counter-clockwise polarized light. Thus the second detector can no longer determine which path was taken, and the interference fringes are restored. A double slit with rotating polarizers can also be accounted for by considering the light to be a classical wave. However this experiment uses entangled photons, which are not compatible with classical mechanics. - Walborn, S. P.; et al. (2002). "Double-Slit Quantum Eraser". Phys. Rev. A 65 (3): 033818. arXiv:quant-ph/0106078. Bibcode:2002PhRvA..65c3818W. doi:10.1103/PhysRevA.65.033818. - Englert, Berthold-Georg (1999). "REMARKS ON SOME BASIC ISSUES IN QUANTUM MECHANICS". Zeitschrift für Naturforschung 54 (1): 11–32. - Aharonov, Yakir; Zubairy, M. Suhail (2005). "Time and the Quantum: Erasing the Past and Impacting the Future". Science 307 (5711): pp. 875–879. Bibcode:2005Sci...307..875A. doi:10.1126/science.1107787. PMID 15705840. - Kim, Yoon-Ho; R. Yu, S.P. Kulik, Y.H. Shih and Marlan Scully (2000). "A Delayed Choice Quantum Eraser". Physical Review Letters 84: 1–5. arXiv:quant-ph/9903047. Bibcode:2000PhRvL..84....1K. doi:10.1103/PhysRevLett.84.1. - Chiao, R. Y.; P. G. Kwiat; Steinberg, A. M. (1995). "Quantum non-locality in two-photon experiments at Berkeley". Quantum and Semiclassical Optics: Journal of the European Optical Society Part B 7 (3): 6. Retrieved 13 February 2014.
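The logic of marking and erasing which-path information can be captured in a toy single-photon model: attach a "marker" state to each slit and note that the fringe visibility equals the overlap of the two marker states. This is a schematic sketch, not the two-photon Walborn arrangement; the phase profile, the marker vectors and the diagonal "eraser" state are illustrative assumptions.

```python
import numpy as np

# Screen amplitude behind the slits: A(x) = e^{i phi1(x)} |m1> + e^{i phi2(x)} |m2>,
# where |m1>, |m2> are which-path marker states attached to slits 1 and 2.
x = np.linspace(-1, 1, 500)
phi1, phi2 = 20 * x, -20 * x            # arbitrary path phases across the screen

def pattern(m1, m2):
    amp = np.exp(1j * phi1)[:, None] * m1 + np.exp(1j * phi2)[:, None] * m2
    return np.sum(abs(amp) ** 2, axis=1)

def visibility(intensity):
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

ccw = np.array([1, 0], dtype=complex)   # counter-clockwise circular polarization
cw = np.array([0, 1], dtype=complex)    # clockwise circular polarization
diag = (ccw + cw) / np.sqrt(2)          # "erasing" polarization direction

print("no marking:     V =", round(visibility(pattern(ccw, ccw)), 3))   # ~1, fringes
print("marked paths:   V =", round(visibility(pattern(ccw, cw)), 3))    # ~0, no fringes
# Erasure: keep only the component of each marker along |diag> (a post-selection)
m1_erased = np.vdot(diag, ccw) * diag
m2_erased = np.vdot(diag, cw) * diag
print("markers erased: V =", round(visibility(pattern(m1_erased, m2_erased)), 3))  # ~1 again
```

Orthogonal markers make the interference term vanish, and projecting both markers onto a state that cannot tell them apart restores it, which is the essence of the eraser.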
<urn:uuid:422c2e44-a927-4b24-a4ad-d3461c6802b3>
CC-MAIN-2014-52
http://en.wikipedia.org/wiki/Quantum_eraser
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767453.104/warc/CC-MAIN-20141217075247-00169-ip-10-231-17-201.ec2.internal.warc.gz
en
0.877469
1,440
3.71875
4
In physics, the Mach–Zehnder interferometer is a device used to determine the relative phase shift variations between two collimated beams derived by splitting light from a single source. The interferometer has been used, among other things, to measure phase shifts between the two beams caused by a sample or a change in length of one of the paths. The apparatus is named after the physicists Ludwig Mach (the son of Ernst Mach) and Ludwig Zehnder: Zehnder's proposal in an 1891 article was refined by Mach in an 1892 article. The Mach–Zehnder interferometer is a highly configurable instrument. In contrast to the well-known Michelson interferometer, each of the well separated light paths is traversed only once. If it is decided to produce fringes in white light, then, since white light has a limited coherence length, on the order of micrometers, great care must be taken to simultaneously equalize the optical paths over all wavelengths or no fringes will be visible. As seen in Fig. 1, a compensating cell made of the same type of glass as the test cell (so as to have equal optical dispersion) would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light traveling an equal optical path length in the test and reference beams produces a white light fringe of constructive interference. Collimated sources result in a nonlocalized fringe pattern. Localized fringes result when an extended source is used. In Fig. 2, we see that the fringes can be adjusted so that they are localized in any desired plane.:18 In most cases, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together. The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.:18,93–95 Mach–Zehnder interferometers are used in electro-optic modulators, electronic devices used in various fibre-optic communications applications. Mach-Zehnder modulators are incorporated in monolithic integrated circuits and offer well-behaved, high-bandwidth electro-optic amplitude and phase responses over a multiple GHz frequency range. How it works A collimated beam is split by a half-silvered mirror. The two resulting beams (the "sample beam" and the "reference beam") are each reflected by a mirror. The two beams then pass a second half-silvered mirror and enter two detectors. The fully silvered and half-silvered surfaces of all mirrors, except the last, face the inbound beam, and the half-silvered surface of the last mirror faces the outbound beam exiting in the same orientation as the original collimated beam. That is, if the original beam is horizontal, the half-silvered surface of the last mirror should face the horizontally outbound beam. 
The Fresnel equations for reflection and transmission of a wave at a dielectric imply that there is a phase change for a reflection when a wave reflects off a change from low to high refractive index but not when it reflects off a change from high to low. In other words: - A 180 degree phase shift occurs upon reflection from the front of a mirror, since the medium behind the mirror (glass) has a higher refractive index than the medium the light is traveling in (air). - No phase shift accompanies a rear surface reflection, since the medium behind the mirror (air) has a lower refractive index than the medium the light is traveling in (glass). We also note that: - The speed of light is slower in media with an index of refraction greater than that of a vacuum, which is 1. Specifically, its speed is: v = c/n, where c is the speed of light in vacuum and n is the index of refraction. This causes a phase shift increase proportional to (n − 1) × length traveled. - If k is the constant phase shift incurred by passing through a glass plate on which a mirror resides, a total of 2k phase shift occurs when reflecting off the rear of a mirror. This is because light traveling toward the rear of a mirror will enter the glass plate, incurring k phase shift, and then reflect off the mirror with no additional phase shift since only air is now behind the mirror, and travel again back through the glass plate incurring an additional k phase shift. Caveat: The rule about phase shifts applies to beamsplitters constructed with a dielectric coating, and must be modified if a metallic coating is used, or when different polarizations are taken into account. Also, in real interferometers, the thicknesses of the beamsplitters may differ, and the path lengths are not necessarily equal. Regardless, in the absence of absorption, conservation of energy guarantees that the two paths must differ by a half wavelength phase shift. Also note that beamsplitters that are not 50/50 are frequently employed to improve the interferometer's performance in certain types of measurement. Observing the effect of a sample In Fig. 3, in the absence of a sample, both the sample beam SB and the reference beam RB will arrive in phase at detector 1, yielding constructive interference. Both SB and RB will have undergone a phase shift of (1×wavelength + k) due to two front-surface reflections and one transmission through a glass plate. At detector 2, in the absence of a sample, the sample beam and reference beam will arrive with a phase difference of half a wavelength, yielding complete destructive interference. The RB arriving at detector 2 will have undergone a phase shift of 0.5×(wavelength) + 2k due to one front-surface reflection and two transmissions. The SB arriving at detector 2 will have undergone a (1×wavelength + 2k) phase shift due to two front-surface reflections and one rear-surface reflection. Therefore, when there is no sample, only detector 1 receives light. If a sample is placed in the path of the sample beam, the intensities of the beams entering the two detectors will change, allowing the calculation of the phase shift caused by the sample. 
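The phase bookkeeping above can be condensed into a two-mode matrix calculation. The sketch below uses the conventional symmetric beam-splitter matrix (a 90-degree phase on each reflection), which accounts for the phases differently from the front-surface/rear-surface description but reproduces the same physical result: with no sample, detector 1 is bright and detector 2 is dark.

```python
import numpy as np

# Minimal two-mode model of the interferometer. Basis: (upper path, lower path).
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)      # 50/50 beam splitter
MIRRORS = 1j * np.eye(2)                             # common mirror phase, no mixing

def detector_probs(sample_phase):
    sample = np.diag([np.exp(1j * sample_phase), 1.0])   # extra phase in the sample arm
    out = BS @ MIRRORS @ sample @ BS @ np.array([1.0, 0.0])
    # Port labels chosen to match the text: detector 1 is the bright port
    # (constructive interference) when no sample is present.
    return abs(out[1]) ** 2, abs(out[0]) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    p1, p2 = detector_probs(phi)
    print(f"sample phase {phi:.2f} rad: detector 1 = {p1:.2f}, detector 2 = {p2:.2f}")
```

With zero sample phase all the light exits toward detector 1; as the sample phase grows the probabilities vary as cos²(φ/2) and sin²(φ/2), which is how the interferometer converts a phase shift into measurable intensities.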
Use of the Mach–Zehnder interferometer

The versatility of the Mach–Zehnder configuration has led to its being used in a wide range of fundamental research topics in quantum mechanics, including studies on counterfactual definiteness, quantum entanglement, quantum computation, quantum cryptography, quantum logic, the Elitzur–Vaidman bomb tester, the quantum eraser experiment, the quantum Zeno effect, and neutron diffraction. See their respective articles for further information on these topics.

- Zehnder, Ludwig (1891). "Ein neuer Interferenzrefraktor". Zeitschrift für Instrumentenkunde 11: 275–285.
- Mach, Ludwig (1892). "Ueber einen Interferenzrefraktor". Zeitschrift für Instrumentenkunde 12: 89–93.
- Zetie, K.P.; Adams, S.F.; Tocknell, R.M. "How does a Mach–Zehnder interferometer work?". Physics Department, Westminster School, London. Retrieved 8 April 2012.
- Ashkenas, Harry I. (1950). The design and construction of a Mach–Zehnder interferometer for use with the GALCIT Transonic Wind Tunnel. Engineer's thesis. California Institute of Technology.
- Hariharan, P. (2007). Basics of Interferometry. Elsevier Inc. ISBN 0-12-373589-0.
- Chevalerias, R.; Latron, Y.; Veret, C. (1957). "Methods of Interferometry Applied to the Visualization of Flows in Wind Tunnels". Journal of the Optical Society of America 47 (8): 703. doi:10.1364/JOSA.47.000703.
- Ristić, Slavica. "Flow visualization techniques in wind tunnels – optical methods (Part II)". Military Technical Institute, Serbia. Retrieved 6 April 2012.
- Paris, M.G.A. (1999). "Entanglement and visibility at the output of a Mach–Zehnder interferometer". Physical Review A 59 (2): 1615–1621. arXiv:quant-ph/9811078. Bibcode:1999PhRvA..59.1615P. doi:10.1103/PhysRevA.59.1615. Retrieved 2 April 2012.
- Haack, G. R.; Förster, H.; Büttiker, M. (2010). "Parity detection and entanglement with a Mach–Zehnder interferometer". Physical Review B 82 (15). arXiv:1005.3976. Bibcode:2010PhRvB..82o5303H. doi:10.1103/PhysRevB.82.155303.
<urn:uuid:1723ac6f-e23b-4b06-bf02-b69ab3b25178>
CC-MAIN-2014-52
http://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772398.133/warc/CC-MAIN-20141217075252-00098-ip-10-231-17-201.ec2.internal.warc.gz
en
0.841002
2,036
3.734375
4
Routing Protocols

Introduction to Routing Protocols
The process of routing governs the path and passage of data traffic, in the form of packets and frames, through a network. Routing aims to transfer logical packets from their source to their eventual destination, and the process is governed by routing protocols. Routing protocols define how routers communicate among themselves, circulating the routing information that lets routers learn reachable destinations within the computer network.

Border Gateway Protocol (BGP)
Network traffic is forwarded along the desired paths during the process of routing, and this process is governed by a few crucial routing protocols. Border Gateway Protocol is one of the most significant. BGP maintains and keeps track of the IP networks that provide reachability between autonomous systems (collections of IP prefixes operated under a single routing policy, which together make up the Internet). BGP replaced the Exterior Gateway Protocol (EGP), which has now fallen completely out of use.

Cisco Discovery Protocol (CDP)
This data link layer protocol was developed by, and is used by, Cisco. It is intended for Cisco network devices and is used to share information with other directly attached Cisco devices. It can also serve the purpose of On-Demand Routing: CDP advertisements let a router learn the IP addresses and the model and type of the Cisco devices connected to the network, which removes the need for a separate dynamic routing protocol in simple hub-and-spoke networks.

Connectionless Network Service (CLNS)
This is a network service at the third layer of the OSI model, the network layer. It is referred to as connectionless because it does not require a circuit to be established; messages are transferred to their destinations independently of one another.

Hot Standby Router Protocol (HSRP)
This redundancy protocol, created by Cisco, provides a fault-tolerant default gateway. HSRP handles default-gateway failover with a simple technique: the HSRP-enabled routers exchange multicast hello packets, and the active (primary) router answers ARP requests for the shared virtual IP address with a shared virtual MAC address. If the primary router's hellos stop arriving, the next router takes over the same virtual IP and MAC address and so accomplishes the default-gateway failover.

IGRP/EIGRP (Enhanced Interior Gateway Routing Protocol)
EIGRP is a Cisco routing protocol based on its earlier version, IGRP. It is termed a distance-vector protocol and is used for exchanging routes in packet-switched networks. The basic purpose of this protocol is to stabilize the operation of the router and to guide it in using bandwidth and processing power efficiently. Routers running EIGRP can also redistribute route information to IGRP neighbors.

Internet Protocol (IP)
The Internet Protocol performs the task of delivering data packets from source to destination using IP addresses. It is used to transfer data packets across packet-switched networks as part of the TCP/IP protocol suite. A router forwards each packet by looking up the destination address in its routing table, as sketched below.

Intermediate System-to-Intermediate System (IS-IS)
IS-IS is a routing protocol that determines the best and most suitable route for data packets transferred through a packet-switched network by network devices such as routers.
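The IP paragraph above says packets are delivered hop by hop using the destination IP address. As a concrete, simplified illustration of that lookup — my own sketch, not something from the original article, with made-up prefixes and next-hop names — here is a longest-prefix-match routing-table lookup in Python:

```python
import ipaddress

# A toy routing table of (prefix, next hop) pairs. Real tables are filled in by
# the protocols described above (BGP, OSPF, EIGRP, RIP, ...) or by static routes.
ROUTES = [
    ("0.0.0.0/0",   "isp-gateway"),    # default route
    ("10.0.0.0/8",  "core-router"),
    ("10.1.2.0/24", "branch-router"),
]

def next_hop(destination: str) -> str:
    """Return the next hop of the most specific (longest) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in ROUTES
               if dest in ipaddress.ip_network(prefix)]
    network, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.7"))   # branch-router (the /24 is the most specific match)
print(next_hop("10.9.9.9"))   # core-router
print(next_hop("8.8.8.8"))    # isp-gateway (falls through to the default route)
```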
Multiprotocol Label Switching (MPLS)
MPLS is a highly scalable, protocol-agnostic mechanism used in high-performance telecommunication networks; it assigns labels to data packets and helps transfer data between distant nodes by creating virtual links.

Network Address Translation (NAT)
Network address translation is a mechanism that rewrites the network addresses in a packet's IP header as the packet travels across a routing device. It serves the purpose of remapping addresses from one address space into another.

Open Shortest Path First (OSPF)
OSPF is an interior gateway protocol that routes IP packets within a single autonomous system. It assembles link-state information to form a topology map, and this topology map allows routing tables to make forwarding decisions based on the destination IP addresses present in IP datagrams.

Quality of Service (QoS)
Quality of service is the term most commonly used in network technologies to refer to the ability to provide a guaranteed level of performance to a data flow; it is a teletraffic engineering term, concerned with guaranteeing and improving bit rates and multimedia streaming capabilities.

Routing Information Protocol (RIP)
This protocol sends periodic routing update messages to neighboring routers so that they can update their routes. A router makes the corresponding changes to its routing table when it receives a message indicating that an entry or the network topology has changed.
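RIP is the classic example of a distance-vector protocol: each router periodically tells its neighbors its current distance to every known network, and each neighbor keeps whichever path is cheapest. The sketch below is a minimal, made-up illustration of that update rule only — it is not an implementation of RIP itself (no timers, split horizon, or the 15-hop limit).

```python
# Each router's table maps destination -> (cost_in_hops, next_hop).
table_A = {"net1": (1, "direct")}
table_B = {"net2": (1, "direct")}

def receive_update(my_table, neighbor_name, neighbor_table, link_cost=1):
    """Distance-vector rule: adopt a neighbor's route when going through that
    neighbor is cheaper than anything already in the table."""
    for dest, (cost, _) in list(neighbor_table.items()):
        new_cost = cost + link_cost
        if dest not in my_table or new_cost < my_table[dest][0]:
            my_table[dest] = (new_cost, neighbor_name)

# Routers A and B exchange periodic updates over a single link.
receive_update(table_A, "B", table_B)
receive_update(table_B, "A", table_A)

print(table_A)  # {'net1': (1, 'direct'), 'net2': (2, 'B')}
print(table_B)  # {'net2': (1, 'direct'), 'net1': (2, 'A')}
```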
<urn:uuid:26eec6c4-86bd-4255-8e78-833517fea20d>
CC-MAIN-2014-52
http://www.wifinotes.com/computer-networks/routing-protocols.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771164.85/warc/CC-MAIN-20141217075251-00037-ip-10-231-17-201.ec2.internal.warc.gz
en
0.887009
1,425
3.875
4
Single field shapes quantum bits
Technology Research News

Quantum computers, which tap the properties of particles like atoms, photons and electrons to carry out computations, could potentially use a variety of schemes: individual photons controlled by optical networks, clouds of atoms linked by laser beams, and electrons trapped in quantum dots embedded in semiconductor chips.

Due to the strange nature of quantum particles, quantum computers are theoretically much faster than ordinary computers at solving certain large problems, like cracking secret codes. Chip-based quantum computers would have a distinct advantage: the potential to leverage the extensive experience and manufacturing infrastructure of the semiconductor industry. Controlling individual electrons, however, is extremely challenging.

Researchers have recently realized that it may be possible to control the electrons in a quantum computer using a single magnetic field rather than having to produce extremely small, precisely focused magnetic fields for each electron. Researchers from the University of Toronto and the University of Wisconsin at Madison have advanced this idea with a scheme that allows individual electrons to serve as the quantum bits that store and process computer information. The scheme is an improvement over existing global magnetic field schemes, which require each qubit to consist of two or more electrons.

Electrons have two magnetic orientations, spin up and spin down, which can represent the 1s and 0s of computing. The logic of quantum computing is based on one-qubit gates and two-qubit gates. One-qubit gates flip individual spins, changing a 1 to a 0 and vice versa. Two-qubit gates cause two spins to become linked, or entangled. The researchers' scheme relies on the interactions of pairs of electrons to create both types of gates. Tiny electrodes positioned near quantum dots -- bits of semiconductor material that can trap single electrons -- can draw neighboring electrons near enough that they exchange energy. If the electrons interact long enough, they swap spin orientations.

The challenge is finding a way to use the interaction to flip the spin of one electron without flipping the spin of the other. The scheme does so by taking a pair of electrons through eleven incremental steps using the electron interaction and the global magnetic field. "We first turn on the exchange interactions... through small electrodes to generate a swap gate, then turn on the global magnetic field," said Lian-Ao Wu, a research associate at the University of Toronto. The eleven steps -- four electron interactions and seven pulses of the magnetic field -- alter the spins. Because the magnetic field diminishes in strength over distance, each electron is exposed to a different strength. By tuning the field, the researchers can make the process cancel out the changes to one spin while flipping the other, according to Wu.

The researchers' scheme could be implemented using a pair of square, 100-nanometer-diameter aluminum nanowires separated by a thin insulating layer. A row of quantum dots in a zigzag pattern would be positioned parallel to the wires, with half of the dots 200 nanometers from the wires and the other half 300 nanometers away. A nanometer is one millionth of a millimeter, or the span of 10 hydrogen atoms. The ability to build such a quantum computer depends on developments in nanotechnology, said Wu. "It is still hard to design a complete control scheme of the exchange interactions," he said.
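The article's one-qubit gate ("flip individual spins") and the full exchange interaction ("they swap spin orientations") correspond to standard matrices that are easy to experiment with. The snippet below is a generic textbook illustration in NumPy — it is not the authors' eleven-step pulse sequence — showing the X (spin-flip) gate and the SWAP (full exchange) operation acting on two spins.

```python
import numpy as np

# Two-spin basis: |00>, |01>, |10>, |11>  (0 = spin down, 1 = spin up)
X = np.array([[0, 1],
              [1, 0]])                  # one-qubit gate: flips a single spin
I = np.eye(2)

SWAP = np.array([[1, 0, 0, 0],          # full exchange: the two spins trade
                 [0, 0, 1, 0],          # orientations, as described in the article
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

def ket(bits):
    """State vector for a two-spin basis state such as '01'."""
    v = np.zeros(4)
    v[int(bits, 2)] = 1.0
    return v

# Flip only the first spin: |00> -> |10>
print(np.argmax(np.kron(X, I) @ ket("00")))   # 2, i.e. binary 10

# Let the two spins exchange: |10> -> |01>
print(np.argmax(SWAP @ ket("10")))            # 1, i.e. binary 01
```

Letting the exchange run for only half the swap time gives the entangling "square root of SWAP" operation, which is the standard route from exchange interactions to two-qubit gates.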
"Once such obstacles are overcome, our scheme should offer significant simplifications and flexibility." The on-chip conducting wires called for in the researchers' scheme have been used in physics experiments involving controlling beams of atoms and Bose-Einstein condensates, which are small clusters of atoms induced to behave as one quantum entity, according to Wu. The researchers are working on reducing the number of steps required for their quantum logic circuit, combining their scheme with quantum error correction techniques, and reducing the engineering challenge of implementing the design, said Wu. The scheme would require making the aluminum wires with a precision of a single layer of atoms, but optimizing the scheme should make it possible to loosen the requirements to several atomic layers, which is technologically feasible, according to Wu. "The main challenge is [achieving a] high degree of control of the exchange interactions," he said. The technique could be used practically in 10 to 20 years, said Wu's research colleague was Daniel A. Lidar at the University of Toronto and Mark Friesen at the University of Wisconsin at Madison. The work appeared in the July 15, 2004 issue of Physical Review Letters. The research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Army Research Office/Advanced Research and Development Activity (ARO/ARDA). Timeline: 10-20 years TRN Categories: Quantum Computing and Communications Story Type: News Related Elements: Technical paper, "One-Spin Quantum Logic Gates from Exchange Interactions and a Global Magnetic Field," Physical Review Letters, July 15, 2004 November 3/10, 2004 DNA machines take a walk DNA in nanotubes Single field shapes lengthen to centimeters Lasers move droplets promise reliable MRAM Research News Roundup Research Watch blog View from the High Ground Q&A How It Works News | Blog Buy an ad link
<urn:uuid:0e10bcb7-986b-4ea1-bb26-59b8381e0180>
CC-MAIN-2014-52
http://www.trnmag.com/Stories/2004/110304/Single_field_shapes_quantum_bits_110304.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776996.17/warc/CC-MAIN-20141217075256-00118-ip-10-231-17-201.ec2.internal.warc.gz
en
0.869432
1,141
3.84375
4
According to experts, the chips used to power data center servers will continue to get smaller and faster, and Moore's Law of doubled performance every two years will continue unabated for at least the next 10 years. Also, new features like security and wireless communications will be bundled into microprocessors, ultimately easing the jobs of people in the data center who implement and maintain the servers. First discovered in 1824 as a means of conducting electricity, silicon has been used as an essential semiconductor building block virtually since the first integrated circuit was designed at Texas Instruments in 1958. After 2010, though, things could get very interesting. "Everyone in the field is pretty comfortable that Moore's Law and silicon will dominate through the end of the decade," says Nathan Brookwood, a semiconductor analyst at Insight 64, an independent consultancy in Saratoga, Calif. "After that, though, people are a little worried about whether silicon will be able to go on indefinitely." Some believe that silicon eventually will start to run out of steam; after all, one can cram only so many things into a tiny space before reaching a point of diminishing returns. Experts have been debating for years when that limit will be reached. There's been plenty of research going on in various areas of chip design and fabrication to help overcome the obstacles. Nanotechnology, quantum computing and other technologies have come to the fore as possible silicon replacements, Brookwood says. In the meantime, semiconductor makers are trying to do all they can to keep silicon alive. Although research in new areas is ongoing, to actually mass-produce anything other than silicon will cost billions of dollars in new semiconductor manufacturing and design equipment. As the economy continues to spiral downward, these are not costs that chip makers are eager to bear. One method of prolonging silicon's life is to put two or more processors on a single piece of silicon. Called chip multiprocessing, this is expensive because all the components in each chip must be replicated. A less expensive approach -- one already being used by Intel and about to be adopted by Sun and others -- is called simultaneous multithreading. With this technique, some parts of the chip are replicated but the device can switch among multiple threads while sharing a lot of the underlying chip resources. Thus one chip can do the work of two or more, cutting down the number of chips needed to do the same amount of work. Other approaches are being tried, too. In April, IBM and Sony announced a deal to jointly develop chips based on silicon-on-insulator and other types of materials. Silicon-on-insulator places a level of insulation between the transistor and its silicon substrate, reducing distortion and improving switching by 20 to 35 percent. Another up-and-coming area is called a "system on a chip," in which the central processing unit, communications features and memory are integrated onto one chip. In the meantime, though, data center staffers will continue to see familiar, if welcome, improvements in chip and server technologies. Mike Splain, chief technology officer for Sun's processor and network products group, expects several things to happen during the next few years with high-end servers. One is a migration from software-based recovery to hardware-based, so error recovery is more automatic and totally guaranteed. Another trend will be the use of doubled-up processor cores -- using multiple threads in one CPU. 
"You won't get exactly a doubling every two years, but it will be some factor of that," Splain says. "So you might see a true doubling every three to four years" in the highest-end machines. He also expects the machines to become much smaller because of this doubling-up, so customers "will get more space back in their data centers." For its part, Intel has committed to expanding its NetBurst architecture, now used in Intel's Pentium and Xeon chips as well as the Itanium line, to be able to handle 10 GHz, up from 2.8 GHz today, according to Mike Graf, product line manager for the Itanium processor family. Intel is positioning Itanium as the highest-end chip in its lineup, making it the basis for machines with 32-plus processors. Distributed databases, scientific technical computing and other high-performance niches are its target markets. Intel is currently shipping its second-generation Itanium. At the Intel Developer's Forum in September, the company laid out plans for what's ahead. By summer 2003, the "Madison" generation of the chip will debut, with twice the cache (6M bytes versus the current 3M bytes), and those chips will be 30 to 50 percent faster than the current generation. Also, Itanium will feature multithreading and hyperthreading, which allows the computer to execute multiple threads in parallel. This, in turn, improves transaction rates by up to 30 percent over systems that don't have hyperthreading, Intel says. Multithreading and hyperthreading are already features in the Xeon family and will now move to Itanium. Within about two years, the company will debut a chip with 1 billion transistors on a single processor, Graf says. Today's top-of-the-line processor has about 221 million transistors, and the Madison generation will sport around a half-billion. Perhaps just as important, future generations of Itanium -- and there are five in development -- will be compatible at both the hardware and software levels with the existing chip set. This should translate into fewer installation problems with device drivers and other issues down the road. The goal is to provide IT shops, especially in these troubled economic times, the means for "doing more with less," Graf says. All told, Intel will "take technology traditionally in the high end of the market and bring it into the mainstream," says Tony Massimini, chief of technology at Semico Research Corp. in Phoenix, an independent consultancy specializing in semiconductor research. "They will keep pumping out chips in high volume and low price, and this will look very attractive" to IT shops, he says. Although Massimini doesn't expect anything radically different for server chips in the next few years, he did say that Intel will be "pushing wireless" features a great deal. "With greater wireless connectivity for notebooks and desktops, that will put a load on the data center guys" to support those features from the server side, he adds. According to Massimini, at its recent developer forum Intel "alluded" to the idea improving security by embedding some features into its chips, through something code-named La Grande, although the company didn't provide a timeframe for doing this. Intel's Graf wouldn't disclose any information about this project, but said that security is an issue the company is aware of and working on. For more information: - Search390.com's Best Web Links on processors/servers - Learn more about Moore's Law and Intel processors in this tip: Intel brings economies of scale to high-end space with Itanium 64-bit processor
<urn:uuid:0f470dd2-3680-429d-a93e-d48ae2956f3c>
CC-MAIN-2014-52
http://searchoracle.techtarget.com/tip/Moore-s-law-has-a-shelf-life-Chip-makers-plan-for-a-post-silicon-world
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447552650.71/warc/CC-MAIN-20141224185912-00007-ip-10-231-17-201.ec2.internal.warc.gz
en
0.954248
1,445
3.578125
4
In my previous article, I talked about the RSA cryptosystem which is widely used on the Internet for secure data transmission. The power and security of the RSA cryptosystem derives from the fact that the factoring problem is "hard." That is, it is believed that the full decryption of an RSA ciphertext is infeasible because no efficient classical algorithm currently exists for factoring large numbers. However, in 1994 Peter Shor showed that a quantum computer could be used to factor a number in polynomial time, thus effectively breaking RSA.

It may be tempting to use the speed of a quantum computer to simply check all possible divisors in parallel. In this case, we would be performing a classical algorithm on a quantum computer, making use only of the increased speed of the quantum machine. Unfortunately, this is not going to work. In a way, it is possible for a quantum computer to try all possible divisors. However, due to the nature of quantum computing, when measuring the outcome of the computations, you will get a random possible divisor, which is almost certainly not the one you want.

How, then, can we use a quantum computer to solve the factoring problem? The key to a fast and accurate quantum factoring algorithm is to make use of the structure of the factoring problem itself. Instead of looking for factors directly, we must use some mathematical property of factoring. Fortunately, the factoring problem has plenty of special properties from which to choose. For example, given a positive integer, even if we do not know its prime factorization we do know that it has exactly one factorization. This fact does not help us solve the factorization problem, but it does give us hope that the problem has other nice mathematical properties that will. The property we will use is the ability to reduce the prime factorization problem into a problem of order (or period) finding.

Let's start by looking at an example. Consider the sequence of numbers 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, … Now, let's look at this same sequence of powers of two, but taken "mod 15." In other words, we will create a new sequence of numbers consisting of the remainders when each power of two is divided by 15. This gives us the new sequence 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, … We see that taking the powers of two mod 15 gives us a periodic sequence whose period (or order) is four. For another example, consider the same powers of two, but taken mod 21. In this case, we have the new sequence 2, 4, 8, 16, 11, 1, 2, 4, 8, 16, … Here, we get a periodic sequence whose period is six.

In the 1760s, Euler discovered a beautiful pattern to this period finding problem. Let N be the product of two prime numbers, p and q. Consider the sequence a mod N, a^2 mod N, a^3 mod N, a^4 mod N, … If a is not divisible by p or q, then the above sequence will repeat with some period that evenly divides (p − 1)(q − 1). In our examples above, we have a = 2. In the first example, we have N = 15, which has the prime factors p = 3 and q = 5. Then (p − 1)(q − 1) = 8, which is divisible by the period of 4. In the second example, we have N = 21, which has the prime factors p = 3 and q = 7. Then (p − 1)(q − 1) = 12, which is divisible by the period of 6.

But how does this help us solve the factorization problem? If we can find the period of the sequence a mod N, a^2 mod N, a^3 mod N, … then we learn something about the prime factors of N. In particular, we learn a divisor of (p − 1)(q − 1). Of course, we'd rather learn the factors p and q themselves, but this, at least, represents progress. If we determine several random divisors of (p − 1)(q − 1) by trying different random values of a, then we can multiply those divisors together to obtain (p − 1)(q − 1) itself.
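The period-finding claim is easy to check by brute force for small numbers. The following sketch is my own illustration (not from the post): it computes the period of the sequence a mod N, a^2 mod N, … directly and verifies Euler's observation for the two examples in the text.

```python
def period(a, N):
    """Smallest r > 0 with a**r % N == 1, found by brute force.
    (This brute-force search is exactly the step that Shor's algorithm
    replaces with an efficient quantum computation.)"""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

for N, (p, q) in [(15, (3, 5)), (21, (3, 7))]:
    r = period(2, N)
    phi = (p - 1) * (q - 1)
    print(f"N = {N}: period of 2 mod {N} is {r}; "
          f"(p-1)(q-1) = {phi} is divisible by {r}: {phi % r == 0}")
```

Running it prints a period of 4 for N = 15 and 6 for N = 21, with (p − 1)(q − 1) equal to 8 and 12 respectively — matching the text.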
Once we know (p − 1)(q − 1), we can then determine p and q. However, there's still a problem with applying our observations to the factorization problem. Even though the sequence will eventually start repeating itself, the number of steps before it repeats could be almost as large as N, which in the RSA cryptosystem is a very large number! This issue is why finding the period of the sequence does not appear to lead to a classical factoring algorithm. However, with the help of quantum mechanics, we can define a quantum algorithm that works in a reasonable amount of time.

Shor's algorithm is composed of two parts. The first part turns the factoring problem into the period finding problem, and can be computed on a classical computer. The second part (step 2 below) finds the period using the quantum Fourier transform and is responsible for the quantum speedup of the algorithm. We begin by briefly describing all five steps. After that, we will focus on the quantum part of the algorithm (i.e. step 2).

To factor a large integer N (which, without loss of generality, we may assume is odd), we use Shor's algorithm:

1. Choose a random positive integer a < N. Compute gcd(a, N), which may be done in polynomial time using the Euclidean algorithm. If gcd(a, N) is not 1, then we have found a non-trivial factor of N, and we are done. If, on the other hand, gcd(a, N) = 1, proceed to step 2.

2. Use a quantum computer to determine the unknown period r of the sequence a mod N, a^2 mod N, a^3 mod N, a^4 mod N, …

3. If r is an odd integer, then return to step 1. The probability that r is odd is (1/2)^k, where k is the number of distinct prime factors of N. If r is even, then proceed to step 4.

4. Since r is even, (a^(r/2) − 1)(a^(r/2) + 1) = a^r − 1 ≡ 0 (mod N). If a^(r/2) + 1 ≡ 0 (mod N), then go to step 1. If a^(r/2) + 1 is not congruent to 0 (mod N), then proceed to step 5. It can be shown that the probability that a^(r/2) + 1 ≡ 0 (mod N) is less than (1/2)^(k−1), where k denotes the number of distinct prime factors of N.

5. Compute d = gcd(a^(r/2) − 1, N) using the Euclidean algorithm. Since a^(r/2) + 1 is not congruent to 0 (mod N), it can be shown that d is a non-trivial factor of N. Exit with the answer d.

Thus, the task of factoring an odd positive integer N reduces to the problem of finding the period of a function/sequence.

Shor's period-finding algorithm (step 2 above) relies heavily on the ability of a quantum computer to be in many states simultaneously (a superposition of states). To compute the period of a function f, we evaluate the function at all points simultaneously. Unfortunately, quantum mechanics does not allow us to access all of this information directly. Instead, a measurement must be taken which will yield only one of the possible values (destroying all others). Because of this issue, we must transform the superposition to another state that will return the correct period with high probability. This is achieved by using the quantum Fourier transform.

The main components of Shor's period-finding algorithm are as follows:

1. Create a superposition of states. This can be done by applying Hadamard gates to all qubits in the input register.

2. Implement the function f as a quantum transform. To achieve this, Shor used repeated squaring for his modular exponentiation transform.

3. Perform a quantum Fourier transform.

After these transformations, a measurement will yield an approximation to the period r.

Now let's look at an example of how N = 21 can be factored using Shor's algorithm.

Step 1. Choose a random positive integer a, say a = 2. Since gcd(2, 21) = 1, proceed to step 2 to find the period of the function given by f(x) = 2^x mod 21.

Step 2. Run Shor's period-finding algorithm on a quantum computer to find (with high probability) that the period r = 6.

Step 3. Since r = 6 is even, we proceed to step 4.

Step 4. Since 2^(6/2) + 1 = 9 is not congruent to 0 (mod 21), proceed to step 5.

Step 5.
With the Euclidean algorithm, compute gcd(2^(6/2) − 1, 21) = gcd(2^3 − 1, 21) = gcd(7, 21) = 7. We have succeeded in using Shor's algorithm to find a non-trivial factor of N = 21, namely 7.

Shor, P. W. (1997). Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5), 1484–1509. doi:http://dx.doi.org/10.1137/S0097539795293172

Lomonaco, Jr, S. J. (2000). Shor's Quantum Factoring Algorithm. arXiv:quant-ph/0010034
<urn:uuid:7a993a74-bb8a-40c7-a003-9479b946d3a4>
CC-MAIN-2014-52
http://blogs.ams.org/mathgradblog/2014/04/30/shors-algorithm-breaking-rsa-encryption/
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770432.4/warc/CC-MAIN-20141217075250-00035-ip-10-231-17-201.ec2.internal.warc.gz
en
0.91168
1,768
3.515625
4
One of the things I have a hard time intuiting, in quantum computing, is the interplay between classical bits and quantum bits. A good example of this is superdense coding. Superdense coding encodes two classical bits into a single transmitted qubit, by taking advantage of a previously shared qubit. (Superdense coding is also fun to say.) Thought of another way, superdense coding turns previously entangled qubits into a fuel you can store and then later consume to double your bandwidth. Which is what I mean when I say superdense coding lets you store bandwidth.

In case you don't want to watch this video explaining superdense coding (I'd recommend the whole series it's part of), I will explain it here. In order to do superdense coding you need three things:
- A way to store qubits.
- A quantum communication channel to transmit qubits.
- The ability to do a few quantum operations to qubits.

The actual protocol is not too complicated, although understanding why it works can be. Here is a quantum circuit diagram showing what happens, which I will explain below: Imagine that Alice is the one who wants to send information, and Bob is the one who will receive it. Alice roughly corresponds to the top of the diagram, and Bob to the bottom. The sequence of events, from left to right, is as follows.

First, ahead of time, Alice and Bob each get half of a Bell pair. That is to say, two qubits are placed into a superposition where either both are false or both are true, and then Alice and Bob each take one of those qubits. There's a lot of flexibility in who actually makes the Bell pair. Alice can do it, Bob can do it, or an unrelated third party can do it. Regardless, what matters is that the Bell pair can be delivered ahead of time and stored for later use.

Second, Alice decides what information she wants to send to Bob. She can send two bits (i.e. one of four possibilities). We'll call the possible messages 00, 01, 10, and 11.

Third, Alice encodes the message by applying operations to her qubit (the one from the Bell pair). The operations are based on the message she wants to send. If she wants to send 00, she does nothing. For 01, she rotates the qubit 180° around its Z axis. For 10 she instead rotates 180° around its X axis. Otherwise the message is 11 and she rotates both 180° around the X axis and then 180° around the Z axis. Note that the 11 case is technically a rotation around the Y axis, but it's nice to split it into X and Z rotations because it makes the circuit simpler. It means Alice can just apply the X rotation if the second bit is true, and afterwards the Z rotation if the first bit is true.

Fourth, Alice sends her qubit to Bob. So Bob will end up with both halves of the Bell pair, but Alice has operated on one of them.

Fifth, Bob does a decoding operation. He conditionally-nots his qubit, conditioned on Alice's qubit. This will flip the value of his qubit in the parts of the superposition where hers is true. Then he rotates Alice's qubit by 180° around the diagonal X+Z axis (i.e. applies the Hadamard operation). Note that the decoding operation Bob applies is actually the inverse of how the Bell pair is made. Normally the decoding operation would just "unmake" the pair, leaving Bob with two qubits set to false and not in superposition. That's why Alice applying no operation corresponds to sending 00. The reasons the other operations give the right results are a bit harder to explain, and I won't try here, but the qubits do always end up in the right state.
Finally, Bob measures the two qubits and retrieves the message.

The interesting thing about superdense coding is that, although Bob still has to receive one qubit per classical bit, one of the qubits can be sent far in advance. Then, when the actual message has to be sent, half of what's needed to reconstruct it has already arrived. So, basically, the pre-shared Bell pairs let you store bandwidth. They are a fuel that you consume to transmit at double speed.

This would do interesting things to network design. For example, during times of low utilization you could use the remaining capacity to share Bell pairs and build up bandwidth to be consumed during high utilization. This would smooth out traffic peaks. Alternatively, you could double the bandwidth of a low-latency channel by continuously making Bell pairs on a secondary high latency channel. (Imagine a truck showing up every day to drop off a box filled with trillions of qubits in Bell pairs, so your internet can go faster.)

Of course all of this assumes that you'll want to use quantum channels to send classical information. Maybe classical channels will simply be more than twice as fast (do photons decohere when sent over fiber?). Maybe quantum channels will be too expensive to bother. Maybe we'll be too busy sending qubits over them to spare time to send classical bits. There's tons of practical reasons it might not work out. But still, I enjoy the hypothetical image of trucks dropping off boxes of internet-go-fast.

To summarize: superdense coding exploits pre-shared entanglement, which empowers a quantum communication channel to send, ahead of time, half of what will be needed to reconstruct a classical message. This lets you transmit at double speed until the pre-delivered qubits run out.
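The whole protocol is small enough to simulate directly with state vectors. The sketch below is my own illustration, not from the post: it runs all four messages through the circuit described above — make a Bell pair, apply Alice's X/Z rotations, send her qubit to Bob, then undo the Bell-pair construction with a CNOT and a Hadamard. One consistent bit-to-rotation convention is used here (Z for the first bit, X for the second); the post's prose describes the mapping in two slightly different ways, and either labeling works as long as Alice and Bob agree on it.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],   # control = Alice (first qubit), target = Bob
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>) / sqrt(2)

def on_alice(gate):
    return np.kron(gate, I)                  # act on Alice's qubit only

for b1 in (0, 1):
    for b2 in (0, 1):
        # Alice encodes: X if the second bit is set, then Z if the first bit is set.
        state = bell
        if b2: state = on_alice(X) @ state
        if b1: state = on_alice(Z) @ state
        # Alice's qubit travels to Bob, who decodes (inverse of Bell-pair creation).
        state = on_alice(H) @ (CNOT @ state)
        measured = np.argmax(np.abs(state))  # the state is now a single basis state
        print(f"sent {b1}{b2} -> Bob reads {measured:02b}")
```

Running it prints "sent 00 -> Bob reads 00" through "sent 11 -> Bob reads 11": two classical bits recovered from the one qubit that was transmitted after the message was chosen.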
<urn:uuid:a0dcc19e-ff7e-4712-a84b-e3831137681a>
CC-MAIN-2014-52
http://strilanc.com/quantum/2014/05/03/Storing-Bandwidth-with-Superdense-Coding.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769709.84/warc/CC-MAIN-20141217075249-00031-ip-10-231-17-201.ec2.internal.warc.gz
en
0.936758
1,174
3.859375
4
It’s unique and quite intriguing to discover that quantum mechanics can manifest itself in a form that could enhance the capabilities of traditional computer systems, which as we all know today work on binary. This application creates what are called quantum computers, and it harnesses the principles of quantum mechanics to attain computing power that is beyond the scope of the classical computers we use now. The article gives a brief overview of this phenomenon of computing in layman's terms, one that non-physicist computing geeks could possibly digest.

How traditional computers work – All information is processed and understood by a computer using a binary language composed of bits (0 or 1). When you break a computer down, you will find a bunch of silicon chips with circuits of logic gates made up of transistors, or switches, which function using voltage. A high voltage represents the on state of the switch, equivalent to 1, and a low voltage the off state, equivalent to 0. All forms of data, be it text, music, audio, video or software, are ultimately encoded and stored by the computer as binary in the computer’s memory.

Rethinking binary and transistors – Abandoning the existing classical principles of computing, this new world of quantum computing follows its own rules, the ones that nature is based on. Nature is not classical: at its smallest scales, the natural world does not behave like the macroscopic world we see, and it is this fundamental aspect that quantum computing is built on. The idea is to reduce what we call "bits" or switches down to the smallest possible discrete unit, the quantum level, so that the machine computes the way nature computes. This gives rise to "qubits" as opposed to classical bits.

How quantum computers work – Logically, the quantum system uses, as mentioned earlier, what are coined as qubits as the smallest discrete units to represent information; these may be electrons with spins, photons with polarization, trapped ions, semiconducting circuits, etc. The properties of quantum mechanics come into play because a single qubit can exist not only in two discrete energy states, low and high (similar to 0 and 1), but also in a superposition state in which it exists in both states at once. When measured, however, the superposition fades and one of the two distinct states is returned, based on the probabilities of each state. When using two qubits instead of a single qubit, 4 discrete states exist (2 discrete states for each qubit), and the pair can even exist in a superposition of these states. Similarly, using n qubits, 2^n states are achieved, which exist as combinations of 0s and 1s in parallel. So this gives a way to represent information. The next step is to process information, which requires manipulation of these qubits. This is brought about by the use of special quantum logic gates and quantum algorithms such as Shor’s algorithm and Grover’s algorithm, which function using the quantum-mechanical principles of superposition, entanglement and measurement. Without going into the complicated details of the quantum phenomena, the state of the qubits is manipulated by applying precise electromagnetic waves, microwaves and amplification functions as defined by the algorithms.

Advantages of quantum computers – Two key factors make quantum computers a billion times more powerful than the most powerful supercomputer known to us today. These are:
- Exponential increase in computing ability with the addition of each qubit
This gives quantum computers processing power that is beyond the scope of a classical computer.
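To make the 2^n claim concrete: a state of n qubits is described by a vector of 2^n complex amplitudes, and a measurement returns each basis outcome with probability equal to the squared magnitude of its amplitude. The toy snippet below is my own illustration, not from the article; it builds an equal superposition of n qubits and samples a few measurements.

```python
import numpy as np

n = 3                                            # number of qubits
amplitudes = np.ones(2**n) / np.sqrt(2**n)       # equal superposition of all 2^n states

print("number of amplitudes:", len(amplitudes))  # 8 = 2^3
probabilities = np.abs(amplitudes) ** 2
print("probability of each outcome:", probabilities[0])  # 0.125

# Measurement collapses the superposition: one basis state comes back,
# chosen according to the probabilities.
rng = np.random.default_rng(0)
samples = rng.choice(2**n, size=5, p=probabilities)
print(["{:03b}".format(s) for s in samples])
```

Each added qubit doubles the length of the amplitude vector — the "exponential increase" the list above refers to — yet a single measurement still returns only n classical bits.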
Applications of quantum computing – Processing of billions of bytes can easily be performed by quantum computers, which can be applied in:
- Big data
- Molecular Simulations
- Protein Folding
- Drug Discovery
- Genome Sequencing
- Diagnose DNA sequence
- Catalyst Analysis
- Financial Analysis
- Climate Prediction
- Graphic searches of complicated databases
- Massive Software Testing

Work on quantum computers is an ongoing endeavor with tremendous potential to revolutionize the way we understand the digital world. It does not seek to replace classical computers, but a sustainable quantum computer could aid classical computers in computationally intensive tasks that are restrictive, difficult and time consuming for our traditional Turing-based computers.
<urn:uuid:c4a686c6-10e5-426b-854f-7ffa611d90c5>
CC-MAIN-2020-24
https://www.geeksforgeeks.org/rethinking-binary-with-quantum-computers/?ref=rp
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00564.warc.gz
en
0.900764
1,116
3.671875
4
Scientists Use Real Data to Measure the Cosmos

Illustration of the concept of Baryonic Acoustic Oscillations.

Scientists from the Imperial College London have used data, rather than calculations related to general relativity, to measure large distances in the Universe for the first time. A research team from Imperial College London and the University of Barcelona has used data from astronomical surveys to measure a standard distance that is central to our understanding of the expansion of the universe. Previously the size of this ‘standard ruler’ had only been predicted from theoretical models that rely on general relativity to explain gravity at large scales. The new study is the first to measure it using observed data. A standard ruler is an object which consistently has the same physical size, so that a comparison of its actual size to its size in the sky will provide a measurement of its distance to Earth.

"Our research suggests that current methods for measuring distance in the Universe are more complicated than they need to be," said Professor Alan Heavens from the Department of Physics, Imperial College London, who led the study. "Traditionally in cosmology, general relativity plays a central role in most models and interpretations. We have demonstrated that current data are powerful enough to measure the geometry and expansion history of the Universe without relying on calculations relating to general relativity. "We hope this more data-driven approach, combined with an ever increasing wealth of observational data, could provide more precise measurements that will be useful for future projects that are planning to answer major questions around the acceleration of the Universe and dark energy."

The standard ruler measured in the research is the baryon acoustic oscillation scale. This is a pattern of a specific length which is imprinted in the clustering of matter created by small variations in density in the very early Universe (about 400,000 years after the Big Bang). The length of this pattern, which is the same today as it was then, is the baryon acoustic oscillation scale. The team calculated the length to be 143 Megaparsecs (nearly 480 million light years), which is similar to accepted predictions for this distance from models based on general relativity. Published in Physical Review Letters, the findings of the research suggest it is possible to measure cosmological distances independently from models that rely on general relativity.

Einstein’s theory of general relativity replaced Newton’s law to become the accepted explanation of how gravity behaves at large scales. Many important astrophysics models are based on general relativity, including those dealing with the expansion of the Universe and black holes. However some unresolved issues surround general relativity. These include its lack of reconciliation with the laws of quantum physics and the need for it to be extrapolated many orders of magnitude in scale in order to apply it in cosmological settings. No other physical law has been extrapolated that far without needing any adjustment, so its assumptions are still open to question.

Co-author of the study, Professor Raul Jimenez from the University of Barcelona, said: "The uncertainties around general relativity have motivated us to develop methods to derive more direct measurements of the cosmos, rather than relying so heavily on inferences from models.
For our study we only made some minimal theoretical assumptions such as the symmetry of the Universe and a smooth expansion history."

Co-author Professor Licia Verde from the University of Barcelona added: "There is a big difference between measuring distance and inferring its value indirectly. Usually in cosmology we can only do the latter and this is one of these rare and precious cases where we can directly measure distance. Most statements in cosmology assume general relativity works and does so on extremely large scales, which means we are often extrapolating figures out of our comfort zone. So it is reassuring to discover that we can make strong and important statements without depending on general relativity and which match previous statements. It gives one confidence that the observations we have of the Universe, as strange and puzzling as they might be, are realistic and sound!"

The research used current data from astronomical surveys on the brightness of exploding stars (supernovae) and on the regular pattern in the clustering of matter (baryonic acoustic oscillations) to measure the size of this ‘standard ruler’. The matter that created this standard ruler formed about 400,000 years after the Big Bang. This period was a time when the physics of the Universe was still relatively simple so the researchers did not need to consider more ‘exotic’ concepts such as dark energy in their measurements.

"In this study we have used measurements that are very clean," Professor Heavens explained, "And the theory that we do apply comes from a time relatively soon after the Big Bang when the physics was also clean. This means we have what we believe to be a precise method of measurement based on observations of the cosmos. Astrophysics is an incredibly active but changeable field and the support for the different models is liable to change. Even when models are abandoned, measurements of the cosmos will survive. If we can rely on direct measurements based on real observations rather than theoretical models then this is good news for cosmology and astrophysics."

The research was supported by the Royal Society and the European Research Council.

Publication: Alan Heavens, et al., "Standard Rulers, Candles, and Clocks from the Low-Redshift Universe," Phys. Rev. Lett. 113, 241302, 2014; doi:10.1103/PhysRevLett.113.241302

Image: Chris Blake & Sam Moorfield
<urn:uuid:25e5576d-ad2c-4ddf-8a6b-e939f3755944>
CC-MAIN-2020-24
http://xianso.com/Article/6316
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00364.warc.gz
en
0.932737
1,361
3.65625
4
Light travels extremely fast – in less than a second, it could travel seven times the circumference of the Earth. Most of our communication systems use light or other electromagnetic waves to send messages, which means we can talk to others on the far side of the world in almost no time at all. It’s difficult to imagine that, on Earth, we would ever need anything faster. However, space is big: sending a message to Mars using light would take 12.5 minutes, resulting in a very jolted conversation. Sending a message to the nearest star beyond our Sun would take no less than four years. If superluminal communication (communication faster than light) were possible, it would open up doors for how we might communicate with deep space explorers in the future. Looking at whether superluminal communication is possible takes us on a whirlwind tour around some of the most exciting places in physics, from space-warped wormholes to particles that can travel backwards in time. We begin, though, in the bizarre world of quantum mechanics. Some of the strangest phenomena in science are described by the theory of quantum mechanics, a theory that was developed in the 1930s and has received great experimental support since. One of the strangest phenomena is known as ‘quantum entanglement’, which appears to allow quantum particles to communicate with each other at more than 10,000 times the speed of light. Entanglement occurs when two particles are linked to each other in such a way that they behave as one and the same entity. Entangled particles can be created quite easily in the laboratory with the right equipment. Particles have a property called ‘spin’, and a particle’s spin can be either up or down. Quantum mechanics tells us that two particles that are entangled don’t have a definite spin until their spin is actually measured. This means that the act of measuring the particle actually changes the state it is in. This is bizarre enough, but here is the crux: for entangled particles, the act of measuring one particle doesn’t just change the state that particle is in, but also changes the state of the other particle. If the first particle’s spin was measured, and found to be up, the second particle’s spin would then change from being indefinite to being down. What is especially striking is that, according to quantum mechanics, the particles have this influence on each other however far apart they are, even if they are on opposite sides of the universe. Since the 1980s, experiments have been performed demonstrating this phenomenon, with more recent experiments showing that the influence is taking place at least at 10,000 times the speed of light. If quantum entanglement could be exploited to send messages, it would mean big things for superluminal communication. Unfortunately, however, it has been proved that quantum entanglement cannot be used to send messages superluminally, and that nor can it be used to send any kinds of messages at all. This law is known as the ‘no signaling theorem’. Its proof essentially shows that, despite the link between two entangled particles, there is nothing that one person can do to one entangled particle that would be detectable by another person with the other entangled particle. Warped Spacetime and Wormholes Our next stop takes us to the theory of general relativity, into the very fabric of the universe. Because of the three spatial dimensions and one temporal dimension that makes up our universe, we call this fabric ‘spacetime’. 
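The no-signaling theorem mentioned above can be illustrated with a few lines of linear algebra: whatever Alice does to her half of an entangled pair, the statistics Bob sees on his half are unchanged. The sketch below is my own illustration, not from the article; it compares Bob's outcome probabilities for a shared Bell pair when Alice does nothing, measures along Z, or measures along X — they come out identical, which is why entanglement alone cannot carry a message.

```python
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())          # density matrix; qubit order: Alice, Bob

def bob_probabilities(rho):
    """Probability that Bob measures 0 or 1, whatever happened to Alice's qubit."""
    rho = rho.reshape(2, 2, 2, 2)          # indices: (Alice, Bob, Alice', Bob')
    bob = np.trace(rho, axis1=0, axis2=2)  # partial trace over Alice's qubit
    return np.real(np.diag(bob))

def alice_measures(rho, basis):
    """State after Alice measures her qubit in the given orthonormal basis
    (outcome not communicated): a mixture of the projected states."""
    out = np.zeros_like(rho)
    for v in basis:
        P = np.kron(np.outer(v, v.conj()), np.eye(2))
        out += P @ rho @ P
    return out

Z_basis = [np.array([1, 0]), np.array([0, 1])]
X_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]

print(bob_probabilities(rho))                           # [0.5 0.5]
print(bob_probabilities(alice_measures(rho, Z_basis)))  # [0.5 0.5]
print(bob_probabilities(alice_measures(rho, X_basis)))  # [0.5 0.5]
```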
The idea of a wormhole, first introduced in the 1920s, is based on the thought that spacetime can be warped, providing a shortcut between two distant points in the universe. To conceptualise how this might work, imagine that two distant points in the universe are represented by two ends of a long thin rubber tube. The rubber tube itself represents the distance between these two points. However, if you curl (or 'warp') the tube so that the two ends meet, you have created a shortcut for getting from one point to the other. If such wormholes do exist, it might be possible to use them to send messages from one point in spacetime to another. Though the message would not actually be travelling superluminally, it would certainly appear to be. However, though the theory of general relativity, which is currently our best theory of how spacetime works, does not deem them impossible, no evidence of wormholes has yet been found. Moreover, even if they did exist, it would be a serious challenge to use them to send messages: they would be extremely unstable, and sending a signal through one might cause it to collapse.

Moving faster than light

What if we could communicate superluminally simply by speeding up the signals that carry our messages so they go faster than the speed of light? Unfortunately, as the theory of special relativity tells us, this is not possible. The speed of light in any given medium is always constant. This means that we cannot speed up the light or other electromagnetic waves that carry our messages. Nor can we get a different particle, one with mass, to speed up so much that it crosses the barrier of the speed of light, and moves superluminally. Special relativity shows that the more and more energy you give a particle with mass, the heavier and heavier it gets; and subsequently, the harder it is to speed it up. In fact, it would take an infinite amount of energy to make a particle with mass travel at the speed of light. As we could never harness an infinite amount of energy, getting a particle to cross the speed of light is definitely a no-go.

But what about a particle that always moves superluminally? Though special relativity excludes the possibility of a particle crossing the speed of light, it has no qualms about a particle that permanently moves at more than the speed of light. Such particles were first hypothesized in the 1960s, and are called 'tachyons'. Though tachyons have never been detected, their existence has not been ruled out either. If tachyons do exist, they have many strange properties: to start with, their mass, derived from taking the square root of a negative number, is mathematically 'imaginary'. Furthermore, they have negative energy: in fact, the less energy a tachyon has, the faster it moves, and a tachyon with no energy moves infinitely fast. To top it all off, tachyons can actually move backwards in time. Incorporating such characteristics into a coherent theory is something of a challenge (how can something have an imaginary mass?). But if tachyons could be used to send messages, perhaps the biggest challenge of all would be dealing with the paradoxes that arise. Consider this: suppose that Alice sends Bob a message at midday using tachyons, which, since tachyons can move backwards in time, Bob receives at 11am. The message to Bob reads: "Send Alice a message telling her to not contact you anymore". So, at 11am, Bob sends Alice a message using tachyons, telling her to not contact him anymore, which Alice receives at 10am.
Then from 10am, Alice stops all contact with Bob. So, because of the message she sends at midday, she will no longer send that message at midday. Thus arises the paradox. It seems unlikely that we will be able to have superluminal communication in the future, because the potential avenues that may lead to it are riddled with theoretical impossibilities. However, these avenues take us through some of the most interesting areas of physics that explore the fundamental nature of our universe, where there is still so much more to learn. So, never say never. Kruti Shrotri is studying for an MSc in Science Communication
<urn:uuid:1df1d66e-de7c-4c17-9336-45b9bfd9c43e>
CC-MAIN-2020-24
http://isciencemag.co.uk/features/superluminal-communication-were-talking-faster-than-light/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348519531.94/warc/CC-MAIN-20200606190934-20200606220934-00365.warc.gz
en
0.957427
1,624
3.96875
4
Capacitance measurement of bilayer graphene at a high magnetic field. The vertical dark blue to orange lines are signatures of fractional quantum Hall states that are shared between the two layers of the bilayer graphene sheet. The vertical line going through the center is believed to host an intriguing type of particles: non-Abelian anyons. Credit: University of California - Santa Barbara

What kinds of 'particles' are allowed by nature? The answer lies in the theory of quantum mechanics, which describes the microscopic world. In a bid to stretch the boundaries of our understanding of the quantum world, UC Santa Barbara researchers have developed a device that could prove the existence of non-Abelian anyons, a type of quantum particle that has been mathematically predicted to exist in two-dimensional space, but so far not conclusively shown. The existence of these particles would pave the way toward major advances in topological quantum computing.

In a study that appears in the journal Nature, physicist Andrea Young, his graduate student Sasha Zibrov and their colleagues have taken a leap toward finding conclusive evidence for non-Abelian anyons. Using graphene, an atomically thin material derived from graphite (a form of carbon), they developed an extremely low-defect, highly tunable device in which non-Abelian anyons should be much more accessible.

First, a little background: In our three-dimensional universe, elementary particles can be either fermions or bosons: think electrons (fermions) or the Higgs (a boson). "The difference between these two types of 'quantum statistics' is fundamental to how matter behaves," Young said. For example, fermions cannot occupy the same quantum state, allowing us to push electrons around in semiconductors and preventing neutron stars from collapsing. Bosons can occupy the same state, leading to spectacular phenomena such as Bose-Einstein condensation and superconductivity, he explained. Combine a few fermions, such as the protons, neutrons, and electrons that make up atoms, and you can get either type, but never evade the dichotomy.

In a two-dimensional universe, however, the laws of physics allow for a third possibility. Known as "anyons," this type of quantum particle is neither a boson nor a fermion, but rather something completely different—and some kinds of anyons, known as non-Abelian anyons, retain a memory of their past states, encoding quantum information across long distances and forming the theoretical building blocks for topological quantum computers. Although we don't live in a two-dimensional universe, when confined to a very thin sheet or slab of material, electrons do. In this case, anyons can emerge as "quasiparticles" from correlated states of many electrons. Perturbing such a system, say with an electrical potential, leads to the entire system rearranging just as if an anyon had moved.

The hunt for non-Abelian anyons begins by identifying the collective states that host them. "In fractional quantum Hall states—a type of collective electron state observed only in two-dimensional samples at very high magnetic fields—the quasiparticles are known to have precisely a rational fraction of the electron charge, implying that they are anyons," Young said. "Mathematically, sure, non-Abelian statistics are allowed and even predicted for some fractional quantum Hall states," he continued. However, scientists in this field have been limited by the fragility of the host states in the semiconductor material where they are typically studied.
In these structures, the collective states themselves appear only at exceptionally low temperatures, rendering it doubly difficult to explore the unique quantum properties of individual anyons. Graphene proves to be an ideal material to build devices to search for the elusive anyons. But, while scientists had been building graphene-based devices, other materials surrounding the graphene sheet—such as glass substrates and metallic gates—introduced enough disorder to destroy any signatures of non-Abelian states, Zibrov explained. The graphene is fine; it's the environment that is the problem, he said. The solution? More atomically thin material.

"We've finally reached a point where everything in the device is made out of two-dimensional single crystals," said Young. "So not only the graphene itself, but the dielectrics are single crystals of hexagonal boron nitride that are flat and perfect and the gates are single crystals of graphite which are flat and perfect." By aligning and stacking these flat and perfect crystals of material on top of each other, the team achieved not only a very low-disorder system, but one that is also extremely tunable. "Besides realizing these states, we can tune microscopic parameters in a very well controlled way and understand what makes these states stable and what destabilizes them," Young said. The fine degree of experimental control—and elimination of many unknowns—allowed the team to theoretically model the system with high accuracy, building confidence in their conclusions.

The materials advance gives these fragile excitations a certain amount of robustness, with the required temperatures nearly ten times higher than needed in other material systems. Bringing non-Abelian statistics into a more convenient temperature range provides an opportunity not only for investigations of fundamental physics, but also reignites hope for developing a topological quantum bit, which could form the basis for a new kind of quantum computer. Non-Abelian anyons are special in that they are thought to be able to process and store quantum information independent of many environmental effects, a major challenge in realizing quantum computers with traditional means. But, say the physicists, first things first. Directly measuring the quantum properties of the emergent quasiparticles is very challenging, Zibrov explained. While some properties—such as fractional charge—have been definitively demonstrated, definitive proof of non-Abelian statistics—much less harnessing non-Abelian anyons for quantum computation—has remained far out of the reach of experiments. "We don't really know yet experimentally if non-Abelian anyons exist," Zibrov said. "Our experiments so far are consistent with theory, which tells us that some of the states we observed should be non-Abelian, but we still don't have an experimental smoking gun." "We'd like an experiment that actually demonstrates a phenomenon unique to non-Abelian statistics," said Young, who has won numerous awards for his work, including the National Science Foundation's CAREER Award. "Now that we have a material that we understand really well, there are many ways to do this—we'll see if nature cooperates!"
<urn:uuid:6bfef811-d2b5-4c2c-8274-82c9dabf1f98>
CC-MAIN-2020-24
http://www.singlecrystal.net/2018/01/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348493151.92/warc/CC-MAIN-20200605045722-20200605075722-00566.warc.gz
en
0.947315
1,380
3.53125
4
In the first article published earlier, we focused on qubits as "bits" of information for quantum systems and some elements of quantum mechanics. But how are qubits physically realized? How can electronics manage these elements that belong to a quantum ecosystem? In this article, we will start a path to explain in detail all you need to know about digital quantum electronics.

Classical computer bits can be 0 or 1, and two bits form four possible states: 00, 01, 10, 11. In general, with n bits you can build 2^n distinct states. How many states can you get with n qubits? The space of states generated by a system of n qubits has dimension 2^n: each normalized vector in this space represents a possible computational state, which we will call a quantum register of n qubits. This exponential growth of the state space with the number of qubits suggests the potential ability of a quantum computer to process information at a speed that is exponentially higher than that of a classical computer. Note that for n = 300 you get a number larger than the estimated number of atoms in the observable universe (roughly 10^80).

Quantum Computer Design: An introduction

Formally, a quantum register of n qubits is an element of the 2^n-dimensional Hilbert space C^(2^n), with a computational basis formed by the 2^n basis states of n qubits. Let's consider the case of two qubits. In analogy with the single qubit, we can construct the computational basis of the state space from the vectors |00>, |01>, |10>, |11>. A quantum register with two qubits is a superposition of the form |ψ> = α00|00> + α01|01> + α10|10> + α11|11>, with the normalization condition on the amplitudes |α00|² + |α01|² + |α10|² + |α11|² = 1.

Like classical computers, a quantum computer is made up of quantum circuits consisting of elementary quantum logic gates. In the classical case, there is only one (non-trivial) one-bit logic gate, the NOT gate, which implements logical negation, defined through a truth table in which 1 → 0 and 0 → 1. To define a similar operation on a qubit, we cannot limit ourselves to establishing its action on the basis states |0> and |1>; we must also specify how a qubit that is in a superposition of the states |0> and |1> is transformed. Intuitively, the NOT should exchange the roles of the two basis states and transform α|0> + β|1> into β|0> + α|1>. Clearly |0> would turn into |1> and |1> into |0>. The operation that implements this transformation is linear; linearity is a general property of quantum mechanics that is experimentally justified. The matrix corresponding to the quantum NOT is called X for historical reasons and is defined by X = [[0, 1], [1, 0]]; the normalization condition |α|² + |β|² = 1 holds for any quantum state α|0> + β|1>.

Besides NOT, two important operations are represented by the Z matrix, Z = [[1, 0], [0, -1]], which acts only on the component |1>, flipping its sign, and the Hadamard gate, H = (1/√2)[[1, 1], [1, -1]]. This last operation is very often used in the definition of quantum circuits. Its effect is to transform a basis state into a superposition that, after a measurement in the computational basis, yields 0 or 1 with equal probability. The effect of H can be thought of as a NOT executed "halfway", so that the resulting state is neither 0 nor 1, but a coherent superposition of the two basis states. The most important logic gates that implement operations on two classical bits are the AND, OR, XOR, NAND, and NOR gates.
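Because the X, Z, and Hadamard operations above are just 2×2 matrices, their action on a qubit can be checked with a few lines of linear algebra. The following sketch is an illustration added here rather than part of the original article; the sample amplitudes are arbitrary, and NumPy is used purely for convenience.

```python
# Minimal sketch: one-qubit gates X, Z, H acting on a state vector (alpha, beta).
# The sample amplitudes below are arbitrary but normalized.
import numpy as np

X = np.array([[0, 1],
              [1, 0]], dtype=complex)                # quantum NOT: swaps |0> and |1>
Z = np.array([[1, 0],
              [0, -1]], dtype=complex)               # flips the sign of the |1> component
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = np.array([np.sqrt(0.25), np.sqrt(0.75)], dtype=complex)  # alpha|0> + beta|1>

for name, gate in (("X", X), ("Z", Z), ("H", H)):
    out = gate @ state
    probs = np.abs(out) ** 2
    print(f"{name}|psi> = {out},  P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")

# Applying H to |0> alone gives amplitudes (1/sqrt(2), 1/sqrt(2)),
# i.e. the equal-probability superposition described in the text.
print("H|0> =", H @ np.array([1, 0], dtype=complex))
```

Running it shows, for example, that H turns |0> into an equal-probability superposition, which is the "halfway NOT" behaviour described above.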
The NOT and AND gates form a universal set, i.e., any Boolean function can be achieved with a combination of these two operations. Likewise, NAND alone forms a universal set. The quantum equivalent of XOR is the CNOT (controlled-NOT) gate, which operates on two qubits: the first is called the control qubit, and the second is the target qubit. If the control is zero, the target is left unchanged; if the control is one, the target is negated. That is, |A, B> → |A, A ⊕ B>, where A is the control qubit, B is the target, and ⊕ is the classical XOR operation (Figure 1). Another important operation is represented by the symbol in Figure 2 and consists of measuring a qubit |ψ> = α|0> + β|1>. The result is a classical bit M (indicated with a double line), which will be 0 or 1.

The CNOT gate can be used to create entangled states. The circuit in Figure 3 generates, for each state of the computational basis |00>, |01>, |10>, |11>, a particular entangled state (a short code sketch of this circuit appears at the end of this article). These states, which we indicate with β00, β10, β01, β11, are called Bell or EPR states after Bell, Einstein, Podolsky, and Rosen, who first discovered their extraordinary properties.

Information in modern digital computers is encoded through voltages or currents on tiny transistors within integrated circuits that act as digital or analog elements. Each transistor is addressed by a bus that can define a state of 0 (low voltage) or 1 (high voltage). Quantum computers have some similarities, and the basic idea is illustrated in Figure 4. In this figure, we observe a superconducting qubit (also called a SQUID, Superconducting QUantum Interference Device), which is the basic element of a quantum computer (a quantum 'transistor'). The term 'interference' refers to the electrons, which behave like waves and produce interference patterns that give rise to quantum effects. In this case, the basic element is niobium rather than the silicon used in a classical transistor. The properties of the material allow the electrons to behave like qubits: when the metal is cooled, it becomes a superconductor and begins to show quantum mechanical effects.

The superconducting qubit structure encodes two states as tiny magnetic fields pointing in opposite directions. By means of quantum mechanics, we can control these states, defined as +1 and -1, or |ψ> = α|0> + β|1>. By means of elements known as superconducting loop couplers, a multi-qubit processor is created. A programmable quantum device can be designed by putting together many of these elements, such as qubits and couplers (Figure 5). To control the operation of the qubits, it is important to have a switching structure consisting of Josephson junctions that addresses each qubit (routing pulses of magnetic information to the correct points on the chip) and stores the information in a magnetic memory element local to each device. The Josephson effect is the flow of current between two superconductors separated by a thin insulating barrier, called a Josephson junction. The effect is due to quantum tunnelling of electron pairs between the superconductors. If the insulator is too thick, the tunnelling probability is low and the effect does not occur. A collection of such Josephson junctions makes up a quantum processing unit (QPU). The QPU has no large areas of memory (cache), as it is designed more like a biological brain than like the common 'von Neumann' architecture of a conventional silicon processor.
One can think of the qubits as neurons and the couplers as synapses that control the flow of information between those neurons. The requirements for a successful quantum implementation start with a number of quantum bits large enough for the task at hand, which also implies being able to perform many quantum-bit operations in a short time. The algorithms require the application of many logic gates to many quantum bits, and to keep the probability of error low enough, the various gates must be very precise.

The quantum structure of the computer needs very cold temperatures to work properly; in particular, a temperature below approximately 80 mK is required. The performance of a quantum processor increases as the temperature drops – the lower the temperature, the better. The latest-generation D-Wave 2000Q system has an operating temperature of about 15 millikelvin. The QPU and parts of the input/output (I/O) system, which together include about 10 kg of material, are cooled to this temperature. To reach temperatures close to absolute zero, the systems use liquid helium as a coolant. The liquid helium resides within a closed-loop system, where it is recycled and recondensed using pulse-tube technology. This makes such systems suitable for remote use, as there is no need to replenish liquid helium on site.
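As a complement to the CNOT and Bell-state circuit described earlier in this article (a Hadamard on the control qubit followed by a CNOT), the sketch below is an added illustration, not part of the original text: it builds the state β00 explicitly and samples simulated measurements. The basis ordering, the 1,000-shot count, and the fixed random seed are arbitrary choices.

```python
# Sketch of the Bell-state circuit: H on qubit A, then CNOT with A as control.
# Basis ordering for the two-qubit vector is |00>, |01>, |10>, |11>.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi = np.array([1, 0, 0, 0], dtype=complex)      # start in |00>
psi = CNOT @ np.kron(H, I) @ psi                 # apply H to qubit A, then CNOT
print("amplitudes:", np.round(psi, 3))           # ~0.707 on |00> and |11> only

# Simulated measurements in the computational basis: only 00 and 11 appear,
# each with probability ~1/2, the hallmark of the beta_00 Bell state.
rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=np.abs(psi) ** 2)
print({k: int((outcomes == k).sum()) for k in ("00", "01", "10", "11")})
```

Only the outcomes 00 and 11 appear, each about half the time, which is exactly the correlation that makes the Bell states useful.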
<urn:uuid:d51e8510-0e5e-4474-a3f4-098d1ff22e65>
CC-MAIN-2020-24
https://www.ednasia.com/quantum-computer-design-electronics-circuits/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00366.warc.gz
en
0.93213
1,817
4.375
4
BIJECTIVE PHYSICS FOR THE RENAISSANCE OF PHYSICS

The main task of physics is to build exact models of physical reality. On the basis of the physics picture of reality, we build technology. Computer and mobile phone technology, for example, is based on the theory of electromagnetism, which is such a precise picture of electromagnetic phenomena that it makes the development of computers possible. Real is what works. In physics, we have two kinds of elements. Some are obtained by measurement, for example the gravitational constant G; others are obtained theoretically by calculation, for example the age of the universe according to the Big Bang model. Elements obtained by measurement are certainly more secure than elements obtained by calculation. In physics, experimental data are of fundamental importance. The best-known element of physics which is not based on observed data is space-time, where time is considered to be the fourth dimension of universal space. For more than 100 years we have believed that space-time is the fundamental arena of the universe, and we do not have the support of experimental data. The space-time model is pure speculation, described by the Minkowski manifold. Also, the idea that universal space-time is "empty" is pure theoretical speculation. We talk today about "quantum fluctuations in space"; we should rather talk about "quantum fluctuations of space". Universal space is made out of quantum fluctuations; it is not that quantum fluctuations exist in some "empty" universal space.

Bijective physics requires that every element we use in physics is based on experimental data. The theoretical frame of bijective physics is the bijective function of set theory. The universe is set X and the model of the universe is set Y. Every element in the set X is related to its corresponding element in the set Y with the bijective function f: X → Y (a toy illustration appears after the reference list below). In this way, we get a 100% exact picture of physical reality. The universe is set X, the model of the universe is set Y.

NASA measured back in 2014 that universal space has a Euclidean shape. They measured that the sum of the angles in a triangle composed of three stellar objects is always 180 degrees. This means universal space is infinite in its volume. The research of Barbour, Fiscaletti, and Sorli has proved that time has no physical existence. Time is the numerical sequential order of material changes running in universal space. Every experiment done in physics confirms that with clocks we measure the numerical sequential order of material changes, i.e. motion in space. Changes have no duration of their own. Duration enters existence when being measured by the observer. This means that material changes run only in universal space, which is time-invariant. Seeing the universe developing in some physical time is a stubbornly persistent illusion. The universe develops only in space, which is the primordial non-created energy of the universe. We call it today "superfluid quantum vacuum" or also "physical vacuum". Vacuum energy is time-invariant, which is in accord with the first law of thermodynamics. The idea of some beginning of the universe is an extension of Biblical thought into physics. In the Bible, the creation took six days; in Big Bang cosmology, a fraction of a second. The elapsed time is the only difference between the Biblical model of the universe and the Big Bang model. We are in the 21st century, and to make physics progress we will have to disconnect from some theoretical assumptions of 20th-century physics.
This is the aim of bijective physics, in the name of the progress of physics. The Bijective Physics group has the following principal researchers: Paolo Di Sia and Amrit S. Šorli.

The main published articles on bijective physics are the following:

1. THERE IS NO PHYSICAL TIME IN THE UNIVERSE. TIME IS THE NUMERICAL SEQUENTIAL ORDER OF EVENTS IN SPACE. UNIVERSAL SPACE IS TIME-INVARIANT. Fiscaletti, D., Sorli, A. Perspectives of the Numerical Order of Material Changes in Timeless Approaches in Physics. Found Phys 45, 105–133 (2015). https://doi.org/10.1007/s10701-014-9840-y
2. UNIVERSAL SPACE IS THE MEDIUM OF QUANTUM ENTANGLEMENT. Fiscaletti, D., Sorli, A. Searching for an adequate relation between time and entanglement. Quantum Stud.: Math. Found. 4, 357–374 (2017). https://doi.org/10.1007/s40509-017-0110-5
3. MASS-ENERGY EQUIVALENCE EXTENSION ONTO UNIVERSAL SPACE IS THE BASIS OF PROGRESS IN PHYSICS AND COSMOLOGY. Šorli, A.S. Mass-Energy Equivalence Extension onto a Superfluid Quantum Vacuum. Sci Rep 9, 11737 (2019). https://doi.org/10.1038/s41598-019-48018-2
4. ADVANCES OF RELATIVITY. https://www.preprints.org/manuscript/201912.0326/v1
5. BLACK HOLES ARE REJUVENATING SYSTEMS OF THE UNIVERSE. Sorli, A. S. (2020). Black Holes are Rejuvenating Systems of the Universe. JOURNAL OF ADVANCES IN PHYSICS, 17, 23-31. https://doi.org/10.24297/jap.v17i.8620
6. EVOLUTION OF LIFE IS A CONSISTENT PART OF UNIVERSAL DYNAMICS. Sorli, A. S., & Čelan, Štefan. (2020). Integration of Life and Consciousness into Cosmology. JOURNAL OF ADVANCES IN PHYSICS, 17, 41-49. https://doi.org/10.24297/jap.v17i.8623
7. EINSTEIN'S VISION OF TIME IS THE FUNERAL OF BIG BANG COSMOLOGY. Sorli, A.S. (2020), Einstein's Vision of Time and Infinite Universe without Singularities – The End of Big Bang Cosmology, JOURNAL OF ADVANCES IN PHYSICS
8. ENTIRE CYCLOTRON PHYSICS IS FALSE. Sorli, A.S. (2020) System Theory, Proton Stability, Double-Slit Experiment, and Cyclotron Physics, JOURNAL OF ADVANCES IN PHYSICS (accepted for publication)
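For readers unfamiliar with the set-theoretic language used above, the bijective function f: X → Y simply pairs every element of X with exactly one element of Y, with nothing in Y left over or hit twice. The toy check below is an illustration added here; the sets, names, and mapping are invented solely to show the definition and carry no physical meaning.

```python
# Toy illustration of a bijective correspondence f: X -> Y.
# The sets and the mapping are invented purely to show the definition.
def is_bijection(f: dict, X: set, Y: set) -> bool:
    """True if f maps every element of X into Y, one-to-one and onto."""
    if set(f.keys()) != X:                      # total: every element of X has an image
        return False
    images = list(f.values())
    if any(y not in Y for y in images):         # images must lie in Y
        return False
    return len(set(images)) == len(images) and set(images) == Y  # injective and surjective

X = {"element_1", "element_2", "element_3"}     # toy stand-in for "the universe"
Y = {"model_1", "model_2", "model_3"}           # toy stand-in for "the model"
f = {"element_1": "model_1", "element_2": "model_2", "element_3": "model_3"}

print(is_bijection(f, X, Y))                    # True
print(is_bijection({"element_1": "model_1",     # False: not one-to-one, not onto
                    "element_2": "model_1",
                    "element_3": "model_2"}, X, Y))
```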
<urn:uuid:4db74c16-8278-46d9-9de8-e6858e4e58e3>
CC-MAIN-2020-24
http://bijectivephysics.com/universe-dynamic-equilibrium/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00368.warc.gz
en
0.86979
1,398
3.984375
4
When Albert Einstein first predicted that light travels the same speed everywhere in our universe, he essentially stamped a speed limit on it: 670,616,629 miles per hour — fast enough to circle the entire Earth about seven and a half times every second. But that's not the entire story. In fact, it's just the beginning.

Before Einstein, mass — the atoms that make up you, me, and everything we see — and energy were treated as separate entities. But in 1905, Einstein forever changed the way physicists view the universe. Einstein's Special Theory of Relativity permanently tied mass and energy together in the simple yet fundamental equation E = mc². This little equation predicts that nothing with mass can move as fast as light, or faster. The closest humankind has ever come to reaching the speed of light is inside of powerful particle accelerators like the Large Hadron Collider and the Tevatron. These colossal machines accelerate subatomic particles to more than 99.99% the speed of light, but as Physics Nobel laureate David Gross explains, these particles will never reach the cosmic speed limit. To do so would require an infinite amount of energy and, in the process, the object's mass would become infinite, which is impossible. (The reason particles of light, called photons, travel at light speed is that they have no mass.) Since Einstein, physicists have found that certain entities can reach superluminal (that means "faster-than-light") speeds and still follow the cosmic rules laid down by special relativity. While these do not disprove Einstein's theory, they give us insight into the peculiar behaviour of light and the quantum realm.

The light equivalent of a sonic boom

So, in theory, if something travels faster than light does in a given medium, it should produce something like a "luminal boom." In fact, this light boom happens on a daily basis in facilities around the world — you can see it with your own eyes. It's called Cherenkov radiation, and it shows up as a blue glow inside of nuclear reactors, like the Advanced Test Reactor at the Idaho National Laboratory. Cherenkov radiation is named after Soviet scientist Pavel Alekseyevich Cherenkov, who first measured it in 1934 and was awarded the Nobel Prize in Physics in 1958 for his discovery. Cherenkov radiation glows because the core of the Advanced Test Reactor is submerged in water to keep it cool. In water, light travels at 75% of the speed it would in the vacuum of outer space, but the electrons created by the reaction inside of the core travel through the water faster than the light does. Particles, like these electrons, that surpass the speed of light in water, or some other medium such as glass, create a shock wave similar to the shock wave from a sonic boom. When a rocket, for example, travels through air, it generates pressure waves in front that move away from it at the speed of sound, and the closer the rocket gets to the sound barrier, the less time the waves have to move out of the object's path. Once it reaches the speed of sound, the waves bunch up, creating a shock front that forms a loud sonic boom. Similarly, when electrons travel through water at speeds faster than light speed in water, they generate a shock wave of light that sometimes shines as blue light, but can also shine in ultraviolet. While these particles are travelling faster than light does in water, they're not actually breaking the cosmic speed limit of 670,616,629 miles per hour.
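The Cherenkov effect described above has a compact quantitative form: a charged particle radiates when its speed exceeds c/n, the speed of light in a medium of refractive index n, and the light comes off at an angle θ satisfying cos θ = 1/(nβ), where β is the particle's speed as a fraction of c. The sketch below is an illustration added to accompany the article; the refractive index of water (≈1.33) and the example particle speed are assumed values.

```python
# Sketch: Cherenkov threshold and emission angle in water (n ~ 1.33 is an assumed value).
import math

C_MPH = 670_616_629        # speed of light in vacuum, mph (the figure quoted in the article)
N_WATER = 1.33             # approximate refractive index of water

threshold = C_MPH / N_WATER
print(f"Light speed in water  : {threshold:,.0f} mph  (~{1/N_WATER:.0%} of c)")

# A particle moving at, say, 99% of c is above the water threshold and radiates.
beta = 0.99                                # particle speed as a fraction of c (arbitrary example)
cos_theta = 1.0 / (N_WATER * beta)         # Cherenkov relation: cos(theta) = 1 / (n * beta)
print(f"Emission angle at beta = 0.99: {math.degrees(math.acos(cos_theta)):.1f} degrees")
```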
When the rules don't apply

Keep in mind that Einstein's Special Theory of Relativity states that nothing with mass can go faster than the speed of light, and as far as physicists can tell, the universe abides by that rule. But what about something without mass? Photons, by their very nature, cannot exceed the speed of light, but particles of light are not the only massless entity in the universe. Empty space contains no material substance and therefore, by definition, has no mass. "Since nothing is just empty space or vacuum, it can expand faster than light speed since no material object is breaking the light barrier," said theoretical astrophysicist Michio Kaku on Big Think. "Therefore, empty space can certainly expand faster than light." This is exactly what physicists think happened immediately after the Big Bang during the epoch called inflation, which was first hypothesized by physicists Alan Guth and Andrei Linde in the 1980s. Within a trillionth of a trillionth of a second, the universe repeatedly doubled in size and, as a result, the outer edge of the universe expanded very quickly, much faster than the speed of light.

Quantum entanglement makes the cut

Quantum entanglement sounds complex and intimidating, but at a rudimentary level entanglement is just the way subatomic particles communicate with each other. "If I have two electrons close together, they can vibrate in unison, according to the quantum theory," Kaku explains on Big Think. Now, separate those two electrons so that they're hundreds or even thousands of light years apart, and they will keep this instant communication bridge open. "If I jiggle one electron, the other electron 'senses' this vibration instantly, faster than the speed of light. Einstein thought that this therefore disproved the quantum theory, since nothing can go faster than light," Kaku wrote. In fact, in 1935, Einstein, Boris Podolsky and Nathan Rosen attempted to disprove quantum theory with a thought experiment on what Einstein referred to as "spooky action at a distance." Ironically, their paper laid the foundation for what today is called the EPR (Einstein-Podolsky-Rosen) paradox, a paradox that describes this instantaneous communication of quantum entanglement — an integral part of some of the world's most cutting-edge technologies, like quantum cryptography.

Dreaming of wormholes

Since nothing with mass can travel faster than light, you can kiss interstellar travel goodbye — at least, in the classical sense of rocketships and flying. Although Einstein trampled over our aspirations of deep-space road trips with his Theory of Special Relativity, he gave us a new hope for interstellar travel with his General Theory of Relativity in 1915. While Special Relativity wed mass and energy, General Relativity wove space and time together. "The only viable way of breaking the light barrier may be through General Relativity and the warping of space time," Kaku writes. This warping is what we colloquially call a "wormhole," which theoretically would let something travel vast distances instantaneously, essentially enabling us to break the cosmic speed limit by traveling great distances in a very short amount of time. In 1988, theoretical physicist Kip Thorne — the science consultant and executive producer for the recent film "Interstellar" — used Einstein's equations of General Relativity to predict the possibility of wormholes that would forever be open for space travel.
But in order to be traversable, these wormholes need some strange, exotic matter holding them open. "Now it is an amazing fact that exotic matter can exist, thanks to weirdnesses in the laws of quantum physics," Thorne writes in his book "The Science of Interstellar." And this exotic matter has even been made in laboratories here on Earth, but in very tiny amounts. When Thorne proposed his theory of stable wormholes in 1988, he called upon the physics community to help him determine if enough exotic matter could exist in the universe to support the possibility of a wormhole. "This triggered a lot of research by a lot of physicists; but today, nearly thirty years later, the answer is still unknown," Thorne writes. At the moment, it's not looking good. "But we are still far from a final answer," he concludes.
<urn:uuid:acd78a9b-fe60-4a1e-a1c2-4cb9a160ae54>
CC-MAIN-2020-24
https://graptechpedia.com/1991/science/faster-than-the-speed-of-light/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410352.47/warc/CC-MAIN-20200530200643-20200530230643-00568.warc.gz
en
0.932966
1,686
3.78125
4
How will quantum computing change the future of security? What does a quantum computer look like? Mike and Daniel sit down with Lee Barford to get some answers. Last time we looked at "what is quantum computing" and talked about quantum bits and storing data in superstates.

00:40 Lee talks about how to crack RSA and Shor's algorithm (wikipedia)

00:50 The history of quantum computing (wiki). The first person to propose it was Richard Feynman in the early 1980s. There was some interest, but it died out. In the 1990s, Peter Shor published a paper pointing out that if you could build a quantum computer with certain operational properties (machine code instructions), then you could find one factor of a number no matter how long it is. Then, he outlined a number of other things he would need, like a quantum Fast Fourier Transform (FFT).

Much of the security we use every day relies on both the RSA public key system and the Diffie-Hellman key exchange algorithm. HTTPS connections use the Diffie-Hellman key exchange algorithm. RSA stands for "Rivest, Shamir, and Adleman" (not "really secure algorithm"). RSA only works if the recipients know each other, but Diffie-Hellman works for people who don't know each other but still want to communicate securely. This is useful because it's not practical for everyone to have their own RSA keys. Factoring numbers that are made up of large prime numbers is the basis for RSA. The processing power required for factoring is too large to be practical. People have been working on this for 2500 years. Shor's algorithm is theoretically fast enough to break RSA. If you could build a quantum computer with enough quantum bits and operate with a machine language cycle time that is reasonable (µs or ms), then it would be possible to factor thousand-bit numbers. Famous professors and famous universities have a huge disparity of opinion as to when a quantum computer of that size could be built. Some say 5-10 years, others say up to 50.

What does a quantum computer look like? It's easier to describe architecturally than physically. A quantum computer isn't that much different from a classical computer; it's simply a co-processor that has to co-exist with current forms of digital electronics. If you look at Shor's algorithm, there are a lot of familiar commands, like "if statements" and "for loops." But quantum gates, or quantum assembly language operations, are used in the quantum processor. Lee thinks that because a quantum gate operates in time instead of space, the term "gate" isn't a great name.

What quantum computers exist today? Some have been built, but with only a few quantum bits. The current claim is that people have created quantum computers with up to 21 quantum bits. But there are potentially a lot of errors and noise. For example, can they actually maintain a proper setup and hold time?

Continuing the Schrodinger's Cat analogy… In reality, if you have a piece of physics that you've managed to put into a superimposed quantum state, any disturbance of it (photon impact, etc.) will cause it to collapse into an unwanted state or to collapse too early. So, quantum bits have to be highly isolated from their environments: kept in vacuums or at extremely cold temperatures (well below 1 kelvin!). The research companies making big claims about the quantity of bits are not using solid state quantum computers.
The isolation of a quantum computer can't be perfect, so there's a limited lifetime for the computation before the probability of getting an error gets too high.

Why do we need a superposition of states? Why does it matter when the superimposed states collapse to one state? If it collapses at the wrong time you'll get a wrong answer. With Shor's algorithm it's easy to check for the right answer: you get either a remainder of 0 or you don't. If you get 0, the answer is correct (a toy example appears at the end of these notes). The computation only has to be reliable enough for you to check the answer. If the probability of getting the right answer is high enough, you can afford to get the wrong answer on occasion.

The probability of the state of a quantum bit isn't just 50%, so how do you set the probability of the state? It depends on the physical system. You can write to a quantum bit by injecting energy into the system, for example using a very small number of photons as a pulse with carefully controlled timing and phase. Keysight helps quantum computer researchers generate and measure pulses with metrological levels of precision. The pulses have to be very carefully timed and correlated with sub-nanosecond accuracy. You need time synchronization between all the bits at once for it to be useful.

What is a quantum bit? Two common kinds of quantum bits are:
1. Ions trapped in a vacuum with laser trapping. The ions can't move because they are held in place by standing waves of laser beams. The vacuum can be at room temperature, but the ions are at low temperature because they can't move.
2. Josephson junctions in tank circuits (a coil and a capacitor) that produce oscillations at microwave frequencies. Under the right physical conditions, those can be designed to behave like an abstract two-state quantum system. You just assign zero and one to different states of the system.

Probabilities are actually the wrong description; it should be complex quantum amplitudes. After working with quantum computing, it's common to walk away feeling a lot less knowledgeable.

Stupid question section: "If you had Schrodinger's cat in a box, would you look or not?" Lee says the cat's wave function really collapsed as it started to warm up, so the state has already been determined.
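The point above, that a factoring result is cheap to verify even when it is expensive to find, can be shown with a toy example. The sketch below is illustrative only: the modulus is tiny compared with the thousand-bit numbers mentioned in the episode, and the brute-force search stands in for the hard part that Shor's algorithm would accelerate; the final remainder check is the easy verification step.

```python
# Toy illustration: verifying a factor of an RSA-style modulus is just a remainder check.
# The small modulus below is for demonstration; real RSA moduli are ~2048 bits.
N = 62_615_533             # = 7907 * 7919, a product of two small primes (toy example)

def find_factor(n: int) -> int:
    """Brute-force search for a nontrivial factor (hopeless at real key sizes)."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    raise ValueError(f"{n} appears to be prime")

p = find_factor(N)
q = N // p
assert N % p == 0          # the cheap check: remainder 0 means the answer is right
print(f"{N} = {p} * {q}")  # -> 62615533 = 7907 * 7919
```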
<urn:uuid:a6065643-eaf0-4e77-bef7-780d0b43764a>
CC-MAIN-2020-24
https://eestalktech.com/quantum-bits/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00368.warc.gz
en
0.928869
1,270
3.5625
4
The Einstein-Podolsky-Rosen paradox is that measurement in quantum mechanics seems to require faster-than-light communication under certain circumstances. This, Einstein, Podolsky, and Rosen claimed, is absurd, and proves quantum mechanics to be incomplete -- certain things must be decided in advance. This argued towards a local hidden variables theory: a theory which maintains locality better than quantum mechanics and - more pertinently to the three theorists - lacks quantum weirdness. How was it less weird? The distributions of the properties of particles were all now statistical distributions over unknown quantities, and not intrinsic distributions over quantum operator eigenvalues.

At first, there was no known way to actually test whether the seemingly absurd result of faster-than-light communication was true or not: no one could think of a prediction quantum mechanics made that no local hidden variables theory could make. In 1951, some progress was made as David Bohm created a more tractable variant of the paradox involving spin, but still no specific different predictions could be found. In 1964, this changed: John Bell used Bohm's special case to devise his famous inequality, which pointed out a difference between the two schemes. That is, though there were a variety of ways a local hidden variables theory could make things work, there was a limit to how coordinated they could make things. Quantum mechanics could cross this limit.

To explain this notion - suppose you can make one of 3 binary measurements - A, B, or C. Measurements A and B are closely related and read, say, 75% the same, with only 25% reading the opposite way. B and C are closely related and also 75% the same, with 25% flipped. So, if you compare the results of measurements A and C, you should classically expect not more than 50% of your results to flip. This is what quantum mechanics disagrees on. You can set up situations in which you expect 75% of the results to flip when you compare A with C. And as it turns out, you can pull this off with separated particles like EPR were talking about. It seems in this case like information must be transmitted to help A and C be more opposite than random. (A short numerical sketch of these numbers follows the references below.)

Alain Aspect used a special case of the inequality to form the basis of his famous experiment, which was finished in 1982. The final form of the Aspect experiment went so:
- Set up a device which creates Einstein-Podolsky-Rosen pairs (EPR pairs) of photons proceeding in opposite directions. What makes each pair an EPR pair is that the two photons' polarizations are oriented the opposite way*, not by picking them to be some specific opposite pair of values, but by assigning that constraint without constraining their individual polarizations. This quantum dependence is known as entanglement. To get technical, the spinor ket of the photon pair is X | + - > + X' | - + > for some X and X', in a linear polarization basis. (There's more that could be said, but the algebra would suddenly get very intense and it wouldn't materially help.)
- Place three detectors to detect each photon, each detecting the polarization along an axis at 60° from the other two. Use only one detector at a time at each end. Let's call these A, B, and C at one end, and A', B', and C' at the other. This setup gives the sameness ratio predictions used above.
- Rapidly randomly determine which axis is used on each detector, and reset the choice after each photon is detected. Make sure the switching is good enough to keep the switching events spacelike separated.
There are two cases here: the photons' polarization is detected along the same axis, or the photons' polarization is detected along different axes. In the event that the photons were detected along the same axis (A & A', B & B', or C & C'), things are simple -- they will be read oppositely. This serves as a check on the efficiency of the setup. In the other case, in which the photons are measured along different directions (A & B', say - 6 combinations), the Bell Inequality comes into play: quantum mechanics and local-hidden-variables theories make different predictions on how often the two spins will look 'more opposite' than 'more aligned'. As it turned out, quantum mechanics' prediction was strongly validated.

Even after this experiment, there were a few loopholes through which it was conceivable that one could fit a local hidden variables theory: the theory could involve 'looking ahead' at the detectors and finding what orientations they would be in, going so far as to examine the state of the randomization mechanism. With improving randomization, this became an increasingly wild supposition. As time progressed, the various loopholes were closed tighter and tighter: supposing local hidden variables now requires incredibly complicated and un-physics-like 'conspiracy'-style theories.

Taking this to mean that local hidden variables theories are false, what does that leave?
- Quantum mechanics. Up side: we already know what it is, and it has passed every test. Down side: under the Copenhagen interpretation, locality is violated.
- A global hidden variables theory. Up side: at least quantum mechanics and all of its weirdness isn't true. Down side: we don't know what such a theory would be (though one has been devised, by David Bohm), and since there isn't a single difference in testable predictions between any of them and quantum mechanics, pursuing it has entered the realm of metaphysics. To snag a quote from a Nobel winner**: "If it makes different predictions from Quantum Mechanics, I'm not interested. If it makes the same predictions as Quantum Mechanics, I'm not interested."

At first, it seems like we're stuck with nonlocality in our physical theory, whether by global variables or by a global wavefunction which collapses superluminally. This would be downright ugly. But the locality problem is not with Quantum Mechanics itself, but with the Copenhagen Interpretation: it is the collapse of the wavefunction which is a problem. If we consider the measurement process to be another case of entanglement, then the consistency of the results follows straightforwardly and involves only local information exchange -- the exchange occurring when you bring together the various results (note that this one supposition is the entire basis of the Many-Worlds Interpretation). Entirely separate from the philosophical implications, this paradox yielded a tool of practical utility: EPR pairs form the basis of Quantum Teleportation, and play a major role in Quantum Computing.

* An entangled system can have any sort of relation, not only opposite spin. The relationship does not even have to deal with spin. There can be more than two particles, and they do not even need to be the same type. The case used in the experiment kept things as simple as possible.
** I believe it was John Schrieffer, but I could be wrong; it's hard to track these things.

Einstein, A.; Podolsky, B.; and Rosen, N. "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Phys. Rev. 47, 777-780, 1935.
Bohm, D.
"The Paradox of Einstein, Rosen, and Podolsky." Quantum Th., 611-623, 1951 Bell, J. S. "On the Einstein-Podolsky-Rosen Paradox." Physics 1, 195-200, 1964. Aspect, A.; Grangier, P.; and Roger, G. "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities." Phys. Rev. Let. 49, 91-94, 1982.
<urn:uuid:eef059ad-3ed9-47ff-a560-f853c5bd8c3f>
CC-MAIN-2020-24
https://www.everything2.org/title/Einstein-Podolsky-Rosen+paradox
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347385193.5/warc/CC-MAIN-20200524210325-20200525000325-00369.warc.gz
en
0.945774
1,607
3.53125
4
On Monday, a group of researchers from MIT published the results from recent experiments that used light emitted 7.8 billion and 12.2 billion years ago to help confirm the reality of quantum entanglement. These results help settle a long-standing debate in physics about whether entanglement is just an illusion that can actually be explained using principles of classical physics. These new results suggest that entanglement actually occurs, because if it didn't exist, the universe would somehow have to have "known" 7.8 billion years ago that these MIT scientists would perform these experiments in 2018.

Quantum entanglement is the theory that particles can be connected in such a way that measuring one particle can instantaneously convey information about that measurement to the other particle, regardless of the distance between them. It almost sounds like magic, which is probably why it received a healthy dose of criticism from the physics community when the theory was first proposed nearly 100 years ago. Albert Einstein was a particularly vocal critic of entanglement, which he famously described as "spooky action at a distance." Part of Einstein's beef with the quantum mechanics crowd was that he believed that particles have definite qualities that exist before they are measured and that two particles distant in space and time can't affect one another instantaneously since they are limited by the speed of light—a viewpoint known as local realism.

Under quantum mechanics, however, the properties of a particle don't exist independently of the measurement used to determine those properties. Moreover, when it comes to entangled particles, the measurement of one particle will instantaneously influence the properties of the other entangled particle. This means that the values of these properties will be highly correlated—so highly correlated, in fact, that the degree of coincidence in their values can't really be explained without recourse to quantum mechanics. Nevertheless, local realism has continued to haunt the development of quantum physics. In the 1960s, the physicist John Bell calculated the upper limit on the degree of correlation between two particles if their relationship was governed by local realism rather than quantum mechanics—a value known as Bell's inequality.

In the past half-century, however, numerous experiments have demonstrated values in excess of Bell's inequality, which created a serious theoretical dilemma. Either these experiments demonstrated the reality of quantum entanglement or there were some "loopholes" unintentionally introduced into the experiments that could explain the results through classical physics without invoking quantum mechanics. One of the most pernicious loopholes is known as the "freedom of choice loophole." This is the idea that the way a researcher sets up an experiment—from the choice of particles used to the way properties of these particles are measured—can influence the results of the measurement in unforeseen ways. In order to truly demonstrate quantum entanglement, critics argue, it is necessary to close this freedom of choice loophole in quantum experiments.

A COSMIC SOLUTION TO FREEDOM OF CHOICE

In May, a group of researchers led by physicists from the Institute for Photonic Sciences in Spain published the results from the largest experiment to tackle the freedom of choice loophole.
This experiment involved over 100,000 people from around the world playing a video game, and the results of their gameplay were used in the experiment. The idea was that because the actions of these 100,000 people could not be predicted in advance, this would effectively remove any bias introduced into the experimental setup by researchers and thus close the "freedom of choice" loophole in the experiment. Around the same time these researchers were collecting their data from participants, however, a group of physicists led by researchers from MIT were also exploring how to close the freedom of choice loophole in quantum mechanics. Yet rather than search for solutions on Earth, these physicists turned to the cosmos to eliminate human bias.

In the past, physicists have tried to close the freedom of choice loophole by generating entangled particles from a single source and then sending these entangled particles to detectors at two different locations. In the instant before the particle arrives, the detectors would use a random number generator to decide what property of the particle to measure (spin, polarity, etc.) in an effort to eliminate human bias. The problem, however, was that even this random number generator could technically be influenced by hidden, non-quantum variables that affect the measurement. To eliminate the influence of hidden variables, the researchers from MIT ditched the random number generators in favor of stars.

In their experiment, the MIT researchers trained telescopes at two detector sites on various stars at least 600 light years away and used the photons from these stars to determine which measurements would be conducted on entangled particles at the detectors. The theory was that using 600-year-old starlight would help close the freedom of choice loophole because any hidden variables in the experiment would have to have been set in motion before the photons left their host star over 600 years ago. "The real estate left over for the skeptics of quantum mechanics has shrunk considerably," MIT physicist David Kaiser said in a statement shortly after the results of the experiment were published last year. "We haven't gotten rid of [the freedom of choice loophole], but we've shrunk it down by 16 orders of magnitude."

In research published in Physical Review Letters on Monday, the same team of MIT physicists made some wild improvements on their previous measurements and reduced the freedom of choice loophole even more. The new experiment is more or less the same, but instead of using normal stars as their source of randomness for quantum measurements, the researchers used light from two ancient quasars that were 7.8 and 12.2 billion light years away. "The Earth is about 4.5 billion years old, so any alternative mechanism different from quantum mechanics that might have produced our results by exploiting this loophole would've had to be in place long before even there was a planet Earth, let alone an MIT," Kaiser said in a statement. "So we've pushed any alternative explanations back to very early in cosmic history."

Quasars are basically dense clouds of gas that surround the massive black holes that can be found at the center of most galaxies. As the gas from the quasar falls into the black hole, it emits strong bursts of energy that are smeared across the electromagnetic spectrum. In the most recent experiment, the researchers used incredibly sensitive telescopes to measure the wavelength of photons—particles of light—emitted by the quasars.
At the same time, a station between the two telescopes generated thousands of entangled photons, which were then sent to detectors at the telescopes. For each pair of entangled photons, the detectors would measure the wavelength of incoming interstellar photons relative to some baseline metric and use this value to determine what measurement would be performed on the incoming photons. In total, the researchers performed this measurement on just shy of 30,000 entangled photon pairs. The correlation between the measurements performed on the photons far exceeded Bell's inequality, which suggested that the particles were experiencing quantum entanglement. In fact, Kaiser and his colleagues calculated that the odds that this degree of correlation was the result of classical rather than quantum physics were about one in one hundred billion billion. According to MIT physicist Alan Guth, the research makes it "unbelievably implausible that a local realistic theory could be underlying the physics of the universe."

Despite the overwhelming results in favor of quantum entanglement, there's still the (incredibly) small possibility that local realism can account for these effects. To reduce these uncertainties even more, Kaiser, Guth and their colleagues are considering experiments that look even further back in time for a source of randomness, such as the cosmic microwave background. Performing these experiments, however, would involve overcoming a host of significant technical challenges. "It is fun to think about new types of experiments we can design in the future, but for now we are very pleased that we are able to address this particular loophole so dramatically," Kaiser said. "Our experiment with quasars puts extremely tight constraints on various alternatives to quantum mechanics. As strange as quantum mechanics may seem, it continues to match every experimental test we can devise."
<urn:uuid:37d59c1c-e98f-4854-99da-7efa2a76d2a9>
CC-MAIN-2020-24
https://www.vice.com/en_us/article/bjbknz/ancient-starlight-just-helped-confirm-the-reality-of-quantum-entanglement
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347422803.50/warc/CC-MAIN-20200602033630-20200602063630-00368.warc.gz
en
0.952531
1,713
3.828125
4
A Brief History of Computing, starting in 150 BC
CS 441 Lecture, Dr. Lawlor

Folks have been using physical devices to perform computations for a long time. Notable accomplishments:
- 150 BC: Greeks, likely including Archimedes, built clockwork-like chains of gears such as the Antikythera mechanism to predict astronomical events such as eclipses, and to measure time and convert between calendars.
- 1640's: Blaise Pascal built a series of adding machines, which used a series of hand-cranked cogs to add (similar to a car's mechanical odometer), or via complement arithmetic, subtract; or via repeated addition, multiply.
- 1820's: Charles Babbage designed (but never built) a fully-mechanical polynomial evaluator, the difference engine, via the method of finite differences (see the short sketch after these notes). He also started work on a fully programmable model, the analytical engine, but building the thing with rod logic would have taken a huge amount of labor.
- 1890: Herman Hollerith's patented electromechanical (mercury switches and relays) tabulator is used to count up the punched cards that represent the 1890 census results; his company later became part of IBM. The 1891 Electrical Engineer raved: "This apparatus works unerringly as the mills of the gods, but beats them hollow as to speed."
- 1941: Konrad Zuse builds the world's first fully-programmable computer, the Zuse Z3. Sadly, it used scavenged telephone switching relays, and was built in wartime Germany, so it was ignored for years.
- 1944: John von Neumann proposes using the same memory to store both program and data, now known as a "von Neumann machine". Previous designs used separate memories for program and data, known as the Harvard architecture.
- 1946: ENIAC, the first vacuum-tube electronic automatic computer, built by the US military. ENIAC is fully programmable. Vacuum tubes can switch in nanoseconds, like transistors, rather than milliseconds, like relays.
- 1948: CURTA, a popular fully-mechanical pocket calculator.
- 1949: MONIAC, a hydraulic computer, models the country's financial system using water.
- 1950's: the automatic transmission, a hydraulic computer, becomes cheap enough for ordinary people to buy.
- 1956: IBM releases Fortran, the first successful programming language. Prior to Fortran, machines were typically programmed using wires, machine code, or assembly.
- 1960's: IBM's System/360, which adds microcode and binary backward compatibility.
- 1964: Seymour Cray's CDC 6600 achieves amazing performance using superscalar processing, caching, newfangled transistors, liquid cooling, and offloading I/O to dedicated "peripheral processors", which were hyperthreading-style barrel processors.
- 1971: Upstart Intel creates a single-chip CPU, the 4004, which computes 4-bit values at up to 0.74MHz.
- 1972: HP-35, the first electronic pocket calculator good enough to replace the slide rule, for only $395.
- 1972: Intel's 8008, 8-bit values at up to 0.5MHz.
- 1978: Intel's 8086, 16-bit values at up to 10MHz.
- late 1970's: "micro" digital computers, like the Apple I, become cheap enough for dedicated hobbyists to buy and solder together.
- 1981: digital computers, like the IBM PC, become cheap enough for ordinary people to buy pre-assembled. The notion of selling software is popularized by the upstart "Micro-soft".
- 1984: Apple releases a 32-bit personal computer, the Mac 128K.
- 1985: Intel's 80386, 32-bit values at up to 40MHz.
- 1985: The notion of specialized hardware for graphics is popularized by Silicon Graphics corporation. RISC instruction sets are pushed by MIPS corporation.
- 1990: IBM introduces a superscalar RISC machine for personal computers, PowerPC.
- 1994: Intel releases a 100MHz Pentium CPU.
- 1990's: graphics hardware for personal computers takes off with GLQuake and other 3D games.
- 2000: Intel releases a 1 GHz Pentium III CPU.
- 2002: Intel releases a 3 GHz Pentium 4 CPU, with hyperthreading.
- 2002: Graphics cards become programmable in assembly language (ARB_fragment_program), and support dozens of threads.
- 2003: NVIDIA releases "Cg", a C++-like language for programming graphics cards. Limitations include a single write per program.
- 2003: AMD corporation introduces chips with a 64-bit extension to the x86 instruction set, which Intel later adopts.
- 2004: Intel abandons plans for a 4GHz Pentium 4 chip.
- 2006: Intel releases dual-core and quad-core CPUs at around 2GHz. The great multithreaded programming model panic begins.
- 2007: Intel announces "V8" eight-core systems.
- 2007: NVIDIA releases CUDA, a very C-like language for programming graphics cards for non-graphics tasks. Supports arbitrary reads and writes.
- 2008: Graphics hardware now supports between thousands and millions of threads.
- CPUs are still clocked at 2-4 GHz, just like in 2002. So in the future, your machine won't have very many more GHz; instead, it will have many more cores. Nobody knows how to program those well.
- For highly parallel problems, graphics cards already dramatically surpass CPU parallelism and hence performance. Thousand-core graphics software is commonplace; thousand-core CPU software is not.
- Technology changes, like gears to relays, relays to vacuum tubes, or tubes to transistors, have the capability to totally re-make the computer industry in less than 10 years. Biological/nano or quantum computing has a similar potential!
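The "method of finite differences" behind Babbage's difference engine (the 1820's entry above) reduces tabulating a polynomial to repeated addition, which is something a machine made of gears can do. The sketch below is an illustration added to accompany these notes, not course material; the quadratic 2x² + 3x + 1 is an arbitrary example.

```python
# Sketch of the method of finite differences used by Babbage's difference engine.
# Example polynomial p(x) = 2x^2 + 3x + 1 tabulated at x = 0, 1, 2, ... using only additions.
def difference_engine(initial_values, steps):
    """initial_values: [p(0), first difference, second difference, ...] for unit steps."""
    registers = list(initial_values)
    table = []
    for _ in range(steps):
        table.append(registers[0])
        # Each register is updated by adding the next-higher difference (addition only).
        for i in range(len(registers) - 1):
            registers[i] += registers[i + 1]
    return table

# For p(x) = 2x^2 + 3x + 1: p(0) = 1, first difference p(1) - p(0) = 5,
# and the second difference is constant and equal to 4.
print(difference_engine([1, 5, 4], steps=6))   # -> [1, 6, 15, 28, 45, 66]
p = lambda x: 2 * x * x + 3 * x + 1
print([p(x) for x in range(6)])                # matches the table above
```

Once the initial value and differences are loaded, each new table entry costs only additions, which is the whole trick behind building the evaluator out of gears.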
<urn:uuid:15aa40e9-d4c3-41dd-b78a-95a092d89a0b>
CC-MAIN-2020-24
https://www.cs.uaf.edu/2009/fall/cs441/lecture/09_04_history.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410745.37/warc/CC-MAIN-20200531023023-20200531053023-00369.warc.gz
en
0.870313
1,292
3.796875
4
Storing quantum bits of information, or qubits, is a lot harder than storing ordinary binary digits. It’s not simply ones or zeroes, but the whole range of subtle quantum superpositions between them. Electrons can easily slide out of those states if they’re not stored in the right materials, which is why electrical engineers at Princeton are working with a UK manufacturer to create a better storage material — synthetic diamonds — from scratch. They published an account of their success on Thursday in Science. For decades, physicists, materials engineers, and others have been trying to achieve the conceptual promise of quantum-encrypted communications because the data transferred in that process is theoretically immune to covert surveillance. Any attempt to observe that data between parties — à la the Heisenberg Uncertainty Principle — would fundamentally alter that information, quickly revealing that it was compromised. The problem has been storing and preserving qubits and then converting them to fiber optic-ready photons, and using diamonds appears to be the route toward achieving both. But not just any diamond will do, which is why Princeton’s team has been hard at work creating a synthetic one, as they describe in their paper. “The properties that we’re targeting are what’s relevant for quantum networks,” electrical engineer Nathalie de Leon tells Inverse. At Princeton, where de Leon is an assistant professor, her team’s focus is essentially inventing quantum hardware. “It’s applications where you want something that has a long storage time, and then also has a good interface with photons so that you can send light over very long distances.” Photonic interactions matter a lot for high-speed international communications because all of the information traveling along fiber optic cables moves through our global infrastructure as discrete photons — cruising at 69 percent of the speed of light. (Nice.) “That puts a lot of constraints on the optical characteristics,” de Leon says. “As one example, it’s really important that the color be stable. If the color of the photon is jumping around over time, then that’s really bad for these protocols.” Right now, de Leon’s group is trying to craft a version of these synthetic diamonds that can convert to the standard 1,550-nanometer wavelength on which photons now traverse fiber optic cables. Currently, her team’s synthetic diamonds support 946-nanometer photon wavelengths. (Photon “color” is a bit of a euphemism here since both of these wavelengths are shades of infrared outside the visible spectrum.) The hurdle that her team just succeeded in crossing is storing those qubits in crystalline quantum repeaters, similar to the repeaters that are currently used to prevent signal loss and degradation in today’s fiber-optic communications. The critical step in this process was producing synthetic diamonds with as little unwanted impurities as possible (nitrogen, mainly) and more of the impurities they actually did want (silicon and boron). “Nitrogen turns out to be the predominant defect that you get in these diamonds,” de Leon says. Her group’s partners at the British diamond maker Element Six had to create above-average vacuum conditions since even ordinary vacuums can leave enough nitrogen in the chamber to contaminate the artificially-made crystals. Because nitrogen has one more free electron than carbon, nitrogen impurities disturb the unique electrical makeup that the researchers are hoping for. 
Other small defects can undermine the qubit-storing potential of these diamonds, too. The goal is to have pairs of atom-sized vacancies in the crystal framework alongside a substituted silicon atom where a single carbon used to be, but sometimes those pairs can bunch up together in "vacancy clusters" that start to redistribute their electrons in annoying, counterproductive ways. Sometimes polishing and etching damage on the surface of the diamond can also cause a domino effect, messing with this pattern of electrons, too. This is where adding boron — which has one fewer free electron than carbon — can help. "What we had to do," de Leon says, "is both start with this ultra-high purity diamond and then grow in some boron to basically soak up any of the extra electrons that we couldn't control. Then there was a lot of materials processing — boring stuff like thermal annealing and repairing the surface at the end to make sure that we still get rid of a lot of these other types of defects that give you extra charges." Mastering both of these challenges, many in the field suspect, is the key to fully functional and nearly impossible-to-crack quantum encryption. Before the dawn of synthetic diamonds only a few years ago, researchers in the field of quantum optics had to rely on natural diamonds to do their work — one specific diamond, in particular. According to de Leon, everyone in the field of quantum optics had to rely on a single, naturally-made diamond from Russia that just happened to have the right percentage of boron, nitrogen, and other impurities to make their research possible. Fragments of the diamond were cleaved off and distributed to research groups across the world. "Many of the groups had their own little piece of the 'magic' Russian diamond," as de Leon told Princeton's in-house news service in 2016. "At Harvard, we called ours 'Magic Alice' and 'Magic Bob.'" So, TL;DR, Western scientists are getting better at manufacturing their own magical quantum computing diamonds instead of depending on slivers of Russia's magical quantum computing diamond. This is a factual sentence that sounds ridiculous. Classic 2018.
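As a rough sense of what the wavelength conversion mentioned earlier in the article involves, the sketch below converts the two quoted wavelengths (946 nm for the current defects, 1,550 nm for the telecom band) into frequencies and photon energies. The physical constants are standard values and the calculation assumes vacuum; none of this comes from the paper itself.

```python
# Back-of-the-envelope comparison of the two photon wavelengths mentioned in
# the article: 946 nm emission versus the 1,550 nm telecom band used in fiber.
# Converting between them means changing the photon's frequency and energy,
# which is one reason the stability of the photon "color" matters so much.

h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light in vacuum, m/s
eV = 1.602e-19       # joules per electronvolt

def photon(nm):
    wavelength = nm * 1e-9
    freq_THz = c / wavelength / 1e12
    energy_eV = h * c / wavelength / eV
    return freq_THz, energy_eV

for nm in (946, 1550):
    f, E = photon(nm)
    print(f"{nm:4d} nm  ->  {f:6.1f} THz, {E:.2f} eV")

# Typical output:
#  946 nm  ->   316.9 THz, 1.31 eV
# 1550 nm  ->   193.4 THz, 0.80 eV
```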
<urn:uuid:30aacba6-e52d-4bf3-934f-b5182e01bab2>
CC-MAIN-2020-24
https://www.inverse.com/article/46728-synthetic-diamonds-are-necessary-for-quantum-computing-privacy
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413901.34/warc/CC-MAIN-20200601005011-20200601035011-00170.warc.gz
en
0.939895
1,194
3.59375
4
Mar 5, 2012 An international team of physicists is the first to implement in the lab an important "error correction" technique that could play a vital role in the development of practical quantum computers. Known as topological error correction (TEC), the technique is based on "clusters" that each contain eight highly entangled photons. These clusters are useful for this purpose because a measurement on one photon does not destroy the entire entangled state. The multiparticle cluster state at the centre of the current work was first proposed in 2001 by Robert Raussendorf and Hans Briegel, who were then at the University of Munich. Now at the University of British Columbia in Canada, Raussendorf is also involved in this latest research. Such a cluster could be used to perform "one-way" quantum computing, in which the states of individual particles are measured in a specific sequence so that the quantum state of the remaining particles gives the result of the computation. Like a doughnut Although quantum computers promise a lot, anyone wishing to build a practical device has to deal with the tricky fact that the quantum nature of qubits fizzles away rapidly as they interact with the heat and noise of the surrounding environment. Quantum error correction offers a way of staving off this "decoherence" – at least long enough for a quantum computation process to occur – by distributing the quantum information held in one "logical" qubit among a number of entangled "physical" qubits. Subjecting these physical qubits to an error-correction algorithm can then reveal if one or more qubits has undergone decoherence and, if so, restore the quantum information. Developed by Jian-Wei Pan and colleagues at the University of Science and Technology of China in Shanghai, along with Raussendorf and other physicists in Canada and Australia, the new experimental demonstration of TEC involves defining qubits in terms of fundamental shapes that cannot be changed by continuous deformations. A doughnut, for example, remains a doughnut if it is poked, stretched or prodded – unless the perturbation is so violent that it cuts the loop. Topological qubits are similar in the sense that they are not easily perturbed by noise and heat, and must take a big hit before they are destroyed. The team's cluster state comprises eight entangled photons, each acting as a physical qubit that can have a value of "0" or "1" depending upon its polarization state. The state is made by creating four pairs of entangled photons by firing a laser pulse at a non-linear crystal. The pairs are separated and combined in new pairs that are entangled by having them interfere on polarization-dependent beamsplitters. The photons can be thought of as forming a 3D cube, in which each photon is entangled with its nearest neighbours. This arrangement has a certain topology that protects a specific quantum correlation between two physical qubits – something that could be used as a building block to create logical qubits in a topological quantum computer. The TEC is implemented on the cluster state by making a series of measurements on the photons – essentially performing a one-way quantum-computing algorithm. To test the correction scheme, the team purposely introduced errors into the system. First, the researchers caused decoherence in one specific qubit and found that the TEC algorithm could identify which photon was affected and correct the error.
Next, the team introduced a fixed amount of decoherence to all photons simultaneously, and again the scheme was able to identify the problem and correct it. "Our experiment provides a proof of principle that topological error correction would be one of the most practical approaches for designing quantum computers," Pan told physicsworld.com. Pan points out that TEC offers several benefits when compared with conventional schemes – in particular, it can handle the highest error rates of any scheme, making it easier to use with real physical devices, which will always suffer from errors. "Moreover, the architecture used in topological error correction is rather simple: it is sufficient to create interactions between two quantum bits that neighbour each other," he adds. This means that TEC should be compatible with a range of different qubit schemes, including quantum dots and Josephson junctions. This is important because such solid-state qubits should be easier to integrate and scale up to create a practical quantum computer. Raymond Laflamme, director of the Institute for Quantum Computing at the University of Waterloo in Canada, says that the work is an important result that shows that TEC can be implemented in principle. But given that not all types of qubits are compatible with TEC, Laflamme cautions that its future usefulness will depend on which qubit technologies are ultimately used to create practical quantum computers. The next step in the team's research is to create cluster states involving larger numbers of qubits – to do TEC on a logical qubit rather than just a correlation. Ultimately, physicists would like to develop systems that implement TEC on topological qubits and topological quantum-logic gates. The work is described in Nature.
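The topological scheme demonstrated by Pan's team is far more involved than anything that fits in a few lines, but the basic logic described above, spreading one logical bit over several physical ones and using parity checks to locate a single error without reading the encoded value directly, can be illustrated with the simplest classical repetition code. The sketch below is that classical stand-in, not the photonic TEC protocol.

```python
# Not the topological code used in the experiment: just the simplest classical
# analogue (a 3-bit repetition code) of the idea of spreading one logical bit
# over several physical bits and using parity checks ("syndromes") to find and
# fix a single error without looking at the logical value itself.

import random

def encode(logical_bit):
    return [logical_bit] * 3                      # one logical bit -> three physical bits

def syndrome(bits):
    # Two parity checks, each comparing a pair of neighbouring bits.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    s = syndrome(bits)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)   # which bit the syndrome points at
    if flip is not None:
        bits[flip] ^= 1
    return bits

def decode(bits):
    return max(set(bits), key=bits.count)         # majority vote

random.seed(0)
for _ in range(5):
    logical = random.randint(0, 1)
    block = encode(logical)
    block[random.randrange(3)] ^= 1               # hit one random physical bit with an error
    fixed = correct(block[:])
    assert decode(fixed) == logical               # the logical bit survives
print("single errors on any physical bit were located and corrected")
```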
<urn:uuid:15733d4e-63de-49e2-bf92-ce8a1b4b7599>
CC-MAIN-2020-24
https://seqre.net/topological-quantum-computing-moves-closer
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436466.95/warc/CC-MAIN-20200603210112-20200604000112-00572.warc.gz
en
0.939067
1,040
3.609375
4
Quantum physics deals with the realm of the very small, and most of us never expect to see the weird world it describes. But could we? Recently, scientist Geraldo Barbosa of Northwestern University designed an experiment to answer that question. The quantum effect Barbosa is hoping to see is called quantum entanglement, in which two or more particles can become "entangled" so that even after they are separated in space, when an action is performed on one particle, the other particle responds immediately. A common experiment illustrating entanglement is to fire a laser at a special type of crystal. Occasionally a photon particle from the laser "splits" into two. The energy and momentum of the two new photons add up to those of the one originally fired. These two "daughter" photons are entangled — if you look at the state of one photon, you know the state of the other, instantly. Einstein described this eerie connection as "spooky action at a distance." Next, the physicists change the form of the laser beam in the experiment to create an image. They have found that the image isn't visible unless two detectors are able to "see" the photons at the same time. While these physics experiments rely on detectors to "see" the photons and the resulting images, Barbosa foresees setting up an experiment in which a person's retinas would act as the detectors. Spooky action in the lab The entangled photons have opposite polarization states: in other words, their waves are oriented differently. (On a quantum level, particles can behave like waves, and waves like particles.) In these experiments, when only one photon is detected, it could be in any polarization state and it can hit the detector at any time. That means scientists can't tell whether the photon hitting their detector is from the entangled duo. Without that knowledge, a person can't reconstruct the image these photons are meant to create. But when both entangled photons are detected, you can figure out the photon's polarization state. Knowing one, you know both, and can recreate the image. The "spooky" part is that by observing either one of the photons you've eliminated all the other possibilities — both observed photons must have the polarization states you see. But how does the entangled photon "know" what state to be in? Relativity says that you can't have information travel faster than light. Observing entangled photons, though, "forces" them into a certain state at the same time. Essentially, the information in both photons is added to recreate the original image. This experiment has been done many times. But what would happen if the two detectors were human retinas? Would a person see the higher-order image or just the classical one, the flash of light? Ordinarily, we see things by perceiving the intensity of the light in several wavelengths. Mixing various wavelengths makes up all the various colors and saturations we perceive. This situation would be different — if brains could see quantum effects like entangled photons, one would expect a different image when looking with one eye than with both. This is a deeper question than it may seem, because if people can see such images, it means our macroscopic brains can pick up subtle, microscopic quantum effects. Next step in quantum vision Barbosa said there are still difficulties with setting up such an experiment. One problem is the signal-to-noise ratio in human neurons.
We can't perceive individual photons even though they hit our retinas, as it takes a certain number of photons hitting our eyes for our brains to interpret the signal as, for example, a flash of light. In his paper, which is posted on the physics pre-print website arXiv, Barbosa notes that it is far from clear that one could generate enough photons to trigger a response from the human retina — at least seven photons are necessary to do that, and they would all have to be entangled. Robert Boyd, professor of optics at the University of Rochester, said he doesn't see anything in principle wrong with the idea. "Even here, there are two possibilities," Boyd wrote in an email to LiveScience. "One is that the human brain simply does not work in the manner that Barbosa proposes. The other is that it does, but that the effect is so weak as to be unobservable." Barbosa, meanwhile, said he has been thinking about this for a while —he did some of the first experiments with quantum images in his lab in 1994. And he sketches out some of the equipment that would be needed to make the experiment work, such as special goggles to get the photons to the right part of the retina. "This would only indicate that the complex neural system is able to process quantum signals —an amazing feature," Barbosa wrote. Copyright 2012 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
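The wave-phase point made earlier in this article (in-phase waves reinforce, waves 180 degrees out of phase cancel) can be checked with a few lines of arithmetic. This is a purely classical illustration added here; the amplitudes and the number of sample points are arbitrary choices.

```python
# Illustration of the wave-phase statement above: two equal waves add up when
# they are in phase and cancel when they are 180 degrees out of phase.
# (Purely classical arithmetic; the sample count is arbitrary.)

import math

def superpose(phase_deg, samples=8):
    phase = math.radians(phase_deg)
    total_power = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        combined = math.sin(t) + math.sin(t + phase)   # wave 1 + wave 2
        total_power += combined ** 2
    return total_power / samples

print(f"in phase (0 deg):        {superpose(0):.2f}")    # ~2.0 -> constructive
print(f"out of phase (180 deg):  {superpose(180):.2f}")  # ~0.0 -> cancellation
print(f"quadrature (90 deg):     {superpose(90):.2f}")   # ~1.0 -> in between
```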
<urn:uuid:a062953c-f70d-4ee0-befb-f35496233b94>
CC-MAIN-2020-24
https://www.foxnews.com/science/can-humans-see-spooky-quantum-images
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347424174.72/warc/CC-MAIN-20200602100039-20200602130039-00172.warc.gz
en
0.957852
1,037
3.65625
4
Computers consist of a processing component and a memory component. In the most basic sense, processors perform computations and memory stores data. For simple computations, a single processor may do the job. For more complex operations, however, multiple processors are often the only way to solve a problem. Many applications in the public and private sector require massive computational resources, such as real-time weather forecasting, aerospace and biomedical engineering, nuclear fusion research and nuclear stockpile management. Since these applications exceed the capacity of a single server, computer engineers have devised high-performance computing platforms that can deliver substantially more processing power. The most powerful computer systems in use today leverage thousands of linked processors to perform computations quickly by sharing the workload among multiple processors. There are two general models for managing and coordinating large numbers of processors. One is typified by supercomputers. These are large, expensive systems—usually housed in a single room—in which multiple processors are connected by a fast local network. The other is distributed computing. These are systems in which processors are not necessarily located in close proximity to one another—and can even be housed on different continents—but which are connected via the Internet or other networks. Advantages and Disadvantages of Each Model The advantage of supercomputers is that since data can move between processors rapidly, all of the processors can work together on the same tasks. Supercomputers are suited for highly complex, real-time applications and simulations. However, supercomputers are very expensive to build and maintain, as they consist of a large array of top-of-the-line processors, fast memory, custom hardware, and expensive cooling systems. They also do not scale well, since their complexity makes it difficult to easily add more processors to such a precisely designed and finely tuned system. By contrast, the advantage of distributed systems is that relative to supercomputers they are much less expensive. Many distributed systems make use of cheap, off-the-shelf computers for processors and memory, which incur only minimal cooling costs. In addition, they are simpler to scale, as adding an additional processor to the system often consists of little more than connecting it to the network. However, unlike supercomputers, which send data short distances via sophisticated and highly optimized connections, distributed systems must move data from processor to processor over slower networks, making them unsuitable for many real-time applications. Weather forecasting is a prototypical supercomputing problem, in part because of how much data it takes to produce a weather forecast that is accurate by contemporary standards. Weather simulations take in massive quantities of data on temperature, wind, humidity, pressure, solar radiation, terrain, and numerous other environmental factors, and must account for global as well as local changes in these variables. Processing this data on a distributed system would mean repeatedly transferring data over relatively slow networks, thereby seriously limiting forecasting speeds. Since changes in weather occur continuously, having to wait for data to move around the system makes for forecasts that are already out of date as soon as they are produced.
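To make the latency argument above concrete, here is a toy back-of-the-envelope model. Every number in it (per-step compute time, message size, latencies, bandwidths) is an invented assumption chosen only for illustration, not a benchmark of any real supercomputer or cloud.

```python
# Toy model of the point above: an iterative simulation that must exchange data
# between processors every step. All numbers are invented for illustration only.

def runtime_hours(steps, compute_s_per_step, exchange_mb, latency_s, bandwidth_mb_s):
    per_step = compute_s_per_step + latency_s + exchange_mb / bandwidth_mb_s
    return steps * per_step / 3600

STEPS = 100_000            # simulation time steps
COMPUTE = 0.005            # seconds of pure computation per step (assumed)
EXCHANGE_MB = 50           # data exchanged between neighbours per step (assumed)

fast_local = runtime_hours(STEPS, COMPUTE, EXCHANGE_MB,
                           latency_s=2e-6, bandwidth_mb_s=10_000)   # supercomputer-like
wide_area  = runtime_hours(STEPS, COMPUTE, EXCHANGE_MB,
                           latency_s=50e-3, bandwidth_mb_s=100)     # internet-like

print(f"fast local interconnect: ~{fast_local:5.1f} hours")
print(f"wide-area network:       ~{wide_area:5.1f} hours")
```

With these assumed numbers, the same simulation that finishes in well under an hour on a fast local interconnect takes more than half a day when every step has to cross a slow wide-area network, which is the sense in which latency-sensitive problems belong on supercomputers.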
Other examples of supercomputing applications include nuclear stockpile management and large-scale physics simulations such as those involved in aerospace engineering. In contrast, distributed systems are most useful for problems that are not as sensitive to latency. For example, when NASA’s Jet Propulsion Laboratory (JPL) needed to process high volumes of image data collected by its Mars rovers, a computer cluster hosted on the Amazon Cloud was a natural fit. Such tasks are not substantially hindered by small delays in individual computations, so distributed systems offered the most pragmatic solution. Other distributed computing applications include large-scale records management and text mining. The Road Ahead Since the emergence of supercomputers in the 1960s, supercomputer performance has often been measured in floating point operations per second (FLOPS). The CDC 6600, a popular early supercomputer, reached a peak processing speed of 500 kilo-FLOPS in the mid-1960s. To put this in perspective, the processor in an iPhone 5S is nearly 250,000 times faster than the CDC 6600. Since the 1960s, the capabilities of supercomputers have grown tremendously. In 2013, the world’s fastest supercomputer, China’s Tianhe-2, could operate at a peak speed of nearly 34 peta-FLOPS, a 70-billionfold speed increase over the CDC 6600. Meanwhile, the Amazon cloud, one of the world’s fastest distributed systems, achieved a speed of 1.2 peta-FLOPS for the first time in 2013. While this cannot compete with supercomputers like the Tianhe-2, distributed systems can typically be built much more cheaply than supercomputers. A 2013 HP study found that the hourly cost of renting a processor on a dedicated supercomputer was approximately 2-3 times as great as on a comparable distributed cloud-based system. Does the relatively low cost of distributed computing mean the government should stop investing in supercomputers? Absolutely not. Supercomputers provide a distinct and irreplaceable set of capabilities and will continue to be of critical importance to national priorities for years to come to address problems such as cancer research, macroeconomic modeling, and natural disaster forecasting. The federal government should continue to fund research for both supercomputing and distributed computing. So far, we are moving in the right direction. The 2014 National Defense Authorization Act directs the Department of Energy to develop supercomputers capable of exa-FLOPS speeds, also known as “exascale” supercomputers, within 10 years, and the Obama administration has made distributed computing a key part of its “big data strategy.” But there is more that could be done. If the federal government wants to maximize the value of its investments in high-performance computing, it will need to reduce barriers to using these technologies. This means it should continue to ensure that high-speed networking infrastructure is available to scientists at a broad range of locations and build tools that allow researchers who lack expertise in supercomputing to leverage high-performance systems. In addition, the world of high-performance computing is evolving quickly and federally funded research should continue to support investments in next-generation computing technology such as quantum computing and molecular computing.
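The ratios quoted above can be checked with a few lines of arithmetic. The CDC 6600, Tianhe-2 and Amazon-cloud figures are the article's own numbers; the iPhone 5S rate is simply back-derived from the article's "nearly 250,000 times faster" claim rather than an independently measured value.

```python
# Arithmetic behind the comparisons quoted above. The CDC 6600, Tianhe-2 and
# Amazon figures come from the article; the iPhone 5S number is back-derived
# from the article's "nearly 250,000 times faster" claim.

CDC_6600  = 500e3                     # 500 kilo-FLOPS
TIANHE_2  = 34e15                     # ~34 peta-FLOPS (2013 peak)
AMAZON    = 1.2e15                    # 1.2 peta-FLOPS (2013)
IPHONE_5S = 250_000 * CDC_6600        # implied rate

print(f"Tianhe-2 vs CDC 6600:   {TIANHE_2 / CDC_6600:.1e}x  (~70 billion-fold)")
print(f"Amazon vs Tianhe-2:     {AMAZON / TIANHE_2:.1%} of peak")
print(f"implied iPhone 5S rate: {IPHONE_5S / 1e9:.0f} GFLOPS")
```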
<urn:uuid:7de2116a-fcc5-4247-a458-b7d62dddd9bd>
CC-MAIN-2020-24
https://www.datainnovation.org/2014/01/supercomputing-vs-distributed-computing-a-government-primer/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00574.warc.gz
en
0.950707
1,278
4.125
4
Quantum computers promise to be a revolutionary technology because their elementary building blocks, qubits, can hold more information than the binary, 0-or-1 bits of classical computers. But to harness this capability, hardware must be developed that can access, measure and manipulate individual quantum states. Researchers at the University of Pennsylvania’s School of Engineering and Applied Science have now demonstrated a new hardware platform based on isolated electron spins in a two-dimensional material. The electrons are trapped by defects in sheets of hexagonal boron nitride, a one-atom-thick semiconductor material, and the researchers were able to optically detect the system’s quantum states. Fellow Bassett Lab members David Hopper and Raj Patel, along with Marcus Doherty of the Australian National University, also contributed to the study. There are a number of potential architectures for building quantum technology. One promising system involves electron spins in diamonds: these spins are also trapped at defects in diamond’s regular crystalline pattern where carbon atoms are missing or replaced by other elements. The defects act like isolated atoms or molecules, and they interact with light in a way that enables their spin to be measured and used as a qubit. These systems are attractive for quantum technology because they can operate at room temperatures, unlike other prototypes based on ultra-cold superconductors or ions trapped in vacuum, but working with bulk diamond presents its own challenges. “One disadvantage of using spins in 3D materials is that we can’t control exactly where they are relative to the surface,” Bassett says. “Having that level of atomic scale control is one reason to work in 2D. Maybe you want to place one spin here and one spin there and have them talk to each other. Or if you want to have a spin in a layer of one material and plop a 2D magnet layer on top and have them interact. When the spins are confined to a single atomic plane, you enable a host of new functionalities.” With nanotechnological advances producing an expanding library of 2D materials to choose from, Bassett and his colleagues sought the one that would be most like a flat analog of bulk diamond. “You might think the analog would be graphene, which is just a honeycomb lattice of carbon atoms, but here we care more about the electronic properties of the crystal than what type of atoms it’s made of,” says Exarhos, who is now an assistant professor of Physics at Lafayette College. “Graphene behaves like a metal, whereas diamond is a wide-bandgap semiconductor and thus acts like an insulator. Hexagonal boron nitride, on the other hand, has the same honeycomb structure as graphene, but, like diamond, it is also a wide-bandgap semiconductor and is already widely used as a dielectric layer in 2D electronics.” With hexagonal boron nitride, or h-BN, widely available and well characterized, Bassett and his colleagues focused on one of its less well-understood aspects: defects in its honeycomb lattice that can emit light. That the average piece of h-BN contains defects that emit light had previously been known. Bassett’s group is the first to show that, for some of those defects, the intensity of the emitted light changes in response to a magnetic field. “We shine light of one color on the material and we get photons of another color back,” Bassett says. “The magnet controls the spin and the spin controls the number of photons that the defects in the h-BN emit.
That’s a signal that you can potentially use as a qubit.” Beyond computation, having the building block of a quantum machine’s qubits on a 2D surface enables other potential applications that depend on proximity. “Quantum systems are super sensitive to their environments, which is why they’re so hard to isolate and control,” Bassett says. “But the flip side is that you can use that sensitivity to make new types of sensors. In principle, these little spins can be miniature nuclear magnetic resonance detectors, like the kind used in MRIs, but with the ability to operate on a single molecule. Nuclear magnetic resonance is currently used to learn about molecular structure, but it requires millions or billions of the target molecule to be assembled into a crystal. In contrast, 2D quantum sensors could measure the structure and internal dynamics of individual molecules, for example to study chemical reactions and protein folding. While the researchers conducted an extensive survey of h-BN defects to discover ones that have special spin-dependent optical properties, the exact nature of those defects is still unknown. Next steps for the team include understanding what makes some, but not all, defects responsive to magnetic fields, and then recreating those useful defects. Some of that work will be enabled by Penn’s Singh Center for Nanotechnology and its new JEOL NEOARM microscope. The only transmission electron microscope of its kind in the United States, the NEOARM is capable of resolving single atoms and potentially even creating the kinds of defects the researchers want to work with. “This study is bringing together two major areas of scientific research,” Bassett says. “On one hand, there’s been a tremendous amount of work in expanding the library of 2D materials and understanding the physics that they exhibit and the devices they can make. On the other hand, there’s the development of these different quantum architectures. And this is one of the first to bring them together to say ‘here’s a potentially room-temperature quantum architecture in a 2D material.’” This work was supported by the Army Research Office (W911NF-15–1–0589) the Australian Research Council (DE170100169) and the National Science Foundation through the Materials Research Science and Engineering Center Program (DMR-1720530) and the National Nanotechnology Coordinated Infrastructure Program (NNCI-1542153)
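Bassett's description of the readout, a magnetic field steering the spin and the spin changing how many photons come back, can be illustrated with a toy photon-counting model. Every number below (count rates, contrast, decision threshold) is invented for illustration and does not describe the actual h-BN defects or the group's measurements.

```python
# Toy illustration of the readout idea quoted above: if one spin state makes a
# defect emit fewer photons than the other, counting photons distinguishes the
# two states. The rates and contrast below are invented, not measured values.

import random

random.seed(1)

BRIGHT_MEAN = 30      # average photons collected when the spin is in the "bright" state
DARK_MEAN   = 21      # average when it is in the "dark" state (30% lower, assumed)
THRESHOLD   = 25.5    # decide "bright" above this count

def poisson(mean):
    # Simple Poisson sampler (Knuth's method), enough for a toy example.
    L, k, p = pow(2.718281828, -mean), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def read_spin(true_state):
    counts = poisson(BRIGHT_MEAN if true_state == "bright" else DARK_MEAN)
    return "bright" if counts > THRESHOLD else "dark"

trials = 10_000
correct = sum(read_spin(s) == s for s in
              (random.choice(["bright", "dark"]) for _ in range(trials)))
print(f"single-shot readout fidelity in this toy model: {correct / trials:.1%}")
```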
<urn:uuid:29165875-8919-49f7-a4f3-0728fe511e05>
CC-MAIN-2020-24
https://medium.com/penn-engineering/penn-engineers-develop-room-temperature-two-dimensional-platform-for-quantum-technology-cae3a5c0d8f9
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00177.warc.gz
en
0.934238
1,265
3.53125
4
OCTOBER 2, 2019 by Vienna University of Technology Energy is a quantity that must always be positive—at least that’s what our intuition tells us. If every single particle is removed from a certain volume until there is nothing left that could possibly carry energy, then a limit has been reached. Or has it? Is it still possible to extract energy even from empty space? Quantum physics has shown time and again that it contradicts our intuition, which is also true in this case. Under certain conditions, negative energies are allowed, at least in a certain range of space and time. An international research team at the TU Vienna, the Université libre de Bruxelles (Belgium) and the IIT Kanpur (India) have now investigated the extent to which negative energy is possible. It turns out that no matter which quantum theories are considered, no matter what symmetries are assumed to hold in the universe, there are always certain limits to “borrowing” energy. Locally, the energy can be less than zero, but like money borrowed from a bank, this energy must be “paid back” in the end. “In the theory of General Relativity, we usually assume that the energy is greater than zero, at all times and everywhere in the universe,” says Prof. Daniel Grumiller from the Institute for Theoretical Physics at the TU Wien (Vienna). This has a very important consequence for gravity: Energy is linked to mass via the formula E=mc². Negative energy would therefore also mean negative mass. Positive masses attract each other, but with a negative mass, gravity could suddenly become a repulsive force. Quantum theory, however, allows negative energy. “According to quantum physics, it is possible to borrow energy from a vacuum at a certain location, like money from a bank,” says Daniel Grumiller. “For a long time, we did not know about the maximum amount of this kind of energy credit and about possible interest rates that have to be paid.” Various assumptions about this “interest” (known in the literature as “Quantum Interest”) have been published, but no comprehensive result has been agreed upon. The so-called “quantum null energy condition” (QNEC), which was proven in 2017, prescribes certain limits for the “borrowing” of energy by linking relativity theory and quantum physics: An energy smaller than zero is thus permitted, but only in a certain range and only for a certain time. How much energy can be borrowed from a vacuum before the energetic credit limit has been exhausted depends on a quantum physical quantity, the so-called entanglement entropy. “In a certain sense, entanglement entropy is a measure of how strongly the behavior of a system is governed by quantum physics,” says Daniel Grumiller. “If quantum entanglement plays a crucial role at some point in space, for example close to the edge of a black hole, then a negative energy flow can occur for a certain time, and negative energies become possible in that region.” Grumiller was now able to generalize these special calculations together with Max Riegler and Pulastya Parekh. Max Riegler completed his dissertation in the research group of Daniel Grumiller at the TU Wien and is now working as a postdoc at Harvard. Pulastya Parekh from the IIT in Kanpur (India) was a guest at the Erwin Schrödinger Institute and at the TU Wien. “All previous considerations have always referred to quantum theories that follow the symmetries of Special Relativity.
But we have now been able to show that this connection between negative energy and quantum entanglement is a much more general phenomenon,” says Grumiller. The energy conditions that clearly prohibit the extraction of infinite amounts of energy from a vacuum are valid for very different quantum theories, regardless of symmetries. The law of energy conservation cannot be outwitted Of course, this has nothing to do with mystical “over unity machines” that allegedly generate energy out of nothing, as they are repeatedly presented in esoteric circles. “The fact that nature allows an energy smaller than zero for a certain period of time at a certain place does not mean that the law of conservation of energy is violated,” stresses Daniel Grumiller. “In order to enable negative energy flows at a certain location, there must be compensating positive energy flows in the immediate vicinity.” Even if the matter is somewhat more complicated than previously thought, energy cannot be obtained from nothing, even though it can become negative. The new research results now place tight bounds on negative energy, thereby connecting it with quintessential properties of quantum mechanics. More information: Daniel Grumiller et al. Local Quantum Energy Conditions in Non-Lorentz-Invariant Quantum Field Theories, Physical Review Letters (2019). Journal information:Physical Review Letters Provided by Vienna University of Technology
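The article describes the quantum null energy condition only in words. For orientation, the condition is commonly quoted in the research literature in roughly the following schematic form; the formula below is added here from general knowledge rather than taken from the article or the new paper, and it uses natural units.

```latex
% Schematic form of the quantum null energy condition (QNEC) discussed above,
% as it is commonly quoted in the literature (natural units, hbar = c = 1).
% Added for orientation; the article itself does not spell out the formula.
\[
  \langle T_{kk}(x) \rangle \;\ge\; \frac{1}{2\pi}\, S''(x),
\]
% where $\langle T_{kk}\rangle$ is the null-null component of the stress-energy
% tensor (the local energy flux that may dip below zero) and $S''$ is the second
% variation of the entanglement entropy of the region under null deformations at
% $x$: the "credit limit" set by entanglement entropy.
```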
<urn:uuid:61e625da-90ab-4ba5-8867-defdec6d47f7>
CC-MAIN-2020-24
https://longdnguyen.com/quantum-vacuum-less-than-zero-energy/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410284.51/warc/CC-MAIN-20200530165307-20200530195307-00178.warc.gz
en
0.934963
1,060
3.734375
4
Physicists have just upped their ante: Not only have they split atoms but, even trickier, they've put them back together. Their secret? Quantum physics. A team of scientists was able to "split" an atom into its two possible spin states, up and down, and measure the difference between them even after the atom resumed the properties of a single state. The research wasn't just playtime for quantum physicists: It could be a steppingstone toward the development of a quantum computer, a way to simulate quantum systems (as plant photosynthesis and other natural processes appear to be) that would help solve complex problems far more efficiently than present-day computers can. The team at the University of Bonn in Germany did a variation on the famous double-slit experiment, which shows how ostensibly solid particles (atoms, electrons and the like) can behave like waves. The researchers found that they could send an atom to two places at once, separated by 10 micrometers (a hundredth of a millimeter — a huge distance for an atom). [Graphic: Nature's Tiniest Particles Explained] In the classic double-slit experiment, atoms are fired at a wall with two breaks in it, and they pass through to the other side, where they hit a detector, creating the kind of interference pattern expected from a wave. If atoms behaved the way we intuitively expect particles to behave, they should emerge out of one slit or the other, with no interference pattern. As more and more atoms passed through the slits, there should be a cluster of them around the two points behind the slits. Since this is quantum mechanics, that's not what happens. Instead, there's an interference pattern that shows peaks and valleys. The atoms behave like light waves. The atom is in two places at once. But if you try to see the atom in one or both places, it "collapses" into one, as the act of observing it determines its fate; hence, the interference pattern disappears. In the experiment at Bonn, the researchers fired two lasers in sequence at a single atom of cesium, moving it to the left or right. The lasers allowed the researchers to control the movement of the atom precisely, in a way that the old-fashioned double slit would not. (Before firing the lasers, the researchers cooled the atom to within a hair of absolute zero, eliminating most of its own movement.) Each atom has a spin state, which is either up or down. By moving the atom in two directions at once (using both lasers), the scientists were able to make it "split." Unlike splitting an atom into its constituent subatomic particles, as happens in radioactive decay, in this case the atom was essentially splitting into a set of twins. It was in two states at once — up and down. [Twisted Physics: 7 Mind-Blowing Findings] It's not possible to see both states at once. If one were to try to measure the state of the atom, it would "collapse" into a single state. But when one looks at the atom at the end of its journey, the combination of the two states can be measured. Since atoms — and other quantum particles — behave like waves, they have phases, just as waves do. (The phase is the particular point in the cycle of a wave, and is measured by degrees. Two waves that are the same shape and 180 degrees out of phase with each other will cancel each other out as one's trough aligns with the other's crest. Waves in phase with each other will add up as one crest aligns with the other crest). The laser distorts the wave phase when it moves the atom to the left or right. 
So there is now a difference in the phases of the two spin states when the atom arrives at its destination and is no longer "split." Even though it's not possible to see both states at once, when one looks at the atom at the end of its journey, the combination of the two states can be measured. In addition to measuring that phase difference, the researchers also saw "delocalization" — the double path through space the atom takes — at a greater distance than ever before, on the scale of micrometers as opposed to nanometers. It's this dual nature, called a superposed state, of atoms that would make quantum computers so powerful. The bits (known as "qubits") could be in more than one state at once, allowing for calculations that would take ordinary computers an extremely long time. It also means that quantum computers could be useful for simulating other quantum systems. Physicist Andrea Alberti, one of the paper's co-authors, said that's why in the future the researchers want to experiment with more atoms. "With two atoms, you have four different trajectories, but only one is where they are 'meeting,'" he said. By controlling the phase of more atoms, you have more bits. One could think of it as two bits in all four possible states at once. It isn't clear, he said, what minimum number of bits would be needed to make a working quantum computer. But the fact that scientists can control the phase states of a single atom means it should be possible to do the same thing with more than one. The point, Alberti said, is to build a way of simulating quantum systems. Right now that is difficult because the calculations are so complex. But a quantum computing system lends itself to such calculations better than a classical computer does. Copyright 2012 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
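A minimal numerical sketch can make the phase argument above concrete. This is not the Bonn group's actual analysis; it is just the textbook arithmetic for an equal superposition of two states that acquires a relative phase along its two paths and is then recombined, with the example phase values chosen arbitrarily.

```python
# Minimal numerical sketch (not the Bonn group's analysis) of the idea above:
# an atom in an equal superposition of two states picks up a relative phase
# along its two paths, and recombining the paths turns that phase difference
# into a measurable probability. Phases below are arbitrary example values.

import cmath, math

def recombine_probability(phase_left, phase_right):
    """P(spin up) after splitting, phase accumulation, and recombination."""
    amp_left  = (1 / math.sqrt(2)) * cmath.exp(1j * phase_left)
    amp_right = (1 / math.sqrt(2)) * cmath.exp(1j * phase_right)
    # Recombination interferes the two amplitudes (a 50/50 beam-splitter-like step).
    amp_up = (amp_left + amp_right) / math.sqrt(2)
    return abs(amp_up) ** 2

for delta in (0, math.pi / 2, math.pi):
    p = recombine_probability(0.0, delta)
    print(f"phase difference {delta:4.2f} rad -> P(up) = {p:.2f}")

# Output: 1.00, 0.50 and 0.00: the relative phase between the two "copies"
# of the atom is what the final measurement reveals.
```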
<urn:uuid:94ab953a-bccc-4e20-8cdf-0d17891b3388>
CC-MAIN-2020-24
https://www.foxnews.com/science/franken-physics-atoms-split-in-two-put-back-together
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00379.warc.gz
en
0.9589
1,173
4.15625
4
The next generation of computers is a few years off, but it’s pretty damn cool. It’s like no computer you’ve ever seen, nor are you likely to ever own. It promises speed and the ability to tackle problems ordinary computers can’t handle. The machine is the D-Wave 2X, and the only working model outside the company is in the Quantum Artificial Intelligence Lab. A joint project between Google, NASA, and the Universities Space Research Association, the lab will test-drive the 2X on some sticky problems in high-powered computing. The 2X is a type of quantum computer, which means it uses devices that exploit quantum physics to replace transistors and other components of ordinary computers. The quantum nature of the inner workings in theory should make the computer solve problems much faster than anything else available, making it useful for a wide range of applications. While there are no fully quantum computers out yet, the 2X is the closest yet—assuming it works as advertised. All ordinary computers—laptops, desktops, tablets, phones, e-readers, smartwatches, or whatever – are based on semiconductors, materials that conduct electricity reluctantly. That reluctance makes it easy to control the flow of power using devices like transistors, so that current is either flowing or not: represented by the numbers “1” for “on” or “0” for “off.” Combining the current through different parts of circuits allows computers to perform simple mathematical operations using just those two numbers. The power of a computer lies in doing lots and lots and lots of computations, faster than we perceive. (Note to experts: this is a simplified explanation. Don’t try this at home, kids!) Quantum computers also use just two numbers, but instead of manipulating electric current, they manipulate “quantum states.” A quantum state contains a kind of list of all the possible configurations a particle (or other microscopic system) can have: its position, speed, energy, spin, and so forth. When two quantum systems interact with each other, or we perform a measurement in the lab, the quantum state describes how likely the outcome of that interaction or measurement was. Until the measurement, though, the state is undetermined. For a quantum computer, a “quantum bit” or “qubit” could be either 0 or 1, but we don’t know until the computer reads it out in some way. One qubit, just like one bit in a normal computer, is pretty useless. However, if you have lots of qubits, you can perform many calculations simultaneously. Theoretically, a quantum computer could solve a given problem every possible way, including finding all the wrong solutions, in the amount of time it would take a normal computer to find a single solution to the same problem. That makes quantum computers useful for stuff like decryption and finding the optimal approach to performing searches. But “theoretically” is the key word. Nobody has yet built a true quantum computer, and the D-Wave 2X is no exception. (More about what it is shortly.) One difficulty is that quantum states are delicate things: interactions between particles behave exactly the same way a measurement does, altering the state and screwing up whatever calculation we were trying to do. A larger problem is that all qubits in the computer need to be entangled with each other, meaning that their quantum states are linked up: a measurement on one qubit restricts the possible outcome of similar measurements on all the others. The more qubits, the harder the entanglement becomes. 
To minimize such snafus, the 2X and similar devices run at very cold temperatures, to keep ambient vibrations and other noise to a minimum. We’re talking very cold: the D-Wave 2X at Google’s Quantum Artificial Intelligence Lab has to run at 0.015 degrees Celsius above absolute zero, or 15 millikelvins. (For comparison, the ambient temperature of outer space is 2.7 degrees above absolute zero.) Even with that, the 2X isn’t the ideal quantum computer described by theory – D-Wave describes it as a “quantum optimizer” instead – and some people are still skeptical it’s doing fully quantum calculations. The D-Wave 2X uses over 1000 superconducting qubits, linked in a circuit resembling ordinary computer processors. Rather than trying to solve all the difficulties of quantum computing in one go, the 2X is an “adiabatic quantum optimizer.” In principle, you feed it the mathematical representation of the problem you want to solve, and the qubits adjust to find the quantum configuration that corresponds to the solution. This is a standard solution technique called “annealing,” but in a normal computer the configuration is simulated, rather than worked out using actual quantum systems. In my physics research days, I wrote programs like this on ordinary computers. It’s a fascinating idea, and one that looks very promising. However, there’s some disagreement over whether D-Wave’s machines are working as advertised. Ordinary benchmarks used to measure a computer’s speed haven’t found a noticeable improvement from going quantum. D-Wave engineers say that we should be using a different set of benchmarks instead, since the way the 2X processes is fundamentally different. The real proof is in the results. If the 2X or other quantum computers can solve problems that are either too hard or too slow on ordinary computers, then we’ll call it a victory for the next generation of computers. And that is pretty damn cool.
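Since the author mentions having written annealing programs on ordinary computers, here is what such a classical simulated-annealing program can look like in miniature. It is not quantum annealing and not D-Wave code; the toy spin problem, the temperature schedule and all constants are illustrative choices.

```python
# A minimal classical simulated-annealing sketch, the "ordinary computer" version
# of annealing mentioned above (it is not quantum annealing and not D-Wave code).
# The toy problem, temperatures and cooling schedule are all illustrative choices.

import math, random

random.seed(42)

# Toy problem: choose +1/-1 "spins" to minimize a small random coupling energy.
N = 12
J = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def energy(spins):
    return sum(J[i][j] * spins[i] * spins[j] for i in range(N) for j in range(i + 1, N))

spins = [random.choice((-1, 1)) for _ in range(N)]
temperature = 5.0
while temperature > 0.01:
    for _ in range(100):
        i = random.randrange(N)
        before = energy(spins)
        spins[i] *= -1                      # propose flipping one spin
        delta = energy(spins) - before
        # Accept downhill moves always; accept uphill moves with Boltzmann probability.
        if delta > 0 and random.random() > math.exp(-delta / temperature):
            spins[i] *= -1                  # reject: flip back
    temperature *= 0.9                      # cool down gradually

print("final configuration:", spins)
print("final energy:", round(energy(spins), 3))
```

In the D-Wave approach the same kind of "settle into the lowest-energy configuration" search is carried out by real superconducting qubits instead of being simulated step by step in software.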
<urn:uuid:5e97a1c0-d40f-4698-ac66-8e57c561f9e2>
CC-MAIN-2020-24
https://www.massarate.ma/google-and-nasa-team-up-on-quantum-computer.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00581.warc.gz
en
0.928975
1,184
3.75
4
The six stages of quantum networks Quantum networks will go through different stages of development until they reach their full functionality. Recently, researchers from QuTech proposed a roadmap towards a full quantum internet, detailing six stages of development that are determined by the functionality available to the end nodes in the network. The initial stage is that of trusted repeater networks. In these networks, end nodes that are directly connected can perform quantum key distribution, and end nodes that are connected by a chain of intermediate repeaters can also establish a secure key, provided that the intermediate repeaters are trusted. This stage can be regarded as a pre-quantum network, or zeroth stage, since no quantum information is exchanged between end nodes. Stages 1 and 2 The first truly quantum stage, prepare and measure networks, makes the end-to-end delivery of qubits possible. This allows, for instance, quantum key distribution between any two end nodes, or secure login (see pages 17 and 34-35). The second stage, entanglement distribution networks, allows for the distribution of entanglement between arbitrary nodes in the network. In this stage it becomes possible to implement the device-independent version of quantum key distribution, based on entanglement (see page 17). The first and second stages can be seen as stages of a proto-quantum network since they make the first applications for quantum internet available. The next three stages enable further applications and are therefore advanced quantum networks. Instead of classical nodes, a proto-quantum network has quantum nodes (quantum repeaters - the blue blocks in the illustration) installed along the line. Such a network allows for direct communication between two parties; this is not possible in the pre-quantum network. Quantum key distribution enables completely secure communication, since quantum nodes – unlike classical nodes – do not learn the key while refreshing the signal. A quantum network with direct communication between the end nodes (end-to-end entanglement) is called an entanglement distribution network. In 2015, an entanglement distribution network covering a short distance was demonstrated in Delft. The two end nodes at positions A and B were placed 1.28 km apart. Entanglement between the end nodes A and B was provided through position C. An entanglement distribution quantum network enables the implementation of several tasks: notably quantum key distribution, but also more mundane ones such as coordinated strategies to win online games. What is a quantum memory A quantum internet needs a memory to store the states of qubits. Such a quantum memory can be compared to the short-term memory that a classical computer uses to speed up the access to a program, the cache memory. Without a quantum memory, a large quantum network would not be possible. Many protocols require memories; without them, all network links would have to be established nearly simultaneously, which is very unlikely in larger networks (a toy calculation at the end of this section illustrates the difference). Any failure would mean that all quantum superpositions are lost and need to be re-created from the start. A quantum memory allows for the network to be established step by step, while storing the precious quantum states. This enables, for example, reliably sending quantum states by quantum teleportation. How long a quantum memory should be able to store a qubit state depends on the time it takes for the communication to succeed in the rest of the network.
A couple of seconds to a minute will probably be enough. While that is trivially achieved with classical bits, most types of qubits lose their state in a few microseconds. The quantum memory in the Delft quantum network can already keep superposition states for over 10 seconds. More research is underway to make sure that these quantum memories remain reliable even when the network links are operated at the same time. Stages 3, 4 and 5 Advanced quantum networks The third stage, memory networks, requires nodes to be able to keep quantum information in a quantum memory for a certain amount of time. At this stage, teleportation (page 25) and blind quantum computation (page 27) become possible, provided that a remote quantum computer is connected to the quantum network. In this stage, the implementation of quantum clock synchronization protocols, extending the baseline of telescopes, and quantum anonymous transmission (pages 36-37) also become possible. To reach the fourth stage, fault-tolerant few-qubit networks, local operations and memory lifetimes need to be so good that a networked or distributed quantum computer (pages 36-37) can be implemented by connecting nodes from the network. In the fifth and final stage, quantum computing networks, a full-fledged quantum computer is situated at each of the end nodes. In this stage, all quantum applications that we currently envision can be executed. For instance, this stage is necessary to implement quantum voting protocols. Several pre-quantum networks are already in operation. Their quantum link is established through classical nodes, referred to as classical trusted repeaters, that are installed along the line. This setup is necessary because quantum signals get lost when travelling through optical fibers. Typically, the classical nodes ‘refresh’ the signal at least every 100 kilometers. Notably, Japan and China have implemented pre-quantum networks, and quantum key distribution has already been performed there. This quantum key distribution, however, is not optimally secure because the classical nodes also learn the key while refreshing the signal, and need to be trusted. Quantum network in Japan In Japan, an operation centre in Otemachi is connected with three other places that are situated 12, 13 and 45 km away. In 2010, a secure TV conference was demonstrated between Koganei and Otemachi by performing trusted quantum key distribution. Quantum network in China In China, one trusted repeater network already covers a long distance: 2000 kilometres of optical fibre connects Beijing with Shanghai. This network is being tested for banking and commercial communications, such as linking up data centres or online shopping businesses. Advanced quantum network The quantum nodes in advanced quantum networks are superior in functionality to those in proto-quantum networks. A lot more applications are therefore possible with such networks. The most advanced quantum network, shown in this picture, is one in which the end nodes are replaced with quantum computers. QuTech is working on realising an advanced quantum network in the Netherlands with quantum nodes placed at Delft, The Hague, Leiden and Amsterdam. These quantum nodes will function as end nodes as well as quantum repeaters; they therefore need three properties. First, a quantum node should have a quantum memory that can robustly store qubit states. Second, it should be possible to process quantum information with high fidelity within a quantum node.
Third, the quantum nodes should be able to communicate via fibres that are currently used for our classical internet.
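The step-by-step argument for a quantum memory made in the "What is a quantum memory" passage above is easy to quantify with a toy calculation. The per-attempt success probability and the link counts below are invented numbers used only to show the scaling, not figures from QuTech.

```python
# Toy calculation behind the quantum-memory argument above: if each elementary
# link succeeds with probability p per attempt, demanding that *all* links succeed
# in the same attempt is exponentially unlikely, while a memory lets links be
# built one after another. The success probability and link counts are invented.

p_link = 0.1          # chance a single link attempt succeeds (assumed)
for n_links in (2, 4, 8):
    # Without memory: all n links must succeed in the same attempt.
    expected_attempts_no_memory = 1 / (p_link ** n_links)
    # With memory: links are established sequentially and stored while waiting.
    expected_attempts_with_memory = n_links / p_link
    print(f"{n_links} links: ~{expected_attempts_no_memory:,.0f} attempts without memory, "
          f"~{expected_attempts_with_memory:.0f} with memory")
```

Even with these mild assumptions, an eight-link path needs on the order of a hundred million simultaneous-success attempts without a memory, but only a few dozen sequential attempts with one, which is why memory lifetime is the defining feature of the third stage.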
<urn:uuid:6c93f098-2b6a-42ef-b7dc-8962db36a7c1>
CC-MAIN-2020-24
https://tu-delft.foleon.com/tu-delft/quantum-internet/the-six-stages-of-quantum-networks
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00584.warc.gz
en
0.925723
1,389
3.53125
4
An accurate analog clock ticks along with a constant precision and well known frequency: one tick per second. The longer you let it tick, the better to test its accuracy—10 times as long corresponds to a ten-fold improvement in any frequency uncertainty. But is there a faster way to determine a frequency? It turns out there is, as researchers report in Physical Review Letters. The speed-up in frequency measurement comes from quantum mechanics. When a quantum bit is used to measure the frequency of a signal, the strange rules of quantum mechanics allow the frequency measurement to be much more accurate. The technique hinges on the ability to put the quantum bit in a superposition of its two quantum states, and then shift these states around in time with the signal. Kater Murch, assistant professor of physics at Washington University in St. Louis, along with graduate student Mahdi Naghiloo and theory collaborator Andrew Jordan of the University of Rochester, described the technique as a “quantum magic trick.” “It’s reminiscent of the magic tricks that involve a ball placed under one of two cups and the cups are shuffled around—except this time, the ball can be under both cups at the same time,” Murch says. “The resulting speedup in frequency measurement is astonishing. Now, by measuring for 10 times as long, the frequency uncertainty can be reduced by a factor of 100—enabling enhanced resolution of the frequency beyond any other technique of its kind. “Earlier theory work published by the Jordan group this year has proven in two separate papers that the technique applied in this paper is the theoretical optimum that quantum mechanics allows.” Exploiting quantum physics The experiment involved using a superconducting quantum system where an external oscillating signal with unknown frequency caused the quantum system to undergo periodic changes. By applying quantum pulses on top of the oscillating signal, the state of the system could be controlled so that the final readout of the quantum system became highly sensitive to the precise value of the oscillation frequency. The underlying physical source of the advantage is related to the fact that the energy of the quantum system is time-dependent, which causes the quantum states corresponding to different frequencies to accelerate away from each other, giving enhanced distinguishability in a given time. This method permitted enhanced resolution of the frequency beyond any other technique of its kind, Jordan says. This work is just one example of how the new field of quantum technologies uses the laws of quantum physics for technological advantage over classical physics, Jordan says. Other examples include quantum computing, quantum sensing, and quantum simulation. For those fields, the exploitation of quantum physics provides benefits such as a speed up of database search, the factoring of large numbers, or the rapid simulation of complex molecules. Such fine-scale measurement of the frequency of a periodic signal is the fundamental ingredient in diverse applications, including MRI medical imaging devices, the analysis of light emitted from stars, and, of course, clock precision. Accelerating these measurements in a way that Murch and Jordan have demonstrated could have profound impacts in many areas. Life before GPS Murch and Naghiloo used timekeeping and GPS, and such constantly advancing technologies, as examples of the importance of their findings. 
“Nowadays, most of us carry a phone in our pocket that is capable of telling us almost exactly where we are on Earth using the Global Positioning System,” Murch says. “The way this works is that your phone receives signals from several different satellites, and by timing the relative arrival of these signals it infers your position. The accuracy of the timing directly relates to the accuracy of your position—a relationship between timekeeping and navigation that has persisted for hundreds of years. “Well before GPS, a sailor who wanted to know his location would navigate by the stars. In the Northern Hemisphere, the height of the north star will tell you your latitude, but to know your longitude, you need to keep track of the time. As the night goes on, the stars circle around the north star—the height of any star above the horizon is related to the local time, and by comparing this time to a clock set to Greenwich Mean Time, the time difference gives your longitude.” Nautical timekeeping underscores the vital importance of advances in frequency measurement. “In the 1700s, accurate clocks were the main limitation to ocean navigation,” Murch says. “The Scilly naval disaster of 1707—one of the worst disasters in British naval history—was widely blamed on poor navigation, prompting the British government to invest heavily in precise clocks. The resulting chronometers transformed marine navigation and greatly accelerated the age of discovery. “Advances in timekeeping continue to have a profound impact on technology and fundamental science. Quantum tools, such as the quantum speedup in frequency measurement that we discovered, are necessary to push these technologies forward. This is an exciting time for quantum physics because these quantum resources are increasingly leading to practical advantages over traditional measurement approaches.” The National Science Foundation, the Office of Naval Research, and the Army Research Office supported the work. This research used facilities at the Institute of Materials Science and Engineering at Washington University. Murch also acknowledges support from the Sloan Foundation.
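For readers who want the scaling stated compactly: in a conventional measurement the frequency uncertainty shrinks in inverse proportion to the measurement time, while in the time-dependent scheme described above it shrinks with the square of the measurement time. In symbols (our shorthand, not the authors' notation),

\delta f_{\text{conventional}} \propto \frac{1}{T}, \qquad \delta f_{\text{accelerated}} \propto \frac{1}{T^{2}},

so measuring for 10 times as long tightens the uncertainty by a factor of 10 in the first case and by the factor of 100 quoted by Murch in the second.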
<urn:uuid:6b438ef8-25f7-49ba-83de-0d36cf96bd4b>
CC-MAIN-2020-24
https://www.futurity.org/clock-frequency-quantum-1595022-2/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407289.35/warc/CC-MAIN-20200530040743-20200530070743-00586.warc.gz
en
0.931391
1,103
3.828125
4
Time crystals—how scientists created a new state of matter Some of the most profound predictions in theoretical physics, such as Einstein's gravitational waves or Higgs' boson, have taken decades to prove with experiments. But every now and then, a prediction can become established fact in an astonishingly short time. This is what happened with "time crystals", a new and strange state of matter that was theorised, disproved, revamped and finally created in just five years since it was first predicted in 2012. Crystals, such as diamond and quartz, are made of atoms arranged in a repeating pattern in space. In these new crystals, atoms also follow a repeating pattern, but in time. Because of this weird property, time crystals could one day find applications in revolutionary technologies such as quantum computing. The story of time crystals begins in 2012 with Nobel Prize winner Frank Wilczek from MIT. As a theoretical physicist and a mathematician, Wilczek made a crucial step in transferring a key property of regular crystals – called symmetry breaking – to create the idea of time crystals. To understand what symmetry breaking is, think of liquid water. In a water droplet, molecules are free to move about and can be anywhere within the liquid. The liquid looks the same in any direction, meaning that it has a high degree of symmetry. If the water freezes to form ice, attractive forces between the molecules force them to rearrange into a crystal, where molecules are spaced at regular intervals. But this regularity means that the crystal isn't as symmetrical as the liquid, so we say the symmetry of the liquid has been broken when freezing into ice. Symmetry breaking is one of the most profound concepts in physics. It is behind the formation of crystals, but also appears in many other fundamental processes. For example, the famous Higgs mechanism, which explains how subatomic particles come to acquire mass, is a symmetry breaking process. Back in 2012, Wilczek came up with a tantalising idea. He wondered if, in the same way that a crystal breaks symmetry in space, it would be possible to create a crystal breaking an equivalent symmetry in time. This was the first time the idea of a time crystal was theorised. Such an object would have an intrinsic time regularity, equivalent to the crystal's regular pattern in space. For a time crystal, the pattern would be a continuous change back and forth in one of its physical properties, a kind of heartbeat that repeats forever, a bit like a perpetual motion machine. Perpetual motion machines, which are machines that can work indefinitely without an energy source, are forbidden by the laws of physics. Wilczek recognised this oddity of his time crystal theory and, in 2015, another group of theoretical physicists showed a perpetual motion crystal would indeed be impossible. But this was not the end of the story. In 2016, new research showed that time crystals could still exist in theory, but only if there was some external driving force. The idea was that the time regularity would be somehow dormant, hidden from view, and that adding a little energy would bring it to life and unveil it. This solved the paradox of perpetual motion, and brought new hopes for the existence of time crystals. Then, in the summer of 2016, the conditions to create and observe time crystals were laid out in an article in the online arXiv repository, and later published in the peer-reviewed journal Physical Review Letters. 
The researchers studied how a special property of particles known as quantum spin could be repeatedly reversed by an external force at regular intervals. They predicted that if they did this to a set of particles, the interactions between the particles would produce their own oscillations in the spin, creating a "driven" time crystal. In a matter of months, two different experimental groups had taken on the challenge to create the time crystals in the laboratory. One of the teams fired laser pulses at a train of ytterbium atoms that produced oscillations in the atoms' properties, at different intervals from the pulses. This meant that the ytterbium atoms were behaving as a time crystal. The other team focused on an entirely different system, consisting of impurities in a diamond crystal. They used microwaves to disturb the impurities at well-defined intervals, and observed the same type of time-crystal oscillations as the first team. At last, time crystals had been created and Wilczek's main ideas proven true. The prediction, realisation and discovery of time crystals open a new chapter in quantum mechanics, raising questions about the properties of this newly found state of matter and whether time crystals might occur in nature. The symmetry-breaking properties of ordinary crystals have led to the creation of phononic and photonic metamaterials – deliberately designed materials that selectively control acoustic vibrations and light – which can be used to boost the performance of prosthetics, or to increase the efficiency of lasers and fibre-optics. So the time symmetry-breaking properties of time crystals will likely find their way into equally novel fields, such as chrono-metamaterials for quantum computing, which uses the inherent properties of atoms to store and process data. The story of time crystals started with a beautiful idea from a theoretical physicist, and has now closed its first chapter with conclusive experimental evidence after a mere five years. Far from coming to an end as scientists prove their big theories, it seems physics is more alive than ever.
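The subharmonic response the experiments looked for — oscillations locked to twice the drive period — can be illustrated with a deliberately stripped-down toy. The snippet below is only a cartoon under our own assumptions: it drives a single, non-interacting spin with slightly imperfect pi-pulses, which already makes an observable repeat every two drive periods rather than every one, but a genuine discrete time crystal additionally needs many interacting spins so that this period doubling stays rigid against pulse errors. All names and parameters are illustrative.

```python
import numpy as np

def rotation_x(angle):
    """Single-spin rotation about the x axis by the given angle."""
    return np.array([[np.cos(angle / 2), -1j * np.sin(angle / 2)],
                     [-1j * np.sin(angle / 2), np.cos(angle / 2)]])

pulse = rotation_x(0.97 * np.pi)          # slightly imperfect flip, as in real labs
state = np.array([1, 0], dtype=complex)   # start with the spin "up"
sigma_z = np.diag([1.0, -1.0])

for period in range(1, 9):
    state = pulse @ state                 # one drive period = one (imperfect) pi-pulse
    sz = np.real(np.vdot(state, sigma_z @ state))
    print(f"after drive period {period}: <sigma_z> = {sz:+.2f}")

# The sign of <sigma_z> flips every period, so the observable repeats only every
# TWO drive periods. The slow drift of the envelope is the accumulated pulse error
# that interactions between many spins suppress in a real driven time crystal.
```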
<urn:uuid:c6921b82-211d-44c3-8244-c52812dc88db>
CC-MAIN-2020-24
https://phys.org/news/2017-02-crystalshow-scientists-state.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00385.warc.gz
en
0.955342
1,110
3.515625
4
The dream of useful quantum computing may have just come one step closer. Australian researchers are combining two of the hottest topics in science: quantum computing and machine learning. Specifically, they’ve succeeded in training an algorithm to predict the evolving state of a simple quantum computer. Such an understanding allows real-time stabilization of the system, much as a tightrope walker uses a pole for balance, according to a paper published Monday in Nature Communications. That would be a big deal for everyone – from Silicon Valley to Washington, D.C. Quantum computing extends the familiar concept of the bit to propose the "qubit." While we usually etch transistors in silicon, the quantum analog could be a single particle such as a photon or electron. Like the transistor, this particle is able to exist in two states that correspond to 0 or 1. The difference is, the world at the quantum level looks nothing like ours. In addition to being 0 or 1, the particle can occupy a state not purely 0 or 1 but in some sense a mixture of the two. For this reason, a qubit can be much more flexible than a regular bit. Exploiting this probabilistic messiness is the key to quantum computing. The mathematical behavior resists simple characterization, but the general idea is that qubits could take advantage of a phenomenon called interference to analyze many solutions to a problem simultaneously. In the end, more likely solutions would be amplified and less likely solutions eliminated by competing qubits, much like how ocean waves can combine to make superwaves, or cancel out entirely. This simultaneous solution-testing capability makes quantum computers theoretically useful for solving certain types of problems that would usually require a brute force approach, such as factoring large numbers and cracking encryption. However, each problem requires a specialized method, so chances we’ll someday be checking Facebook and playing games on a quantum computer are slim. This tantalizing dream of super-fast quantum computers that transcend the limits of classical machines has hovered on the horizon for decades, but progress is slow. IBM built a functional five-qubit system, and the record belongs to USC/Lockheed-Martin’s reportedly 1098 qubit D-Wave 2X system, although the topic is so complicated that no one can say for sure if it’s working or not. What makes it so tricky? Quantum computing depends on its qubits doing multiple things at once, for example spinning clockwise and counterclockwise at the same time, and interfering with other qubits in a useful way. Such behavior is so rare at our level of reality as to be unimaginable, and recreating it on demand requires an exacting environment, isolated from the destabilizing influence of the outside world. The D-Wave system, for example, operates at two-one-hundredths of a degree Celsius above absolute zero. As a rough analogy, you could imagine the activities of the qubits are like a tightrope walker at risk of being knocked off balance at any moment by a gust of wind or a lobbed tomato. To protect the walker, we can take defensive measures to block out external influences, say by erecting a glass barrier around them. The quantum analog of falling off the tightrope is a process called "decoherence," which describes what happens when a system starts to act classically. That’s no good for a quantum computer, which depends on "coherence" for its quantum magic. To make matters worse, in the quantum world, if we look at the tightrope walker, they fall.
"To build a quantum computer," explains University of Toronto physics professor Aephraim Steinberg, "you need to be sure no information leaks out that could possibly tell which one it was,” a "0" or a "1." In addition to isolation, the Australian team, lead by quantum physicist Michael Biercuk has made progress on a more active form of qubit aid called quantum error correction. Instead of just protecting the tightrope walker, they’re actively helping. Whenever a qubit is about to decohere, they give a stabilizing nudge with a laser or adjust the frequency of oscillation, which would be something like tweaking the angle of the tightrope or having the walker speed up, or slow down, according to Daniel Lidar, professor of electrical engineering at the University of Southern California. Without such error correction, "quantum computing would have been dead in the water 20 years ago," Dr. Steinberg tells The Christian Science Monitor in an email, but the novel aspect of the paper is how Dr. Biercuk’s team knew what corrections to make. Remember, looking at the tightrope makes the walker fall, so we have to help while blindfolded. As Steinberg puts it, you need a clever scheme to "measure whether an error occurred, and which one, without measuring what state the qubit is actually in." Biercuk realized that if his team could predict how the qubits would decohere, they could apply the necessary corrections in real time and keep the balancing act going. But how do you guess what a chaotic system you can’t look at is going to do in the future? Enter machine learning "We used algorithms which have broad applications in many fields of science and engineering, and are already widely used," Biercuk tells the Monitor in an email. "Much of the power of our finding is that existing machine learning techniques now have a role to play in building quantum tech." Based on past data of a qubit’s behavior, the team’s algorithm was able to train itself to predict how the system would evolve in the future. The processes governing this evolution are largely random, but there are some patterns the machine is able to detect. "The random behavior we can correct contains within it what are known as "correlations" in time – such processes change in such a way as to exhibit memory of the past state of the system. It is this correlation which we learn and exploit," Biercuk explains. First, the team trained the program using data from repeated observations of the qubits, finding out whether the tightrope walker was right handed or left handed, if the wind blows primarily from the east, or west. Of course, during observation the quantum computer is useless. To apply what the algorithm had learned, they used a technique called "multiplexing." For a time, the trained algorithm watched the computer run, absorbing more transient aspects of the system. Was the tightrope walker sleepy that day? Was the room breezy? Then, they closed the box and immediately let the computer operate in the isolation it needs to perform useful calculations. The algorithm forecasted what would likely be happening inside the box, and the system could apply the appropriate corrections, which reportedly led to a significant improvement over previous methods. While Best Buy may not be stocking its shelves with code-busting quantum supercomputers anytime soon, Biercuk’s method is a new approach with the potential to move the field forward. 
"This technique joins and nicely complements the existing arsenal of quantum error correction techniques and will undoubtedly find wide use," says Dr. Lidar, who was not involved in this research. "The marriage of machine learning and quantum error correction may prove to be an important step towards the realization of scalable quantum computing." [Editor's note: This story has been updated to correct which quantum system holds the record for the most qubits. The current record holder is the D-Wave 2X at the USC-Lockheed Martin Quantum Computing Center.]
<urn:uuid:0ebb79dd-a416-4de8-aa2a-a5b92c52e6cd>
CC-MAIN-2020-24
https://www.csmonitor.com/Science/2017/0117/Machines-learn-to-find-patterns-in-quantum-chaos
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00389.warc.gz
en
0.941696
1,588
3.53125
4
The next 100 years Since time immemorial, society has been fascinated with how science and technology will shape the future. Yet although it’s really exciting to contemplate how our daily lives may be transformed, we can never accurately predict the future. History shows that unexpected breakthroughs can send science and technology down equally unexpected paths. For example, Alexander Graham Bell foresaw global telecommunication and renewable energy technologies in 1918. But no one 100 years ago could have predicted the discovery of 2D wonder materials like graphene, or the particle zoo hidden below the scale of the atoms that make up the universe. However, today’s deep understanding of the physical world does provide some clues for how physics might impact future generations. In this centenary year for the IOP, where we are celebrating the past, recognising the present and looking to the future, we ask: what could the future hold for physics and society? Energy - Realising the potential of carbon-free fusion Society has been harnessing atomic energy to produce electricity since 1951. Yet these nuclear reactors rely on fission, splitting uranium atoms to heat water and ultimately produce energy. The dream for nuclear power is to build nuclear reactors that instead exploit fusion – the process powering all stars, including our Sun. Fusion would have a limitless supply of fuel, running on atoms distilled from water. It would release four times the energy of nuclear fission. Moreover, it would come with none of the risks of fission, with no possibility of a Fukushima-like nuclear meltdown and no long-lived radioactive waste. In rural southern France, construction is underway on the biggest and most ambitious experimental fusion reactor ever conceived – ITER. ITER scientists aim to be the first to produce net energy and maintain fusion for long periods by the mid-2030s, with ITER-like fusion power plants expected to be producing electricity in the latter half of this century. Yet if significant challenges can be overcome, smaller, more efficient designs for commercial fusion power plants might be contributing to the energy mix as early as the 2030s. As we witness the impacts of global warming and climate change, a future where fusion power can provide limitless, clean energy could be invaluable. Space - The race to Mars The grainy footage of Neil Armstrong’s first steps on the Moon is indelibly imprinted in the minds of those lucky enough to have witnessed what is arguably humankind’s greatest achievement. Setting foot on Mars could have an even greater impact on society back on Earth, heralding an era in which humanity may become a multi-world species and offering the possibility, albeit remote, of finding hidden alien life. But getting there is no mean feat – half of all missions to Mars have failed since the first Soviet attempts to send probes in the 1960s, and none of these were transporting fragile, living human bodies. Only recently has the technology been developed to send humans to the Red Planet. NASA’s Orion spacecraft is designed for deep space missions, including trips to Mars in the 2030s. Private spacecraft company SpaceX, meanwhile, could be sending its first astronauts to Mars on its Starship Hopper as early as 2024, with an even more ambitious aim of building a city on the Red Planet by 2050.
Surviving and thriving on this truly alien world will depend on technologies being developed right now, including new techniques to 3D print protective habitats, grow crops in regolith, produce oxygen from Martian atmospheric carbon dioxide, and engineer materials to protect humans from the harmful effects of ionizing radiation on Mars walks. Quantum - A world powered by quantum computers Governments and companies around the world are investing billions in developing quantum computers. Why? The simple answer is that quantum computers would solve certain problems by using the quantum properties of superposition and entanglement. This would allow them to consider many probable outcomes simultaneously – instead of sorting through all possible answers one by one – and arrive at an answer rapidly. In theory, quantum computers could rapidly make calculations that would bamboozle a conventional supercomputer – and thereby solve some of society’s most intractable problems. This includes developing advanced weather and climate models to help combat climate change, accelerating drug discovery to fight disease, building unhackable data security using quantum cryptography, and modelling quantum physics to lift the veil on unsolved mysteries of the quantum world. There is however a ‘but’. Like most quantum phenomena, qubits – the smallest unit of data in a quantum computer – are incredibly delicate. Any interaction with the environment could destroy them. This makes building a quantum computer with enough stable qubits to solve practical problems a herculean task. At present, a collaboration between Google, NASA and the Oak Ridge National Laboratory is closest, having recently announced that they have achieved a significant milestone known as 'quantum supremacy'; a threshold where quantum computers can solve problems that traditional computers simply cannot, in practical terms. Their 54-qubit quantum processor solved a problem in 200 seconds that would take the world’s fastest supercomputer 10,000 years. Though the problem has little practical application – sampling the output of a pseudo-random quantum circuit – the achievement signals the beginning of a new era in quantum computing. Health - Physics convergence for diagnosis and treatment Healthcare is on the cusp of transforming beyond recognition. A conflation of scientific and technological progress will reduce hospital visits through preventative medicine, improve the way patients are diagnosed and treated, and allow researchers to find new cures. Physics will contribute to this revolution in a myriad of ways. For instance, real-time data from wearable fitness monitors, health apps and implantable health monitoring devices will be combined with genomic information, scans that probe the body at various scales and many other sources to allow AI and machine learning algorithms to predict, prevent or treat diseases. This big data approach to healthcare will represent the pinnacle of patient-centric personalised medicine. Elsewhere, physicists will be crucial to unlocking the secrets of the brain. New quantum sensors will measure magnetic fields generated by current flow through the brain’s neural assemblies. New imaging techniques and combinations of imaging modalities will provide insights into the anatomy and function of the human brain. Sophisticated physics models of the brain will allow researchers to safely study diseases like epilepsy or stroke in silico. 
And neuromorphic computing techniques will give neuroscientists a tool for understanding the dynamic processes of learning and development, while also offering the tantalising possibility of neuromorphic chips with emergent intelligence.
<urn:uuid:016a15e4-ac72-49a7-bf85-f0df0cb75115>
CC-MAIN-2020-24
https://beta.iop.org/next-100-years
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00589.warc.gz
en
0.922447
1,332
4.03125
4
Barely a week goes by without reports of some new mega-hack that’s exposed huge amounts of sensitive information, from people’s credit card details and health records to companies’ valuable intellectual property. The threat posed by cyberattacks is forcing governments, militaries, and businesses to explore more secure ways of transmitting information. Today, sensitive data is typically encrypted and then sent across fiber-optic cables and other channels together with the digital “keys” needed to decode the information. The data and the keys are sent as classical bits—a stream of electrical or optical pulses representing 1s and 0s. And that makes them vulnerable. Smart hackers can read and copy bits in transit without leaving a trace. Quantum communication takes advantage of the laws of quantum physics to protect data. These laws allow particles—typically photons of light for transmitting data along optical cables—to take on a state of superposition, which means they can represent multiple combinations of 1 and 0 simultaneously. The particles are known as quantum bits, or qubits. The beauty of qubits from a cybersecurity perspective is that if a hacker tries to observe them in transit, their super-fragile quantum state “collapses” to either 1 or 0. This means a hacker can’t tamper with the qubits without leaving behind a telltale sign of the activity. Some companies have taken advantage of this property to create networks for transmitting highly sensitive data based on a process called quantum key distribution, or QKD. In theory, at least, these networks are ultra-secure. What is quantum key distribution? QKD involves sending encrypted data as classical bits over networks, while the keys to decrypt the information are encoded and transmitted in a quantum state using qubits. Various approaches, or protocols, have been developed for implementing QKD. A widely used one known as BB84 works like this. Imagine two people, Alice and Bob. Alice wants to send data securely to Bob. To do so, she creates an encryption key in the form of qubits whose polarization states represent the individual bit values of the key. The qubits can be sent to Bob through a fiber-optic cable. By comparing measurements of the state of a fraction of these qubits—a process known as “key sifting”—Alice and Bob can establish that they hold the same key. As the qubits travel to their destination, the fragile quantum state of some of them will collapse because of decoherence. To account for this, Alice and Bob next run through a process known as “key distillation,” which involves calculating whether the error rate is high enough to suggest that a hacker has tried to intercept the key. If it is, they ditch the suspect key and keep generating new ones until they are confident that they share a secure key. Alice can then use hers to encrypt data and send it in classical bits to Bob, who uses his key to decode the information. We’re already starting to see more QKD networks emerge. The longest is in China, which boasts a 2,032-kilometer (1,263-mile) ground link between Beijing and Shanghai. Banks and other financial companies are already using it to transmit data. In the US, a startup called Quantum Xchange has struck a deal giving it access to 500 miles (805 kilometers) of fiber-optic cable running along the East Coast to create a QKD network. The initial leg will link Manhattan with New Jersey, where many banks have large data centers. 
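As a rough illustration of key sifting, and of why an eavesdropper betrays herself through an elevated error rate, here is a toy BB84-style simulation. It is a sketch under simplifying assumptions (perfect devices, no channel loss, and it omits authentication, error correction and privacy amplification), and the function and variable names are ours, not part of any real QKD product. Note that in the sifting step Alice and Bob publicly compare their measurement bases, not the secret bit values themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000                                    # photons per run (arbitrary)

def run(eavesdrop):
    a_bits  = rng.integers(0, 2, n)          # Alice's raw key bits
    a_bases = rng.integers(0, 2, n)          # 0 = rectilinear, 1 = diagonal

    send_bits, send_bases = a_bits.copy(), a_bases.copy()
    if eavesdrop:
        # Intercept-resend attack: Eve measures in random bases and re-sends what she saw.
        e_bases = rng.integers(0, 2, n)
        e_bits = np.where(e_bases == a_bases, a_bits, rng.integers(0, 2, n))
        send_bits, send_bases = e_bits, e_bases

    b_bases = rng.integers(0, 2, n)
    # Measuring in the wrong basis gives a random outcome.
    b_bits = np.where(b_bases == send_bases, send_bits, rng.integers(0, 2, n))

    # Sifting: Alice and Bob publicly compare bases and keep only the matching positions.
    keep = a_bases == b_bases
    qber = np.mean(a_bits[keep] != b_bits[keep])
    return int(keep.sum()), float(qber)

for tag, eve in [("no eavesdropper", False), ("intercept-resend attack", True)]:
    kept, qber = run(eve)
    print(f"{tag:24s} sifted key length = {kept:5d}, error rate = {qber:.3f}")
# Expected: an error rate near 0.000 without Eve and near 0.25 with her --
# the telltale signature Alice and Bob look for during key distillation.
```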
Although QKD is relatively secure, it would be even safer if it could count on quantum repeaters. What is a quantum repeater? Materials in cables can absorb photons, which means they can typically travel for no more than a few tens of kilometers. In a classical network, repeaters at various points along a cable are used to amplify the signal to compensate for this. QKD networks have come up with a similar solution, creating “trusted nodes” at various points. The Beijing-to-Shanghai network has 32 of them, for instance. At these waystations, quantum keys are decrypted into bits and then reencrypted in a fresh quantum state for their journey to the next node. But this means trusted nodes can’t really be trusted: a hacker who breached the nodes’ security could copy the bits undetected and thus acquire a key, as could a company or government running the nodes. Ideally, we need quantum repeaters, or waystations with quantum processors in them that would allow encryption keys to remain in quantum form as they are amplified and sent over long distances. Researchers have demonstrated it’s possible in principle to build such repeaters, but they haven’t yet been able to produce a working prototype. There’s another issue with QKD. The underlying data is still transmitted as encrypted bits across conventional networks. This means a hacker who breached a network’s defenses could copy the bits undetected, and then use powerful computers to try to crack the key used to encrypt them. The most powerful encryption algorithms are pretty robust, but the risk is big enough to spur some researchers to work on an alternative approach known as quantum teleportation. What is quantum teleportation? This may sound like science fiction, but it’s a real method that involves transmitting data wholly in quantum form. The approach relies on a quantum phenomenon known as entanglement. Quantum teleportation works by creating pairs of entangled photons and then sending one of each pair to the sender of data and the other to a recipient. When Alice receives her entangled photon, she lets it interact with a “memory qubit” that holds the data she wants to transmit to Bob. This interaction changes the state of her photon, and because it is entangled with Bob’s, the interaction instantaneously changes the state of his photon too. In effect, this “teleports” the data in Alice’s memory qubit from her photon to Bob’s. (The original article includes a graphic laying out the process in more detail.) Researchers in the US, China, and Europe are racing to create teleportation networks capable of distributing entangled photons. But getting them to scale will be a massive scientific and engineering challenge. The many hurdles include finding reliable ways of churning out lots of linked photons on demand, and maintaining their entanglement over very long distances—something that quantum repeaters would make easier. Still, these challenges haven’t stopped researchers from dreaming of a future quantum internet. What is a quantum internet? Just like the traditional internet, this would be a globe-spanning network of networks. The big difference is that the underlying communications networks would be quantum ones. It isn’t going to replace the internet as we know it today. Cat photos, music videos, and a great deal of non-sensitive business information will still move around in the form of classical bits. But a quantum internet will appeal to organizations that need to keep particularly valuable data secure.
It could also be an ideal way to connect information flowing between quantum computers, which are increasingly being made available through the computing cloud. China is in the vanguard of the push toward a quantum internet. It launched a dedicated quantum communications satellite called Micius a few years ago, and in 2017 the satellite helped stage the world’s first intercontinental, QKD-secured video conference, between Beijing and Vienna. A ground station already links the satellite to the Beijing-to-Shanghai terrestrial network. China plans to launch more quantum satellites, and several cities in the country are laying plans for municipal QKD networks. Some researchers have warned that even a fully quantum internet may ultimately become vulnerable to new attacks that are themselves quantum based. But faced with the hacking onslaught that plagues today’s internet, businesses, governments, and the military are going to keep exploring the tantalizing prospect of a more secure quantum alternative.
<urn:uuid:6d572112-f4ff-40e3-b000-d11a9a4ac2be>
CC-MAIN-2020-24
https://www.technologyreview.com/2019/02/14/103409/what-is-quantum-communications/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00391.warc.gz
en
0.93629
1,656
3.65625
4
/ The world based on numbers and relations / 11:15, restate my assumptions: 1. Mathematics is the language of nature; 2. Everything around us can be represented and understood through numbers; 3. If you graph these numbers, patterns emerge. Therefore: There are patterns everywhere in nature. – Max Cohen Mathematics (from the Greek word: mathema) means knowledge, study, learning and includes the study of such topics as quantity, structure, space and change. It seeks and uses patterns to formulate new conjectures; mathematicians resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. The brightest minds in history have used mathematics to lay the foundation for how we measure and understand our universe. Time and time again, we have proved that it only takes one simple formula to alter the course of humanity: Isaac Newton’s Law of Universal Gravitation Newton’s law explains why the planets move the way they do and how gravity works, both on Earth and throughout the universe. First published in the “Principia” in July 1687, the Law of Universal Gravitation was the de facto reference equation for nearly 200 years until Einstein’s Theory of General Relativity replaced it. Albert Einstein’s Theory of Relativity Einstein’s most famous undertaking is the generally accepted theory on the relationship between space and time. First proposed in 1905, the Theory of Relativity has both radically altered the course of physics and deepened our knowledge of the universe’s past, present and future. The Pythagorean Theorem This ancient theorem – first recorded circa 570–495 BC – is a fundamental principle in Euclidean Geometry and the basis for the definition of distance between two points. Pythagoras’ theorem also describes the relationship between the sides of a right triangle on a flat plane. James Clerk Maxwell’s set of equations describes how electric and magnetic fields are generated and altered, both by each other and by charges and currents. First published between 1861 and 1862, they are to classical electromagnetism what Newton’s laws of motion and universal gravitation are to classical mechanics. The Second Law of Thermodynamics Rudolf Clausius’ law states that energy always flows from higher concentrations to lower concentrations. It also states that whenever energy changes or moves, it becomes less useful. Formulated in 1865, it has led to the development of technologies like internal combustion engines, cryogenics and electricity generation. Logarithms were introduced by John Napier in the early 17th century as a way to simplify calculations. They answer the question, “How many of X number do we multiply to get Y number?” Logarithms were adopted by early navigators, scientists and engineers. Today, scientific calculators and digital computers do the work for us. The calculation shown is the definition of the derivative in differential calculus, one of calculus’ two major branches.
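The formula itself is not reproduced in this copy of the article; the limit definition it is almost certainly referring to is

f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},

read as the slope of the secant line through (x, f(x)) and (x+h, f(x+h)) as h shrinks to zero.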
The derivative measures the rate at which a quantity is changing — if you are walking 2 km an hour, then you will change your position by 2 km every hour. In the 1600s, Newton used calculus to develop his laws of motion and gravitation. This equation describes how the quantum state of a quantum system changes with time. Developed by Austrian physicist Erwin Schrödinger in 1926, it governs the behavior of atoms and subatomic particles in quantum mechanics. Schrodinger’s Equation paved the way for nuclear power, microchips, electron microscopes, and quantum computing. Information theory is a branch of mathematics that studies the coding of information in the form of sequences of symbols, and the speed at which that information can be transmitted. Applications of topics within information theory include data compression and channel coding. Research in the field was also instrumental in the development of the Internet and mobile phones. Chaos Theory is a branch of mathematics that studies complex systems whose behavior is extremely sensitive to slight changes in conditions. In essence, it shows how small alterations can lead to consequences of much greater scale. Chaos Theory has applications just about everywhere — meteorology, sociology, physics, computer science, engineering, economics, biology, and philosophy. In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several component wave functions in Pauli’s phenomenological theory of spin.
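As with the derivative, the remaining equations were evidently shown as images that did not survive extraction. For reference, the time-dependent Schrödinger equation and the Dirac equation discussed above are conventionally written as

i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) \;=\; \hat{H} \, \Psi(\mathbf{r}, t)
\qquad \text{and} \qquad
\left( i \gamma^{\mu} \partial_{\mu} - m \right) \psi \;=\; 0,

where \hat{H} is the Hamiltonian (total-energy) operator and the \gamma^{\mu} are the 4×4 Dirac matrices (the Dirac equation is given in natural units, with \hbar = c = 1).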
<urn:uuid:ac015dee-e1b7-4d36-821f-473e66023385>
CC-MAIN-2020-24
http://arsmagine.com/others/10-equations/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00392.warc.gz
en
0.928857
1,042
3.5625
4
During World War II the federal government launched the Manhattan Project to ensure the U.S. would possess the first atomic bomb. Seventy-five years later, America is in another contest just as vital to national security, the economy and even the future of liberal democracy. It’s the race to build the first fully operational quantum computer. America’s leading adversaries are working urgently to develop such a computer, which uses the principles of quantum mechanics to operate on data exponentially faster than traditional computers. Such a system theoretically would have enough computing power to open the encrypted secrets of every country, company and person on the planet. It would also enable a foreign creator to end America’s dominance of the information-technology industry and the global financial system. How does quantum computing work? In the bizarre world of quantum mechanics, electrons and photons can be in two states at once. All current computers process data in a linear sequence of one and zeros. Every bit, the smallest unit of data, has to be either a zero or a one. But a quantum bit, or “qubit,” can be a zero and a one at the same time, and do two computations at once. Add more qubits, and the computing power grows exponentially. This will allow quantum computers of the future to solve problems thousands of times as fast as today’s fastest supercomputer. This poses a problem for most encryption systems, because they are based on math problems that would take a conventional computer centuries to solve. The encryption that protects credit-card information and bank accounts, for instance, relies on two keys. One is the “private key,” which consists of two large prime numbers only known to the bank. The “public key” sits in cyberspace and is the product of multiplying together the two “private” primes to create a semiprime. The only way a hacker could access encrypted credit card or bank information would be by factorizing or breaking down the large “public key”—often 600 digits or longer—back to the correct two numbers of the “private key.” This Herculean task simply takes too long for current computers. A future quantum computer will be able to decrypt such systems almost instantaneously. Even Blockchain will not be able to withstand the first quantum attack if it relies on two-key encryption architecture, which protects nearly all digital information today. To understand the scale of the threat, imagine a thousand Equifax breaches happening at once. As a September article in the journal Nature noted: “Many commonly used cryptosystems will be completely broken once large quantum computers exist.” Most quantum experts believe that such a breakthrough may only be a decade away. If quantum computers will hold the key to the global future, the U.S. needs to secure that key. Scientists already know that quantum computing is possible. The problem now is engineering a system that takes full advantage of its potential. Since subatomic particles are inherently unstable, assembling enough qubits to do calculations takes persistence, time and resources. Quantum computers with 10 qubits already exist. A quantum computer capable of solving problems that would stump a classical computer is close at hand. Fifty qubits will mark the threshold of quantum supremacy. Other countries understand that. While most of the work on quantum computing in the U.S. 
is being done by companies like Google and Microsoft , the European Union has made quantum research a flagship project over the next 10 years and is committed to investing nearly €1 billion in the effort. Australia, the U.K. and Russia have entered the quantum race, too. But the real national leader in quantum research investment is China. This summer it launched the first satellite capable of transmitting quantum data. It’s building the world’s largest quantum research facility to develop a quantum computer specifically for code-breaking and supporting its armed forces, with quantum navigation systems for stealth submarines. Beijing is investing around $10 billion in the facility, which is to be finished in 2½ years. Today the U.S. government spends only $200 million a year on quantum research of all kinds, spread haphazardly over a variety of agencies—from the National Security Agency to the Energy Department. While IBM recently set a new benchmark with its 17-qubit processor, and Google insists it will reach the 50-qubit threshold before the end of this year, China is steadily advancing toward a 40-qubit prototype—and remains determined to reach “quantum supremacy.” At the same time, countries will need to revamp their encryption systems to keep up with the new quantum reality. The U.S. can achieve both goals through a new Manhattan Project. Call it the National Quantum Initiative. Like its atomic predecessor, the new program should marshal federal government money, the efficiencies of private industry, and the intellectual capital of the nation’s laboratories and universities, while keeping everyone focused on the essential mission: winning the quantum race. The Manhattan Project cost some $30 billion in today’s dollars. In comparison, the National Photonics Initiative has called for an additional $500 million of federal funding over five years to help the U.S. secure its grip on quantum supremacy. Recognizing this, Congress held its first hearings on a national initiative for quantum computing on Oct. 24. Congressional leaders should now pass a bill funding a National Quantum Initiative. Equally important is to make sure that America’s financial system, critical infrastructure and national-security agencies are fully quantum resistant. Companies and labs are currently developing algorithms and tamper-proof encryption based on quantum technology. But without a concerted and coherent national effort, it will take years for government and industry to agree on the standards for quantum-safe replacements for today’s encryption methods, and to make sure they are deployed in time to prevent a quantum attack. In a world of quantum proliferation, the risks are too great to ignore. Since the end of World War II, the U.S. has led the world in nuclear research, making this country stronger and safer. For three decades the U.S. has been the leader in information technology, which has made Americans more innovative and prosperous. The U.S. cannot afford to lose that leadership now—not when the future hangs in the quantum balance.
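To make the two-key arithmetic described earlier in the piece concrete, here is a deliberately tiny toy in Python: the "public" semiprime below can be factored by brute force in microseconds, whereas a real RSA modulus is hundreds of digits long and the same search is hopeless for classical machines — the gap that Shor's algorithm on a large quantum computer would erase. The primes and function names are illustrative only; this is not how production cryptography is implemented.

```python
# Toy illustration of two-key (RSA-style) arithmetic -- NOT real cryptography.
p, q = 1009, 2003            # the "private" primes (real keys use ~300-digit primes)
n = p * q                    # the "public" semiprime everyone can see: 2021027

def factor_by_trial_division(n):
    """The brute-force search a classical attacker would have to perform."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print("public key n =", n)
print("recovered private primes:", factor_by_trial_division(n))
# For a 2048-bit (roughly 617-digit) n, this loop would need on the order of
# 10^308 steps; Shor's algorithm on a sufficiently large quantum computer
# would factor the same number in polynomial time.
```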
<urn:uuid:304a1062-42e8-4fd3-9ffc-d674c2dbed4c>
CC-MAIN-2020-24
https://www.hudson.org/research/13969-the-computer-that-could-rule-the-world
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00393.warc.gz
en
0.931397
1,307
3.65625
4
Teleportation is one of the most highly anticipated and desired scientific advances of our time. The idea that one could send anything to anyone anywhere instantly is certainly appealing, but is it possible? Yes, but not as it’s been described in science fiction (sci-fi). Today, particles have been teleported several hundred kilometers away, but not physically. Instead of physically moving the particle to a destination, it is instead recreated elsewhere, while the original is altered. In contrast, sci-fi teleportation involves physically sending something, like a human being, to a predetermined location. While the technology in individual films varies, in many this is done by scanning a person’s body perfectly, down to the body’s sub-atomic particles’ quantum states, and then dematerializing that person and sending the scanned information to be rematerialized or reconstructed back into that original person somewhere else. However, this theory is fraught with multiple challenges, both philosophical and practical. The most common ethical concern is that this teleportation technology could also be used to create clones. In fact, based on the description above, the clones that could be made would be so perfect that there would be no possible way to tell the difference between the two, unlike biological clones. Because these copies are so perfect, this raises a number of existential questions about what a person’s identity means – if the clone is a perfect replica of the original, then aren’t they the same person? Fortunately for philosophers, the conveniently-named No Cloning Theorem shows that it would be impossible to create these perfect clones of complex systems, like people. Much simpler systems with known attributes, like a photon, could be cloned and have been teleported by taking advantage of quantum entanglement. Entangled particles are a set of quantum particles, like photons, that have properties that are, in a sense, co-dependent. Take, for example, the quantum property of spin. For any given particle, its spin can either be “spin up” or “spin down”, but it is impossible to know which for sure without making a measurement. Interestingly, once this property of the particle is measured it does not change. This is essentially like flipping a coin. While the coin is in the air, its landing position is uncertain, but once it lands, it will either have a heads or tails facing up, forever. Normally, the process of measuring the spin of one particle has no bearing on measuring the spin of any other particle in the universe. This is not true for entangled particles. Entangled particles are created such that the properties of one of the entangled particles, such as spin, must always be exactly opposite of the other when measured using the same procedure. While the two particles have interdependent properties, they are always unknown until the moment that they are measured. Once one of the entangled particles is measured, the other, unmeasured particle will immediately assume the opposite spin orientation. The incredible phenomenon of quantum entanglement not only enables new quantum communication protocols, but also makes teleportation possible. Entanglement and its properties are most easily explained with the previous example of flipping a coin. This time, imagine that you’ve flipped two, entangled quarters. As the two coins descend, you catch one and a friend catches another. Each of you then moves far away from the other without peeking at how the coins have landed.
While you’re moving away from each other and have not looked at the coin, you have no way of knowing how it landed. All you can be sure of is that your friend’s coin will be the opposite of yours. As a result, when you see how the coin in your hand landed, you would immediately know how your friend’s coin landed no matter how far away your friend was or whether they had looked at their coin. Quantum Teleportation in Practice Today, teleportation works very differently from the way it is portrayed in many sci-fi films. As discussed above, teleportation is unable to replicate complex systems, like people, or physically transport an object. What teleportation can do, however, is transmit detailed information quickly over large distances. Though this technology is still in its early stages, it holds a lot of promise in a variety of practical applications, such as cryptography. The biggest issue with teleportation and entanglement, as discussed thus far, is that there is no ability to choose what state will be sent where. (When two particles are entangled, their spins are entirely up to chance, even though they must be opposite of each other.) In order to intentionally teleport information, one needs to use at least three particles, two of which (say #1 and #2) must be entangled (EP) with one another. The third particle (#3) is the one whose information is to be teleported to someone who has possession of #2. In order for this to work, the same person must be in possession of #1 (EP) and #3 so that they have some means of sending information to the owner of #2 (EP). Without detailing the mathematical specifics, this process begins by making a measurement, called a Bell measurement, of #1 and #3 together. (If #3 were measured directly, it would be irreversibly changed.) Because particles #1 and #2 are entangled together, making this measurement will affect both of these particles. When this measurement is made, only four outcomes are possible for a given input. Once the owner of particle #1 records the outcome of the measurement, they can communicate it to the owner of #2 through some non-quantum channel (e.g. a phone). Because all possible outcomes are known, and #1 and #2 have properties opposite to each other, the owner of #2 can change their particle into an identical version of #3 using a relatively simple mathematical operation. Thus, #3 has been “teleported”.
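The three-particle protocol described above can be checked numerically with a small state-vector simulation. The sketch below follows the textbook version of teleportation (a Bell measurement on the unknown state and one half of the entangled pair, two classical bits, then one of four fixed corrections); the qubit ordering, gate conventions and variable names are our own choices, and a laboratory experiment of course manipulates photons or ions rather than NumPy arrays.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates and the two-qubit CNOT (control = first qubit of the pair).
I2 = np.eye(2)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)
H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The unknown state to teleport (the article's particle #3), chosen at random here.
theta, phi = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Entangled pair shared by sender (#1) and receiver (#2): (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Qubit order in the 8-dimensional state vector: (#3, #1, #2).
state = np.kron(psi, bell)

# Bell measurement of (#3, #1): CNOT, then a Hadamard, then read both qubits out.
state = np.kron(CNOT, I2) @ state
state = np.kron(H, np.eye(4)) @ state

amps = state.reshape(2, 2, 2)                  # indices: (#3, #1, #2)
probs = np.sum(np.abs(amps) ** 2, axis=2)      # joint outcome probabilities for (#3, #1)
m3, m1 = np.unravel_index(rng.choice(4, p=probs.ravel()), (2, 2))

# The receiver's qubit after the measurement collapses the other two.
bob = amps[m3, m1, :]
bob = bob / np.linalg.norm(bob)

# The two classical bits tell the receiver which of four fixed corrections to apply.
if m1:
    bob = X @ bob
if m3:
    bob = Z @ bob

print("Bell measurement result:", m3, m1)
print("fidelity with the original state:", abs(np.vdot(psi, bob)) ** 2)   # -> 1.0
```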
<urn:uuid:d7826f1c-c4cf-4403-b9a0-e57442b06d04>
CC-MAIN-2020-24
https://www.findlight.net/blog/2018/07/12/teleportation-quantum-entanglement/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413901.34/warc/CC-MAIN-20200601005011-20200601035011-00191.warc.gz
en
0.952257
1,225
3.734375
4
Two scientists at the University of Central Florida have discovered how to get a solid material to act like a liquid without actually turning it into liquid, potentially opening a new world of possibilities for the electronic, optics and computing industries. When chemistry graduate student Demetrius A. Vazquez-Molina took COF-5, a nano sponge-like, non-flammable manmade material, and pressed it into pellets the size of a pinkie nail, he noticed something odd when he looked at its X-ray diffraction pattern. The material’s internal crystal structure was arranged in a strange pattern. He took the lab results to his chemistry professor Fernando Uribe-Romo, who suggested he turn the pellets on their side and run the X-ray analysis again. The result: The crystal structures within the material fell into precise patterns that allow lithium ions to flow easily – as they would in a liquid. The findings, published in the Journal of the American Chemical Society earlier this summer, are significant because a liquid is necessary for some electronics and other energy uses. But using current liquid materials sometimes is problematic. For example, take lithium-ion batteries. They are among the best batteries on the market, charging everything from phones to hover boards. But they tend to be big and bulky because a liquid must be used within the battery to transfer lithium ions from one side of the battery to the other. This process stores and disperses energy. That reaction creates heat, which has resulted in cell phones exploding, hover boards bursting into flames, and even the grounding of some airplanes a few years ago that relied on lithium batteries for some of their functions. But if a nontoxic solid could be used instead of a flammable liquid, industries could really change, Uribe-Romo said. “We need to do a lot more testing, but this has a lot of promise,” he said. “If we could eliminate the need for liquid and use another material that was not flammable, would require less space and less packaging, that could really change things. That would mean less weight and potentially smaller batteries.” Smaller, nontoxic and nonflammable materials could also mean smaller electronics and the ability to speed up the transfer of information via optics. And that could mean innovations to communication devices, computing power and even energy storage. “This is really exciting for me,” said Vazquez-Molina who was a pre-med student before taking one of Uribe-Romo’s classes. “I liked chemistry, but until Professor Romo’s class I was getting bored. In his class I learned how to break all the (chemistry) rules. I really fell in love with chemistry then, because it is so intellectually stimulating.” Uribe-Romo has his high school teacher in Mexico to thank for his passion for chemistry. After finishing his bachelor’s degree at Instituto Tecnológico y de Estudios Superiores de Monterrey in Mexico, Uribe-Romo earned a Ph.D. at the University of California at Los Angeles. He was a postdoctoral associate at Cornell University before joining UCF as an assistant professor in 2013. Learn more: UCF Team Tricks Solid Into Acting as Liquid
<urn:uuid:8b342882-5030-4581-b0ed-60d3fef47fb1>
CC-MAIN-2020-24
https://www.innovationtoronto.com/2016/09/ucf-team-tricks-solid-into-acting-as-liquid/?shared=email&msg=fail
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492427.71/warc/CC-MAIN-20200605014501-20200605044501-00194.warc.gz
en
0.941136
1,322
3.71875
4
1: The Strangest Force
Begin your exploration of gravity with Isaac Newton and the famous story of the apple. Why was it such a breakthrough to connect a falling apple with the faraway moon? Review the essential characteristics of gravity and learn why small asteroids and large planets have such different shapes.

2: Free Fall and Inertia
Review three great discoveries by the "grandfather" of gravity research, Galileo Galilei. His most famous experiment may never have happened, but his principle of inertia, law of free fall, and principle of relativity are the basis for everything that comes later in the science of gravity, including key breakthroughs by Einstein.

3: Revolution in the Heavens
Drawing on ideas and observations of Nicolaus Copernicus and Tycho Brahe, Johannes Kepler achieved a great insight about gravity by discovering three laws of planetary motion, relating to the mathematics of orbits. The cause of planetary motion, he determined, must lie in the sun.

4: Universal Gravitation
See how Newton was able to finish Kepler's revolution by formulating the law of universal gravitation, which says that every object exerts an attractive force on every other object. Also explore Newton's related discovery of the three laws of motion, which underlie the science of mechanics.

5: The Art of Experiment
Learn how distances in the solar system were first determined. Then chart Henry Cavendish's historic experiment that found the value of Newton's gravitational constant. Cavendish's work allows almost everything in the universe to be weighed. Then see a confirmation of the equivalence principle, which says that gravitational and inertial mass are identical.

6: Escape Velocity, Energy, and Rotation
Begin the first of several lectures that dig deeper into Newton's laws than Newton himself was able to go. In this lecture, apply the key concepts of energy and angular momentum to study how gravity affects motion. As an example, use simple algebra to calculate the escape velocity from Earth. (A brief worked sketch of this calculation appears after the instructor biography below.)

7: Stars in Their Courses - Orbital Mechanics
Newton was the first to realize that objects could, in theory, be sent into orbit around Earth. Explore how this works in practice, using the ideas of energy and angular momentum to study how satellites, moons, planets, and stars move through space.

8: What Are Tides? Earth and Beyond
Trace the origin of tides to the simple fact that gravity varies from point to point in space. This leads not just to the rise and fall of the ocean, but to the gradual slowing of Earth's rotation, Saturn's spectacular ring system, volcanoes on Jupiter's moon Io, and many other phenomena.

9: Nudge - Perturbations of Orbits
For the next three lectures, study the effects of gravity on the motions of more than two bodies. Here, see how even very small orbital changes (small perturbations) are significant. Such effects have revealed the presence of unknown planets, both in our own solar system and around other stars.

10: Resonance - Surprises in the Intricate Dance
Resonance happens whenever a small periodic force produces a large effect on a periodic motion, for example when you push a child on a swing. Learn how resonance due to gravitational interactions between three bodies can lead to amazing phenomena with planets, asteroids, and rings of planets.

11: The Million-Body Problem
Consider the problem of gravitational interactions between millions of bodies, such as the countless stars in a galaxy. Amazingly, mathematics can reveal useful information even in these complicated cases. Discover how the analysis of the motions of galaxies led to the prediction of dark matter.

12: The Billion-Year Battle
Explore the physics of stars, which are balls of gas in a billion-year battle between the inward pull of gravity and the outward pressure produced by nuclear fusion. Follow this story to its ultimate finish: the triumph of gravity in massive stars that end their lives as black holes.

13: From Forces to Fields
For the rest of the course, focus on the revolutionary view of gravitation launched by Albert Einstein. Review new ideas about fields that allowed physics to extend beyond Newtonian mechanics. Then see how Einstein modified Newton's laws and created the special theory of relativity.

14: The Falling Laboratory
Einstein focused on gravity in his general theory of relativity. Hear about his "happiest thought": the realization that a man in free fall perceives gravity as zero. This simple insight resolved a mystery going all the way back to Newton and led Einstein to the startling discovery that gravity affects light and time.

15: Spacetime in Zero Gravity
In an influential interpretation of relativity, Einstein's former mathematics professor Hermann Minkowski reformulated the theory in terms of four-dimensional geometry, which he called spacetime. Learn how to plot events in this coordinate system in cases where gravity is zero.

16: Spacetime Tells Matter How to Move
See how gravity affects Minkowski's spacetime geometry, discovering that motion in a gravitational field follows the straightest path in curved spacetime. The curvature in spacetime is not caused by gravity; it is gravity. This startling idea is the essence of Einstein's general theory of relativity.

18: Light in Curved Spacetime
See how Einstein's general theory of relativity predicts the bending of light in a gravitational field, famously confirmed in 1919 by the British scientist Arthur Eddington. Learn how this phenomenon creates natural gravitational lenses, and how the bending of light reveals invisible matter in deep space.

19: Gravitomagnetism and Gravitational Waves
The general theory of relativity predicts new phenomena of gravity analogous to those of electromagnetism. Discover how ultra-sensitive experiments have detected the gravitomagnetism of the Earth, and follow the search for elusive gravitational waves that travel through space.

20: Gravity's Horizon - Anatomy of a Black Hole
Plunge into the subject of black holes, which are massive objects that have collapsed completely under their own gravity. Learn how black holes distort spacetime and explore the supermassive black holes that lie at the hearts of galaxies. Then ask: Are there such things as micro-black holes?

21: Which Universe Is Ours?
Investigate what Einstein called his "greatest mistake": his rejection of his own theory's prediction that spacetime should be dynamic and evolving. Chart the work of a group of scientists, including Alexander Friedmann, Georges Lemaître, and Edwin Hubble, who advanced the realization that our universe is expanding from an apparent big bang.

22: Cosmic Antigravity - Inflation and Dark Energy
Using everything you've learned about gravity, investigate cosmic antigravity, starting with cosmic inflation, a phenomenon that exponentially increased the size of the universe during the big bang. Then learn why dark matter cannot be made of ordinary protons and neutrons, and explore the recent discovery that the expansion of the universe is accelerating, powered by a mysterious dark energy.
23: The Force of Creation
Use a black hole to test the laws of thermodynamics, taking a deeper look at the capacity of gravity to pull matter together and increase entropy at the same time. Probe Stephen Hawking's most surprising discovery, and then learn that the same force that pulls the apple down and steers the stars in their courses is also nature's ultimate source of order and complexity.

24: The Next Revolution
Survey the greatest unsolved problem in theoretical physics: the search for a quantum theory of gravity. Examine string theory, loop quantum gravity, and also entropic gravity, which suggests a revolutionary link with thermodynamics. Close the course with a deepened appreciation for the connection between everyday features of gravity and the most exciting questions in contemporary physics and cosmology.

Gravity is about phenomena both near at hand, everyday and intuitive at the human scale, and far off at the astronomical scale.

About Benjamin Schumacher
Dr. Benjamin Schumacher is Professor of Physics at Kenyon College, where he has taught for 20 years. He received his Ph.D. in Theoretical Physics from The University of Texas at Austin in 1990. Professor Schumacher is the author of numerous scientific papers and two books, including Physics in Spacetime: An Introduction to Special Relativity. As one of the founders of quantum information theory, he introduced the term qubit, invented quantum data compression (also known as Schumacher compression), and established several fundamental results about the information capacity of quantum systems. For his contributions, he won the 2002 Quantum Communication Award, the premier international prize in the field, and was named a Fellow of the American Physical Society. Besides working on quantum information theory, he has done physics research on black holes, thermodynamics, and statistical mechanics. Professor Schumacher has spent sabbaticals working at Los Alamos National Laboratory and as a Moore Distinguished Scholar at the Institute for Quantum Information at the California Institute of Technology. He has also done research at the Isaac Newton Institute of Cambridge University, the Santa Fe Institute, the Perimeter Institute, the University of New Mexico, the University of Montreal, the University of Innsbruck, and the University of Queensland.
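For readers curious about the kind of calculation Lecture 6 refers to, here is a minimal worked sketch of the escape-velocity algebra. It is not taken from the course materials: the setup simply equates kinetic energy with the gravitational potential energy needed to escape, and the numerical values for the gravitational constant and Earth's mass and radius are standard reference figures.

\[
\tfrac{1}{2}\,m\,v_{\text{esc}}^{2} \;=\; \frac{G M m}{R}
\quad\Longrightarrow\quad
v_{\text{esc}} \;=\; \sqrt{\frac{2 G M}{R}}
\;\approx\; \sqrt{\frac{2\,\bigl(6.67\times10^{-11}\,\mathrm{N\,m^{2}\,kg^{-2}}\bigr)\bigl(5.97\times10^{24}\,\mathrm{kg}\bigr)}{6.37\times10^{6}\,\mathrm{m}}}
\;\approx\; 1.12\times10^{4}\ \mathrm{m/s} \;\approx\; 11.2\ \mathrm{km/s}.
\]

Note that the test mass m cancels, so the result is the same for a pebble or a spacecraft; that cancellation reflects the equality of gravitational and inertial mass touched on in Lecture 5.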