text
stringlengths
4.06k
10.7k
id
stringlengths
47
47
dump
stringclasses
20 values
url
stringlengths
26
321
file_path
stringlengths
125
142
language
stringclasses
1 value
language_score
float64
0.71
0.98
token_count
int64
1.02k
2.05k
score
float64
3.5
4.53
int_score
int64
4
5
Einstein wasn’t alone in the search of a Unified Field Theory. From Weyl’s version of metric tensors to Kaluza’s Fifth Dimension to Eddington’s Affine connection, there were many attempting to build a Unified Theory. Weyl’s Metric Tensor and Unified Field Theory The first attempt at a unified field theory wasn’t made directly by Einstein himself. Instead, it was by the German physicist and mathematician Hermann Weyl. However, Einstein and Weyl were in communication during this time, and they discussed some of the aspects of this problem together. So, at least to some extent, Einstein was involved. From Weyl’s perspective, there was one central challenge that made it so hard to combine general relativity and electromagnetism into one unified field theory. This challenge was that general relativity is a theory of geometry, while electromagnetism is not. Maxwell’s equations described the forces that act on electrically charged particles. They don’t involve any changes to the geometry of space or time. Weyl felt that if he wanted to merge these two theories together into a common framework, he would need to find a new geometrical way to formulate the theory of electromagnetism. In general relativity, the geometry of space and time is described by a mathematical object called the metric tensor. A tensor is essentially a special kind of matrix or array of numbers. In general relativity, the metric tensor is a 4×4 array of numbers, so it contains a total of sixteen entries. But of these sixteen quantities, six are redundant, so there are really only 10 independent numbers described by the metric tensor. And we need all 10 of these numbers just to describe the effects of gravity. The problem in combining general relativity with electromagnetism is that when we incorporate electromagnetism we need at least four more numbers at every point in space. This made it hard to see how one could explain both gravity and electromagnetism in terms of geometry. There just aren’t enough numbers in the metric tensor to describe both gravity and electromagnetism at the same time. To try to get around this problem, Weyl proposed a version of non-Euclidean geometry. In doing so, he argued that it was possible to construct a geometrical system that wasn’t limited to the 10 independent numbers. In addition to those 10 numbers, Weyl’s version of the metric tensor contained other additional quantities. And Weyl hoped that these additional numbers could somehow encode the effects of electromagnetism. The theory that Weyl ultimately came up with was very complicated. Although it was mathematically sound, physically, it just didn’t make much sense. After a series of exchanges with Einstein, even Weyl became convinced that his work hadn’t gotten them any closer to viable unified field theory. This is a transcript from the video series What Einstein Got Wrong. Watch it now, on The Great Courses Plus. Kaluza’s Fifth Dimension and Unified Field Theory Only a year later or so, another idea in this direction was proposed. This time by the mathematician Theodor Kaluza. Most people find Kaluza’s idea to be pretty strange and surprising. What he proposed was a unified field theory in which the space and time of our universe aren’t limited to four, but five dimensions. To see why a fifth dimension might be helpful in building a unified field theory, we need to remember metric tensor. A tensor is a 4×4 array of numbers, for a total of sixteen entries—10 of which are independent of each other. But tensor is a 4×4 array of numbers only because it was formulated in four-dimensional spacetime. If spacetime is five-dimensional, then the metric tensor will be a 5×5 array of numbers, for a total of twenty-five entries. After removing all of the redundant entries, the five-dimensional metric tensor contains fifteen independent quantities. 10 of these fifteen numbers are needed to describe gravity. And this leaves us with five others, which is more than enough to potentially encode the phenomena of electromagnetism. There is, though, one immediate and obvious objection that one might raise to Kaluza’s five-dimensional theory. As far as we can tell, our universe doesn’t have a fifth dimension. Fortunately, there is a way that a fifth dimension might be able to remain hidden in a system like Kaluza’s. In this geometrical system, the fifth dimension isn’t like the others. The three dimensions of space that we are familiar with are large, and as far as we know, they go on forever in any given direction. If there were an extra dimension like this, it would be impossible for us not to notice it. But the fifth dimension being imagined by Kaluza doesn’t go on forever. Instead, it’s wrapped up, or curled up, into a tiny circle. If something moved even a short distance along the direction of this fifth dimension, it would simply return to where it started. If the circumference of the fifth dimension is small enough, it would be almost impossible for us to perceive it. It was in 1919 that Kaluza described his idea to Einstein for the first time. And despite the fact that there were significant problems with the 5-dimensional theory, Einstein liked it a great deal. With Einstein’s help, Kaluza managed to publish his theory a couple of years later, in 1921. And only a few weeks after that, Einstein himself wrote and published an article that investigated some of the aspects of similar five-dimensional unified field theories. But, despite the enthusiasm, it was pretty clear that there were serious problems with Kaluza’s theory. Einstein, though, continued to work on this theory not because he thought it was a viable unified field theory, but because he thought it might lead to something more promising. After all, while Einstein was developing general relativity, he went through several incorrect versions of the gravitational field equations before he found the right answer. Learn more about Quantum Entanglement. Arthur Eddington’s Affine Connection and Unified Field Theory Another scientist who worked on unified field theories during this period of time was the famous astronomer and physicist Arthur Eddington. However, Eddington didn’t focus on expanding the metric tensor. In fact, he didn’t focus on the metric tensor at all. Instead, he focused on a different mathematical structure, known as the ‘affine connection’. In the end, Eddington didn’t really get any closer than Weyl or Kaluza to building a viable unified field theory. But Eddington’s work was important because his approach was quite different, and along with Kaluza, Eddington probably had the most influence on Einstein’s later efforts to develop such a theory. Learn more about what Einstein got right: Special Relativity. Einstein’s Early Work on Unified Field Theory Einstein himself began to focus on unified field theories in the early 1920s. During this period of time, he remained enthusiastic about the work that had been earlier done by both Kaluza and Eddington. In fact, a lot of Einstein’s early work in this area consisted of extending and building upon these earlier ideas. Einstein was deeply enthusiastic about this program of exploration. Although in this respect, he was relatively isolated since most physicists didn’t share his excitement. Quantum physics was developing rapidly, and that was occupying the bulk of the field’s attention during this time. Einstein was deeply unhappy with the developments occurring in quantum theory as it moved away from the predictive determinism. Einstein’s views about quantum mechanics also served to bolster his interest in unified field theories. In addition to unifying general relativity with electromagnetism, Einstein hoped that a unified field theory might also somehow be able to restore determinism and scientific realism to the quantum world. Common Questions about the Early Works on Unified Field Theory Yes, it’s possible to have a unified field theory similar to that of James Clerk Maxwell who successfully combined electric and magnetic fields into Electromagnetic theory. Unified field theory is an attempt to unify different fundamental forces and the relationships into a single theoretical framework. There have been many attempts at unified theories, some were successful, some failed. James Clerk Maxwell was the first one to create a unified field theory. He also combined electric and magnetic fields into Electromagnetic theory. The founding fathers of quantum theory are Niels Bohr, Max Planck, and, to a certain extent, Albert Einstein.
<urn:uuid:9c4bae1e-9b9e-4237-8a34-c396e49ad359>
CC-MAIN-2020-24
https://www.thegreatcoursesdaily.com/early-research-on-unified-field-theory/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396300.22/warc/CC-MAIN-20200527235451-20200528025451-00598.warc.gz
en
0.964436
1,842
3.578125
4
In 1965, Intel co-founder Gordon Moore published a remarkably prescient paper which observed that the number of transistors on an integrated circuit was doubling every two years and predicted that this pace would lead to computers becoming embedded in homes, cars and communication systems. That simple idea, known today as Moore’s Law, has helped power the digital revolution. As computing performance has become exponentially cheaper and more robust, we have been able to do a lot more with it. Even a basic smartphone today is more powerful than the supercomputers of past generations. Yet the law has been fraying for years and experts predict that it will soon reach its limits. However, I spoke to Bernie Meyerson, IBM’s Chief Innovation Officer, and he feels strongly that the end of Moore’s Law doesn’t mean the end of progress. Not by a long shot. What we’ll see though is a shift in emphasis from the microchip to the system as a whole. Going Beyond Silicon The end of Moore’s Law is not a new issue. In fact, Meyerson argues that it first began unraveling in 2003, when insulating components within transistors began failing due to quantum mechanical effects. Since then, chip manufacturers have been finding new materials that are more resistant to decay in their basic atomic properties and progress has continued. However, sometime around 2020, these workarounds will no longer suffice as the silicon itself yields to quantum mechanical reality. Some researchers, including at IBM, are pursuing strategies like carbon nanotubes and silicon photonics that have the potential to increase chip speeds even without having to shrink chips to quantum scale. Other approaches, such as quantum computing and neuromorphic chips, change the nature of computing itself and can be exponentially more efficient for certain tasks, such as pattern recognition in the case of neuromorphic chips and encryption in the case of quantum computers. Still, you wouldn’t want either of these running your word processor. As Meyerson put it, “Quite frankly, for general purpose computing all that stuff isn’t very helpful and we’ll never develop it in time to make an impact beyond specialized applications over the next 5 or 10 years. For the practical future, we need to change our focus from chip performance to how systems perform as a whole by pursuing both hardware and software strategies.” Integrating the Integrated Circuit One way of increasing performance is by decreasing distance at the level of the system. Currently, chips are designed in two dimensions to perform specific functions, such as logic chips, memory chips and networking chips. Although none of them can do much by themselves, acting in concert they allow us to do extremely complex tasks on basic devices. So one approach to increasing performance, called 3D stacking, would simply integrate those integrated circuits into a single three-dimensional chip. This is harder than it sounds, because entirely new chip designs have to be devised, but it would vastly reduce the time circuits need to wait for instructions from each other and increase speed significantly while decreasing power dramatically due to far shorter communication paths. In truth, this is not a new strategy but rather one that was deployed in the 1960’s to overcome a challenge called the tyranny of numbers. Simply put, the physical requirements of wiring thousands of transistors together was putting practical limitations on what could be designed and built. That’s what led to the invention of integrated circuits in the first place. Meyerson says, “when we moved from transistors to integrated circuits, we shrunk an entire rack measuring about 40 cubic feet down to a single board measuring 19 x 26 inches. 3D stacking will shrink that board down to less than a square inch and we can potentially get an increase in power performance of at least 10-100 fold. Building Intelligently Agile Systems In the 1980’s, chip manufacturers began building specialized types of chips, called ASICs, that were highly optimized for specific tasks, such as running complex financial models. These would significantly outperform conventional chips for those specific tasks, but ultimately, the process of hardwiring proved too expensive and unwieldy to be a viable strategy. Yet Meyerson sees vastly more potential in a newer approach called FPGA, that can be re-purposed on the fly through software. He points to Intel’s recent purchase of Altera as a strong indication that things are moving in that direction. It is well known that in specific applications FPGA’s can produce gains of ten-fold or more in computing performance, but most importantly, that system level gain is not restricted to a single application. The FPGA approach is a major improvement because rather than going through a roughly 18-month process to design and manufacture a specialized chip, the same thing can be done in a matter of weeks. However, Meyerson thinks the potential may actually be far greater than that if we can build intelligent software that can reprogram the chips autonomically. “So for example,” Meyerson says,” while you’re writing a document, your laptop would be configured to do exactly that, but if you then needed to run a simulation of some financial data for that same report, your system would re-optimize itself for deep computations required. Such “intelligent” architectures and the enabling software are the next grand challenge in IT.” “Take this idea a little further,” he continues “and you can see how new technologies like neuromorphic chips and quantum computing can deliver an enormous impact even as specialized systems in the cloud. Imagine being able to access the capabilities of a neuromorphic system for photo recognition and search while shopping, and then instantly switch to a quantum computer to facilitate the transaction with unbreakable encryption.” The Future of Technology is all too human Back in 1965, when Gordon Moore formulated his famous law, computers were enormous hunks that few people ever saw. After 20 years of continuous doubling, we got personal computers small enough to fit under our desks, but powerful enough to generate a graphical display and interact with us through a keyboard and a mouse. 20 more years gave us the mobile revolution. The future of technology is always more human and Meyerson expects that, ”by 2020, we’ll still be improving system performance exponentially, but we’ll have to change our conception of information technology once again, this time from machines that store, analyze and retrieve information to systems that are active partners in a very natural human/machine collaboration.” “The cognitive era will be ultimate bridge across the digital divide,” he notes, “spanning barriers of not only technology but that of language, education and skill level as well. IT will essentially become so advanced that it disappears along with previous institutional barriers. Even a teenager will have access to resources that only the most well-equipped research facilities have today and they will be able to access it in real time.” But perhaps the most important consequence of Meyerson’s vision of cognitive computing is not how it will change how we work with computers, but with each other. Before the industrial era, people were valued for their ability to do physical work. In the knowledge economy, those with strong cognitive skills were considered “the best and the brightest.” Now, we will likely see a new shift in value. In the future, when machines can do cognitive tasks more effectively than any human, we will likely find that competitive advantage will go to those who can collaborate effectively, with both people and machines. So the key to the future lies not so much in chips and algorithms as it does within ourselves. Wait! Before you go… Choose how you want the latest innovation content delivered to you: - Daily — RSS Feed — Email — Twitter — Facebook — Linkedin Today - Weekly — Email Newsletter — Free Magazine — Linkedin Group Greg Satell is a popular speaker and consultant. His first book, Mapping Innovation: A Playbook for Navigating a Disruptive Age, is coming out in 2017. Follow his blog at Digital Tonto or on Twitter @Digital Tonto.
<urn:uuid:bb514873-8f6a-4070-b610-8db5356813cb>
CC-MAIN-2020-24
https://disruptorleague.com/2017/01/05/moores-law-will-soon-end-but-progress-doesnt-have-to/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347404857.23/warc/CC-MAIN-20200529121120-20200529151120-00597.warc.gz
en
0.955438
1,689
3.5
4
‘Cheerios Effect’ Forces Directly Measured for the First Time In a finding that could be useful in designing small aquatic robots, researchers have measured the forces that cause small objects to cluster together on the surface of a liquid — a phenomenon known as the “Cheerios effect.” The researchers used a custom built apparatus to measure the forces using magnetism. Credit: Harris Lab / Brown University There’s an interesting fluid dynamics phenomenon that happens every morning in millions of cereal bowls. When there are just a few bits of cereal left floating on top of the milk, they tend to cluster together in the middle or around the edges of the bowl, rather than dispersing across the surface. Now a team of Brown University researchers has developed a way to measure the forces involved in this type of clustering. It’s the first time, the researchers say, that these forces have been experimentally measured in objects at the millimeter/centimeter scale. And the implications of the work go far beyond cereal bowls — the results could be useful in guiding the self-assembly of micromachines or in designing microscale robots that operate in and around water. “There have been a lot of models describing this Cheerios effect, but it’s all been theoretical,” said Ian Ho, an undergraduate student at Brown and lead author of a paper describing the work. “Despite the fact that this is something we see every day and it’s important for things like self-assembly, no one had done any experimental measurements at this scale to validate these models. That’s what we were able to do here.” The research was published in Physical Review Letters on December 19, 2019. Ho’s co-authors were Giuseppe Pucci, a visiting scholar at Brown, and Daniel Harris, an assistant professor in Brown’s School of Engineering. The Cheerios effect arises from the interaction of gravity and surface tension — the tendency of molecules on the surface of a liquid to stick together, forming a thin film across the surface. Small objects like Cheerios aren’t heavy enough to break the surface tension of milk, so they float. Their weight, however, does create a small dent in the surface film. When one Cheerio dent gets close enough to another, they fall into each other, merging their dents and eventually forming clusters on the milk’s surface. In order to test just how strongly Cheerios — and other objects in the Cheerio size and weight range — attract each other, the researchers used a custom-built apparatus that uses magnetism to measure forces. The experiment involves two Cheerio-sized plastic disks, one of which contains a small magnet, floating in a small tub of water. Electrical coils surrounding the tub produce magnetic fields, which can pull the magnetized disk away while the other is held in place. By measuring the intensity of the magnetic field at the instant the disks begin moving away from each other, the researchers could determine the amount of attractive force. “The magnetic field gave us a non-mechanical way of applying forces to these bodies,” Harris said. “That was important because the forces we’re measuring are similar to the weight of a mosquito, so if we’re physically touching these bodies we’re going to interfere with the way they move.” The experiments revealed that a traditional mathematical model of the interaction actually under-predicts the strength of the attraction when the disks are positioned very close together. At first, the researchers weren’t sure what was happening, until they noticed that as two disks draw closer, they start to tilt toward each other. The tilt causes the disk to push harder against the surface of the liquid, which in turn increases the force by which the liquid pushes back. That extra push results in a slightly increased attractive force between the disks. “We realized that there was one extra condition that our model wasn’t satisfying, which was this tilt,” Harris said. “When we added that one ingredient to the model, we got much better agreement. That’s the value of going back and forth between theory and experiment.” The findings could be useful in the design of microscale machines and robots, the researchers say. There’s interest, for example, in using small spider-like robots that can skitter across the surface of water to do environmental monitoring. This work sheds light on the kinds of forces these robots would encounter. “If you have multiple little machines moving around or two or more legs of a robot, you need to know what forces they exert on each other,” Harris said. “It’s an interesting area of research, and the fact that that we could contribute something new to it is exciting.” Reference: “Direct Measurement of Capillary Attraction between Floating Disks” by Ian Ho, Giuseppe Pucci and Daniel M. Harris, 19 December 2019, Physical Review Letters. - MIT Engineers Explain Why Puddles Stop Spreading - Scientists Reveal Blueprint for How to Construct a Large Scale Quantum Computer - 2D Material, Just 3 Atoms Thick, Has Potential for Use in Quantum Computing - Solving the Mystery of Quantum Light in Thin Layers – Exotic Phenomenon Finally Explained - OLYMPUS Experiment Shows Two Photons Are Exchanged During Electron-Proton Interactions - Yale Physicists Discover Signs of a Time Crystal - Hubble Image of the Week – Irregular Dwarf Galaxy NGC 4789A - Physicists Present a New Theory on the Origin of Dark Matter - Inhibition of FKBP51 Protein Reduces Obesity and Diabetes - Hubble Image of the Week – MCG+01-38-004 and MCG+01-38-005 - Image Shows a Staggeringly Powerful Event Occurred Near Center of the Milky Way - Martian Rocks May Harbor Signs of Life From 4 Billion Years Ago - Earth from Space: Tromsø, Norway [Video] - Two Potentially Habitable Super-Earths and ‘Cold Neptune’ Found Orbiting Nearby Stars
<urn:uuid:cf0472be-8ef1-449d-9be9-e7eaf1b8a5ef>
CC-MAIN-2020-24
http://xianso.com/Article/2928
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390448.11/warc/CC-MAIN-20200526050333-20200526080333-00399.warc.gz
en
0.926995
1,297
3.828125
4
Even though quantum computers are still in their crawling phase, computer scientists continue to push their limits. Recently, a group of scientists used a two-qubit quantum system to model the energies of a hydrogen molecule and found that using an iterative algorithm to calculate each digit of the phase shift gave very accurate results. Their system, while not directly extensible, has the potential to help map the energies of more complex molecules and could result in significant time and power savings compared to classical computers. There are some situations, like quantum states of particles, that classical computers can only approximate, and they often do so quite poorly, with high degrees of uncertainty despite extensive computing time. For modeling quantum situations, there's no better tool than another quantum system that can be used to store and process the relevant data, as quantum computers can explore many possible states at once (though only one state can be measured as the outcome). First, a quick rundown on quantum computers: while a regular computer processes and uses bits comprised of zeroes and ones, a quantum computer processes qubits, or bits that can store a superposition of both zero and one that will be in only one of these states when read out. In other words, when a qubit is measured, its superposition collapses to one of its available states (in this case, zero or one). Modeling the energy levels of molecular hydrogen requires calculating the distance between the two atoms and the effect of different levels of excitation. A group of scientists designed a three-step method to handle this: encode the wave function of the molecule using qubits, simulate time evolution with quantum logic gates, and then use an iterative phase estimation algorithm to reduce the error, using the output of each trial as the input for the next. To get the energy, they calculated the phase shift of the molecule's wave function as a series of bits, and calculated one bit at a time with the qubit system. To model a hydrogen molecule (two bonded hydrogen atoms), scientists injected two photons into an optical circuit, with each photon's polarization representing the encoding for a "control" qubit and a "register" qubit. The register represents an eigenstate, or one accepted energy configuration of the hydrogen molecule, and the control is in an equal superposition of a vertical and horizontal polarization. The photons are then passed through a logic gate that represents an evolution of the wave function over time. The gate polarizes the control photon, forcing it to collapse into either a vertical or horizontal state. It also performs an operation on the register photon if the control comes out of the gate horizontally polarized, or leaves the register photon alone if the control becomes vertically polarized. The position of the control photon is measured and converted to a bit—0 for horizontal, and 1 for vertical. This represents one pass through the optical circuit. The algorithm used 31 samples, or photon pairs, for each bit, and a "majority vote" was taken using all the samples—the resulting number is used as one digit in the binary expansion of the phase shift. The next iteration put all the photon pairs through the same circuit again, using the output of the first iteration as input for the second. With each new time around, the output was used to simulate a different time point in the evolution of the hydrogen molecule. The least significant digit was always calculated first, then the next most significant, as this order allows for the best estimation of the most significant digits. The finished product looks something like 0.01001011101011100000, and varies depending on the excitation of the atoms and their distance from each other. Researchers found that they could calculate the phase shift out to 20 significant digits before the least significant digit stopped strongly favoring one value over another. The results of the experiment mirrored very closely the energy curves of a hydrogen molecule as a function of the atomic separation, indicating that this is an excellent method for studying the energies of molecules. While the general approach to the problem, in particular the use of an iterative algorithm to estimate the phase of the wave function, proved accurate, the system that was used is only applicable to the hydrogen molecule. Simulating larger molecules requires more qubits and logic gates, which decrease the accuracy of measurements. The precision in the hydrogen molecule system is high because the error introduced by one gate is always a constant, and can be corrected for by the classical method of the majority vote. If it were possible to look at the system after each gate, correct the error, and continue on, large systems requiring multiple logic gates would work pretty well. However, a quantum system can only be observed once it has run its course, and each logic gate roughly doubles the error each time the photons pass through it. Therefore, some new quantum correction techniques will have to be introduced before quantum computers can take on larger molecules. Despite these limitations, the work is an important demonstration of the promise of quantum computing, and shows that the techniques we already have can be put to practical use. Nature, 2010. DOI: 10.1038/NCHEM.483
<urn:uuid:b4465dfe-a824-4520-892f-ee41e4b54a53>
CC-MAIN-2020-24
https://arstechnica.com/science/2010/01/2-qubit-quantum-system-used-to-model-the-hydrogen-molecule/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388427.15/warc/CC-MAIN-20200525095005-20200525125005-00001.warc.gz
en
0.924351
1,033
4
4
A compass needle made by a Magnetoreceptor and assisted by Cryptochromes Here it is: In yellow, a polymer made of five molecules of the protein termed Magneto-Receptor (MagR) - five loops hosting iron. It is surrounded by cryptochrome proteins Credit: S. Qin et al. Nature Mater. http://dx.doi.org/10.1038/nmat4484 (2015). "Together with Cry, it forms a nanoscale ‘needle’: a rod-like core of CG8198 (MagR) polymers with an outer layer of Cry proteins that twists around the core (see 'Protein biocompass')." "Using an electron microscope, Xie’s team saw assemblies of these rods orienting themselves in a weak magnetic field in the same way as compass needles." "Many birds have a compass in their eyes. Their retinas are loaded with a protein called cryptochrome, which is sensitive to the Earth’s magnetic fields. It’s possible that the birds can literally see these fields, overlaid on top of their normal vision. This remarkable sense allows them to keep their bearings when no other landmarks are visible." Following the experiments of Researcher Lauren Foley from the University of Massachusetts Medical School we can create artificial magnetic fields resembling the Earth's field and place for instance the fly food always in the south. Flies would learn to go to the south to find food. Transgenic flies that do not have cryptochromes in their retinas could not find the south. When the human gene was inserted in their genome, their capacity to find the pole with the food was restored. Excerpts from an article by The Guardian “The nanoscale biocompass has the tendency to align itself along geomagnetic field lines and to obtain navigation cues from a geomagnetic field,” said Xie. “We propose that any disturbance in this alignment may be captured by connected cellular machinery, which would channel information to the downstream neural system, forming the animal’s magnetic sense.” “It has been well documented that cryptochromes, which are crucial to the compass proposed in this new paper, may harness significant quantum effects to convert the Earth’s weak magnetic field into a signal in the animal’s brain. This is a tantalising possibility since the new UK quantum technology hubs are focusing about a quarter of their £150M on sensor systems. It would be remarkable if we can learn some tricks from Mother Nature in this highly-advanced field of physics,” he added. 3 examples from The Guardian Enzymes: "enzymes make use of a remarkable trick called quantum tunnelling to accelerate biochemical reactions. Essentially, the enzyme encourages electrons and protons to vanish from one position in a biomolecule and instantly rematerialise in another, without passing through the gap in between – a kind of quantum teleportation." Photosynthesis: "Energy packet was not hopping haphazardly about, but performing a neat quantum trick. Instead of behaving like a localised particle travelling along a single route, it behaves quantum mechanically, like a spread-out wave, and samples all possible routes at once to find the quickest way." European robin: "an internal chemical compass that utilises an astonishing quantum concept called entanglement, which Einstein dismissed as “spooky action at a distance”. This phenomenon describes how two separated particles can remain instantaneously connected via a weird quantum link. The current best guess is that this takes place inside a protein in the bird’s eye, where quantum entanglement makes a pair of electrons highly sensitive to the angle of orientation of the Earth’s magnetic field, allowing the bird to “see” which way it needs to fly." Specialised/Mechanistic article Magnetoreception and the radical pair mechanism: The discovery of magnetite (Fe3O4) in the human brain in 1992 by Dr. Joesph Kirschvink at CalTech Using an ultrasensitive superconducting magnetometer in a clean-lab environment, we have detected the presence of ferromagnetic material in a variety of tissues from the human brain. These magnetic and high-resolution transmission electron microscopy measurements imply the presence of a minimum of 5 million single-domain crystals per gram for most tissues in the brain and greater than 100 million crystals per gram for pia and dura. Biogenic magnetite in the human brain may account for high-field saturation effects observed in the T1 and T2 values of magnetic resonance imaging and, perhaps, for a variety of biological effects of low-frequency magnetic fields. Another study by Dr. Joesph Kirschvink Magnetite Minerals in the Human Brain: What Is Their Role? Production of single-domain magnetite throughout life by sockeye salmon, Oncorhynchus nerka. Similar results in humans
<urn:uuid:8020ebf4-e576-4052-b021-f278e19b419a>
CC-MAIN-2020-24
https://www.information-book.com/biology-medicine/magnetoreception/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394074.44/warc/CC-MAIN-20200527110649-20200527140649-00003.warc.gz
en
0.916471
1,033
3.515625
4
In the last section we covered the basics of qubits and introduced the concept of logical and physical qubits. In this section we will look at some of the most popular ways to create qubits and discuss their advantages. It’s important to remember that the goal is to find a method that can be scaled up into a large system, since the power of a quantum computer can grow exponentially with size. We can’t just measure the success of one method on it’s performance today, but also the challenges we might face when attempting to build a larger machine. This implementation of qubit is seeing a lot of success, being the main focus of IBM and Google’s large universal quantum computers. These qubits use phenomena found in electric superconducting circuits to create a quantum two-state system. Advantages: Superconducting qubits have fast gate times (faster operation time), meaning similar computations can be performed much more quickly than on other qubits (e.g. ion trap), this is important since useful computations will likely have millions of logical gates (operations). Additionally, the technology behind superconducting qubits can take advantage of existing methods and processes (such as printable circuits) that we have already spent years improving. As a result it is much easier to envisage a scalable superconducting quantum computer than with other existing methods. Disadvantages: Superconducting qubits have fast decoherence times, meaning their ‘memory’ is very short lived and we need more error correcting qubits to compensate. Since superconducting qubits can normally only interact with the handful of qubits next to them on the device, we need extra operations to perform most algorithms. They also must be kept very cold (below 100mK, or 0.1 degrees above absolute zero) which can be expensive and inconvenient. Finally, each superconducting qubit is slightly different and must be calibrated which could cause problems on larger systems. Fun Fact: The qubits being used in IBM’s chips are superconducting transmon qubits. ‘Transmon’ comes from plasmon qubits and the transmission line added to disperse troublesome low frequencies (the transmission line was actually dropped for a capacitor, but the name was too catchy to change!). Ion Trap Qubits Ion trap quantum computing is perhaps easier to understand for the average layperson than with superconducting quantum computers. Ion trap computers literally trap ions (charged atoms) using magnetic fields and hold them in place, the outermost electron orbiting the nucleus can then be put in different states and used as a qubit. Advantages: A big feature of ion trap computers is their stability; the qubits have much longer decoherence times than those in a superconducting quantum computer. While an ion trap computer can operate at room temperature, to get the best performance the ions need to be cooled. Fortunately around 4K (four degrees above absolute zero) seems to be sufficient which is much cheaper and easier than the 0.1K needed by superconducting qubits. Finally, the connections between ion trap qubits can be reconfigured meaning each qubit can interact with each other qubit in the computer, avoiding some of the computational overhead found with superconducting chips. Disadvantages: Ion trap computers are generally significantly slower than their superconducting counterparts, which is a big problem. While they do not need to be kept as cold, the ions do need to be in a high vacuum. The technology involved in creating ion traps is not as mature as with superconducting qubits, we will need to see large improvements in the area before we can imagine a scalable system. Other Types of Qubits Superconducting and Ion trap quantum computers are currently the most serious and viable attempts to creating a useful, universal quantum computer. There are other technologies capable of creating usable qubits and we cover a couple here: Photonic qubits (made from particles of light) can theoretically be used to create a universal quantum computer, but in practice this is hard to achieve. Instead they could be good candidates for quantum key distribution. Quantum key distribution is a non-computational application of quantum information used to securely exchange cryptographic keys. Quantum key distribution is still in it’s infancy and faces the difficult problem of transporting qubits between the two parties, photons are very stable over long distances and so far seem the best candidates for the job. Topological Qubits operate on quite a different principle to the other qubits we have seen in this article. Topological quantum computation uses anyons, (a type of particle that occurs in 2D systems) to create stabler quantum systems. Anyons can be moved around in relation to each other to affect their state. Importantly, the state is affected by the number of rotations around each other, this can create a type of braid which is where the ‘topological’ part of the name comes from. Unfortunately we can’t confirm that the type of anyon needed to create a universal quantum computer have not been experimentally confirmed yet, and this model of quantum computation will remain theoretical for the time being.
<urn:uuid:c3753d47-dc76-4d28-b019-1eca9930595d>
CC-MAIN-2020-24
https://thequantumdaily.com/2019/11/03/introduction-to-qubits-part-2/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348526471.98/warc/CC-MAIN-20200607075929-20200607105929-00004.warc.gz
en
0.937272
1,072
4.09375
4
Some people want to move mountains. Kunal Das, Ph.D., assistant professor of physics, wants to move electrons. Das is a theoretical physicist researching an area where the classical rules of physics no longer apply—the nanoscale universe of quantum physics, a submicroscopic world where particles defy common sense. In that mysterious world of the ultra-small, Das is searching for new ways to move the currents that power computers. “When the first computers came along in the 1960s, they were huge objects which filled up an entire room and had miniscule computing power,” Das says, as he gestures to his computer in his Freeman Hall office. “How is it that today we have something this compact and with this much more power? Today, every two years computers become twice as fast and half as big.” Computers are powered by electronic circuitry in which currents move large clusters of electrons at a time to feed a tiny computer chip. The number of electrons needed for each operation has gotten smaller with time. But within 20 years, Das says, computers will reach a point where each operation could be done by just one electron, and thus won’t be able to get any faster or any smaller. What then? Where will technology go? Already, scientists are experimenting with storing information not in bits, but in qubits (or quantum bits), which can potentially store much larger amount of information than traditional bits. Can a “quantumchip” be in the offing? That’s where quantum mechanics come in. Das has focused his research on adiabatic electron pumps, which can be used to control the flow of individual or entangled pairs of electrons in order to power quantum computers. Quantum computers, which are still in their infancy, have the potential to perform certain calculations significantly faster than any silicon-based computer. Quantum mechanics have become very important partly because, at the qubit level, individual particles of matter play essential roles. The current that powers the computer no longer flows as a cluster of electrons, but as one electron at a time; and such motion is governed by quantum mechanics. “In classical physics, we talk about currents flowing continuously, like water,” Das says. “At the nanoscale, your current is comprised of individual electrons, and it is discrete as opposed to continuous.” In other words, if you were to look at water flowing through a pipe, you would discover that at the submicroscopic level it is made of molecules that are discrete from one another, like individual grains of sand. The problem is that the super-small world of quantum mechanics is notoriously unpredictable. In fact, an electron at the quantum level has a few weird characteristics that stem from the fact that quantum mechanics is all about probabilities, not absolutes. “An electron, from a quantum mechanical perspective, does not behave like it does in classic physics, where it always acts like a particle,” Das says. “Here, it acts like a particle some of the time and like a wave some of the time. It has wave-particle duality, and it becomes probabilistic, meaning you cannot say for sure that the electron is definitely here. It might have some probability of it being here, or some probability of it being there. That’s what makes quantum mechanics strange and confusing to the layperson.” An adiabatic electron pumping system is complex, but Das describes it as a mechanism that manipulates the shape of the “quantum wavefunction” of an electron, by varying such things as voltage or a magnetic field at the nanoscale. Das is researching how to apply the pumping system to single electrons and also to pairs of “entangled” electrons in which one electron can affect another even when separated by vast distances. He hopes that his research will ultimately lead to a dependable system of moving currents of electrons in a precisely controlled way without destroying their fragile quantum state, which is essential to powering quantum computers. “Once we start using the wave nature of electrons and the probabilistic nature of quantum mechanics, we can potentially do certain computations tremendously faster,” he says. At this point, quantum computers have not yet been built, although some experiments have been carried out. Research is being done at a frantic pace, however, as such systems would be invaluable to national security, Das says. “All existing encryption systems are based upon the fact that we cannot crack them with the computers that we have available now,” says Das. “With a quantum mechanical algorithm, you could crack encryption methods very fast.” There are also potential applications to teleportation, Das says, but not of the Star Trek variety—at least not yet. What you could teleport is the state of an electron,” he says. “We could transfer those properties to a location which is far away, but not the physical object itself. So, in a sense, in quantum mechanics, you can be in two places at the same time.”
<urn:uuid:f0415986-dc70-483b-81f4-cba412042ec0>
CC-MAIN-2020-24
https://news.fordham.edu/science/physicist-studies-nature-of-quantum-mechanics-and-the-submicroscopic-world-of-qubits/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347426801.75/warc/CC-MAIN-20200602193431-20200602223431-00405.warc.gz
en
0.965836
1,060
3.609375
4
Physicists were stunned when two twisted sheets of graphene showed signs of superconductivity. Now Stanford scientists have shown that the wonder material also generates a type of magnetism once only dreamed of theoretically. By Ker Than Sometimes the best discoveries happen when scientists least expect it. While trying to replicate another team’s finding, Stanford physicists recently stumbled upon a novel form of magnetism, predicted but never seen before, that is generated when two honeycomb-shaped lattices of carbon are carefully stacked and rotated to a special angle. The authors suggest the magnetism, called orbital ferromagnetism, could prove useful for certain applications, such as quantum computing. The group describes their finding in the July 25 issue of the journal Science. “We were not aiming for magnetism. We found what may be the most exciting thing in my career to date through partially targeted and partially accidental exploration,” said study leader David Goldhaber-Gordon, a professor of physics at Stanford’s School of Humanities and Sciences. “Our discovery shows that the most interesting things turn out to be surprises sometimes.” The Stanford researchers inadvertently made their discovery while trying to reproduce a finding that was sending shockwaves through the physics community. In early 2018, Pablo Jarillo-Herrero’s group at MIT announced that they had coaxed a stack of two subtly misaligned sheets of carbon atoms – twisted bilayer graphene – to conduct electricity without resistance, a property known as superconductivity. The discovery was a stunning confirmation of a nearly decade-old prediction that graphene sheets rotated to a very particular angle should exhibit interesting phenomena. When stacked and twisted, graphene forms a superlattice with a repeating interference, or moiré, pattern. “It’s like when you play two musical tones that are slightly different frequencies,” Goldhaber-Gordon said. “You’ll get a beat between the two that’s related to the difference between their frequencies. That’s similar to what you get if you stack two lattices atop each other and twist them so they’re not perfectly aligned.” Physicists theorized that the particular superlattice formed when graphene rotated to 1.1 degrees causes the normally varied energy states of electrons in the material to collapse, creating what they call a flat band where the speed at which electrons move drops to nearly zero. Thus slowed, the motions of any one electron becomes highly dependent on those of others in its vicinity. These interactions lie at the heart of many exotic quantum states of matter. “I thought the discovery of superconductivity in this system was amazing. It was more than anyone had a right to expect,” Goldhaber-Gordon said. “But I also felt that there was a lot more to explore and many more questions to answer, so we set out to try to reproduce the work and then see how we could build upon it.” A series of fortunate events While attempting to duplicate the MIT team’s results, Goldhaber-Gordon and his group introduced two seemingly unimportant changes. First, while encapsulating the honeycomb-shaped carbon lattices in thin layers of hexagonal boron nitride, the researchers inadvertently rotated one of the protective layers into near alignment with the twisted bilayer graphene. “It turns out that if you nearly align the boron nitride lattice with the lattice of the graphene, you dramatically change the electrical properties of the twisted bilayer graphene,” said study co-first author Aaron Sharpe, a graduate student in Goldhaber-Gordon’s lab. Secondly, the group intentionally overshot the angle of rotation between the two graphene sheets. Instead of 1.1 degrees, they aimed for 1.17 degrees because others had recently shown that twisted graphene sheets tend to settle into smaller angles during the manufacturing process. “We figured if we aim for 1.17 degrees, then it will go back toward 1.1 degrees, and we’ll be happy,” Goldhaber-Gordon said. “Instead, we got 1.2 degrees.” An anomalous signal The consequences of these small changes didn’t become apparent until the Stanford researchers began testing the properties of their twisted graphene sample. In particular, they wanted to study how its magnetic properties changed as its flat band – that collection of states where electrons slow to nearly zero – was filled or emptied of electrons. While pumping electrons into a sample that had been cooled close to absolute zero, Sharpe detected a large electrical voltage perpendicular to the flow of the current when the flat band was three-quarters full. Known as a Hall voltage, such a voltage typically only appears in the presence of an external magnetic field – but in this case, the voltage persisted even after the external magnetic field had been switched off. This anomalous Hall effect could only be explained if the graphene sample was generating its own internal magnetic field. Furthermore, this magnetic field couldn’t be the result of aligning the up or down spin state of electrons, as is typically the case for magnetic materials, but instead must have arisen from their coordinated orbital motions. “To our knowledge, this is the first known example of orbital ferromagnetism in a material,” Goldhaber-Gordon said. “If the magnetism were due to spin polarization, you wouldn’t expect to see a Hall effect. We not only see a Hall effect, but a huge Hall effect.” Strength in weakness The researchers estimate that the magnetic field near the surface of their twisted graphene sample is about a million times weaker than that of a conventional refrigerator magnet, but this weakness could be a strength in certain scenarios, such as building memory for quantum computers. “Our magnetic bilayer graphene can be switched on with very low power and can be read electronically very easily,” Goldhaber-Gordon said. “The fact that there’s not a large magnetic field extending outward from the material means you can pack magnetic bits very close together without worrying about interference.” Goldhaber-Gordon’s lab isn’t done exploring twisted bilayer graphene yet. The group plans to make more samples using recently improved fabrication techniques in order to further investigate the orbital magnetism.
<urn:uuid:d77dc554-b727-4bec-8950-8cfc69818c87>
CC-MAIN-2020-24
https://www.miragenews.com/physicists-discover-new-quantum-trick-for-graphene-magnetism/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00411.warc.gz
en
0.952239
1,317
3.625
4
In a simple experiment, when a pair of flashlights is shone in a dark room such that their light beams cross each other, does one notice anything strange? The rather anticlimactic answer is maybe not. The reason for this is the individual photons that make up light merely pass each other by – similar to indifferent spirits in the night – and do not interact in any way. However, what will happen if light particles are allowed to interact, repelling and attracting each other similar to atoms in ordinary matter? Light sabers provide one interesting, although sci-fi possibility. These are beams of light that are capable of pulling and pushing on each other, making for stunning, epic confrontations. Or, in a more likely case, two light beams could meet and combine into a single, luminous stream. This would mean that the rules of physics may need to be tweaked to realize such optical behavior, but actually, researchers at Harvard University, MIT, and elsewhere, have now shown that photons can certainly be made to interact — a major breakthrough that could pave the way for applying photons in quantum computing, if not in light sabers. The research team, headed by Vladan Vuletic, the Lester Wolfe Professor of Physics at MIT, and Professor Mikhail Lukin from Harvard University, published the results of the study in the Science journal. The scientists reported that they have viewed groups of three photons interacting and, in effect, binding together to create an entirely new kind of photonic matter. During controlled experiments, the team observed that when a very weak laser beam is shone via a dense cloud of ultracold rubidium atoms, the photons stick together in triplets or pairs instead of exiting the cloud as single, arbitrarily spaced photons. This indicates that some kind of interaction — in this case, attraction — is occurring among them. Usually, photons lack mass and can travel at 300,000 kilometers per second (the speed of light), the research team noted that the bound photons have in fact attained a fraction of an electron’s mass. The newly weighed-down light particles were also quite sluggish, and compared to normal non-interacting photons, travel about 100,000 times slower. According to Vuletic, the results show that photons can certainly entangle, or attract, each other. If the photons can be made to interact in other different ways, they may be harnessed to perform very fast, extraordinarily complex quantum computations. “The interaction of individual photons has been a very long dream for decades,” Vuletic says. Co-authors of Vuletic include Sergio Cantu, Qi-Yung Liang, and Travis Nicholson from MIT, Aditya Venkatramani and Lukin of Harvard University, Alexey Gorshkov and Michael Gullans of the University of Maryland, Cheng Ching of the University of Chicago, and Jeff Thompson from Princeton University. The MIT-Harvard Center for Ultracold Atoms is headed by Vuletic and Lukin and together they have been exploring ways – both experimental and theoretical – to promote interactions between photons. The effort finally paid off in 2013, because for the first time, the scientists observed the interaction and binding between pairs of photons, producing a whole new state of matter. In their latest study, the team wondered if interactions could occur between not just two photons, but more. For example, you can combine oxygen molecules to form O2 and O3 (ozone), but not O4, and for some molecules you can’t form even a three-particle molecule. So it was an open question: Can you add more photons to a molecule to make bigger and bigger things? In order to find out, the researchers used the same experimental method which they utilized to observe the interactions between two photons. In this process, a cloud of rubidium atoms is first cooled to ultracold temperatures, i.e., just a millionth of a degree above absolute zero. When the atoms are cooled, they slow down to a near standstill. The researchers then shone an extremely weak laser beam through this cloud of immobilized atoms — the laser beam was so weak that only a handful of photons were able to pass through the cloud at any single time. They then determined the photons as they exit the other side of the atom cloud. In the latest experiment, it was observed that the photons streamed out as triplets and pairs, instead of coming out of the cloud at haphazard intervals, since single photons have nothing to do with each other. Besides tracking the rate and number of photons, the researchers measured the photons’ phase, both before and after passing through the cloud of immobilized atoms. The phase of a photon suggests its frequency of oscillation. “The phase tells you how strongly they’re interacting, and the larger the phase, the stronger they are bound together,” Venkatramani explains. The researchers noted that when three-photon particles simultaneously exited the atom cloud, their phase was moved compared to what it was before when there was no interaction between the photons, and was in fact three times larger than the phase shift of two-photon particles. “This means these photons are not just each of them independently interacting, but they’re all together interacting strongly.” The team then came up with a theory to describe what might have made the photons to interact in the first place. The researchers’ model, based on physical principles, presents the following scenario: As one photon moves via the cloud of rubidium atoms, it shortly lands on an adjoining atom prior to skipping to next atom, similar to a bee flitting from one flower to another, until it comes to the other end. Similarly, if another photon is concurrently traveling through the cloud of rubidium atoms, it can also briefly land on a rubidium atom and form a polariton, a hybrid that is part atom and part photon. The two polaritons can then interact with each other through their atomic component. The atoms remain where they are at the edge of the cloud, whilst the photons exit, still bound together. The team noted that this same phenomenon can take place with three photons, producing an even stronger bond than the two-photon interactions. “What was interesting was that these triplets formed at all,” Vuletic says. “It was also not known whether they would be equally, less, or more strongly bound compared with photon pairs.” The whole interaction inside the atom cloud takes place over a millionth of a second, and this interaction activates the photons to stay bound together, even after they have exited the cloud. “What’s neat about this is, when photons go through the medium, anything that happens in the medium, they ‘remember’ when they get out,” Cantu says. This means that photons that have interacted with one another, in this case via an attraction between them, can be believed to be as strongly entangled, or correlated — an important property for any quantum computing bit. Photons can travel very fast over long distances, and people have been using light to transmit information, such as in optical fibers. If photons can influence one another, then if you can entangle these photons, and we’ve done that, you can use them to distribute quantum information in an interesting and useful way. In the future, the researchers will investigate ways to force other interactions, for example, repulsion, where photons might scatter off each other similar to billiard balls. “It’s completely novel in the sense that we don’t even know sometimes qualitatively what to expect,” Vuletic says. “With repulsion of photons, can they be such that they form a regular pattern, like a crystal of light? Or will something else happen? It’s very uncharted territory.” The National Science Foundation partly supported the research.
<urn:uuid:1389850a-c28e-4a8c-91de-c63bc73b8c40>
CC-MAIN-2020-24
https://www.azooptics.com/News.aspx?newsID=23710
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410535.45/warc/CC-MAIN-20200530231809-20200531021809-00015.warc.gz
en
0.94921
1,639
3.796875
4
Table Of Contents Quantum Computing is the breakthrough technology that is growing at a fast pace. Today, each passing day it sees new development and the day will not be too far when you will be able to purchase Quantum computers from the market. But, in today’s date, nothing can be said about this technology development. Read more> Share this post [DISPLAY_ULTIMATE_SOCIAL_ICONS] Quantum Computing & Challenges to Develop Quantum Computers What is Quantum Computing? The physics at microscopic levels of atoms or subatomic particles is completely different as compared at macroscopic levels. At those small scales, or at quantum levels, the concept of dual nature comes into play. Dual nature refers to the existence of both particle nature and wave nature, for example, an electron can exhibit both particle nature as well as wave nature. The branch of physics which deals at these smallest scales is termed as Quantum physics. The computation that can be done exploiting concepts of quantum physics is termed as Quantum computing. Quantum computing started with Richard Feynman and afterward many scientists contributed to the development of Quantum computing. The computers you use nowadays use classical physics concept, i.e. the physics at macroscopic or large levels. At macroscopic levels, the physics is much simpler and the algorithms used in computers for calculations uses general physics laws – Newton laws, Maxwell equations etc. But, at the quantum level, these classical laws and equations fails and are not valid. This is because of the appearance of Planck’s constant in quantum equations and an increase of the uncertainty at microscopic levels. Now, you can imagine how difficult it is to solve quantum problems. How do Quantum Computers Work? In digital electronics, you must have studied bits, i.e. 0 and 1. The classical computers, that you use today are completely based on these bits for calculation and all processes or tasks completed by them. ‘0’ means the electrical signal is ‘OFF’ and ‘1’ means the electrical signal is ‘ON’. Similarly, Qubits are the quantum bits which help in quantum computing. What are the Qubits? Qubits are bits ‘0’, ‘1’ with a coherent superposition of 0 and 1. The superposition concept comes into play because of the wave nature at the smallest scale. This can be explained with the help of electron spin. The electron has two spins – spin up and spin down. Spin up refers to classical bit ‘0’ and spin down refers to classical bit ‘1’. But, quantum allows them another state also, i.e. superposition of 0 and 1. Quantum computers work with Qubits but as you can see above, as the number of Qubits increases, it becomes more and more difficult to reach to the solution of a computation problem. The external environment also disturbs the system, therefore Quantum computing becomes almost impossible. The state of the computer become unpredictable due to this, which is called as Decoherence. Also, if we try to calculate the state of one particle in a two-particle system. Consequently, the state of another particle gets affected due to interactions between the particles. This phenomenon is termed as Quantum Entanglement. Unlike, Classical computers that we use today, we cannot manufacture Quantum computers using Transistors and Diodes. You can say this is a big problem in the manufacture of Quantum computers. But, there are other technologies which can solve this problem. One such technology is to use Quantum dots in the manufacture of Quantum computers. In Quantum dots, a single electron is trapped in between the atoms which looks like a cage of atoms. What are the challenges in Quantum Computing & to Develop Quantum Computers? There are several challenges in the field of Quantum computing. Also, there are many problems which need to be solved to help Quantum computers not remain confined to a few big laboratories. These are – - Decoherence Prevention. - Research on Quantum Algorithms that Increase the speed of Quantum Computers. - To create quantum computers portable – In Today’s date, the size of the Quantum computers is too large that it can only be used in big laboratories. - Quantum Error Correction. - Designing of better Quantum circuitry. - A search of other technologies to manufacture Quantum computers. - Designing better processors to store more Qubits. Do Quantum Computers Exist and Where? Yes, Quantum Computers exist but it is only confined to big laboratories for research and development purposes. The big names in Quantum computers manufacturers are – IBM, D-Wave systems, and Google. Recently this year, in 2018, Google get a lead in the development of 72 Qubit Processor which is the best Qubit Processor till now. Previously, the highest processor that could store Qubits is 20 Qubits processor which was developed by IBM. But, now its Google’s 72 Qubit Processor which they named Bristlecone. We hope that you understood much about Quantum computing. To get daily updates about our published posts, follow us on Facebook, Twitter or subscribe with us to receive emails of published posts. Also, don’t forget to like and comment below. You may also like>
<urn:uuid:ef6fc88a-6582-448f-8351-ca992c39c7b3>
CC-MAIN-2020-24
https://classytec.com/quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439928.61/warc/CC-MAIN-20200604094848-20200604124848-00216.warc.gz
en
0.932186
1,104
3.890625
4
A team of researchers from MIT, Google, the University of Sydney, and Cornell University present a new quantum error correcting code that requires measurements of only a few quantum bits at a time to ensure consistency between one stage of a computation and the next. Quantum computers are largely theoretical devices that could perform some computations exponentially faster than conventional computers can. Crucial to most designs for quantum computers is quantum error correction, which helps preserve the fragile quantum states on which quantum computation depends. The ideal quantum error correction code would correct any errors in quantum data, and it would require measurement of only a few quantum bits, or qubits, at a time. But until now, codes that could make do with limited measurements could correct only a limited number of errors — one roughly equal to the square root of the total number of qubits. So they could correct eight errors in a 64-qubit quantum computer, for instance, but not 10. In a paper they’re presenting at the Association for Computing Machinery’s Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University present a new code that can correct errors afflicting — almost — a specified fraction of a computer’s qubits, not just the square root of their number. And for reasonably sized quantum computers, that fraction can be arbitrarily large — although the larger it is, the more qubits the computer requires. “There were many, many different proposals, all of which seemed to get stuck at this square-root point,” says Aram Harrow, an assistant professor of physics at MIT, who led the research. “So going above that is one of the reasons we’re excited about this work.” Like a bit in a conventional computer, a qubit can represent 1 or 0, but it can also inhabit a state known as “quantum superposition,” where it represents 1 and 0 simultaneously. This is the reason for quantum computers’ potential advantages: A string of qubits in superposition could, in some sense, perform a huge number of computations in parallel. Once you perform a measurement on the qubits, however, the superposition collapses, and the qubits take on definite values. The key to quantum algorithm design is manipulating the quantum state of the qubits so that when the superposition collapses, the result is (with high probability) the solution to a problem. But the need to preserve superposition makes error correction difficult. “People thought that error correction was impossible in the ’90s,” Harrow explains. “It seemed that to figure out what the error was you had to measure, and measurement destroys your quantum information.” The first quantum error correction code was invented in 1994 by Peter Shor, now the Morss Professor of Applied Mathematics at MIT, with an office just down the hall from Harrow’s. Shor is also responsible for the theoretical result that put quantum computing on the map, an algorithm that would enable a quantum computer to factor large numbers exponentially faster than a conventional computer can. In fact, his error-correction code was a response to skepticism about the feasibility of implementing his factoring algorithm. Shor’s insight was that it’s possible to measure relationships between qubits without measuring the values stored by the qubits themselves. A simple error-correcting code could, for instance, instantiate a single qubit of data as three physical qubits. It’s possible to determine whether the first and second qubit have the same value, and whether the second and third qubit have the same value, without determining what that value is. If one of the qubits turns out to disagree with the other two, it can be reset to their value. In quantum error correction, Harrow explains, “These measurement always have the form ‘Does A disagree with B?’ Except it might be, instead of A and B, A B C D E F G, a whole block of things. Those types of measurements, in a real system, can be very hard to do. That’s why it’s really desirable to reduce the number of qubits you have to measure at once.” A quantum computation is a succession of states of quantum bits. The bits are in some state; then they’re modified, so that they assume another state; then they’re modified again; and so on. The final state represents the result of the computation. In their paper, Harrow and his colleagues assign each state of the computation its own bank of qubits; it’s like turning the time dimension of the computation into a spatial dimension. Suppose that the state of qubit 8 at time 5 has implications for the states of both qubit 8 and qubit 11 at time 6. The researchers’ protocol performs one of those agreement measurements on all three qubits, modifying the state of any qubit that’s out of alignment with the other two. Since the measurement doesn’t reveal the state of any of the qubits, modification of a misaligned qubit could actually introduce an error where none existed previously. But that’s by design: The purpose of the protocol is to ensure that errors spread through the qubits in a lawful way. That way, measurements made on the final state of the qubits are guaranteed to reveal relationships between qubits without revealing their values. If an error is detected, the protocol can trace it back to its origin and correct it. It may be possible to implement the researchers’ scheme without actually duplicating banks of qubits. But, Harrow says, some redundancy in the hardware will probably be necessary to make the scheme efficient. How much redundancy remains to be seen: Certainly, if each state of a computation required its own bank of qubits, the computer might become so complex as to offset the advantages of good error correction. But, Harrow says, “Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it’s been easier to increase the number of logical qubits than to increase the distance — the number of errors you can correct. So we’re hoping that will be the case for ours, too.” Stephen Bartlett, a physics professor at the University of Sydney who studies quantum computing, doesn’t find the additional qubits required by Harrow and his colleagues’ scheme particularly daunting. “It looks like a lot,” Bartlett says, “but compared with existing structures, it’s a massive reduction. So one of the highlights of this construction is that they actually got that down a lot.” “People had all of these examples of codes that were pretty bad, limited by that square root ‘N,’” Bartlett adds. “But people try to put bounds on what may be possible, and those bounds suggested that maybe you could do way better. But we didn’t have constructive examples of getting here. And that’s what’s really got people excited. We know we can get there now, and it’s now a matter of making it a bit more practical.” PDF Copy of the Study: Sparse Quantum Codes from Quantum Circuits Image: Jose-Luis Olivares/MIT
<urn:uuid:3b89e7ba-f4f0-454f-81f2-56e77e8d4261>
CC-MAIN-2020-24
https://scitechdaily.com/researchers-develop-a-new-quantum-error-correcting-code/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00417.warc.gz
en
0.940029
1,550
3.625
4
Quantum Internet can be used to send messages that cannot be hacked, increase GPS accuracy and enable cloud quantum computing. For more than twenty years, dreams of creating such a quantum network have remained largely unattainable due to the difficulty of sending quantum signals over long distances without loss. Now, researchers at Harvard and the Massachusetts Institute of Technology have found a way to fix signal loss with a prototype quantum node that can capture, store, and confuse bits of quantum information. Research is the missing link to practical quantum internet and an important step forward in the development of distant quantum networks. “This demonstration is a conceptual breakthrough that can expand the maximum possible range of quantum networks and potentially open many new applications in a way that is impossible using any existing technologies,” said Mikhail Lukin, professor of physics named after George Fasmer Leverett and co-director of the Harvard Quantum Initiative. “This is the realization of the goal that was achieved by our quantum science and the engineering community for over two decades. " Study published in Nature, All types of communication technologies – from the first telegraph to the modern fiber-optic Internet – should have taken into account the fact that signals deteriorate and are lost when transmitting over long distances. The first transponders that receive and amplify signals to correct this loss were designed to amplify telegraph signals with wire fading in the mid-1800s. Two hundred years later, repeaters are an integral part of our long-distance communications infrastructure. In a classic network, if Alice in New York wants to send Bob a message in California, the message moves from coast to coast more or less in a straight line. Along the way, the signal passes through repeaters, where it is read, amplified, and corrected for errors. The whole process is vulnerable to attacks at any time. However, if Alice wants to send a quantum message, the process will be different. Quantum networks use quantum particles of light – individual photons – to transmit quantum states of light over long distances. These networks have a trick that classical systems do not have: entanglement. Obfuscation – what Einstein called "eerie action at a distance" – allows bits of information to be perfectly correlated at any distance. Since quantum systems cannot be observed without change, Alice could use entanglement to tell Bob without fear of eavesdroppers. This concept is the basis for applications such as quantum cryptography – security, which is guaranteed by the laws of quantum physics. However, long-distance quantum communication is also affected by the usual loss of photons, which is one of the main obstacles to the implementation of large-scale quantum Internet. But the same physical principle that makes quantum communication super-safe also makes it impossible to use existing classical repeaters to eliminate information loss. How can you amplify and correct a signal if you cannot read it? The solution to this seemingly impossible task involves the so-called quantum repeater. Unlike classical repeaters, which amplify a signal through an existing network, quantum repeaters create a network of entangled particles through which a message can be transmitted. In essence, a quantum repeater is a small specialized quantum computer. At each stage of such a network, quantum repeaters should be able to capture and process the quantum bits of quantum information to correct errors and store them long enough for the rest of network be ready. Until now, this has been impossible for two reasons: firstly, single photons are very difficult to capture. Secondly, quantum information is notoriously fragile, which makes it very difficult to process and store for long periods of time. Lukin Laboratory, in collaboration with Marco Loncar, Professor of Electrical Engineering Tiancai Lin at the Harvard School of Engineering and Applied Sciences John A. Paulson (SEAS), Hongkun Park, Mark Hyman Jr., professor of chemistry at Harvard School of Arts and Sciences (FAS), and Dirk Englund, associate professor of electrical engineering and computer science at the Massachusetts Institute of Technology (MIT), are working to use a system that can both perform well. tasks are silicon vacancies of color centers in diamonds. These centers are tiny defects in the atomic structure of diamond that can absorb and emit light, causing the brilliant colors of diamond. “Over the past few years, our laboratories have worked to understand and control individual silicon color centers, especially how to use them as quantum memory devices for single photons,” said Mihir Bhaskar, a graduate student of the Lukina group. Researchers have integrated a separate color center into the cavity of a diamond nanotube, which limits information photons and forces them to interact with a single color center. They then placed the device in a dilution refrigerator, which reaches a temperature close to absolute zero, and sent individual photons via fiber optic cables to the refrigerator, where they were effectively captured and caught by the center of color. A device can store quantum information in milliseconds — long enough to transmit information over thousands of kilometers. Electrodes embedded around the cavity were used to supply control signals for processing and storing information stored in memory. “This device combines the three most important elements of a quantum repeater – long memory, the ability to efficiently capture information from photons and the way it is processed locally,” said Bart Machielse, a graduate student at the Laboratory for Nanoscale Optics. "Each of these problems was resolved separately, but not one device combined all three." “We are currently working on expanding this study by incorporating our quantum memories into real urban fiber optic channels,” said Ralph Reading, Ph.D. in Lukin’s group. “We plan to create large networks of entangled quantum memories and explore the first applications of quantum Internet.” “This is the first demonstration at the system level, combining the main achievements in the field of nanotechnology, photonics and quantum control, which demonstrates a clear quantum advantage in the transmission of information using quantum repeater nodes. We look forward to exploring new unique applications using these methods, ”Lukin said. Experimental memory demonstration of enhanced quantum communication, Nature (2020). DOI: 10.1038 / s41586-020-2103-5 https://nature.com/articles/s41586-020-2103-5 Researchers Demonstrate Missing Link For Quantum Internet (2020, March 23) restored March 23, 2020 This document is protected by copyright. Other than honest deals for private study or research, no Part may be reproduced without written permission. Content is provided for informational purposes only.
<urn:uuid:28041ec0-f823-49f5-a208-28defa7e3504>
CC-MAIN-2020-24
https://www.newsround.net/researchers-demonstrate-missing-link-for-quantum-internet/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00219.warc.gz
en
0.91612
1,346
3.9375
4
Physicists have theorized that a new type of material, called a three-dimensional (3D) topological insulator (TI), could be a good candidate from which to create qubits that will be resilient from these errors and protected from losing their quantum information. This material has both an insulating interior and metallic top and bottom surfaces that conduct electricity. The most important property of 3D topological insulators is that the conductive surfaces are predicted to be protected from the influence of the surroundings. Few studies exist that have experimentally tested how TIs behave in real life. A new study from the University of Utah found that in fact, when the insulating layers are as thin as 16 quintuple atomic layers across, the top and bottom metallic surfaces begin to influence each other and destroy their metallic properties. The experiment demonstrates that the opposite surfaces begin influencing each other at a much thicker insulating interior than previous studies had shown, possibly approaching a rare theoretical phenomenon in which the metallic surfaces also become insulating as the interior thins out. "Topological insulators could be an important material in future quantum computing. Our findings have uncovered a new limitation in this system", stated Vikram Deshpande, assistant professor of physics at the University of Utah and corresponding author of the study. "People working with topological insulators need to know what their limits are. It turns out that as you approach that limit, when these surfaces start 'talking' to each other, new physics shows up, which is also pretty cool by itself." The new study published on July 16, 2019 in the journal Physics Review Letters . Imagine a hardcover textbook as a 3D topological insulator, Vikram Deshpande said. The bulk of the book are the pages, which is an insulator layer - it can't conduct electricity. The hardcovers themselves represent the metallic surfaces. Ten years ago, physicists discovered that these surfaces could conduct electricity, and a new topological field was born. Vikram Deshpande and his team created devices using 3D TIs by stacking five few-atom-thin layers of various materials into sloppy sandwich-like structures. The bulk core of the sandwich is the topological insulator, made from a few quintuple layers of bismuth antimony tellurium selenide (Bi2-xSbxTe3-ySey). This core is sandwiched by a few layers of boron nitride, and is topped off with two layers of graphite, above and below. The graphite works like metallic gates, essentially creating two transistors that control conductivity. Last year Vikram Deshpande led a study that showed that this topological recipe built a device that behaved like you would expect - bulk insulators that protect the metallic surfaces from the surrounding environment. In this study, they manipulated the 3D TI devices to see how the properties changed. First, they built van der Waal heterostructures - those sloppy sandwiches - and exposed them to a magnetic field. Vikram Deshpande's team tested many at his lab at the University of Utah and first author Su Kong Chong, doctoral candidate at the U, traveled to the National High Magnetic Field Lab in Tallahassee to perform the same experiments there using one of the highest magnetic fields in the country. In the presence of the magnetic field, a checkerboard pattern emerged from the metallic surfaces, showing the pathways by which electrical current will move on the surface. The checkerboards, consisting of quantized conductivities versus voltages on the two gates, are well-defined, with the grid intersecting at neat intersection points, allowing the researchers to track any distortion on the surface. They began with the insulator layer at 100 nanometers thick, about a thousandth of the diameter of a human hair, and progressively got thinner down to 10 nanometers. The pattern started distorting until the insulator layer was at 16 nanometers thick, when the intersection points began to break up, creating a gap that indicated that the surfaces were no longer conductive. "Essentially, we've made something that was metallic into something insulating in that parameter space. The point of this experiment is that we can controllably change the interaction between these surfaces", stated Vikram Deshpande. "We start out with them being completely independent and metallic, and then start getting them closer and closer until they start 'talking', and when they're really close, they are essentially gapped out and become insulating." Previous experiments in 2010 and 2012 had also observed the energy gap on the metallic surfaces as the insulating material thins out. But those studies concluded that the energy gap appeared with much thinner insulating layers - five nanometers in size. This study observed the metallic surface properties break down at much larger interior thickness, up to 16 nanometers. The other experiments used different "surface science" methods where they observed the materials through a microscope with a very sharp metallic tip to look at every atom individually or studied them with highly energetic light. "These were extremely involved experiments which are pretty far removed from the device-creation that we are doing", stated Vikram Deshpande. Next, Vikram Deshpande and the team will look more closely into the physics creating that energy gap on the surfaces. He predicts that these gaps can be positive or negative depending on material thickness. Other authors who contributed to the study are Kyu Bum Han and Taylor Sparks from the University's Department of Materials Science and Engineering.
<urn:uuid:f3f39f4c-c862-4fd2-90e1-d13e1576a092>
CC-MAIN-2020-24
http://primeurmagazine.com/weekly/AE-PR-08-19-100.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00019.warc.gz
en
0.956467
1,129
3.734375
4
Results are first to suggest how to engineer even warmer superconductors with atom-by-atom control A study at the Department of Energy’s SLAC National Accelerator Laboratory suggests for the first time how scientists might deliberately engineer superconductors that work at higher temperatures. In their report, a team led by SLAC and Stanford University researchers explains why a thin layer of iron selenide superconducts — carries electricity with 100 percent efficiency — at much higher temperatures when placed atop another material, which is called STO for its main ingredients strontium, titanium and oxygen. These findings, described today in the journal Nature, open a new chapter in the 30-year quest to develop superconductors that operate at room temperature, which could revolutionize society by making virtually everything that runs on electricity much more efficient. Although today’s high-temperature superconductors operate at much warmer temperatures than conventional superconductors do, they still work only when chilled to minus 135 degrees Celsius or below. In the new study, the scientists concluded that natural trillion-times-per-second vibrations in the STO travel up into the iron selenide film in distinct packets, like volleys of water droplets shaken off by a wet dog. These vibrations give electrons the energy they need to pair up and superconduct at higher temperatures than they would on their own. “Our simulations indicate that this approach – using natural vibrations in one material to boost superconductivity in another – could be used to raise the operating temperature of iron-based superconductors by at least 50 percent,” said Zhi-Xun Shen, a professor at SLAC and Stanford University and senior author of the study. While that’s still nowhere close to room temperature, he added, “We now have the first example of a mechanism that could be used to engineer high-temperature superconductors with atom-by-atom control and make them better.” Spying on Electrons The study probed a happy combination of materials developed two years ago by scientists in China. They discovered that when a single layer of iron selenide film is placed atop STO, its maximum superconducting temperature shoots up from 8 degrees to nearly 77 degrees above absolute zero (minus 196 degrees Celsius). While this was a huge and welcome leap, it would be hard to build on this advance without understanding what, exactly, was going on. The Latest on: Superconductor via Google News The Latest on: Superconductor - American Superconductor: Fiscal 4Q Earnings Snapshoton June 2, 2020 at 2:10 pm AYER, Mass. (AP) _ American Superconductor Corp. (AMSC) on Tuesday reported a loss of $5.9 million in its fiscal fourth quarter. The Ayer, Massachusetts-based company said it had a loss of 27 cents ... - American Superconductor EPS beats by $0.01, beats on revenueon June 2, 2020 at 1:11 pm GAAP EPS of -$0.27 beats by $0.01. Revenue of $18.14M (+24.3% Y/Y) beats by $0.25M. Shares +2.4%. Press Release ... - American Superconductor (AMSC) Presents At Craig-Hallum Institutional Investor Conference - Slideshowon May 30, 2020 at 8:20 am The following slide deck was published by American Superconductor Corporation in conjunction with this event. Download PDF 120 Click to enlarge Notes: ... - A predicted superconductor might work at a record-breaking 200° Celsiuson May 29, 2020 at 1:07 pm - AMSC American Superconductor (NASDAQ:AMSC) Downgraded by Zacks Investment Researchon May 27, 2020 at 8:01 pm Moving Average Technical Analysis 5 day Moving Average is $$6.74 And 5 day price change is $1.39 (22.79%) with average volume for 5 day average is 267,261. While technical analysis for average 20 ... - Earnings Preview: American Superconductor (AMSC) Q4 Earnings Expected to Declineon May 27, 2020 at 10:18 am American Superconductor (AMSC) is expected to deliver a year-over-year decline in earnings on higher revenues when it reports results for the quarter ended March 2020. This widely-known consensus ... - Counterintuitive Superconductivity and Quantum Computing Breakthrough: Using Pressure to Make Liquid Magnetismon May 23, 2020 at 9:06 pm Using two flat-top diamonds and a lot of pressure, scientists have forced a magnetic crystal into a spin liquid state, which may lead to insights into high-temperature superconductivity and quantum ... - Electrons break rotational symmetry in exotic low-temp superconductoron May 22, 2020 at 11:06 am Scientists have discovered that the transport of electronic charge in a metallic superconductor containing strontium, ruthenium, and oxygen breaks the rotationa ... - Australian quantum technology could become a $4 billion industry and create 16,000 jobson May 21, 2020 at 1:00 pm A quantum technology boom is coming, and Australia must act to avoid missing out. A new CSIRO roadmap plots a course for this new industry. - Accelerated Supercurrents Give Scientists Access To “Forbidden” Lighton May 20, 2020 at 10:26 am In what is described as “a fundamental discovery of quantum matter,” a team of American researchers have accessed forbidden light emissions that could one ... via Bing News
<urn:uuid:a375df94-dbc7-46e9-b48b-358c00a855d3>
CC-MAIN-2020-24
https://www.innovationtoronto.com/2014/11/warmer-superconductors-could-make-virtually-everything-that-runs-on-electricity-much-more-efficient/?responsive=false
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347426801.75/warc/CC-MAIN-20200602193431-20200602223431-00419.warc.gz
en
0.910912
1,170
3.890625
4
We live in a nuclear age. We’ve harnessed the power of the atom to feed our thirst for energy, but new uses of the atom have the possibility to expand the annals of knowledge. Modern scientists are now turning to the power of the atom for its unbridled promise in the realm of computation. The quantum computer has the potential to radically change the electronic age as we know it. A quantum computer is a theoretical construct for an advanced computing system that harnesses atomic properties for processing. Current computers will soon max out in speed, due to the limits of miniaturization; transistors and electrical wiring cannot be made slimmer than the width of an atom . Quantum computing offers an alternative in the manufacturing of atom-wide circuits, resulting in much faster processing. Considering the massive calculations that quantum computers could perform, the possibilities seem endless. Programs could be made to simulate the quantum environment, something modern computers cannot even begin to model. These programs could simulate the experiments conducted in billion dollar facilities that are currently being constructed in order to help us better understand the universe. In addition, medical programs could greatly benefit from quantum computing. Doctors could explore the human body and experiment on simulated environments, advancing medical research enormously. Another area of computing possibilities is the prime factorization of large numbers. Prime factorization is a mathematical algorithm that many organizations use for encryption. It is very hard to calculate in reverse; a modern computer might spend millions of years trying to perform the necessary calculations, rendering any hacking attempts laughable . A quantum computer, however, might complete the required calculations in less than a year. On the other hand, quantum computers could be used to generate more powerful encryption techniques. Just as hacking becomes more powerful with greater resources, so does security. The only danger here is if one party has access to quantum computing and the other does not. When the technology for quantum computing is achieved, it is imperative that it be accessible to everyone. How do they work? Clearly quantum computers offer huge potential when compared to the desktop PC of today, but why are they so much faster? What causes our modern machines to seem so uselessly sluggish in comparison? A basic explanation of how the two differ will illustrate. Modern computers manipulate information in the form of on and off signals. Ones (on) and zeroes (off) form a binary mathematics that is the fundamental basis of current computing. Two bits can form four combinations of on and off states. In a standard PC you might have 8 billion bits, providing a fairly large potential for information (see Fig. 1). Quantum computing accomplishes this task differently. Quantum bits, often referred to as quebits, can attain multiple states simultaneously-each state having a probability. So each combination of on and off would require a probability. The amount of combinations grows rapidly: for n quebits there are 2n different states, each having a probability associated with it . An example of how the two perform a task can help us visualize the process. A good example comes from Scientific American, illustrating how a modern computer and a quantum computer would find the right combination for a lock. Take a lock with 4 numbers: 0, 1, 2, 3; and any one number needed to unlock it. A modern computer would try each number in turn: is ‘1’ correct? Is ‘2’ correct? And so on. It would potentially try all 4 numbers, until it found the correct number. A quantum computer would test multiple numbers at the same time and get a unique answer for each potential correct answer. The modern computer averages n/2 guess, whereas the quantum computer needs only the square root of n . Some find the idea of simultaneous opposing states hard to swallow. Quantum mechanics, which is the basis for our current theories of the standard model, has a few basic definitions that are helpful when understanding this ‘strange’ coexistence characteristic: Quantization: observable quantities do not vary continuously but come in discrete chunks, or quanta. This characteristic makes computation, classical or quantum, possible. Interference: the outcome of a quantum process, in general, depends on all the possible histories of that process. This makes quantum computers qualitatively more powerful than classical ones. Entanglement: the properties of a composite system, even when the components are distant and non-interacting, cannot in general be fully expressed by descriptions of the properties of all the component systems. This makes quantum cryptography possible . How do you Make a Quantum Computer? Even with an understanding of how quantum computers would work, how does one go about building one? This idea is to build logic gates and then use quantum circuits to implement them . It turns out that by reading the states of liquid molecules, we form a rough interface with their quantum properties. Using a technique known as nuclear magnetic resonance (NMR), it becomes possible to manipulate quantum characteristics . This method requires that the liquid molecules be held in a secluded state so that they are not influenced by outside molecules. Two magnets are used to suspend the molecules in an environment of their own. Then using NMR (see Fig. 2) their states are read. The states can be altered using factors like magnetic fields and radio frequencies. One big problem is that the NMR techniques used to read the state of the molecule disrupt that state. So as soon as data is read, it is lost. A method used to correct this is by using lots of molecules, and reading only one. This way the others will do work to return the one disturbed molecule to its original state. Another current setback is the need for these operations to be performed at near absolute zero temperatures. Researchers Isaac Chuang of Los Almos National Laboratory, Neil Gershenfeld of The Massachusettes Institute of Technology, and Mark Kubinec of The University of California Berkeley are attempting to solve this problem by using foreign atoms. They are finding promise in the use of chloroform to implement some standard algorithms that would be used by quantum computers . Another problem lies in the task of sustaining several quantum bits at the same time. As it is, the magnetic fields required to sustain a quantum bit interfere with each other in when in close proximity. Either another method of sustaining or a way of accounting for the interference must be developed. Yasnuobu Nakamura and his co-workers at the NEC Fundamental Research Laboratories in Tsukuba, Japan, have designed a quantum bit sustained on a silicon chip. This chip is basically a small piece of aluminum placed between two pieces of silicon. This chip holds great advantages, as the magnetic fields required for the NMR technique are no longer a problem. However, the chip must be held at temperatures near absolute zero to insure its superconducting state. These chips have to be held and read with the utmost precision. They are appended by adding electrons to their existing structures. Scientific American describes the process: “Two small junctions connect the dot to a larger aluminum reservoir, and an applied voltage aligns the energy levels in dot and reservoir so that a single Cooper pair can tunnel back and forth from reservoir to dot. This forms the 0 and 1 of the device; the absence or presence of one extra Cooper pair in the finger, which is then called a single-Cooper-pair box” . One must add extremely small amounts of voltage to change the state from on to off. Much more research needs to be done in order to make the process practical and repeatable. Many advances have been and are being made in the field of quantum computing. With years of research, the sheer computational power that it affords will one day not only be a quantum possibility, but a quantum reality.
<urn:uuid:8d25bcdd-4338-422d-b3cf-371abfdf09d4>
CC-MAIN-2020-24
https://illumin.usc.edu/another-atomic-age/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00022.warc.gz
en
0.940937
1,567
3.890625
4
Today, we are on the edge of a quantum revolution. The advent of quantum computers in the next decade will give mankind access to unparalleled processing power with all the advantages that this brings, however this also creates challenges as they will render much of today’s cybersecurity useless. So how can Quantum Key Distribution (QKD) help? Quantum cryptography is a technology that uses quantum physics to secure the distribution of symmetric encryption keys. A more accurate name for it is quantum key distribution (QKD). It works by sending photons, which are “quantum particles” of light, across an optical link. The principles of quantum physics stipulate that observation of a quantum state causes perturbation. The various QKD protocols are designed to ensure that any attempt by an eavesdropper to observe the transmitted photons will indeed perturb the transmission. This perturbation will lead to transmission errors, which can be detected by the legitimate users. This is used to verify the security of the distributed keys. QKD implementation requires interactions between the legitimate users. These interactions need to be authenticated. This can be achieved through various cryptographic means. The end-result is that QKD can utilize an authenticated communication channel and transform it into a secure communication channel.In theory, QKD should be combined with One-Time Pad (OTP) encryption to achieve provable security. However, an OTP requires keys, which are as long as the data to be encrypted, and can be used only once. This would impose strong limitations on the available bandwidth, due to the fact that the key distribution rate of QKD is typically 1’000 to 10’000 times lower than conventional optical communications. Therefore, in practice, QKD is often combined with conventional symmetric encryption, such as AES, and used to frequently refresh short encryption keys. This is sufficient to provide quantum-safe security. Our cybersecurity infrastructure requires two different functions: authentication and confidentiality. Authentication allows distant users to trust their counterpart and validate the content of their exchanges. It is mostly implemented by public-key signature schemes. Confidentiality is required for any exchange of private information. It is often performed in a two-step process. First the users have to exchange a common secret key. This relies on another public-key protocol, the key exchange mechanism. The secret key is then used in a symmetric key encryption scheme. Both functions therefore depend on similar cryptographic techniques, known as asymmetric or public-key cryptography. Cybersecurity is much more than the underlying cryptography. All current hacks and security failures do not come from a weak cryptography, but rather from faulty implementation, social engineering and the like. Today, we trust the cryptography, and fight to get the implementation right. Unfortunately, this is about to change. The point of cryptographic vulnerability today is public-key cryptography, based on algorithms such as RSA or Elliptic Curve, which are used both to authenticate data and to securely exchange data encryption keys. The very processing power of the quantum computer can solve these mathematical problems exponentially faster than classical computers and break public-key cryptography. This means that the currently used public-key cryptosystems are not appropriate to secure data that require long-term confidentiality. An adversary could indeed record encrypted data and wait until a quantum computer is available to decrypt it, by attacking the public keys. We need quantum-safe cryptography today. The greatest threat is to public cryptography – or asymmetric algorithms – used for digital signatures and key exchange. There are already quantum algorithms, such as the famous Shor algorithm, which can break RSA and Elliptic Curve algorithms, once a universal quantum computer is available. Another famous quantum algorithm, the Grover algorithm, attacks symmetric cryptography. Fortunately, Grover can be countered by a simple expansion of the key size. For example, AES symmetric encryption scheme with 256 bit keys is considered as quantum-safe. Countering the quantum computer threat will rely on two pillars. One is the development of new classical algorithms, which should resist the quantum computer. These are known as Post-Quantum or Quantum-Resistant algorithms. We already encountered the example of AES above for encryption. We can also mention some signature schemes (LMS and XMSS), based on so-called hash functions.Many other algorithms, for both signature and key exchange are being developed in the framework of the NIST process. Their properties and quantum resistance are still under test. Standardisation is expected by 2023-2024. The second pillar, which is available today, is Quantum Key Distribution (QKD), which provide quantum-safe key exchange, based on very different principles. A security solution is as secure as its weakest link and in network encryption, the current weakest link with respect to the quantum computer threat is the secret key distribution based on public key cryptography. As its name says, QKD is used to distribute encryption keys, whose security is based on quantum physics and is thus guaranteed for the long-term. Most QKD solutions currently consist of key distribution appliances combined with link encryptors. The QKD appliances distribute the secret keys to the link encryptors. The link encryptors use the keys to encrypt large amounts of data, typically up to 100 Gb/s. In the simplest case, two QKD appliances are connected through an optical fibre and continuously distribute key material, which they store at each end-point, until it is requested by the encryptors. These solutions work up to an optical attenuation in the fibre of 18 dB, which corresponds to a range of about 80km, depending on the quality of the optical network. These systems are thus typically deployed in Local Area Networks or Metropolitan Area Networks, such as corporate campuses or datacenter interconnects. These applications have been extended to much longer distances, through the use of so-called Trusted Nodes. These trusted Nodes perform key hopping, whereby keys are generated at a starting node and transferred securely from node to node until the end node. Instead of relying on the security of the whole transmission channel, security has to be provided at each node only. Using a similar technology, it is also possible to build various types of QKD networks, such as ring networks and star networks. This requires more complex Key Management Schemes, which distribute the keys from and to any node in the network. For global reach, the Trusted Nodes can be implemented in satellites, with free-space QKD. Thanks to the rapid development of QKD solutions, many encryptor manufacturers now offer “quantum enabled” devices, which accept keys from QKD appliances. These encryptors are compatible with Ethernet and Fibre Channel with link bandwidth up to 10Gbps and aggregated bandwidth up to 100Gbps. In addition, a standard QKD interface has been developed by the ETSI (European Telecommunication Standards Institute). This will facilitate the introduction of QKD for OTN vendors. IDQ has deployed QKD systems commercially since 2007. One of the first QKD implementations was to secure elections in Geneva (see Geneva Government use case) in 2007, and this installation has been working reliably since its installation. QKD users include banks and governments worldwide. Quantum cryptography, or more correctly QKD, is now a well-established commercial solution. Standardisation work on QKD is also taking place at an increasing pace. In addition to the ETSI mentioned above, the ITU, ISO and IEEE organisations have all started working on quantum communication and QKD. Industry is getting organized for full-scale deployment of this technology. Contrary to classical key distribution techniques, which rely on unproven assumptions and thus do not fulfil the first criterion, the security of QKD is based on the laws of quantum physics and can be rigorously proven. This having been said, it is then important to make sure that the practical embodiment of a QKD system also fulfils the second criterion and does not have any implementation flaws. IDQ actively participates in quantum hacking projects with well-respected academic partners, with the goal of understanding quantum-specific side channel attacks and of improving implementation security of QKD devices. All the announcements about QKD having been hacked actually dealt with implementation flaws. These flaws are important but are inherent to any technological system. Moreover such quantum hacking projects use open QKD systems, designed for R&D research. The quantum hacks which have been discovered to date are not viable attacks on commercial QKD systems with anti-tamper proofing and other standard security features. In summary, the security of QKD is based on sound principles and, if properly implemented, it guarantees absolute security for key distribution. Quantum Technologies are creating a world of opportunities across almost every aspect of modern life. IDQ helps you build a trusted future by preparing your organisation now. Data security is a never-ending marathon. Adding quantum gives you a step ahead in this race. Getting prepared must be considered as a journey where every step completed adds a layer of trust and preparedness.
<urn:uuid:31634909-c805-467c-ba15-5b4ff4e2db5a>
CC-MAIN-2020-24
https://www.idquantique.com/quantum-safe-security/overview/quantum-key-distribution/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00425.warc.gz
en
0.933877
1,894
3.65625
4
Northeastern researchers have used a powerful computer model to probe a puzzling class of copper-based materials that can be turned into superconductors. Their findings offer tantalizing clues for a decades-old mystery, and a step forward for quantum computing. The ability of a material to let electricity flow comes from the way electrons within their atoms are arranged. Depending on these arrangements, or configurations, all materials out there are either insulators or conductors of electricity. But cuprates, a class of mysterious materials that are made from copper oxides, are famous in the scientific community for having somewhat of an identity issue that can make them both insulators and conductors. Under normal conditions, cuprates are insulators: materials that inhibit the flow of electrons. But with tweaks to their composition, they can transform into the world’s best superconductors. The finding of this kind of superconductivity in 1986 won its discoverers a Nobel Prize in 1987, and fascinated the scientific community with a world of possibilities for improvements to supercomputing and other crucial technologies. But with fascination came 30 years of bewilderment: Scientists have not been able to fully decipher the arrangement of electrons that encodes for superconductivity in cuprates. Mapping the electronic configuration of these materials is arguably one of the toughest challenges in theoretical physics, says Arun Bansil, University Distinguished Professor of physics at Northeastern. And, he says, because superconductivity is a weird phenomenon that only happens at temperatures as low as -300 F (or about as cold as it gets on Uranus), figuring out the mechanisms that make it possible in the first place could help researchers make superconductors that work at room temperature. Now, a team of researchers that includes Bansil and Robert Markiewicz, a professor of physics at Northeastern, is presenting a new way to model these strange mechanisms that lead to superconductivity in cuprates. In a study published in Proceedings of the National Academy of Sciences, the team accurately predicted the behavior of electrons as they move to enable superconductivity in a group of cuprates known as yttrium barium copper oxides. In these cuprates, the study finds, superconductivity emerges from many types of electron configurations. A whopping 26 of them, to be specific. “During this transition phase, the material will in essence become some kind of a soup of different phases,” Bansil says. “The split personalities of these wonderful materials are being now revealed for the first time.” The physics within cuprate superconductors are intrinsically weird. Markiewicz thinks of that complexity as the classical Indian myth of the blind men and the elephant, which has been a joke for decades among theoretical physicists who study cuprates. According to the myth, blind men meet an elephant for the first time, and try to understand what the animal is by touching it. But because each of them touches only one part of its body—the trunk, tail, or legs, for example—they all have a different (and limited) concept of what an elephant is. “In the beginning, we all looked [at cuprates] in different ways,” Markiewicz says. “But we knew that, sooner or later, the right way was going to show up.” The mechanisms behind cuprates could also help explain the puzzling physics behind other materials that turn into superconductors at extreme temperatures , Markiewicz says, and revolutionize the way they can be used to enable quantum computing and other technologies that process data at ultra-fast speeds. “We’re trying to understand how they come together in the real cuprates that are used in experiments,” Markiewicz says. The challenge of modeling cuprate superconductors comes down to the weird field of quantum mechanics, which studies the behavior and movement of the tiniest bits of matter—and the strange physical rules that govern everything at the scale of atoms. In any given material—say, the metal in your smartphone—electrons contained within just the space of a fingertip could amount to the number one followed by 22 zeros, Bansil says. Modeling the physics of such a massive number of electrons has been extremely challenging ever since the field of quantum mechanics was born. Bansil likes to think of this complexity as butterflies inside a jar flying fast and cleverly to avoid colliding with each other. In a conducting material, electrons also move around. And because of a combination of physical forces, they also avoid each other. Those characteristics are at the core of what makes it hard to model cuprate materials. “The problem with the cuprates is that they are at the border between being a metal and an insulator, and you need a calculation that is so good that it can systematically capture that crossover,” Markiewicz says. “Our new modeling can capture this behavior.” The team includes researchers from Tulane University, Lappeenranta University of Technology in Finland, and Temple University. The researchers are the first to model the electronic states in the cuprates without adding parameters by hand to their computations, which physicists have had to do in the past. To do that, the researchers modeled the energy of atoms of yttrium barium copper oxides at their lowest levels. Doing that allows researchers to trace electrons as they excite and move around, which in turn helps describe the mechanisms supporting the critical transition into superconductivity. That transition, known as the pseudogap phase in the material, could be described simply as a door, Bansil says. In an insulator, the structure of the material is like a closed door that lets no one through. If the door is wide open—as it would be for a conductor—electrons pass through easily. But in materials that experience this pseudogap phase, that door would be slightly open. The dynamics of what transforms that door into a really wide open door (or, superconductor) remains a mystery, but the new model captures 26 electron configurations that could do it. “With our ability to now do this first-principles-parameter-free-type of modeling, we are in a position to actually go further, and hopefully begin to understand this pseudogap phase a bit better,” Bansil says.
<urn:uuid:693cf66e-552b-4eb0-846d-d55114c2278c>
CC-MAIN-2020-24
https://news.northeastern.edu/2020/01/02/superconductor-or-not-theyre-exploring-the-identity-of-this-weird-quantum-material/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00025.warc.gz
en
0.944816
1,311
3.984375
4
New paradigm for "auto-tuning" quantum bits could overcome major engineering hurdle. A high-end race car engine needs all its components tuned and working together precisely to deliver top-quality performance. The same can be said about the processor inside a quantum computer, whose delicate bits must be adjusted in just the right way before it can perform a calculation. This artist's conception shows how the research team used artificial intelligence (AI) and other computational techniques to tune a quantum dot device for use as a qubit. The dot's electrons are corralled by electrical gates, whose adjustable voltages raise and lower the "peaks" and "valleys" in the large circles. As the gates push the electrons around, sensitive measurement of the moving electrons creates telltale lines in the black and white images, which the AI uses to judge the state of the dot and then make successive adjustments to the gate voltages. Eventually the AI converts a single dot (leftmost large circle) to a double dot (rightmost), a process that takes tedious hours for a human operator. Credit: B. Hayes / NIST Who's the right mechanic for this quantum tuneup job? According to a team that includes scientists at the National Institute of Standards and Technology (NIST), it's an artificial intelligence, that's who. The team's paper in the journal Physical Review Applied outlines a way to teach an AI to make an interconnected set of adjustments to tiny quantum dots, which are among the many promising devices for creating the quantum bits, or "qubits," that would form the switches in a quantum computer's processor. Precisely tweaking the dots is crucial for transforming them into properly functioning qubits, and until now the job had to be done painstakingly by human operators, requiring hours of work to create even a small handful of qubits for a single calculation. A practical quantum computer with many interacting qubits would require far more dots -- and adjustments -- than a human could manage, so the team's accomplishment might bring quantum dot-based processing closer from the realm of theory to engineered reality. "Quantum computer theorists imagine what they could do with hundreds or thousands of qubits, but the elephant in the room is that we can actually make only a handful of them work at a time," said Justyna Zwolak, a NIST mathematician. "Now we have a path forward to making this real." A quantum dot typically contains electrons that are confined to a tight boxlike space in a semiconductor material. Forming the box's walls are several metallic electrodes (so-called gates) above the semiconductor surface that have electric voltage applied to them, influencing the quantum dot's position and number of electrons. Depending on their position relative to the dot, the gates control the electrons in different ways. To make the dots do what you want -- act as one sort of qubit logic switch or another, for example -- the gate voltages must be tuned to just the right values. This tuning is done manually, by measuring currents flowing through the quantum dot system, then changing the gate voltages a bit, then checking the current again. And the more dots (and gates) you involve, the harder it is to tune them all simultaneously so that you get qubits that work together properly. In short, this isn't a gig that any human mechanic would feel bad about losing to a machine. "It's usually a job done by a graduate student," said graduate student Tom McJunkin of the University of Wisconsin-Madison's physics department and a co-author on the paper. "I could tune one dot in a few hours, and two might take a day of twiddling knobs. I could do four, but not if I need to go home and sleep. As this field grows, we can't spend weeks getting the system ready -- we need to take the human out of the picture." Pictures, though, are just what McJunkin was used to looking at while tuning the dots: The data he worked with came in the form of visual images, which the team realized that AI is good at recognizing. AI algorithms called convolutional neural networks have become the go-to technique for automated image classification, as long as they are exposed to lots of examples of what they need to recognize. So the team's Sandesh Kalantre, under supervision from Jake Taylor at the Joint Quantum Institute, created a simulator that would generate thousands of images of quantum dot measurements they could feed to the AI as a training exercise. "We simulate the qubit setup we want and run it overnight, and in the morning we have all the data we need to train the AI to tune the system automatically," Zwolak said. "And we designed it to be usable on any quantum dot-based system, not just our own." The team started small, using a setup of two quantum dots, and they verified that within certain constraints their trained AI could auto-tune the system to the setup they desired. It wasn't perfect -- they identified several areas they need to work on to improve the approach's reliability -- and they can't use it to tune thousands of interconnected quantum dots as yet. But even at this early stage its practical power is undeniable, allowing a skilled researcher to spend valuable time elsewhere. "It's a way to use machine learning to save labor, and -- eventually -- to do something that human beings aren't good at doing," Zwolak said. "We can all recognize a three-dimensional cat, and that's basically what a single dot with a few properly-tuned gates is. Lots of dots and gates are like a 10-dimensional cat. A human can't even see a 10D cat. But we can train an AI to recognize one." Chad Boutin | EurekAlert! Smart machine maintenance: New AI system also detects unknown faults 25.05.2020 | Universität des Saarlandes Artificial Intelligence for optimized mobile communication 25.05.2020 | Fraunhofer-Institut für Angewandte Festkörperphysik IAF Microelectronics as a key technology enables numerous innovations in the field of intelligent medical technology. The Fraunhofer Institute for Biomedical Engineering IBMT coordinates the BMBF cooperative project "I-call" realizing the first electronic system for ultrasound-based, safe and interference-resistant data transmission between implants in the human body. When microelectronic systems are used for medical applications, they have to meet high requirements in terms of biocompatibility, reliability, energy... Thomas Heine, Professor of Theoretical Chemistry at TU Dresden, together with his team, first predicted a topological 2D polymer in 2019. Only one year later, an international team led by Italian researchers was able to synthesize these materials and experimentally prove their topological properties. For the renowned journal Nature Materials, this was the occasion to invite Thomas Heine to a News and Views article, which was published this week. Under the title "Making 2D Topological Polymers a reality" Prof. Heine describes how his theory became a reality. Ultrathin materials are extremely interesting as building blocks for next generation nano electronic devices, as it is much easier to make circuits and other... Scientists took a leukocyte as the blueprint and developed a microrobot that has the size, shape and moving capabilities of a white blood cell. Simulating a blood vessel in a laboratory setting, they succeeded in magnetically navigating the ball-shaped microroller through this dynamic and dense environment. The drug-delivery vehicle withstood the simulated blood flow, pushing the developments in targeted drug delivery a step further: inside the body, there is no better access route to all tissues and organs than the circulatory system. A robot that could actually travel through this finely woven web would revolutionize the minimally-invasive treatment of illnesses. A team of scientists from the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart invented a tiny microrobot that resembles a white blood cell... By studying the chemical elements on Mars today -- including carbon and oxygen -- scientists can work backwards to piece together the history of a planet that once had the conditions necessary to support life. Weaving this story, element by element, from roughly 140 million miles (225 million kilometers) away is a painstaking process. But scientists aren't the type... Study co-led by Berkeley Lab reveals how wavelike plasmons could power up a new class of sensing and photochemical technologies at the nanoscale Wavelike, collective oscillations of electrons known as "plasmons" are very important for determining the optical and electronic properties of metals. 19.05.2020 | Event News 07.04.2020 | Event News 06.04.2020 | Event News 25.05.2020 | Medical Engineering 25.05.2020 | Information Technology 25.05.2020 | Information Technology
<urn:uuid:548a4a4e-f0d2-4901-8f8e-0f11c9401972>
CC-MAIN-2020-24
https://www.innovations-report.com/html/reports/information-technology/to-tune-up-your-quantum-computer-better-call-an-ai-mechanic.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00025.warc.gz
en
0.936407
1,862
3.75
4
The computers of today have just about hit their limits, and scientists around the world are scrambling to build the first viable quantum computer - a machine that could increase processing speeds 100-million-fold. The biggest challenge in scaling up a quantum computer is figuring out how to entangle enough quantum bits (qubits) to perform calculations, but a team of engineers in the US say they might finally have a solution. Quantum computers are set to revolutionise how we process data in the future, because they’re not limited to the 1s and 0s of binary code that today’s computers rely on. That binary code is holding us back, because if you can only use a combination of 1s and 0s, there’s a finite amount of data that can be processed, no matter how fast you go. Instead, quantum computers use qubits, which can essentially take the state of 0, 1, or a 'superposition' of the two. So rather than having bits that can only be 1 or 0 at any given moment, qubits can be anything and everything. "Quantum computers exploit three very unusual features that operate at the quantum scale - that electrons can be both particles and waves, that objects can be in many places at once, and that they can maintain an instantaneous connection even when separated by vast distances (a property called 'entanglement’)." This means that quantum computers can perform many calculations simultaneously, giving them - quite literally - limitless potential. But we have to figure out how to build them first. Despite what Google’s been saying about its controversial new D-Wave 2X quantum computing machine, no one’s been able to build a 'proper' quantum computer, because of how difficult it is to entangle a large number of qubits on a circuit, and control them in any reliable way. Once derided by Einstein himself as "spooky action at a distance", quantum entanglement is a strange phenomenon where two quantum particles interact in such a way that they become deeply linked, and essentially 'share' an existence. This means that what happens to one particle will directly and instantaneously affect the other - even if it’s many light-years away. Getting a bunch of entangled particles in the one place is crucial to the development of quantum computers, and researchers from Penn State University say they’ve come up with a technique that could get this done. First they used beams of laser light to build a three-dimensional lattice array, which could trap and hold onto a bunch of quantum particles, forcing them into a cubic arrangement of five stacked planes. Think of it like a five-layer sandwich with grids of atoms held inside each layer, says Katherine Noyes from PC World. Each layer in the circuit could hold 25 equally spaced atoms, and once they were all in position, microwaves were used to switch individual qubits from one quantum state to another without altering the states of the other atoms in the cubic array. "The scientists filled some of the possible locations in the array with qubits consisting of neutral caesium atoms possessing no positive or negative charge. Then, they used crossed beams of laser light to target individual atoms in the lattice, causing a shift in the energy levels of those atoms. When the scientists then bathed the whole array with a uniform wash of microwaves, the state of the atoms with the shifted energy levels changed, while the states of all the other atoms did not." The team, led by physicist David S. Weiss, tested their ability to change the quantum state of these individual atoms by switching the states of selected atoms across three of the stacked planes to spell out the letters P, S, and U (for Penn State University). "We changed the quantum superposition of the PSU atoms to be different from the quantum superposition of the other atoms in the array," Weiss says in a press release. "We have a pretty high-fidelity system. We can do targeted selections with a reliability of about 99.7 percent, and we have a plan for making that more like 99.99 percent." So... next step, quantum computers? Unfortunately, there are two major limitations here - the system needs to be seriously scaled up, because 125 atoms aren't going to do us much good in the real world, and the quantum particles used in the system hadn't been entangled. As we found out last month, when Chinese physicists quantum entangled 10 photon pairs to set a new world record, entangling multiple particles is hard. But Weiss's team is confident that they can build on the system they have, both in teams of scale and spooky entanglement action. "Filling the cube with exactly one atom per site and setting up entanglements between atoms at any of the sites that we choose are among our nearer-term research goals," he says. Our fingers are crossed for the computers of the future. The results have been published in Science.
<urn:uuid:734ee2af-3691-46c7-ab2b-9ab86dae5b00>
CC-MAIN-2017-47
http://www.sciencealert.com/scientists-have-figured-out-how-to-build-circuits-for-the-world-s-first-proper-quantum-computers
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00517.warc.gz
en
0.939282
1,028
3.96875
4
Virtually Virtual Reality Virtually virtual reality (abbreviated VVR) describes a set of technologies which can be used to virtually simulate virtual environments. VVR has been deployed extensively on the world wide web for commercial, entertainment, cultural, and educational purposes. VVR Technologies and Principles VVR utilizes quantum computing and physics standard modeling to create an immersive environment. These technologies are combined with virtually simulated neural nets to create fully interactive avatars (called people) within these environments. By utilizing PPP (photon particle protocol) and HFP (Higgs field protocol) over the internet, a VVR world can be transmitted in full to an end user anywhere in the world. VVR implementations incorporate virtually artificial visual, aural, olfactory, and tactile stimuli to create a remarkable level of immersion and interactivity for the end user. Early Implementations of VVR The earliest VVR environment was developed at Argonne National Laboratory in 1965 and was demonstrated to the public in 1966 in nearby Harvey, Illinois. This first VVR environment was dubbed Dixie Square Shopping Center. Users within this environment could interact with each other using IVC (interactive vocal chat), a precursor to IRC. Users could also enter virtually virtual store environments, where they could find and purchase products which would then be delivered immediately. Inventory data was presented in real-time; merchants would see virtually virtual representations of merchandise that was in stock. Dixie Square Shopping Center represented an early commercial success of VVR technologies. Other early VVR experiments included Fallbrook Mall of West Hills, California, developed by researchers at Caltech in 1968; and Mohawk Mall, located in Schenectady, New York, developed by researchers at Stony Brook University in 1970. Applications of VVR VVR technology became popular with the widespread adoption of the internet. One of its first applications was in education. Virtually virtual campuses came to be used by thousands of students. This type of education came to be called zero-distance education. On a virtually virtual campus, a directory of academic subjects is presented as a set of virtually virtual buildings. To study a subject, a student need only enter one of the buildings (after entering the virtually virtual registrar’s office where tuition, used to maintain the school’s VVR server, is paid). Within each virtually virtual building, a student can encounter teaching avatars. In virtually virtual lecture halls, students can interact with each other and the teaching avatar using natural spoken language. Some schools also allow students to interact individually with a teaching avatar. During these virtually virtual sessions, or office hours, students can enjoy a personal educational experience. VVR vendors have created appealing VVR spaces designed to sell a wide range of goods and services. In virtually virtual stores, merchants keep track of inventory in real time using actual visual representations of goods. Customers receive multisensory virtually virtual simulations of products and can receive information from sales avatars. Many customers prefer the personal interaction made possible in VVR stores. One of the first major vendors in VVR was Wal-Mart. Wal-Mart’s VVR experience, with its large number of products available and instant delivery, allowed Wal-Mart to make a profit of $8.4 billion in 2004. VVR has also proven to be popular in the entertainment industry. The large bandwidth provided by quantum computing and photon particle protocol allows for compelling interactive experiences. Many enjoy purchasing virtually virtual pets in VVR. These virtually virtual pets are fed virtually virtual food and play with virtually virtual toys (which both may be purchased with additional virtually virtual money). Virtually virtual dogs need to be virtually virtually walked at regular times at each day. Virtually virtual pets can die if they are denied virtually virtual care; however, should this happen, a new virtually virtual pet can be purchased with little difficulty. Others enjoy virtually virtual cinema. Virtually virtual cinema not only virtually simulates a movie, but also an audience, popcorn, and the long lines that are part of the movie-going experience. Comedy clubs, gambling, and interactive chat rooms are extremely popular as VVR experiences. VVR communities started to become popular in 1995. Among the earliest VVR communities were Quakers.org and Amish.org. In these virtually virtual communities, clusters of commerce sites, educational sites, and other special interest sites are brought together in a single virtually virtual location. These sites are hypolinked using virtually virtual roads and virtually virtual sidewalks. Users can travel within a VVR community using virtually virtual walking or virtually virtual cars. Within a VVR community, users can communicate with the avatars of other users, sales avatars, and adminstrator avatars using natural language and gestures. Many users join VVR communities for the rich types of social interactions which are available within. In 2002, the IEEE released a set of standards that allowed VVR communities to be linked together. Today, millions of people daily use virtually virtual airplanes and virtually virtual trains to travel between the wide range of VVR communities. This world wide web of VVR communities promises to have a significant impact on the future development of technology and culture. A VVR Experience Thanks to the administrators of Uncyclopedia (and in particular the hard work of Carlb, Elvis, and Rcmurphy), the Uncylopedia now hosts a small VVR server. By downloading a VVRML plugin, you can experience VVR for yourself. The VVRML plugin requires runs on any Windows system or better (better includes Mac OS X and Linux). It requires a G3 or 386 processor (or better). Click on the link below to download the VVRML plugin. Once the plugin has been downloaded, initialized, and calibrated, you will be returned to a VVR representation of this page. Using your IP address, extrapolations from collected data of quantum effects of the immediate environment on the CPU of your computer, and Google, the VVR server will create a representation of your environs and abode. The Uncyclopedia VVR server will send commands to your computer that will generate a special pattern on your monitor. This pattern will create interference patterns of photons and immerse you in a holographic image. The first hologram that you see will look like the area around your computer. This is your home portal. The VVR representation of this page will look like an Uncyclopedia entry on your virtually virtual monitor. - In the VVR representation of your kitchen, find a large sharp knife. You will need the knife later. - Find a copy of the Yellow Pages. The VVR server will list some of the immediate hypolinks available. Find a listing of department stores in your virtually virtual community. - Choose a method to get to a virtually virtual department store. You may choose virtually virtual walking, virtually virtual driving, or a virtually virtual taxi. - Look at the sky. The Uncyclopedia VVR server uses real-time data to render the sky according to the time of day. - The virtually virtual store will maintain real-time data on inventory. You may ask the sales avatars for information using a natural language interface. - To complete a conversation, run the knife through the avatar. This gesture indicates that you want to close the chat. You can use this same gesture to close chats with other users and avatars you encounter. - Once you have completed your desired purchases, return to your home portal. - Close your eyes, click your heels together three times, and repeat the phrase “There is no place like home.” The VVR server will recognize this gesture and exit you back to your web browser. VVR Issues and Controversies The proliferation of malware on VVR sites has been of great concern. As of 2005, over 4 million types of VVR viruses had been detected. Entertainment sites in particular have tended to be a cause of concern with respect to viruses. Spyware has also been problematic on VVR sites. VVR has also been the subject of many legal controversies. National governments have imposed their own laws and regulations upon visitors to VVR sites hosted within their jurisdictions. The long range impact of VVR on society has been of a concern to many sociologists. Many youth show signs of “addiction” to VVR, spending many hours each day on virtually virtual reality sites. Some psychologists suggest that with the prevalence of VVR, certain people may become unable to distinguish between VVR and reality. Such people may engage in antisocial and psychopathic behavior. Future Prospects for VVR Already, VVR has had a large impact on the social fabric of modern society. Elements of VVR, such as virtually virtual hip hop, have entered the popular culture. In 2004, VVR communities had a large impact upon the Presidential election in the United States. As the number of VVR users increase, society and culture and certain to undergo small and large changes. - The Matrix - Virtual Realty - Virtuous Reality - Vitreous Reality - Vitriol Reality - Actual Reality - Virtual Virtual Reality Uncyclopedia Entry
<urn:uuid:c358664d-50a5-47fa-a49f-af8f862e2250>
CC-MAIN-2017-47
http://en.uncyclopedia.co/wiki/Virtual_reality
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806609.33/warc/CC-MAIN-20171122141600-20171122161600-00118.warc.gz
en
0.913506
1,877
3.890625
4
Researchers at University of B.C. announced Wednesday that they’ve made a major advance in dealing with one of the biggest obstacles to development of a radical new kind of computer. In a paper published online today by Nature, the world’s top research journal, researchers at UBC and University of California Santa Barbara announced they’ve found a way to deal with decoherence – the tendency of atomic-scale particles to get quickly tangled up with the larger physical world we live in. Their work opens up a whole new area for researchers who are investigating the potential for development of quantum computers. In an interview, UBC physics professor Phil Stamp said the university published a theory in 2006 that pointed to the solution and the Santa Barbara researchers found a way to make it work in a lab. Particles such as electrons don’t have to play by the same rules we do when they’re moving around in the quantum-scale universe – a universe with some peculiar rules that seem to defy common sense, like being in two places at once. When physicists and chemists figure out a way to keep these tiny objects from getting tangled up, or ‘entangled’, in the ‘classical’ world, then they can begin to research ways to put them to work in a radically faster new generation of computers. It’s anticipated to be a much bigger jump in computer technology than the one between the warehouse-sized mechanical IBM punch card computers of the 1950s and the Internet-browsing mobile devices that consumers covet in 2011. So far, quantum computing is pretty much restricted to laboratories. Quantum computers are not expected to do anything that a conventional, or ‘classical’, computer can’t do – but they are expected to do it faster. A working quantum computer could solve in seconds a problem that would take a classical computer years to work out. To make a calculation in a classical computer, electrons flow through a series of switches that can be set to one of two positions, on or off. In a quantum machine there can be three positions, on, off, and on-plus-off – and a particle can be in one, two, or all three of those positions. It’s like a coin that can be heads, tails, or both, at the same time. That extra position on the switch would make computing “exponentially” faster according to Stamp. Military and government security agencies such as the NSA are interested because they’d gain the capacity to break codes that classical machines can’t figure out. There’s a fear factor as well. The first classical-scale quantum computer will be able to crack every code ever written, and figure out every secret message ever archived. Of course there are positive social implications as well. One quantum computer could probably replace an entire server farm for cloud-based businesses. That would be better for the environment. The discovery won’t spark a wave of quantum computer manufacturing just yet. But it will provide researchers with an unprecedented road map for research aimed at building them. Here are some details from UBC’s news release this morning. Discovery may overcome obstacle for quantum computing: UBC, California researchers Researchers have made a major advance in predicting and quashing environmental decoherence, a phenomenon that has proven to be one of the most formidable obstacles standing in the way of quantum computing. The findings – based on theoretical work conducted at the University of British Columbia and confirmed by experiments at the University of California Santa Barbara – are published July 20 in the online version of the journal Nature. Quantum mechanics states that matter can be in more than one physical state at the same time – like a coin simultaneously showing heads and tails. In small objects like electrons, physicists have had success in observing and controlling these simultaneous states, called “state superposition.” Larger, more complex physical systems appear to be in one consistent physical state because they interact and “entangle” with other objects in their environment. This entanglement makes these complex objects “decay” into a single state – a process called decoherence. Quantum computing’s potential to be exponentially faster and more powerful than any conventional computer technology depends on switches that are capable of state superposition – that is, being in the “on” and “off” positions at the same time. Until now, all efforts to achieve such superposition with many molecules at once were blocked by decoherence. “For the first time we’ve been able to predict and control all the environmental decoherence mechanisms in a very complex system, in this case a large magnetic molecule called the ‘Iron-8 molecule,’” said Phil Stamp, UBC professor of physics and astronomy and director of the Pacific Institute of Theoretical Physics. “Our theory also predicted that we could suppress the decoherence, and push the decoherence rate in the experiment to levels far below the threshold necessary for quantum information processing, by applying high magnetic fields.” In the experiment, California researchers prepared a crystalline array of Iron-8 molecules in a quantum superposition, where the net magnetization of each molecule was simultaneously oriented up and down. The decay of this superposition by decoherence was then observed in time – and the decay was spectacularly slow, behaving exactly as the UBC researchers predicted. “Magnetic molecules now suddenly appear to have serious potential as candidates for quantum computing hardware,” said Susumu Takahashi, assistant professor of chemistry and physics at the University of Southern California. “This opens up a whole new area of experimental investigation with sizeable potential in applications, as well as for fundamental work.” Takahashi conducted the experiments while at UC Santa Barbara and analyzed the data while at UC Santa Barbara and the University of Southern California. “Decoherence helps bridge the quantum universe of the atom and the classical universe of the everyday objects we interact with,” Stamp said. “Our ability to understand everything from the atom to the Big Bang depends on understanding decoherence, and advances in quantum computing depend on our ability to control it.” The research was supported by the Pacific Institute of Theoretical Physics at UBC, the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute for Advanced Research, the Keck Foundation, and the National Science Foundation.
<urn:uuid:ff67f15e-7928-40a2-8960-1e70c7b507af>
CC-MAIN-2017-47
http://vancouversun.com/news/staff-blogs/ubc-scientists-announcing-major-advance-in-quantum-computing-research
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00120.warc.gz
en
0.93323
1,351
3.515625
4
This study marks the coldest temperature ever reached by laser-cooling of an object of that size, and the technique holds promise that it will experimentally confirm, for the first time, that large objects obey the laws of quantum mechanics just as atoms do. Although the research team has not yet achieved temperatures low enough to observe quantum effects, "the most important thing is that we have found a technique that could allow us to get (large objects) to ultimately show their quantum behavior for the first time," said MIT Assistant Professor of Physics Nergis Mavalvala, leader of the team. The MIT researchers and colleagues at Caltech and the Albert Einstein Institute in Germany will report their findings in an upcoming issue of Physical Review Letters. Quantum theory was developed in the early 20th century to account for unexpected atomic behavior that could not be explained by classical mechanics. But at larger scales, objects' heat and motion blur out quantum effects, and interactions are ruled by classical mechanics, including gravitational forces and electromagnetism. "You always learn in high school physics that large objects don't behave according to quantum mechanics because they're just too hot, and the thermal energy obscures their quantum behavior," said Thomas Corbitt, an MIT graduate student in physics and lead author of the paper. "Nobody's demonstrated quantum mechanics at that kind of (macroscopic) scale." To see quantum effects in large objects, they must be cooled to near absolute zero. Such low temperatures can only be reached by keeping objects as motionless as possible. At absolute zero (0 degrees Kelvin, -237 degrees Celsius or -460 degrees Fahrenheit), atoms lose all thermal energy and have only their quantum motion. In their upcoming paper, the researchers report that they lowered the temperature of a dime-sized mirror to 0.8 degrees Kelvin. At that temperature, the 1 gram mirror moves so slowly that it would take 13 billion years (the age of the universe) to circle the Earth, said Mavalvala, whose group is part of MIT's LIGO (Laser Interferometer Gravitational-wave Observatory) Laboratory. The team continues to refine the technique and has subsequently achieved much lower temperatures. But in order to observe quantum behavior in an object of that size, the researchers need to attain a temperature that is still many orders of magnitude colder, Mavalvala said. To reach such extreme temperatures, the researchers are combining two previously demonstrated techniques-optical trapping and optical damping. Two laser beams strike the suspended mirror, one to trap the mirror in place, as a spring would (by restoring the object to its equilibrium position when it moves), and one to slow (or damp) the object and take away its thermal energy. Combined, the two lasers generate a powerful force-stronger than a diamond rod of the same shape and size as the laser beams-that reduces the motion of the object to near nothing. Using light to hold the mirror in place avoids the problems raised by confining it with another object, such as a spring, Mavalvala said. Mechanical springs are made of atoms that have their own thermal energy and thus would interfere with cooling. As the researchers get closer and closer to reaching the cold temperature they need to see quantum behavior, it will get more difficult to reach the final goal, Mavalvala predicted. Several technical issues still stand in the way, such as interference from fluctuations in the laser frequency. "That last factor of 100 will be heroic," she said. Once the objects get cold enough, quantum effects such as squeezed state generation, quantum information storage and quantum entanglement between the light and the mirror should be observable, Mavalvala said. Other authors on the paper are Christopher Wipf, MIT graduate student in physics; David Ottaway, research scientist at MIT LIGO; Edith Innerhofer (formerly a postdoctoral fellow at MIT); Yanbei Chen, leader of the Max Planck (Albert Einstein Institute) group; Helge Muller-Ebhardt and Henning Rehbein, graduate students at the Albert Einstein Institute; and research scientists Daniel Sigg of LIGO Hanford Observatory and Stanley Whitcomb of Caltech. The research was funded by the National Science Foundation and the German Federal Ministry of Education and Research. MIT News Office | Elizabeth A. Thomson NASA detects solar flare pulses at Sun and Earth 17.11.2017 | NASA/Goddard Space Flight Center Pluto's hydrocarbon haze keeps dwarf planet colder than expected 16.11.2017 | University of California - Santa Cruz The formation of stars in distant galaxies is still largely unexplored. For the first time, astron-omers at the University of Geneva have now been able to closely observe a star system six billion light-years away. In doing so, they are confirming earlier simulations made by the University of Zurich. One special effect is made possible by the multiple reflections of images that run through the cosmos like a snake. Today, astronomers have a pretty accurate idea of how stars were formed in the recent cosmic past. But do these laws also apply to older galaxies? For around a... Just because someone is smart and well-motivated doesn't mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching. That is the implication of a new study which shows for the first time that there is a broad range of differences in people's visual ability and that these... Computer Tomography (CT) is a standard procedure in hospitals, but so far, the technology has not been suitable for imaging extremely small objects. In PNAS, a team from the Technical University of Munich (TUM) describes a Nano-CT device that creates three-dimensional x-ray images at resolutions up to 100 nanometers. The first test application: Together with colleagues from the University of Kassel and Helmholtz-Zentrum Geesthacht the researchers analyzed the locomotory system of a velvet worm. During a CT analysis, the object under investigation is x-rayed and a detector measures the respective amount of radiation absorbed from various angles.... The quantum world is fragile; error correction codes are needed to protect the information stored in a quantum object from the deteriorating effects of noise. Quantum physicists in Innsbruck have developed a protocol to pass quantum information between differently encoded building blocks of a future quantum computer, such as processors and memories. Scientists may use this protocol in the future to build a data bus for quantum computers. The researchers have published their work in the journal Nature Communications. Future quantum computers will be able to solve problems where conventional computers fail today. We are still far away from any large-scale implementation,... Pillared graphene would transfer heat better if the theoretical material had a few asymmetric junctions that caused wrinkles, according to Rice University... 15.11.2017 | Event News 15.11.2017 | Event News 30.10.2017 | Event News 17.11.2017 | Physics and Astronomy 17.11.2017 | Health and Medicine 17.11.2017 | Studies and Analyses
<urn:uuid:c93ec73b-7268-49db-99bd-57ef81132b57>
CC-MAIN-2017-47
http://www.innovations-report.com/html/reports/physics-astronomy/report-82269.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805242.68/warc/CC-MAIN-20171119004302-20171119024302-00123.warc.gz
en
0.929123
1,483
3.953125
4
The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun. These two threads—smarter people and smarter machines—will inevitably intersect. Just as machines will be much smarter in 2050, we can expect that the humans who design, build, and program them will also be smarter. Naively, one would expect the rate of advance of machine intelligence to outstrip that of biological intelligence. Tinkering with a machine seems easier than modifying a living species, one generation at a time. But advances in genomics—both in our ability to relate complex traits to the underlying genetic codes, and the ability to make direct edits to genomes—will allow rapid advances in biologically-based cognition. Also, once machines reach human levels of intelligence, our ability to tinker starts to be limited by ethical considerations. Rebooting an operating system is one thing, but what about a sentient being with memories and a sense of free will? It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability. Alan Turing and John von Neumann both contributed to the realization of computers whose program is stored in memory and can be modified during execution. This idea appeared originally in the form of the Turing Machine, and was given practical realization in the so-called von Neumann architecture of the first electronic computers, such as the EDVAC. While this computing design seems natural, even obvious, to us now, it was at the time a significant conceptual leap. Turing and von Neumann were special, and far beyond peers of their era. Both played an essential role in the Allied victory in WWII. Turing famously broke the German Enigma codes, but not before conceptualizing the notion of “mechanized thought” in his Turing Machine, which was to become the main theoretical construct in modern computer science. Before the war, von Neumann placed the new quantum theory on a rigorous mathematical foundation. AI research also pushes even very bright humans to their limits. The frontier machine intelligence architecture of the moment uses deep neural nets: multilayered networks of simulated neurons inspired by their biological counterparts. Silicon brains of this kind, running on huge clusters of GPUs (graphical processor units made cheap by research and development and economies of scale in the video game industry), have recently surpassed human performance on a number of narrowly defined tasks, such as image or character recognition. We are learning how to tune deep neural nets using large samples of training data, but the resulting structures are mysterious to us. The detailed inner workings of a complex machine intelligence (or of a biological brain) may turn out to be incomprehensible to our human minds—or at least the human minds of today. While one can imagine a researcher “getting lucky” by stumbling on an architecture or design whose performance surpasses her own capability to understand it, it is hard to imagine systematic improvements without deeper comprehension. Perhaps we will experience a positive feedback loop: Better human minds invent better machine learning methods, which in turn accelerate our ability to improve human DNA and create even better minds. The feedback loop between algorithms and genomes will result in a rich and complex world, with myriad types of intelligences at play: the ordinary human (rapidly losing the ability to comprehend what is going on around them); the enhanced human (the driver of change over the next 100 years, but perhaps eventually surpassed); and all around them vast machine intellects, some alien (evolved completely in silico) and some strangely familiar (hybrids). Rather than the standard science-fiction scenario of relatively unchanged, familiar humans interacting with ever-improving computer minds, we will experience a future with a diversity of both human and machine intelligences. There will also be many kinds of quantum computers. Currently there are over dozen approaches to quantum computing. There will be many kinds of neuromorphic machines. There will be optical computers. Many different approaches to computing will be useful for different kinds of problems. Superintelligence is not required to develop molecular nanotechnology. There is already advanced DNA nanotechnology and there has been some experiments proving the controlled movement of molecules. The fact that molecular nanotechnology has been underfunded for a couple of decades does not mean superintelligence is required to solve it or make it happen. Superintelligence is not required to solve climate change or air pollution. France has cleaner air than most other countries. They have 80% of their electricity from nuclear power. Europe also has stringent standards on their car engines which reduces the amount of particulates and air pollution. The dynamics and interaction of people is allowing problems to remain unsolved. This can be seen in the disfunction of the US political system. The solutions used in other countries for many problems could be adopted or emulated and improved upon. They also show that solutions exist. If we see the emergence of significantly superior superintelligent humans and superintelligent machines, it will be interesting to see what true surprises will be developed. SOURCE – Nautilus, Infoproc
<urn:uuid:3eaec760-e9ec-4ac1-a712-d8cc33650110>
CC-MAIN-2017-47
https://www.nextbigfuture.com/2015/09/smarter-humans-and-smarter-machines.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805541.30/warc/CC-MAIN-20171119095916-20171119115916-00124.warc.gz
en
0.947877
1,128
3.6875
4
| Part of the series on| Logic and rhetoric “”It is the mark of an educated mind to be able to entertain a thought without accepting it. |—Attributed to Aristotle| An open mind is a mind that is receptive to new ideas and information. It is often compared to a closed mind which will reject ideas without any consideration. While there is some philosophical validity to the distinction between open and closed minds, particularly in the case of empiricism, when used in an argument on the internet it's almost always a form of whining. Being told to be "open minded" about something — like being made to listen to Michelle Malkin for example — is usually a code for "you're not going to like this, but I think you should consider subjecting yourself to it anyway". Conversely, being told that you are "closed-minded" is generally a means of asserting that "I don't like the fact that you're proving me wrong, so I will pretend that your failure to agree with my argument is a philosophical deficiency". Being told you are "close-minded" simply shows that the one writing is confused about the difference between "open" and "far" (or is simply lazy in their writing). “”I have steadily endeavored to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as the facts are shown to be opposed to it. The scientific method demands open mindedness because it requires consistency with available data and evidence, regardless of where that leads. Sometimes evidence will lead to a conclusion that defies common sense, which an otherwise closed mind would have trouble with. Quantum mechanics is the most illustrative example of this process, as practically every aspect of quantum physics appears to defy ordinary common sense; duality, slit diffraction and quantum entanglement are all completely at odds with the way that we have evolved to view the world. Without the open-mindedness to reject our common sense view of the world, some of the most complex and counter-intuitive science would be off limits to human endeavour. To make any advances in science and technology, new ideas need to be (and are) presented constantly. Through open minded consideration of these new ideas and thorough studies of them, science can winnow out the bad ideas and keep the good ones. (Weak ideas shall perish!) A closed minded researcher, unwilling to consider alternatives to their own pet theory or hypothesis, will not advance very far or contribute much to science. Science is often accused of being "closed-minded". Almost without exception, this is code for "scientists won't blindly accept my crackpot idea". It's certainly not a matter of closed-mindedness and unwillingness that these ideas aren't accepted by the scientific community, because the individuals within it have a lot of motivation to prove each other wrong. In writing a scientific article, this is known as peer review; in experimental science, people must be able to replicate one's results, within the limits imposed by inherent limitation of equipment. There would be money and fame in a scientist showing, with the evidence to back it up and convince the rest, things such as the existence of ghosts, the (non-placebo) workings of homeopathy or even a young earth. Often proponents of woo cry that science hasn't taken them seriously and has refused to do the research, imploring scientists and skeptics to "have an open mind". This is blatantly false; science has very often tried out these ideas or trialed that piece of alternative medicine, it's just that the results have come back negative, or that the idea has been proposed and disproved before. Consequently, calling accusing someone of being closed-minded is very often — but not always — an indication that the accuser themselves is the one who is closed-minded. What it is not “”Suppose you are a chef, cooking soup for two hundred diners. You say to yourself ‘Well, I know if I put arsenic in this soup it’ll kill everyone. But hey! Gotta be open-minded!’ And you go ahead and add the deadly metalloid to the goats’ cheese crostini and float it atop the watercress and mint broth. Are you being open-minded or… just ignoring important information? All that open mindedness requires is that one considers an idea or proposal and does not reject it outright before any considerations or evaluations are made. Having an open mind does not mean accepting any new idea as true as soon as it is presented. If you consider an idea, and then reject it based on evidence or similar criteria, you do not have a closed mind. Lack of an "open mind", based on the misunderstanding between consideration and acceptance of new ideas, is therefore a common but groundless criticism of skeptics. Skeptical open-mindedness differs from credulous open-mindedness in that the skeptic has effective mechanisms for assessing ideas and rejecting those that are found wanting. Usually, and quite ironically, people who go around accusing others of being closed minded are actually more closed minded themselves. A conspiracy theorist, for example, may say "no one can convince me" that the World Trade Center wasn't blown up by the government" or a similar phrase. This is a clear example of a closed mind: a mind unwilling to accept that, perhaps, everything about an event was actually how it was officially presented—without aliens, reptilian humanoids, government plots, second gunmen, Illuminati, men in black, and so on. This archetypal conspiracy theorist will then go on to accuse anyone who rejects their crackpot ideas of being closed minded. This sort of thing is, unfortunately, extremely common. That bloody quote If you hang around in skeptic circles, sooner or later you'll hear (or more probably, read) a variation of the following: “”Keep an open mind – but not so open that your brain falls out. | —Origin unknown, attributed to (and used by) various people;| dates at least back to 1937. As Pedantic, Humorless Rationalists™, we'd like to point out that a mind is not a skull and the phrase doesn't make any sense. So, please, don't repeat it mindlessly. That other bloody quote “”A mind is like an umbrella — it works best when it's open. |—attrib. James Jeans, Max Gropius. Take your pick| The riposte to this annoying pseudo-aphorism is that an umbrella only needs to be open under certain circumstances. At other times, e.g. any time you're indoors, an open umbrella is a bloody nuisance. This quote works better if you remember that the function of an umbrella is to keep stuff away from you (commonly rainwater), not to collect stuff, and in that sense, it only works when it is indeed open (i.e., when the deflective device is in a position to actually deflect stuff you don't want touching you). The paradox of an "open mind" QualiaSoup performs a brief and educational examination of some of the flawed thinking that prompts people who believe in certain non-scientific concepts to advise others who don't to be more open-minded — although even QualiaSoup is apparently confused about having a closed (not open) versus a close (not far) mind. - Essay:Quantifying Openmindedness - A look at Andrew Schlafly's "open mindedness test". - If You Open Your Mind Too Much..., Tim Minchin's take on the question (Take His Wife!) - “Closed-minded”: the phrase that loses every argument by Martin Wagner, The Atheist Experience - Jonathan E. Adler's essay on the subject, over at the Committee for Sceptical Inquiry - "Quotes, Stories, and Bon Mots related to Pseudoscience & Mistaken Pronouncements". Scientific Review of Mental Health Practice. - Or, if your preferences run to vivid metaphors, the difference is that they will both take a taste of anything, within reasonable limits; but the credulous person will swallow what the skeptic will spit out. - Contrast the cynical phrase "I'll never believe it" with the skeptical phrase "I'll believe it when I see it" to see that there is nothing closed-minded about being a skeptic.
<urn:uuid:58dcef17-b975-42e7-ae23-d5d83c4862d5>
CC-MAIN-2017-47
https://rationalwiki.org/wiki/Open_mind
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805362.48/warc/CC-MAIN-20171119042717-20171119062717-00531.warc.gz
en
0.948913
1,756
3.609375
4
Focus: Detecting Photons With a Thermometer Detecting single photons is common practice with visible light, but it has proven much harder to do with lower-energy microwaves. A group of Finnish researchers has now built a small electronic circuit that detects microwave photons based on the heat they produce. In a demonstration, the device detected as few as 200 photons, which is 10 times more sensitive than previous thermally based photodetection techniques. The main motivation for measuring microwave photons comes from the field of circuit QED, where single microwave photons are corralled in a two-dimensional, metal structure on a microchip. The photons interact through their electric fields with tiny devices called superconducting qubits, which have discrete energy states like atoms for absorbing and emitting photons. These systems offer a unique way to study light-matter interactions and to perform quantum computer simulations. For certain experiments, researchers want a direct count of the number of photons propagating through the system. While microwave detectors are common (in cell phones, for example), they traditionally measure electric field amplitude and therefore can’t give a precise photon count. “We don’t have a way of measuring single photons in the microwave region,” says Joonas Govenius of Aalto University in Finland. Several strategies are being pursued to detect single microwave photons. A recent experiment succeeded using a superconducting qubit, but that technique requires closely matching the microwave frequency to the qubit . A more widely applicable approach is to use thermal photodetectors, in which incoming photons are absorbed by a thermal mass that exhibits an observable temperature increase. The thermal photodetector created by Govenius and his colleagues is an electrical circuit that at its core consists of a gold-palladium nanowire intersecting a series of superconducting islands arranged like railroad ties. Conceptually, the circuit can be separated into two pieces: a resistor and a resonator. The resistor is the thermal mass that absorbs the energy from an incoming microwave pulse, and the resonator is a thermometer that registers the heat input through a lowering of its resonant frequency. The team showed that they could read out the circuit’s temperature by using a low-power, “probe” pulse—the amount of probe energy absorbed was proportional to the temperature. Unfortunately, the temperature spike expected for a weak microwave pulse is too small to detect. To improve sensitivity to the smallest pulses, the researchers used a feedback effect between the probe pulse and the resonator. This effect requires increasing the probe pulse power and tuning its frequency to around 760 MHz, so that it is resonant with the thermometer circuit at relatively high temperatures. When the probe pulse is turned on, it starts to heat the resonator, causing the resonator to absorb more of the probe pulse and heat up even more. As a result of this feedback, the circuit is driven into one of two metastable temperature states. In this bistable condition, the circuit can be made extremely sensitive to any additional input. If a small microwave pulse arrives when the system is in the lower temperature state, the circuit will be pushed into the higher temperature state. In this case, the device no longer acts like a heat gauge but instead becomes a threshold detector that only triggers in the presence of a sufficiently energetic microwave pulse. In experimental trials, the team was able to detect the arrival of 8 GHz microwave pulses consisting of 200 photons (the equivalent of joules, or 1 zeptojoule) with a signal-to-noise ratio of roughly 1. To improve sensitivity toward the single-photon goal, Govenius says one could switch to a material with a smaller heat capacity, such as graphene. Another option would be to design the system to work at higher frequency, since 200 photons at 8 GHz is equivalent to a single photon at 1.6 THz. “This is excellent work that is pushing the frontiers of sensitivity for thermal detectors,” says Joel Ullom of the National Institute of Standards and Technology in Boulder, Colorado. Kunihiro Inomata of the RIKEN Center for Emergent Matter Science in Saitama, Japan, agrees that this is a significant advance, but he says it will be challenging to improve the sensitivity of these detectors while also avoiding problems from thermal noise. This research is published in Physical Review Letters. Correction (15 July 2016): The text was corrected to clarify that single microwave photons have been detected with other techniques. The caption was also revised to correctly identify the artist’s intensions. Michael Schirber is a Corresponding Editor for Physics based in Lyon, France. - K. Inomata, Z. Lin, K. Koshino, W. D. Oliver, J.-S. Tsai, T. Yamamoto, and Y. Nakamura, “Single Microwave-Photon Detector Using an Artificial -Type Three-Level System,” arXiv:1601.05513.
<urn:uuid:30b7932f-2cc1-4a68-a92f-184fd5952c4b>
CC-MAIN-2017-47
https://physics.aps.org/articles/v9/81
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806447.28/warc/CC-MAIN-20171122012409-20171122032409-00735.warc.gz
en
0.927038
1,038
3.65625
4
Are we alone? 1. We have strong evidence that that our solar system is not the only one; we know there are many other Suns with planets orbiting them. Improved telescopes and detectors have led to the detection of dozens of new planetary systems within the past decade, including several systems containing multiple planets. One giant leap for bug-kind 2. Some organisms can survive in space without any kind of protective enclosure. In a European Space Agency experiment conducted in 2005, two species of lichen were carried aboard a Russian Soyuz rocket and exposed to the space environment for nearly 15 days. They were then resealed in a capsule and returned to Earth, where they were found in exactly the same shape as before the flight. The lichen survived exposure to the vacuum of space as well as the glaring ultraviolet radiation of the Sun. Hot real estate 3. Organisms have been found living happily in scalding water with temperatures as high as 235 degrees F. More than 50 heat-loving microorganisms, or hyperthermophiles, have been found thriving at very high temperatures in such locations as hot springs in WyomingÕs Yellowstone National Park and on the walls of deep-sea hydrothermal vents. Some of these species multiply best at 221 degrees F, and can reproduce at up to 235 degrees F. Has E.T. already phoned home? 4. We now have evidence that some form of life exists beyond Earth, at least in primitive form. While many scientists speculate that extraterrestrial life exists, so far there is no conclusive evidence to prove it. Future missions to Mars, the Jovian moon Europa and future space telescopes such as the Terrestrial Planet Finder will search for definitive answers to this ageless question. To infinity, and beyond! 5. We currently have the technology necessary to send astronauts to another star system within a reasonable timespan. The only problem is that such a mission would be overwhelmingly expensive. Even the the unmanned Voyager spacecraft, which left our solar system years ago at a breathtaking 37,000 miles per hour, would take 76,000 years to reach the nearest star. Because the distances involved are so vast, interstellar travel to another star within a practical timescale would require, among other things, the ability the move a vehicle at or near the speed of light. This is beyond the reach of today's spacecraft -- regardless of funding. Fellowship of the rings 6. All of the gas giant planets in our solar system (Jupiter, Saturn, Uranus and Neptune) have rings. Saturn's rings are the most pronounced and visible, but they aren't the only ones. May the force be with you 7. In the "Star Wars" films, the Imperial TIE Fighters are propelled by ion engines (TIE stands for Twin Ion Engine). While these spacecraft are fictional, real ion engines power some of todayÕs spacecraft. Ion propulsion has long been a staple of science fiction novels, but in recent years it has been successfully tested on a number of unmanned spacecraft, most notably NASAÕs Deep Space 1. Launched in 1998, Deep Space 1 rendezvoused with a distant asteroid and then with a comet, proving that ion propulsion could be used for interplanetary travel. A question of gravity 8. There is no gravity in deep space. If this were true, the moon would float away from the Earth, and our entire solar system would drift apart. While itÕs true that gravity gets weaker with distance, it can never be escaped completely, no matter how far you travel in space. Astronauts appear to experience "zero-gravity" because they are in continuous free-fall around the Earth. 9. The basic premise of teleportation -- made famous in TVÕs "Star Trek" -- is theoretically sound. In fact, scientists have already ÒteleportedÓ the quantum state of individual atoms from one location to another. As early as the late 1990s, scientists proved they could teleport data using photons, but the photons were absorbed by whatever surface they struck. More recently, physicists at the University of Innsbruck in Austria and at the National Institute of Standards and Technology in Boulder, Colorado, for the first time teleported individual atoms using the principle of quantum entanglement. Experts say this technology eventually could enable the invention of superfast "quantum computers." But the bad news, at least for sci-fi fans, is that experts donÕt foresee being able to teleport people in this manner. Good day, Suns-shine 10. Tatooine, Luke Skywalker's home planet in the "Star Wars" films, has two Suns -- what astronomers would call a binary star system. Scientists have discovered recently that planets really can form within such systems. Double-stars, or binary systems, are common in our Milky Way galaxy. Among the more than 100 new planets discovered in recent years, some have been found in binary systems, including16 Cygni B and 55 Cancri A. (But so far, no one has found a habitable planet like Luke Skywalker's Tatooine.)
<urn:uuid:6b1af9d8-9199-4377-ab5c-0d74e8e1f0a1>
CC-MAIN-2017-47
https://www.nasa.gov/multimedia/mmgallery/fact_fiction_nonflash.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804019.50/warc/CC-MAIN-20171117223659-20171118003659-00338.warc.gz
en
0.937163
1,059
3.953125
4
When one hears about Quantum Cryptography, the first thought that comes to mind is, how can there be any relation between physics and codes? It actually appears to be one of the newest ideas in the cipher world to use physics and has been declared as the ultimate goal in security. In this short introductory text we will try to explain how these two, from first sight totally unrelated things fit together, how quantum cryptography works and what makes it so secure, and therefore important. What is Cryptography? Classical cryptography was always about constructing and analysing protocols in order to protect information against the influence of adversaries. Modern cryptography is composed of disciplines such as mathematics, computer science and electrical engineering. All it needs to ensure is the creation of a safe, complex and indecipherable code to third parties. With secret key cryptography, a single key is used for encryption and decryption. The sender uses the key to encrypt the plain text and sends it to the receiver. The receiver applies the same key to decrypt the message and recover the plain text. Cryptography includes everyday things like computer passwords, ATM cards, electronic commerce and much more. All of the current day classical computer cryptography are based on certain class of mathematical operations that are easy to perform in one direction but are extremely difficult in the opposite direction. Example of such a problem is prime number multiplication. It is very easy to multiply two prime numbers of any length (one direction). However, if you are given a long two million digits number and told that this number is a product of two primes, even with the help of modern computers it would take hundreds of years to find its constitutes-prime factors. This is the basis for the well known RSA (Rivest-Shamir-Adleman, 1977) cryptosystem , the importance of which is obvious since nowadays the internet is used by and provides essential communication between hundreds of millions of people. New Age Methods Differently from the classical version of cryptography which uses key, based on the assumption that there are no sufficiently fast mathematical algorithms for deciphering, quantum version of cryptography is based purely on the laws of quantum physics. Currently for deciphering, mathematical algorithms are based on computing power and brute force methods. Usually this kind of deciphering is not worth anything, since user can change the key frequently enough, so as to not to give enough time for decipherers to decrypt the key. If one decides to use faster computers and more advanced methods for decryption, another can just simply increase the length of the key used for encryption. When the idea of quantum computing became omnipresent, it soon became obvious that quantum computers could provide unprecedented ability to encrypt secret information. With the use of quantum, it is possible to create devices which allow detection of whether data transmission channel is being spied. Devices which are based on quantum physics phenomena, usually use one of the following: Heisenberg's uncertainty principle or quantum entanglement. In its modern form, the Uncertainty Principle tells that the measurement process itself is a part of physical system, and not a passive process, like it is in classical version of physics. The Uncertainty Principle implies that there exist such properties of particles which are not possible to measure exactly at the same time: measurement of one property will inevitably disturb the measurement of the other. Entanglement, on the other hand is a superposition of two or more particles when their states correlate. Entangled particles cannot be described by the use of states of individual particle. This can be used to exchange information in a way that cannot be seen when experimenting with single particle. Entanglement can be observed independently of how far particles are from one another. Based on these two phenomena, several quantum cryptography protocols were created. In the first method, bits of information are coded based on the polarization of photon and on the use of the Uncertainty Principle to try to prevent the eavesdropper (known as Eve) to steal and decipher the information. The second method uses entangled states of photon, and information is revealed only when the state of a photon is measured by Alice (sender) and Bob (receiver) . The correlation of quantum entanglement can not be explained simply using the concepts of classical physics. [caption id="attachment_419" align="aligncenter" width="494"] Every type of polarization can code one bit of information [/caption][caption id="attachment_420" align="aligncenter" width="296"] Quantum cryptography systems are safe against "Man-in-the-middle" attacks [/caption] Scheme of quantum cryptography known as BB84 protocol (Bennet&Brassard, 1984), uses pulses of polarised light. Two types of polarisation are used: linear and circular. Linearly polarised light can be vertically or horizontally polarised, whereas circularly polarised light can be left or right handed. Every type of polarisation can code one bit of information, for example horizontal polarisation := 1, left handed := 1, vertical := 0, right handed := 0. To generate a key, random sequence of vertically (or left handed) and horizontally (or right handed) light is sent through a channel with an equal probability in order to mislead a spy. Simple quantum cryptography protocol can be described as follows: 1. Light source creates light pulses of very low intensity. Then, Alice (sender) modulates polarization of these light pulses in a random order of one to four possible states described above. 2. Bob (receiver) measures polarization of photons received in a randomly selected bases (linear or circular). Here it should be noted that quantum systems are very fragile by their nature. Therefore Bob has only one chance to perform a measurement before a quantum state is destroyed. Investigation of non-destructive quantum state measurement techniques is currently very wide field, and in the future could have huge benefits in quantum cryptography. 3. Bob publicly announces what was the sequence of his bases used for measurements. 4. Alice publicly announces which bases were chosen successfully and are the same as sent by her when modulating light pulses. 5. Alice and Bob disregards results of incorrectly chosen bases. 6. Results are interpreted using binary system: horizontal or left handed polarization corresponds to 1, vertical or right handed polarization corresponds to 0. Entangled pairs scheme uses entangled states of photons. These photons can be generated by Alice, Bob and Eve. However, in any case photons should be distributed in such a way that Alice and Bob have one photon from each pair generated. Ideally correlated states can be created, such that when measuring polarization of correlated states Alice and Bob always get the opposite values. On the other hand, when measuring individually, result is always random: it is not possible to predict what will be the polarization of the next photon. These states have what is known as a non-locality property. Non-locality property does not have an analogue in classical physics. During communication, the results of measurements of states by Alice and Bob will correlate at some level, and if Eve tries to disrupt their connection she will disrupt the correlation, which can be easily detected. In other words quantum cryptography systems are safe against "Man-in-the-middle" attacks. Specifically, a pair of entangled photons has opposite rotational directions or spin states with the total spin of the system being zero. The important implication of this property is that the measurement of spin of one immediately gives the spin of the other. The measurement of any measurable property of a photon disturbs its state. This is the the measurement problem. However, this fact provides the advantage that the presence of an eavesdropper can be detected. Quantum computing has become a reality. And even though it is still in its infancy, there is already a threat of using classical cryptographic coding schemes because quantum tools could be able to quickly crack almost any code. In order to avoid this, we need new breakthroughs, new cryptography ideas, new tools. Quantum cryptography sounds like a solution. Currently there already exist few companies selling quantum key distribution systems, examples include IDQuantique and MagiQ. This type of technique provides a possibility of extremely safe data transmission, as well as avoiding any influence of third parties because the interference can not be overlooked and "Man-in-the-middle" attacks can be prevented. Seemingly it is fair to say that quantum future will bring us new, safer and more reliable tools for protecting our secrets and all this would be impossible without physics. R. Rivest, A. Shamir, L. Adleman, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, Communications of the ACM 21(2), 120-126 (1978), DOI:10.1145/359340.359342. G. Brassard, C. Crépeau, R. Jozsa, L. Denis, A Quantum Bit Commitment Scheme Provably Unbreakable by both Parties, FOCS IEEE, 362-371 (1993).
<urn:uuid:fafa767e-62d3-4604-8f13-84d9bc4f0158>
CC-MAIN-2017-47
http://jiaps.org/article/quantum-meets-cryptography.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805362.48/warc/CC-MAIN-20171119042717-20171119062717-00540.warc.gz
en
0.932565
1,835
3.546875
4
Their recent report* is a major step towards a capability to capture, cool and manipulate individual atoms of erbium, an element with unique optical properties that promises highly sensitive nanoscale force or magnetic sensors, as well as single-photon sources and amplifiers at telecommunications wavelengths. It also may have applications in quantum computing devices. The strongly counterintuitive technique of “laser cooling” to slow down atoms to very low speeds—temperatures close to absolute zero—has become a platform technology of atomic physics. Laser cooling combined with specially arranged magnetic fields—a so-called magneto-optical trap (MOT)—has enabled the creation of Bose-Einstein condensates, the capture of neutral atoms for experiments in quantum computing and ultra-precise time-keeping and spectroscopy experiments. The technique originally focused on atoms that were only weakly magnetic and had relatively simple energy structures that could be exploited for cooling, but two years ago a NIST team showed that the far more complex energy structures of erbium, a strongly magnetic element, also could be manipulated for laser cooling. The typical MOT uses a combination of six tuned laser beams converging on a point that is in a low magnetic field but surrounded by stronger fields. Originally, the lasers were tuned near a strong natural energy oscillation or resonance in the atom, a condition that provides efficient cooling but to only moderately low temperatures. In the new work, the research team instead used much gentler forces applied through a very weak resonance in order to bring erbium atoms to within a few millionths of a degree of absolute zero. Such weak resonances are only available in atoms with complex energy structures, and previously have been used only with a select group of non-magnetic atoms. When a strongly magnetic atom like erbium is used, the combination of strong magnetic forces and weak absorption of laser photons makes a traditional MOT unstable. To beat this, the NIST/UM team turned classic MOT principles on their heads. Rather than shifting the laser frequency towards the red end of the spectrum—to impact fast, high-temperature atoms more than slow, cold ones—they shifted the laser towards the blue side to take advantage of the effects of the magnetic field on the highly magnetic erbium. Magnetism holds the atoms stably trapped while the lasers gently pushed them against the field, all the while extracting energy and cooling them. The delicate balancing act not only cools and traps the elusive erbium atoms, it does it more efficiently. The team’s modified trap design uses only a single laser and can cool erbium atoms to within two millionths of a degree of absolute zero. By contrast, a conventional MOT only brings rubidium atoms to about one ten-thousandth of a degree. Erbium commonly is used in optical communications components for its convenient magneto-optical properties. The new trapping technique raises the possibility of using erbium and similar lanthanide elements for unique nanoscale magnetic field detectors, atomic resolution metrology, optical computing systems and quantum computing. Michael Baum | EurekAlert! NASA detects solar flare pulses at Sun and Earth 17.11.2017 | NASA/Goddard Space Flight Center Pluto's hydrocarbon haze keeps dwarf planet colder than expected 16.11.2017 | University of California - Santa Cruz The formation of stars in distant galaxies is still largely unexplored. For the first time, astron-omers at the University of Geneva have now been able to closely observe a star system six billion light-years away. In doing so, they are confirming earlier simulations made by the University of Zurich. One special effect is made possible by the multiple reflections of images that run through the cosmos like a snake. Today, astronomers have a pretty accurate idea of how stars were formed in the recent cosmic past. But do these laws also apply to older galaxies? For around a... Just because someone is smart and well-motivated doesn't mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching. That is the implication of a new study which shows for the first time that there is a broad range of differences in people's visual ability and that these... Computer Tomography (CT) is a standard procedure in hospitals, but so far, the technology has not been suitable for imaging extremely small objects. In PNAS, a team from the Technical University of Munich (TUM) describes a Nano-CT device that creates three-dimensional x-ray images at resolutions up to 100 nanometers. The first test application: Together with colleagues from the University of Kassel and Helmholtz-Zentrum Geesthacht the researchers analyzed the locomotory system of a velvet worm. During a CT analysis, the object under investigation is x-rayed and a detector measures the respective amount of radiation absorbed from various angles.... The quantum world is fragile; error correction codes are needed to protect the information stored in a quantum object from the deteriorating effects of noise. Quantum physicists in Innsbruck have developed a protocol to pass quantum information between differently encoded building blocks of a future quantum computer, such as processors and memories. Scientists may use this protocol in the future to build a data bus for quantum computers. The researchers have published their work in the journal Nature Communications. Future quantum computers will be able to solve problems where conventional computers fail today. We are still far away from any large-scale implementation,... Pillared graphene would transfer heat better if the theoretical material had a few asymmetric junctions that caused wrinkles, according to Rice University... 15.11.2017 | Event News 15.11.2017 | Event News 30.10.2017 | Event News 17.11.2017 | Physics and Astronomy 17.11.2017 | Health and Medicine 17.11.2017 | Studies and Analyses
<urn:uuid:84863598-428e-479f-a75c-7e81a4c12e75>
CC-MAIN-2017-47
http://www.innovations-report.com/html/reports/physics-astronomy/report-106946.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805923.26/warc/CC-MAIN-20171120071401-20171120091401-00344.warc.gz
en
0.915445
1,230
3.5
4
The latest news from academia, regulators research labs and other things of interest Posted: Apr 01, 2013 A solid state ultrafast logic gate on a photon (Nanowerk News) If you could peek at the inner workings of a computer processor you would see billions of transistors switching back and forth between two states. In optical communications, information from the switches can be encoded onto light, which then travels long distances through glass fiber. Researchers at the Joint Quantum Institute and the Department of Electrical and Computer Engineering are working to harness the quantum nature of light and semiconductors to expand the capabilities of computers in remarkable ways. All computers, even the future quantum versions, use logic operations or “gates,” which are the fundamental building blocks of computational processes. JQI scientists, led by Professor Edo Waks, have performed an ultrafast logic gate on a photon, using a semiconductor quantum dot. This research is described in the March 31 Advance Online Publication of Nature Photonics ("A quantum logic gate between a solid-state quantum bit and a photon"). Illustration of a CNOT gate with a semiconductor quantum dot and a photon. Photons are a proven transit system for information. In quantum devices, they are the ideal information carriers that relay messages between quantum bits (qubits) such as neutral atoms, ion traps, superconducting circuits, nitrogen vacancy centers, and of course the device used here: quantum dots. A quantum dot (QD) is a semiconductor structure that acts like an atom. This means it has allowed energy levels that can be populated and even shifted around using lasers and magnetic fields. Quantum dots are an attractive platform for quantum processing because they live inside a semiconductor material, thus the technology for integration with modern electronics already exists. The Waks team has implemented a conditional logic gate called a Controlled-NOT (CNOT). Here’s how a generic CNOT gate works: if a control qubit is in what we will call state 1, then the gate flips the state of a second qubit. If the control qubit is in state 0, nothing happens. Waks explains the importance of this gate, “Although this logic operation sounds simple, the CNOT gate has the important property that it is universal, which means that all computational algorithms can be performed using only this simple operation. This powerful gate can thus be seen as important step towards implementing any quantum information protocol.” In this experiment, a quantum dot plays the role of the control qubit. The second qubit is a photon that has two polarization states. Polarization can be thought of as an orientation of the traveling light waves. For instance, polarized sunglasses can shield your eyes from light having certain orientations. Here, photons can be oriented horizontally or vertically with respect to a defined direction. Just like energy levels for a quantum dot constitute a qubit, the two available polarizations make up a photonic qubit. Light is injected into a photonic crystal cavity (see sidebar in gallery) containing a quantum dot. Quantum dots have been trapped in photonic crystals before, but the difference here is an added large external magnetic field. The magnetic field shifts around the energy levels of the quantum dot enabling it to simultaneously act as both a stable qubit and a highly efficient photon absorber. Due to the unique energy level structure of the system, changing the qubit state of the quantum dot can render it completely invisible to the light. This property makes the CNOT gate possible (see figure above). Light trapped in a cavity that does not see a QD (QD in qubit state 1) will eventually leak out, with its polarization flipped. However, if the quantum dot is in qubit state 0, the light is strongly modified such that incoming and outgoing polarizations actually remain the same. In this case the photonic qubit is not flipped. A sensitive camera collects a fraction of the light that leaks back out of the cavity after its polarization is analyzed using special optics. Thus, the team can see if a photon’s polarization was flipped by the QD. The state of the QD qubit is not random: the team controls it. Another key feature of this protocol is that the photons are from an external laser and are not intrinsically connected to the QD through absorption/emission processes. “Using an external photon source has an advantage that the quantum dot state is not destroyed during the process. Currently, we use a strongly attenuated laser as the photon source, but eventually this can be replaced with true single photon sources,” says lead author Dr. Hyochul Kim. This quantum dot-photon gate happens in a flash--picosecond or 1/ trillionth of a second. Ultrafast gates are important when increasing the number of qubits and operations so that a calculation completes before the system’s quantum behavior is lost. (This is called decoherence--scientists can shield the qubit from the disruptive environment but every so often something sneaks in and destroys the quantum states.) The team’s proof-of-principle gate demonstration paves the way for the next generation of devices that will improve light collection and QD qubit coherence times. “To improve coherence time, we need to trap the electron or hole in the quantum dot and use their spin as a qubit. This is more challenging, and we are currently working on this,” Kim says. Additionally, they will use truly single photons as the light source. “Quantum dots are also excellent single photon sources. We consider such a system where single photons are periodically emitted from the neighbor quantum dot, which are then connected to logic devices on the same semiconductor chip,” adds Kim.
<urn:uuid:7d54bae1-367d-43bb-93cd-7d0827739322>
CC-MAIN-2017-47
https://www.nanowerk.com/news2/newsid=29804.php
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00346.warc.gz
en
0.915357
1,188
3.59375
4
May 12, 2011 A small firm based in Canada that aims to build a commercially viable quantum computer has shown that an important part of its technology works. D-Wave Systems, which was spun-out of the University of British Columbia in 1999, has shown that a technique called quantum annealing can be used to make eight coupled quantum bits – or qubits – find their ground state. According to the firm’s chief technology officer Geordie Rose, the announcement is the first of several scientific results that D-Wave will be unveiling – including one that he claims is "mind blowing". Based in Vancouver, D-Wave was set up with the aim of creating a quantum computer that uses loops of superconducting wire as qubits. As the electrical current circulating within such a "flux qubit" is quantized, the two lowest states (i.e. electrons travelling clockwise and anticlockwise) can be assigned data values of "0" or "1". The magnetic field associated with the currents is also quantized – pointing up and down for currents moving in opposite directions – and can be flipped using an external magnetic field. Resisting heat and noise Quantum computers could outperform a classical computer at some tasks – at least in principle – thanks to two key quantum properties. These are that a qubit can be in a superposition of two or more quantum states and that two or more qubits can be entangled. But the big challenge for D-Wave – and for everyone else trying to build a quantum computer – is how to create qubits and computing processes that are resistant to the destructive effects of heat and noise. Using flux qubits is attractive in that quest because they are macroscopic structures that can be created using semiconductor-manufacturing processes and can be controlled using applied currents and voltages. A downside is that they have a multitude of quantum states, not just two. The task for D-Wave is how to place each qubit in a well-defined and useful quantum state without it being corrupted by heat or noise – essentially the analogue of writing data to a classical computer. The method chosen by the firm to do this is called "quantum annealing" – and now D-Wave has shown that it can use this technique to place eight coupled qubits into the appropriate lowest energy state. The researchers began with eight superconducting flux qubits within one of D-Wave’s integrated circuits. These contain 128 flux qubits arranged into 16 units of eight. The system is then cooled to a temperature of 10 mK, which puts each qubit into a superposition of two quantum states with identical energy, i.e. current circulating anticlockwise (spin-up) and clockwise (spin-down). Raising the barrier This superposition is not, however, particularly useful and the next step is to manipulate each qubit into a pure spin-up or spin-down state. Each loop is broken by a structure containing two Josephson junctions and a magnetic coil. When a current is applied to the coil, an energy barrier rises between the spin-up and spin-down states. In a classical system, the loop would be forced into either the up or down state and could hop between states by absorbing heat from the surroundings. A qubit however, remains in a superposition of up and down as long as the barrier rises slowly enough. Each qubit has a second magnetic coil, which is used to "tip" the qubit into the desired pure state. If the field is applied in the up direction, for example, the energy of the spin-up state drops below that of the spin-down state, thereby making it more likely that the qubit will become pure spin-up. The problem facing D-Wave is that this transition occurs both by quantum-mechanical tunnelling and by absorbing heat (thermal excitation). Thermal excitation destroys the quantum nature of the qubit, and so must be avoided during quantum annealing. The two processes can be distinguished by raising the barrier until both tunnelling and heat-driven transitions stop (the qubit "freezes") – and then repeating this process at different temperatures. The research team found that below about 45 mK, freezing is affected primarily by barrier height and not temperature, which is what is expected if annealing occurs by tunnelling alone. The team then showed that it could anneal a unit of eight qubits. The researchers did this by adjusting the interactions between the qubits to simulate a 1D chain of magnets in which each qubit wants to point in the same direction as its two neighbours. The qubit at the right-hand end of the chain is set in the up direction and the qubit at the left-hand end in the down direction. The six qubits in the middle are then allowed to orient their spins according to that of their neighbours. The result is a "frustrated" ferromagnetic arrangement in which two neighbours must have opposing spins. Finally, the qubits are all tilted in the same direction while the barrier is raised. This should result in the system moving towards one specific arrangement of frustrated spins – the ground state. Again, below about 45 mK, the system found its way to the ground state in a manner consistent with the spins flipping because of quantum-mechanical tunnelling, not thermal activation. "We're very excited to see the remarkable agreement between what quantum mechanics predicts and what we see in these circuits," says D-Wave’s Mark Johnson, who was lead scientist on the project. Finding the ground state of an eight-spin system is a simple quantum calculation and therefore the D-Wave team has shown that its combination of hardware and annealing process is capable of the job. "Important" first step "This is the first time that the D-Wave system has been shown to exhibit quantum mechanical behaviour," says William Oliver of the Massachusetts Institute of Technology, who was not involved in the research. Oliver told physicsworld.com that when combined with D-Wave’s ability to control precisely important parameters of the qubits, this latest work is "a technical achievement and an important first step". Looking beyond quantum computing, David Feder of the University of Calgary also sees the system as an effective quantum simulation of how electron spins interact in magnetic materials. "This work describes a nice approach to simulating the (ferromagnetic or antiferromagnetic) quantum Ising model, and this is interesting in its own right," explains Feder. "I think that there is a lot of promise in the D-Wave architecture for simulating frustrated magnetic systems, and maybe more general strongly correlated systems, and this will benefit everyone. So, to me, it is a good step in the right direction." D-Wave currently employs about 60 scientists and engineers, of whom about 20 work on developing algorithms and 40 work on building hardware, according to Rose. This latest research was carried out by 25 of D-Wave’s employees along with researchers at the University of Agder in Norway and Simon Fraser University in Canada. "This is the first time we’ve been able to open up the black box and show how [D-Wave’s devices] are harnessing quantum mechanics in solving problems," says Rose. He told physicsworld.com that the firm now plans to do similar quantum-annealing experiments involving much larger numbers of qubits. He also says that the researchers will apply the process to "real problems" such as machine learning and artificial technology. Rose is adamant that D-Wave’s systems could be used in commercial settings as well as for doing basic research in quantum computing. "Our sales team is out selling at the moment," he says. According to Rose, the company will soon publish a number of journal papers about its research. However, he was unable to provide more details because the work is currently being peer-reviewed.
<urn:uuid:0e964c7d-3e01-4568-8e53-9e98b161e353>
CC-MAIN-2017-47
http://seqre.net/quantum-computing-firm-opens-box
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00546.warc.gz
en
0.956461
1,641
3.859375
4
lasers made easy Technology Research News Tight control of photons in the form of laser beams is a key ingredient in technologies ranging from the Internet and long-distance telephone lines to CD and DVD players. Tightly controlling atoms in similar ways could also have far-reaching impact. For several years researchers have been able to make groups of atoms behave like one atomic entity by chilling certain gases to just above absolute zero, and they have been able to produce beams of atoms by shining lasers at these Bose-Einstein Condensates. But the process requires a complicated combination of laser beams, magnetic fields and radio waves, and the cumbersome laboratory equipment involved makes it difficult to study coherent matter, let alone make useful devices out of it. A team of researchers at the Georgia Institute of Technology, however, has sidestepped the problem by finding a way to make condensed gas using Atom lasers could be used to deposit material atom by atom on a surface to, for instance, produce extremely fine lines on a computer chip. They could also make extremely sensitive measuring devices because atom waves, like light waves, can interfere with each other and the interference patterns are affected by tiny changes in forces like acceleration and gravity. Condensed atoms could also open the way for quantum mechanically linking thousands of atoms, which could yield extraordinarily powerful quantum "Researchers have been trying to achieve atomic Bose-Einstein Condensates using all-optical techniques for about 15 years," said Michael Chapman, an assistant professor of physics at Georgia Tech. "What we showed is that not only is it possible, it's downright easy. Better yet, the technique is faster than the magnetic trapping techniques," he said. The researchers trapped 30 million rubidium atoms in three intersecting low-power laser beams, then transferred the atoms to a trap made of two intersecting high-power laser beams. The transfer left 2 million atoms in the second laser trap. The researchers allowed many of those atoms to evaporate out of the trap, leaving 660,000 much colder atoms, then decreased the power of the lasers, which caused a second round of evaporative The researchers were able to make this last step happen in about 2 1/2 seconds, which was fast enough for the remaining 3,500 atoms to form a "This is a marvelous piece of work. It is significant because it highlights an efficient and robust route to the production of Bose-Einstein condensed atoms, or atom lasers," said Mark Kasevich, an associate professor of applied physics at Yale University. Previous Bose-Einstein Condensate experiments trapped atoms with large magnets and cooled them by generating a radio frequency electric field, said Michael G. Moore, a physicist at the Harvard-Smithsonian Center for The Georgia Tech experiment replaced both with a commercial carbon dioxide laser. "The increase in simplicity is therefore enormous. The decrease in cost is probably quite significant as well," he said. The all-optical techniques for producing Bose-Einstein Condensates is a significant step toward using condensed matter in practical devices, said Moore. The Georgia Tech researchers plan to experiment with using the laser-produced Bose-Einstein Condensates for quantum computing, said Chapman. One problem in quantum computing is information transfer. Atoms are useful for storing and manipulating quantum information but are difficult to transport, while photons are hard to store but could be used to transfer quantum information within and between quantum computers. "A particularly intriguing possibility is to combine the condensates with optical cavities, which are two facing mirrors that trap photons, to exchange quantum information between the photons and atomic condensates," Chapman said. It it is likely to be more than 10 years before Bose-Einstein Condensates are used in practical applications, said Chapman. Chapman's research colleagues were Murray B. Barrett and Jacob A. Sauer of Georgia Tech. They published the research in the July 2, 2001 issue of the journal Physical Review Letters. The research was funded by the National Security Agency and the Advanced Research and Development Activity, which is a joint NSA-Department of Defense funding organization. Timeline: >10 years TRN Categories: Optical Computing, Optoelectronics and Photonics; Quantum Computing Story Type: News Related Elements: Technical paper, "All-Optical Formation of an Atomic Bose-Einstein Condensate," Physical Review Letters, July Atom lasers made easy Molecule makes mini memory Does heavy volume smooth Net traffic? Mind game smooths for chipmaking confirmed Research News Roundup Research Watch blog View from the High Ground Q&A How It Works News | Blog Buy an ad link
<urn:uuid:d393a32d-f220-42dd-8847-f1abbdaef701>
CC-MAIN-2017-47
http://www.trnmag.com/Stories/081501/Atom_lasers_made_easy_081501.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808254.76/warc/CC-MAIN-20171124142303-20171124162303-00347.warc.gz
en
0.902806
1,032
3.90625
4
Credit: Courtesy of F. Brandão The Power of Entanglement: A Conversation with Fernando Brandão Computers are a ubiquitous part of modern technology, utilized in smartphones, cars, kitchen appliances, and more. But there are limits to their power. New faculty member Fernando Brandão, the Bren Professor of Theoretical Physics, studies how quantum computers may someday revolutionize computing and change the world's cryptographic systems. What do you do? My research is in quantum information science, a field which seeks to merge two of the biggest discoveries of the last century: quantum mechanics and computer science. Particularly, I am interested in studying quantum entanglement. Entanglement is a special kind of correlations only found in quantum mechanics. We are all familiar with the concept of correlations. For example, the weather in Southern California is pretty well-correlated from one day to the next—if it is sunny today, it will likely be sunny tomorrow. Quantum systems can be correlated in an even stronger way. Entanglement was first seen as a weird feature of quantum mechanics—Einstein famously referred to it as a "spooky action at a distance." But with the advancement of quantum information science, entanglement is now seen as a physical resource that can be used in information processing, such as in quantum cryptography and quantum computing. One part of my research is to develop methods to characterize and quantify entanglement. Another is to find new applications of entanglement, both in quantum information science and in other areas of physics. What is a quantum computer? At the most basic level, computers are made up of millions of simple switches called transistors. Transistors have two states—on or off—which can be represented as the zeroes or ones that make up binary code. With a quantum computer, its basic building blocks (called qubits) can be either a one or a zero, or they can simultaneously exist as a one and a zero. This property is called the superposition principle and, together with entanglement and quantum interference, it is what allows quantum computers to, theoretically, solve certain problems much faster than normal, or "classical," computers could. It will take a long time until we actually have quantum computers, but we are already trying to figure out what they can do. What is an example of a problem only solvable by a quantum computer? It is a mathematical fact that any integer number can be factored into the product of prime numbers. For example, 21 can be written as 3 x 7, which are both prime numbers. Factoring a number is pretty straightforward when it is a small number, but factoring a number with a thousand digits would actually take a classical computer billions and billions of years—more time than the age of the universe! However, in 1994 Peter Shor showed that quantum computers would be so powerful that they would be able to factor numbers very quickly. This is important because many current cryptographic systems—the algorithms that protect your credit card information when you make a purchase online, for example—are based on factoring large numbers with the assumption that some codes cannot be cracked for billions of years. Quantum computing would change the way we do cryptography. What got you interested in quantum information? During my undergraduate education, I was looking online for interesting things to read, and found some lecture notes about quantum computation which turned out to be by Caltech's John Preskill [Richard P. Feynman Professor of Theoretical Physics]. They are a beautiful set of lecture notes and they were really my first contact with quantum information and, in fact, with quantum mechanics. I have been working in quantum information science ever since. And now that I'm on the Caltech faculty, I have an office right down the hall from Preskill! What is your background? I am originally from Brazil. I did my bachelors and masters degrees there in physics, and my PhD at Imperial College London. After that, I moved among London, Brazil, and Switzerland for various postdocs. Then I became faculty at University College London. Last year I was working with the research group at Microsoft, and now I am here at Caltech. The types of problems I have worked on have varied with time, but they are all within quantum information theory. It is stimulating to see how the field has progressed in the past 10 years since I started working on it. What are you particularly excited about now that you are at Caltech? I can't think of a better place than Caltech to do quantum information. There are many people working on it from different angles, for example, in the intersection of quantum information and condensed-matter physics, or high-energy physics. I am very excited that I get to collaborate with them. What do you like to do in your free time? I used to go traveling a lot, but six months ago my wife and I had a baby, so he is keeping us busy. Along with work and exercise, that basically takes up all my time.
<urn:uuid:602f09d9-b736-41f6-a81f-a7afd5b2330b>
CC-MAIN-2017-47
http://www.caltech.edu/news/power-entanglement-conversation-fernando-brand-o-50770
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805761.46/warc/CC-MAIN-20171119191646-20171119211646-00147.warc.gz
en
0.958757
1,029
3.640625
4
Quantum computers promise to revolutionise the digital world, but how do you tell if a computer really is harnessing the power quantum mechanics? It’s a question that has plagued the only computer manufacturer claiming to produce quantum-powered machines – D-Wave Systems of Burnaby in British Columbia, Canada – since they went on sale. Today, the publication of further inconclusive tests of the machines is a reminder of just how difficult it is to get an answer. We explain why it’s so hard to test a quantum computer – and whether we’ll ever get an answer to the D-Wave conundrum. What is quantum computing and why should I care? Quantum objects can be in multiple states at once, a property known as superposition. This means a quantum bit (qubit), the basic unit of information in computing, can be both a 0 and a 1 at the same time. Theoretically a computer with a large number of these qubits should be able to complete certain tasks, such as factoring numbers or searching large databases, much faster than their ordinary equivalent. Has anyone built a quantum computer? Labs around the world have built devices with a handful of working qubits, but they wouldn’t even put a pocket calculator out of business: one of the most impressive results to date is factoring 21 into 3 and 7. Meanwhile, several years ago, D-Wave burst on to the scene, offering up its machines for sale. But despite high-profile customers – including US defence firm Lockheed Martin and Google, which operates its D-Wave machine in partnership with NASA – there are still questions about whether the machines really count as quantum computers. They rely on an alternative theory called adiabatic quantum computing and no one knows whether the theoretical quantum speed-up this provides can be translated to real-world machines. So is it quantum or not? D-Wave has demonstrated that its machine behaves in a quantum way and that it can compute things, but the jury is still out on whether it is actually using quantum mechanics to hasten its calculations. “Nobody knows whether it works. It is a totally high-risk, speculative project,” says Matthias Troyer of ETH Zurich in Switzerland. “If it pays off, it is a huge breakthrough.” Earlier this year, Troyer’s team released results from tests of a D-Wave Two machine that suggested there was no quantum speed-up. Today, these results are published in Science (DOI: 10.1126/science.1252319). Wait, why would anyone buy a computer if they don’t know that it works as claimed? D-Wave does not publically list the cost of its computers, but they are thought to be $10 to $15 million – a drop in the bucket for a multibillion dollar company like Google. Essentially D-Wave’s customers and investors are hoping to get in on the ground floor of a computing revolution. Can’t you just test whether it runs faster than a regular computer? Ye-es, but first you have to figure out what kind of test to run. It has to be a fair fight – one D-Wave-sponsored test that showed apparent gains was later criticised for pitting a specialised quantum algorithm against unoptimised software on an ordinary PC. What other aspects are necessary for a useful comparison? The test also has to involve a problem where being quantum actually gives you an advantage. D-Wave computers solve problems in a process similar to exploring a hilly landscape where the lowest points corresponds to the best solution. While an ordinary computer is forced to climb up and over the hills to find the low points, a quantum machine can simply tunnel its way through. The trouble is that many test problems aren’t challenging enough, leading some to suggest that the reason D-Wave didn’t show a quantum speed-up in some tests – such as Troyer’s – isn’t because it is not able to deliver a better performance, but rather because the test didn’t force it too. “The D-Wave machine would rather use classical resources instead of quantum,” suggests Vadim Smelyanskiy of NASA’s Quantum Artificial Intelligence Laboratory in Mountain View, California, which hosts the Google-purchased computer. D-Wave claims that is the case with Troyer’s test. “Those problems are simply too easy,” says Colin Williams of D-Wave. Will this one ever be resolved? Smelyanskiy is currently working with others at NASA and Google to develop tests he hopes will put the machine through its paces, which he presented at the Adiabatic Quantum Computing conference in Los Angeles last week. “You want to construct those tall mountains and absolutely be sure that there is no way around,” he says. “For those problems, we will be able to see if the machine really is forced to do something quantum.” What happens if the D-Wave machines do strut their quantum stuff? Even if they do eventually demonstrate a quantum speed-up, they won’t be as good as fully quantum computers, which are still at least 15 to 25 years away, says Smelyanskiy. He says the comparison is similar to the way Charles Babbage’s 19th century analogue difference engine, the precursor to today’s computers, measures up with your current PC. At the moment, D-Wave’s machine is more like a rusty difference engine that doesn’t seem to work properly, but if D-Wave can clean it up, the results could be impressive. “It will probably have the same impact on mankind as the real difference engine had,” he says. When can I have my own personal quantum computer? Whatever the future of quantum computers, don’t expect to own one yourself. “This will be a special-purpose device that can solve a limited set of problems much better than a classical one, but it will never be a general purposes machine like your laptop or your iPhone,” says Troyer. “It’s not what we’ll have at home in the future.” More on these topics:
<urn:uuid:d550675a-b625-43cb-93bd-8b2c345f20cd>
CC-MAIN-2017-47
https://www.newscientist.com/article/dn25760-commercial-quantum-computer-still-awaits-ultimate-test/?cmpid=RSS%2525257CNSNS%2525257C2012-GLOBAL%2525257Ctech
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00348.warc.gz
en
0.951887
1,315
3.515625
4
Scientists Achieve Breakthrough in Polariton Laser Technology An international scientific team has modeled and conducted an experiment in the course of which they have managed to produce an electrically pumped spin-polarized polariton laser. This allows for a reduction in the laser’s energy consumption levels and control over the output polarization. This is achieved thanks to the use of magnetic materials in the device’s contacts: the electrons that come into contact with the laser have a preferred spin direction, which allows for effective spin pumping. Polariton lasers are very promising for the very reason that they do not require high amounts of energy. In addition, they work at room temperatures. Due to this, they can be used in portable electronics, optical computers and communication devices. Results of the experiment have been published in Physical Review Letters. The primary advantage of polariton lasers is their low energy consumption. A regular laser is based on the induced radiation effect: if a high-energy excited electron is hit by a photon, the electron “falls” into a low-energy state, producing two photons that are identical to the original one. A cascade of such processes results in the formation of a large number of identical, coherent photons that form the laser emission. To generate a laser beam, the population inversion condition has to be met: the electron density on the high-energy level must be higher than on the low-energy level. In this way, it is important to “pump into” the system an amount of additional energy that is required to transfer enough electrons onto the high-energy level. The minimum amount of energy required for the formation of laser radiation is referred to as the lasing-action threshold: this value, among other things, determines a laser’s minimum energy requirement. There is also another process which concerns the type of emission that is called spontaneous: in that case, an electron on the high-energy level can, at a random time, emit a photon and fall into a low-energy state. The issue with spontaneous emission is that the exiting photons tend to be incoherent, with a random phase. This issue can be avoided if all the electrons are put into the same quantum-mechanical state, which would also cause all exiting photons to be identical. Unfortunately, this cannot be done with electrons, as they cannot assume the same quantum-mechanical state according to the so-called Pauli Exclusion Principle, which states that two or more identical fermions (particles with half-integer spin) cannot occupy the same quantum state within a quantum system simultaneously. A polariton laser setup. Credit: bibo.kz In polariton lasers, this limitation is overcome in the following manner: electrons in such systems, by interacting with each other and the light, form composite particles – exciton polaritons. These particles have a full spin, and are thus left unaffected by the Pauli Principle; at low temperatures, they can assume the same quantum-mechanical state. Such a state of matter in which a large fraction of particles occupy the same quantum state is referred to as the Bose-Einstein condensate. If polaritons can disappear from the condensate while spontaneously emitting photons that exit through the cavity face of a laser, the laser output will be coherent just like in a regular laser. Except that the lasing-action threshold does not need be reached any longer! In real life, of course, some amount of energy will still be required, but it will be substantially (by several magnitudes) less than for what is needed for regular, semiconductor-based lasers. The first polariton lasers have been built in early 00’s; they worked at ultra-low temperatures of several Kelvin and had to be pumped by another laser. In the recent years, both of these issues have been solved: in 2013 an electrically pumped polariton laser that could operate at room temperature was demonstrated. The last remaining issue is that of controlling the polarization of the emission. “In a polariton laser, two Bose-Einstein condensates tend to form: one with upward-directed spin polaritons, and another with downward-directed ones. Both are emitting independently: as a result, the emitted light’s polarization is linear, and its direction is random. If we could manage to mostly pump one of the condensates, it would allow us to receive a stable, circularly-polarized emission and to also lower the lasing-action threshold, further reducing the laser’s energy consumption. Such spin-selective pumping is quite easy to implement when dealing with optics, but electric spin-polarized pumping has not yet been done,” – comments Ivan Iorsh, one of the article’s authors and an associate professor at ITMO University’s Department of Nanophotonics and Metamaterials. This is exactly what was accomplished by the international team that also included physicists from the University of Michigan, Nanyang Technological University, University of Southhampton and St. Petersburg State University. The researches have used a ferromagnetic material as a contact in their laser setup, which was used to create a magnetic field. Electrons that entered this system had their spin polarization defined by the ferromagnetic material, which they passed on to the polaritons that formed a condensate. This led to a stable, elliptical polarization of the resulting emission and the lowering of the threshold. By controlling the spin using a magnetic field, it is also possible to manipulate the light’s polarization. This means that optical signals can be coded through electrical ones. In that case, the direction of polarization would substitute the ones and zeroes – such a setup can be implemented on a microchip with low power consumption that will work at room temperature. The results of this research were showcased in an experiment at the University of Michigan. The team from ITMO University and St. Petersburg State University modeled the system. “The experiment has fully confirmed the behavior predicted by our modeling. It’s always amazing to see an experiment confirm a theoretical prediction. The discovered effect is very important for spintronics – the science of coding information not through electrical charges, but through the spin. The main issue there is the inevitable spin relaxation – the loss of spin polarization by electrons due to their interactions with the crystal grating. We have demonstrated the opposite effect – the increase and amplification of spin polarization, which could open up completely new opportunities for application in devices,” – comments Alexey Kavokin, head of the Spin Optics Laboratory at St.Petersburg State University. He added that another promising area of development in science that is related to polariton lasers is that of quantum simulators based on polariton condensates. Researchers are currently racing to create the first quantum processor. Google’s artificial intelligence laboratory has collected 49 qubits, Mikhail Lukin’s teams at Harvard – 51. The hundred-qubit threshold will likely be crossed in the coming months. Still, the practical application of such systems is highly limited: Google’s superconductor-based processor only works at ultra-low temperatures (less than one degree Kelvin), while Mikhail Lukin’s qubits are based on cold atoms, which can only be kept together in lab conditions. “In this context, polaritons offer an alternative platform for quantum computations. It is a semiconductor platform, and those are relatively cheap and are easy to integrate into the existing processors. And the main thing that our work with our colleagues from Michigan has shown is that polariton condensates are fine with room temperatures. I’m sure that a semiconductor platform for quantum technology can be created in Russia in a short time. We might even come out ahead of Google!” – adds Kavokin. The scientist notes that, per his opinion, in the next two or three years polariton lasers will likely see practical application. This is mostly related to their use in the creation of macroscopic multiparticle wave functions at room temperatures. The use of polariton lasers in quantum computing is also a promising venture. Reference: Room Temperature Spin Polariton Diode Laser, Aniruddha Bhattacharya, Md Zunaid Baten, Ivan Iorsh, Thomas Frost, Alexey Kavokin, Pallab Bhattacharya, 2017, Physical Review Letters.
<urn:uuid:c61e59be-fd48-4a8b-baee-c99437ad77e5>
CC-MAIN-2017-47
http://news.ifmo.ru/en/science/photonics/news/6919/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804724.3/warc/CC-MAIN-20171118094746-20171118114746-00548.warc.gz
en
0.935585
1,741
3.640625
4
Often in physics, new discoveries are made by improving the sensitivities of measurements, such as the recent example of the gravitational wave detector. One way to improve the sensitivities for the measurement and transduction of physical forces is cavity optomechanics. Cavity-optomechanics is an interdisciplinary area of mechanical engineering, electrical engineering, optics and quantum physics.It emerged as an independent field of its own only very recently, and utilizes the interaction between mechanical motion and light. Recently featured as the ‘milestone of photon history’ in nature photonics, cavity optomechanics is also one of the chosen fields of interest for Dr. Vibhor Singh, Assistant Professor at the Department of Physics, Indian Institute of Science, Bangalore. Dr. Vibhor has worked extensively in nanomechanical systems during his graduate as well as post doctoral career and has recently joined IISc. He is currently setting up an experimental laboratory to explore various nanomechanical and optomechanical systems. Cavity optomechanics is about utilizing the interactions between light and motion to control and manipulate their quantum states. Light carries momentum and hence can produce radiation pressure force. Each photon, a particle of light, bouncing off a mirror, for example, imparts some momentum to the mirror. Yet, in our everyday experience, we do not observe mirrors moving when reflecting light. This is because the transferred momentum is small compared to the size of the mirror. However, by increasing the light intensity and using lighter mirrors, the effects of the light radiation can be magnified. Arrangement of mirrors in a particular way can be used to amplify light by letting the photons from the light source bounce back and forth between the mirrors. In such a system, if one of the mirrors is made extremely light (micrometer thickness or lower) and movable, we obtain an “optomechanical resonator” where the radiation forces of the reflected photons cause mechanical movement of the mirror. A feedback mechanism kicks into action since a change in the mirror position affects the length of the cavity which in turn changes the intensity of light inside it. Thus, the radiation pressure force on the mirror is dependent on its position, leading to a coupling between the optical and mechanical modes. There are several different implementations of such interaction with variations in the size and material of the movable mirror, its placement and the frequency of light used. So, what are these optomechanical systems good for? As it turns out, there are quite a variety of tricks to be played with this system. First off is the one involving the infamous Schrödinger’s cat, the popular face of quantum theory. The Schrödinger’s cat refers to the postulate of quantum theory that quantum states can exist in a superposition of various states at the same time. But, such quantum superpositions are observable only in microscopic systems isolated from outside interference since interactions with the environment destroy the superposition. For example, in real life, we do not see a cat being simultaneously dead and alive, like the hypothetical Schrodinger’s cat is capable of. Objects seem to lose their ‘quantumness’ once we are in the macroscopic realm. The mechanism of this decay of quantum states into normal classical states, termed decoherence, is of interest from a fundamental physics point of view. This is where optomechanical resonators step in by presenting us with quantum control over the motion of macroscopic objects (the mirror) via the optical field, thus enabling researchers to prepare superpositions of macroscopic mechanical states. Dr. Vibhor’s lab aims to take advantage of this capability to carry out fundamental tests of quantum theory. These experiments will be carried out at very low temperatures to remove thermal as well as Brownian motion of mechanical systems thus enabling the study of purely quantum mechanical effects. Dr. Vibhor is also interested in pursuing the potential applications of optomechanical resonators in quantum information technology. Optomechanical resonators act as efficient transducers – devices effecting interconversion between different types of signals – because of the versatility of both the light field and the nanomechanical oscillations to couple to a variety of systems. Such transducers are required in today’s ‘hybrid circuits’ which aim to integrate physical systems such as light and sound along with electronic components in circuits, and specifically find applications in quantum computing. Quantum computing refers to computing carried out by exploiting quantum mechanical effects.In the hunt for physical systems that can be used as qubits or quantum bits (which are the building blocks of quantum computing just as the two state bits are the basis of classical computing), superconducting circuits have emerged as a worthy competitor. Due to inherent mechanical compliance, optomechanical systems have the potential to act as a transducer to convert quantum information from a superconducting qubit to the optical photons which are good information careers over long distances. Just as optical cables connect the nodes of today’s networks, such interfaces are necessary in connecting the quantum nodes of a future quantum internet. In charting the future course for his laboratory, Dr. Singh envisions a lab exploring the field of cavity optomechanics from both a fundamental physics as well as an application point of view, especially looking at various implementations in quantum information technology. About the scientist: Dr. Vibhor Singh is currently an assistant professor at the Department of Physics, Indian Institute of Science, Bangalore.
<urn:uuid:d8e7cb68-85ce-4295-82e5-13390276138f>
CC-MAIN-2017-47
http://iisc.researchmedia.center/article/cavity-optomechanics-union-engineering-and-quantum-physics
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/warc/CC-MAIN-20171120203833-20171120223833-00759.warc.gz
en
0.936205
1,139
3.765625
4
In a quantum computer, information is stored not as a string of ones and zeroes, but as a series of quantum-mechanical states: spin directions of electrons, for instance, or polarization orientations of a photon. In 1985 David Deutsch of the University of Oxford pointed out that quantum physical law allows particles to be in more than one state at a time, making it possible for each particle in a quantum computer to hold more than one bit of information. (In this field, the term "bit" is replaced by "qubit," meaning quantum bit.) A computer containing, say, a hundred particles could execute a computation on 2100 numbers at once. The ability to crunch many numbers at the same time--known as massive parallelism--would make quantum computers ideal for some basic computing tasks, such as factoring large numbers. Two years ago, Peter W. Shor of AT&T Bell Labs presented an algorithm showing exactly how a quantum computer would carry out such task. But there is much more to quantum computers than breaking down large numbers. At last May's ACM Symposium on Theory of Computing, Lov K. Grover, also at Bell Labs, announced a more down-to-earth application: a crafty algorithm that, building on Shor's ideas, would allow a quantum computer to make lightning-fast searches through a database. In this scheme, each item in the database would be represented by a quantum state of a particle in the computer. Relying on the inherently fuzzy laws governing those particles, Grover's algorithm would enhance the state in the system corresponding to the desired item and suppress the others. Rather than slogging dumbly through a list, the algorithm operates on all of the particles at once, so it could far exceed the speed and efficiency of a classical computer. Such list-searching ability could have applications in many other tasks that require finding the optimal member of a set. And a combined talent for factoring and searching may make quantum computers ideal tools for cracking codes (including the Data Encryption Standard, the official U.S. government cryptographic standard). Another exciting role for quantum computers involves turning them back on their own world and using them to simulate other quantum-mechanical systems--the behavior of quarks in an atomic nucleus, for instance, or electrons in a superconductor. Seth Lloyd of the Massachusetts Institute of Technology is one of the leading researchers working on concrete ways to realize this exotic idea. In essence, the quantum behavior of one set of particles would act as a proxy for that of a different system, bypassing the extraordinarily complex rules of simulation that normally would need to be programmed into a computer. While nobody is denying the vast potential of quantum computers, even the most ardent enthusiasts are sobered by the obstacles that must be overcome before usable devices can be built. The greatest of these is that the slightest outside disruption--heat or light, for instance--can destroy the balance of quantum states that stores information and makes the computing possible. In technical terms, the system loses its quantum coherence. The very process of reading the state of a qubit can upset the coherence, so retrieving the result of a calculation poses a tough challenge. Even if the system does not fall apart, quantum computers will naturally tend to accumulate errors; the kinds of error-correction schemes developed for classical computers do not translate to the subatomic realm. Here too, however, there has been substantial recent progress. Shor is working on a method whereby each piece of information is spread, or entangled, over several qubits. In this way, the erroneous decay of one of the quantum states will not lose the information. Of course, using additional qubits trades off some efficiency. Shor's original scheme involved using nine qubits. More recently, Raymond Laflamme and his colleagues at Los Alamos National Laboratory have derived an error-correction technique that requires only five qubits. Shor is also studying how much error is allowable before it taints the results from quantum computers; in essence, proponents of quantum computing are trying to reinvent from the ground up all of the basic logic problems that other computer scientists have developed since the days of ENIAC, the ancestor of the modern electronic computer. And the programmers working on ENIAC had a significant advantage over Shor and his ilk: they at least had a physical device to work with. Researchers at the National Institute of Standards and Technology, led by David J. Wineland, and a team headed by H. Jeff Kimble at the California Institute of Technology have made some headway in constructing real quantum systems that function as crude logic gates--sort of nano-transistors. These are only the first baby steps toward a full, workable quantum computer. (Click here to view a schematic of CIT's experimental setup.) Nevertheless, many people are betting the technical hurdles are manageable. Researchers at M.I.T., Caltech and the University of Southern California have banded together to form the Quantum Information and Computing institute. The Defense Department's Advanced Research Projects Agency (ARPA) is providing a five-year, $5-million grant--a skinny slice of the total defense R&D pie, but a sign of faith that quantum computing will eventually find a place in our macroscopic lives.
<urn:uuid:402c574f-4ac3-459c-82a2-e5794e1cd2fb>
CC-MAIN-2017-47
https://www.scientificamerican.com/article/subatomic-logic/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809392.94/warc/CC-MAIN-20171125032456-20171125052456-00760.warc.gz
en
0.931662
1,081
3.84375
4
Quantum physics—the laws that govern the behavior of smallest components of our universe, such as fundamental particles, atoms and molecules—is admittedly a tough subject, a complicated path of intricate mathematics and scientific theory. Those outside the field who brave the journey often find themselves in a confusing place where the classical principles they learned in school no longer apply and the new rules seem…well…a bit unbelievable. In the quantum world, things can be in two places at once? Better yet, they can be two things at once? What??? If this has been your experience, don’t worry—you’re in very good company. Respected scientists, including Albert Einstein, felt the same way, and made many attempts to prove that these strange new theories couldn’t be correct. Each attempt, however, failed, and instead reinforced the reality of quantum physics in contrast to our conventional intuition. But this is good news—the properties buried in quantum theory hold great promise for exciting, real-world applications. So how do we make sense of these bizarre new rules? What really makes quantum physics so different, so strange, and so promising? To start, let’s take a look back to 1900 and the work of physicist Max Planck, who first drew back the curtain on the mysterious quantum world. That year, Planck was embroiled in a nagging physics problem—how to explain the radiation of light emanating from hot objects. At the time, there were two conflicting laws, neither of which was quite right. Sandwiching visible light on the electromagnetic spectrum are infrared waves, which have longer wavelengths and a lower frequency, and ultraviolet waves, which have shorter wavelengths and a higher frequency. One law—Wien’s law—could accurately predict the experimental results of ultraviolet waves, but fell apart when it came to infrared waves. Conversely, the Rayleigh-Jeans law covered infrared waves, but didn’t work for ultraviolet. What Planck needed, then, was one law that would correctly apply to both ends of the spectrum. For the birth of quantum physics, the details of Planck’s solution to this problem were far less important than the trick he used to arrive at it. This trick, which Planck later on called “happy guesswork,” was simple but unsettling: the radiation energy had to be chopped up into tiny packages, or particles of light. Based on everything physicists knew at the time, this claim was outrageous: light was understood as a wave, which left little space for particles of light, nowadays known as photons. So now light could be…both? While it was not his intent, Planck’s trick was the first step in a chain reaction that turned the physics world upside-down. We now understand that it’s not just light, but all of the fundamental components of our universe that embrace this dual nature and the other properties of the quantum world. To explain, let’s take another step back, this time to our early science education, and picture electrons—the negatively charged fundamental particles that, together with the positively charged protons and neutral neutrons, make up atoms. Are you picturing them as miniature billiard balls? What about a light wave? Do you imagine it as a tiny version of what comes crashing against the shoreline? These are convenient pictures, because they are easy to imagine. But what is your evidence that these mental pictures really describe the nature of an electron, and the nature of light? With your sensory perception, you cannot see a single electron, nor observe a light wave oscillate. And, as it turns out, neither light, nor electrons, nor atoms, nor even molecules are simply waves, or just particles. When it comes to strange quantum properties, this dual wave-particle nature is just the tip of the iceberg. One of the most striking concepts is that of quantum entanglement. It can be illustrated like this: imagine being the proud parent of two children, Susy and Sam, who have just hit the age of disagreeing with each other all the time. They both like mac & cheese as well as pizza. Sadly, this is no longer sufficient to guarantee a drama-free dinner. As a counter strategy, you and your partner team up and question Sam and Susy simultaneously in different rooms. This way, they cannot coordinate their dissent, and you have a 50 percent chance of random agreement on the dinner choice. Believe it or not, in the quantum world you would be doomed. In an experiment, the two parties could be photons, and the dinner question could be a measurement of their polarization. Polarization corresponds to the direction of oscillation—moving up and down or from side to side—when light behaves as a wave. Even if you separate the two parties, eliminating all communication, quantum physics allows for an invisible link between them known as entanglement. Quantum-Susy might change her answer from day to day (even pizza gets boring after a while), but every single time there is perfect anti-correlation with quantum-Sam’s answer: if one wants pizza, the other opts for mac & cheese—all the time! This is just one example of the many bizarre properties we know to be true based on careful calculation and experimentation. But if we’re so sure, why do we witness so little of the quantum world? Much of quantum physics happens at length scales so small that they remain hidden to us, even when using the most powerful microscopes. In addition, witnessing quantum physics at work turns out to be radically different from what you might call an “observation.” Seeing that an object is the color red is a fairly straightforward, unobtrusive process. Probing a quantum object like an electron or photon is an entirely different matter. True quantum behavior tends to be fragile, and attempting to measure it often constitutes a major but unavoidable disruption that usually prevents quantum weirdness from becoming directly visible. However, just because we cannot see quantum physics in action doesn’t mean that is hasn’t affected our lives in a tangible, positive way. The impact of quantum physics has been enormous: not only is it the prime common factor in nearly all physics Nobel Prizes awarded in the past one-hundred years, but it has also been a crucial driving force in technological advances ranging from lasers and superconductors to medical imaging like MRIs. Indeed, imagining a world in which quantum physics had never been discovered would amount to eliminating a lot of the technology we take for granted each and every day. The grandest vision, perhaps, is that of harnessing the power of quantum physics for a completely new kind of supercomputer. Such a quantum computer could solve tasks in a heartbeat that would currently require centuries of computation time on the fastest computers available today. Sounds intriguing? Many physicists around the world working on the hardware of such a machine would agree. (To learn more about what would make a quantum computer so powerful, check out the slideshow above.) They would also explain, however, how daunting the challenges are in this endeavor. Overcoming the fragile nature of quantum behavior is not an easy task—one that rivals the quantum leap of faith taken by Planck and his colleagues to bring us into this new and exciting world.
<urn:uuid:26b8d5ed-56fa-446d-9607-43ebd0147bcb>
CC-MAIN-2017-47
http://helix.northwestern.edu/article/why-quantum-physics-weird-and-stunningly-useful
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806338.36/warc/CC-MAIN-20171121094039-20171121114039-00163.warc.gz
en
0.951854
1,498
3.75
4
Two research teams working in the same laboratories at UNSW Australia have found distinct solutions to a critical challenge that has held back the realisation of super powerful quantum computers. The teams created two types of quantum bits, or "qubits" – the building blocks for quantum computers – that each process quantum data with an accuracy above 99%. The two findings have been published simultaneously today in the journal Nature Nanotechnology. "For quantum computing to become a reality we need to operate the bits with very low error rates," says Scientia Professor Andrew Dzurak, who is Director of the Australian National Fabrication Facility at UNSW, where the devices were made. "We've now come up with two parallel pathways for building a quantum computer in silicon, each of which shows this super accuracy," adds Associate Professor Andrea Morello from UNSW's School of Electrical Engineering and Telecommunications. The UNSW teams, which are also affiliated with the ARC Centre of Excellence for Quantum Computation & Communication Technology, were first in the world to demonstrate single-atom spin qubits in silicon, reported in Nature in 2012 and 2013. Now the team led by Dzurak has discovered a way to create an "artificial atom" qubit with a device remarkably similar to the silicon transistors used in consumer electronics, known as MOSFETs. Post-doctoral researcher Menno Veldhorst, lead author on the paper reporting the artificial atom qubit, says, "It is really amazing that we can make such an accurate qubit using pretty much the same devices as we have in our laptops and phones". Meanwhile, Morello's team has been pushing the "natural" phosphorus atom qubit to the extremes of performance. Dr Juha Muhonen, a post-doctoral researcher and lead author on the natural atom qubit paper, notes: "The phosphorus atom contains in fact two qubits: the electron, and the nucleus. With the nucleus in particular, we have achieved accuracy close to 99.99%. That means only one error for every 10,000 quantum operations." Dzurak explains that, "even though methods to correct errors do exist, their effectiveness is only guaranteed if the errors occur less than 1% of the time. Our experiments are among the first in solid-state, and the first-ever in silicon, to fulfill this requirement." The high-accuracy operations for both natural and artificial atom qubits is achieved by placing each inside a thin layer of specially purified silicon, containing only the silicon-28 isotope. This isotope is perfectly non-magnetic and, unlike those in naturally occurring silicon, does not disturb the quantum bit. The purified silicon was provided through collaboration with Professor Kohei Itoh from Keio University in Japan. The next step for the researchers is to build pairs of highly accurate quantum bits. Large quantum computers are expected to consist of many thousands or millions of qubits and may integrate both natural and artificial atoms. Morello's research team also established a world-record "coherence time" for a single quantum bit held in solid state. "Coherence time is a measure of how long you can preserve quantum information before it's lost," Morello says. The longer the coherence time, the easier it becomes to perform long sequences of operations, and therefore more complex calculations. The team was able to store quantum information in a phosphorus nucleus for more than 30 seconds. "Half a minute is an eternity in the quantum world. Preserving a 'quantum superposition' for such a long time, and inside what is basically a modified version of a normal transistor, is something that almost nobody believed possible until today," Morello says. "For our two groups to simultaneously obtain these dramatic results with two quite different systems is very special, in particular because we are really great mates," adds Dzurak. Associate Professor Morello and Scientia Professor Dzurak are at the School of Electrical Engineering & Telecommunications, UNSW Australia. They are team leaders at the ARC Centre of Excellence for Quantum Computation and Communication Technology, headquartered at UNSW. The quantum bit devices were constructed at UNSW at the Australian National Fabrication Facility, with support from researchers at the University of Melbourne and the Australian National University. The research was funded by: the Australian Research Council, the US Army Research Office, the NSW Government, UNSW Australia and the University of Melbourne. Ry Crozier | Eurek Alert! IceCube experiment finds Earth can block high-energy particles from nuclear reactions 24.11.2017 | Penn State New proton record: Researchers measure magnetic moment with greatest possible precision 24.11.2017 | Johannes Gutenberg-Universität Mainz High-precision measurement of the g-factor eleven times more precise than before / Results indicate a strong similarity between protons and antiprotons The magnetic moment of an individual proton is inconceivably small, but can still be quantified. The basis for undertaking this measurement was laid over ten... Heat from the friction of rocks caused by tidal forces could be the “engine” for the hydrothermal activity on Saturn's moon Enceladus. This presupposes that... The WHO reports an estimated 429,000 malaria deaths each year. The disease mostly affects tropical and subtropical regions and in particular the African continent. The Fraunhofer Institute for Silicate Research ISC teamed up with the Fraunhofer Institute for Molecular Biology and Applied Ecology IME and the Institute of Tropical Medicine at the University of Tübingen for a new test method to detect malaria parasites in blood. The idea of the research project “NanoFRET” is to develop a highly sensitive and reliable rapid diagnostic test so that patient treatment can begin as early as possible. Malaria is caused by parasites transmitted by mosquito bite. The most dangerous form of malaria is malaria tropica. Left untreated, it is fatal in most cases.... The formation of stars in distant galaxies is still largely unexplored. For the first time, astron-omers at the University of Geneva have now been able to closely observe a star system six billion light-years away. In doing so, they are confirming earlier simulations made by the University of Zurich. One special effect is made possible by the multiple reflections of images that run through the cosmos like a snake. Today, astronomers have a pretty accurate idea of how stars were formed in the recent cosmic past. But do these laws also apply to older galaxies? For around a... Just because someone is smart and well-motivated doesn't mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching. That is the implication of a new study which shows for the first time that there is a broad range of differences in people's visual ability and that these... 15.11.2017 | Event News 15.11.2017 | Event News 30.10.2017 | Event News 24.11.2017 | Physics and Astronomy 24.11.2017 | Health and Medicine 24.11.2017 | Earth Sciences
<urn:uuid:76e1d618-bcf7-463b-8680-f9a21d96a324>
CC-MAIN-2017-47
http://www.innovations-report.com/html/reports/physics-astronomy/australian-teams-set-new-records-for-silicon-quantum-computing.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809778.95/warc/CC-MAIN-20171125105437-20171125125437-00566.warc.gz
en
0.935487
1,479
3.671875
4
Researchers at the University of Michigan's Center for Optical Coherent and Ultrafast Science (FOCUS) and Department of Physics have reported the first demonstration of laser-cooling of individual trapped atoms of different species. This may be an important step in the construction of a future "quantum computer," in which quantum superpositions of inputs are processed simultaneously in a single device. Trapped atoms offer one of the only realistic approaches to precisely controlling the complex quantum systems underlying a quantum computer. The demonstration is described in the April 2002 issue of Physical Review in an article, "Sympathetic Cooling of Trapped Cd+ Isotopes," by Boris B. Blinov, Louis Deslauriers, Patricia Lee, Martin J. Madsen, Russ Miller, and Christopher Monroe. Partially based on these results, Monroe has proposed a new "Architecture for a Large-Scale Ion-Trap Quantum Computer," with co-authors David Kielpinski (MIT) and David Wineland (National Institute of Standards and Technology), in the June 13 issue of the journal Nature. Interest in quantum computing has mushroomed in the last decade as its potential for efficiently solving difficult computing tasks, like factoring large numbers and searching large databases, has become evident. Encryption and its obverse, codebreaking, are just two of the applications envisioned for quantum computing if and when it becomes a practical technology. Quantum computation has captured the imagination of the scientific community, recasting some of the most puzzling aspects of quantum physics---once pondered by Einstein, Schroedinger and others---in the context of advancing computer science. "Right now, there's a lot of black magic involved in understanding what makes a quantum computer tick and how to actually build one," Monroe said. "Many physicists doubt we'll ever be able to do it, but I'm an optimist. We may not get there for decades, but given enough time and resources---and failing unexpected roadblocks like the failure of quantum mechanics---we should be able to design and build a useable quantum computer. It's a risky business, but the potential payoff is huge." In their experiment, the Michigan researchers used electric fields to confine a crystal of exactly two Cd+ atoms of different isotopes. They were able to cool the single 112Cd+ atom to a chilly 0.001 degree Celsius above absolute zero through direct laser cooling of the neighboring 114Cd+ atom. Laser cooling of this "refrigerator atom" removes unwanted motion in the atom crystal without affecting the internal state of the other atom. This is an important step toward scaling a trapped atom computer, where "qubits" of information are stored in the quantum states within the individual atoms. The architecture proposed in the Nature article describes a "quantum charge-coupled device" (QCCD) consisting of a large number of interconnected atom traps. A combination of radiofrequency (RF) and quasistatic electric fields can be used to change the operating voltages of these traps, confining a few charged atoms in each trap or shuttling them from trap to trap, and the traps can be combined to form complex structures. The cooling of multiple species demonstrated at Michigan is a key component of this broader proposal. "This is a realistic architecture for quantum computation that is scalable to large numbers of qubits," the authors conclude. "In contrast to other proposals, all quantum state manipulations necessary for our scheme have already been experimentally tested with small numbers of atoms, and the scaling up to large numbers of qubits looks straightforward." Frontiers in Optical Coherent and Ultrafast Science Subscribe To SpaceDaily Express Bell Labs Scientists Usher in New Era of Molecular-Scale Electronics Murray Hills - Oct 17, 2001 Scientists from Lucent Technologies' Bell Labs have created organic transistors with a single-molecule channel length, setting the stage for a new class of high-speed, inexpensive carbon-based electronics. |The content herein, unless otherwise known to be public domain, are Copyright 1995-2016 - Space Media Network. All websites are published in Australia and are solely subject to Australian law and governed by Fair Use principals for news reporting and research purposes. AFP, UPI and IANS news wire stories are copyright Agence France-Presse, United Press International and Indo-Asia News Service. ESA news reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement, agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement All images and articles appearing on Space Media Network have been edited or digitally altered in some way. Any requests to remove copyright material will be acted upon in a timely and appropriate manner. Any attempt to extort money from Space Media Network will be ignored and reported to Australian Law Enforcement Agencies as a potential case of financial fraud involving the use of a telephonic carriage device or postal service.|
<urn:uuid:a098760e-f496-4334-a7ec-d788723b3bad>
CC-MAIN-2017-47
http://www.spacedaily.com/news/nanotech-02q.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806086.13/warc/CC-MAIN-20171120164823-20171120184823-00370.warc.gz
en
0.906899
1,043
3.6875
4
There are many different schemes for making quantum computers work (most of them evil). But they pretty much all fall into two categories. In most labs, researchers work on what could be called a digital quantum computer, which has the quantum equivalent of logic gates, and qubits are based on well-defined and well-understood quantum states. The other camp works on analog devices called adiabatic quantum computers. In these devices, qubits do not perform discrete operations, but continuously evolve from some easily understood initial state to a final state that provides the answer to some problem. In general, the analog and digital camps don't really mix. Until now, that is. The adiabatic computer is simpler than a quantum computer in many ways, and it is easier to scale. But an adiabatic computer can only be generalized to any type of problem if every qubit is connected to every other qubit. This kind of connectivity is usually impractical, so most people build quantum annealers with reduced connectivity. These are not universal and cannot, even in principle, compute solutions to all problems that might be thrown at it. The issues with adiabatic quantum computers don't end there. Adiabatic quantum computers are inherently analog devices: each qubit is driven by how strongly it is coupled to every other qubit. Computation is performed by continuously adjusting these couplings between some starting and final value. Tiny errors in the coupling—due to environmental effects, for instance—tend to build up and throw off the final value. For annealers with limited connectivity—each qubit is only connected to a few other qubits, rather than all other qubits—this is not such an issue. The coupling between these qubits tends to be strong, so the noise is small compared to the coupling. For a fully interconnected adiabatic quantum computer, however, the weak connections between distant qubits are very sensitive to environmental noise. Thus, errors accumulate—if you are unlucky, pi ends up equal to three. Digital quantum computing, which uses logic operations and quantum gates, offers the possibility of error correction. By encoding information in multiple qubits, you can detect and correct errors. Unfortunately, digital qubits are delicate things compared to those used in adiabatic quantum computers, and the ability to program and run complex problems with them is out of reach at the moment. What if the computation was performed by qubits that were operating as an adiabatic quantum computer, but with connections between the qubits controlled via a digital network of qubits? What about a hybrid approach? That's the question asked by a international group of researchers in a recently-published paper in Nature. They’ve tested a system where the computation is performed by qubits that were operating as an adiabatic quantum computer, but with connections between the adiabatic qubits is controlled via a digital network of qubits. This allows the benefits of scale and flexibility that you get from adiabatic quantum computing, while also making the connections less susceptible to noise. Digital vs. analog Let me make an analogy here. Imagine that I have an instrument that measures the hot air concentration in Congress (dangerously high when in session). The instrument produces an analog voltage that is displayed on an analog meter right at the instrument, and the results are relayed to a second analog meter in my home in the Netherlands. The meter in Washington shows a reasonably accurate value with a strong correlation between the reading of hot air displayed in the Capitol Building and speeches by members of Congress. But the distance to my house is so great that my needle only shows an awful lot of noise. So, instead of transporting the signal directly, I digitize it, encode it, and send it via a network to my home, where it is re-converted to an analog signal and read by my meter. Now, my meter is almost as accurate as the local meter. The only differences between the two are the errors due to the two conversions between digital and analog domains. In other words, as long as the process that converts between digital and analog domains generates less noise than the transport of the analog signal, you win. And that is exactly the determining factor for a hybrid adiabatic quantum computer as well. The difference is that, instead of measuring methane concentrations, we are varying the coupling between qubits. Instead of a continuous, slow change, the coupling is stepped from value to value in jumps that are determined by the number of qubits in the digital part of the circuit. Now, as you might have guessed, this is very expensive in terms of quantum gates. With the new hardware, the researches have an adiabatic quantum computer with up to nine computational qubits. In the interests of reproducibility, I can say that the digital connection between two qubits involves... wait for it... hmm, well, the researchers don't say how they are connected to each other. In fact, the whole paper in Nature seems to lack details on how this quantum computer is laid out. All we get is one tiny electron microscope picture of nine qubits in a row, with none of the coupling network shown. But, we get some idea of how the hardware works from various things that the researchers say when describing it. For a four qubit case, the link between each qubit seems to require a cluster of 48 qubits for control. The link itself is made up of five entangled qubits, coming to a grand total of 159 qubits. The authors also mention that for nine computation qubits, they need about 1,000 auxiliary qubits for control purposes. Yet despite all of the supporting digital architecture, there are still only 5 steps between the start and end of the computation. The big question is, of course, does it work? And you know that it must have, because otherwise it, and the researchers that worked on it, would be gathering dust in a cupboard somewhere. The researchers were able to compare their implementation to a model of an ideal version and to a noise-free analog model. Now, since the digitization was very coarse, the answers to the toy problems that they got the computer to solve are not terribly accurate compared to the analog case. But they do show that their real digital version performs about as well as can be expected. That is, the model of the digitized adiabatic quantum computer and the real digitized adiabatic quantum computer performed about the same. That is faint praise, though. My impression is that this is a huge engineering feat. The researchers have implemented not just a multi-qubit adiabatic quantum computer, but also a complex quantum digital network between the qubits. To give you an idea of the complexity: each qubit in the network is, even when you don't want it to, able to talk to the rest of the network. So, setting the coupling between any two qubits varies the coupling of adjacent qubits too. To prevent this, the researchers developed an impressive set of decoupling sequences. Instead of actually stating what this means, let me use an analogy. Imagine that you and some friends are in a row of rooms that are joined by a set of vertical windows. If two of you are standing up, you can communicate using hand signs; if you are both lying down, you can communicate with hand signs. But, if one of you is lying down and the other is standing up, you cannot see each other's hands and no communication is possible. And, if people are in the rooms in between, they might block your view anyway. So, for you to communicate with one of your friends, you have to do two things: you have to make sure you and your friend have the same orientation. And, since the rooms are all in a row, you have to make sure that all the people in between you are in the opposite orientation. When I started writing about quantum computing, it was lab stuff... Now, things are starting to get scary. This is pretty much what decoupling sequences do: they change the state of a qubit so that the coupling between it and the qubit you want to manipulate is at a minimum (ideally, zero, but in practice it is never quite zero). After you've performed the desired operation on the target qubit, you reverse the decoupling operation to return the qubit to its original state. This requires exquisite control and timing to get right. And, in this paper, that sort of control was impressively demonstrated on a large scale. When I started writing about quantum computing, it was lab stuff of mostly academic interest. We celebrated every qubit and every time there was evidence of quantum goodness in our computing. Now, things are starting to get scary. Computations involving many qubits are common. And companies—not just startups, but serious companies that do serious things like setting milestones—are getting involved. There is still a long way to go before a useful quantum computer emerges. But in the past there were also an awful lot of "if" statements associated with every "when" statement. Those qualifiers are being worked through very quickly, and the "when" is looking a good deal more certain. Nature, 2016: DOI: 10.1038/nature17658
<urn:uuid:2e987b15-7498-4008-897c-d596153fa52b>
CC-MAIN-2017-47
https://arstechnica.com/science/2016/06/going-digital-may-make-analog-quantum-computer-scaleable/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804125.49/warc/CC-MAIN-20171118002717-20171118022717-00570.warc.gz
en
0.961635
1,922
3.703125
4
Over 400 million transistors are packed on dual-core chips manufactured using Intel's 45nm process. That'll double soon, per Moore's Law. And it'll still be like computing with pebbles compared to quantum computing. Quantum computing is a pretty complicated subject—uh, hello, quantum mechanics plus computers. I'm gonna keep it kinda basic, but recent breakthroughs like this one prove that you should definitely start paying attention to it. Some day, in the future, quantum computing will be cracking codes, powering web searches, and maybe, just maybe, lighting up our Star Trek-style holodecks. Before we get to the quantum part, let's start with just "computing." It's about bits. They're the basic building block of computing information. They've got two states—0 or 1, on or off, true or false, you get the idea. But two defined states is key. When you add a bunch of bits together, usually 8 of 'em, you get a byte. As in kilobytes, megabytes, gigabytes and so on. Your digital photos, music, documents, they're all just long strings of 1s and 0s, segmented into 8-digit strands. Because of that binary setup, a classical computer operates by a certain kind of logic that makes it good at some kinds of computing—the general stuff you do everyday—but not so great at others, like finding ginormous prime factors (those things from math class), which are a big part of cracking codes. Quantum computing operates by a different kind of logic—it actually uses the rules of quantum mechanics to compute. Quantum bits, called qubits, are different from regular bits, because they don't just have two states. They can have multiple states, superpositions—they can be 0 or 1 or 0-1 or 0+1 or 0 and 1, all at the same time. It's a lot deeper than a regular old bit. A qubit's ability to exist in multiple states—the combo of all those being a superposition—opens up a big freakin' door of possibility for computational powah, because it can factor numbers at much more insanely fast speeds than standard computers. Entanglement—a quantum state that's all about tight correlations between systems—is the key to that. It's a pretty hard thing to describe, so I asked for some help from Boris Blinov, a professor at the University of Washington's Trapped Ion Quantum Computing Group. He turned to a take on Schrödinger's cat to explain it: Basically, if you have a cat in a closed box, and poisonous gas is released. The cat is either dead, 0, or alive, 1. Until I open the box to find out, it exists in both states—a superposition. That superposition is destroyed when I measure it. But suppose I have two cats in two boxes that are correlated, and you go through the same thing. If I open one box and the cat's alive, it means the other cat is too, even if I never open the box. It's a quantum phenomenon that's a stronger correlation than you can get in classical physics, and because of that you can do something like this with quantum algorithms—change one part of the system, and the rest of it will respond accordingly, without changing the rest of the operation. That's part of the reason it's faster at certain kinds of calculations. The other, explains Blinov, is that you can achieve true parallelism in computing—actually process a lot of information in parallel, "not like Windows" or even other types of classic computers that profess parallelism. So what's that good for? For example, a password that might take years to crack via brute force using today's computers could take mere seconds with a quantum computer, so there's plenty of crazy stuff that Uncle Sam might want to put it to use for in cryptography. And it might be useful to search engineers at Google, Microsoft and other companies, since you can search and index databases much, much faster. And let's not forget scientific applications—no surprise, classic computers really suck at modeling quantum mechanics. The National Institute of Science and Technology's Jonathan Home suggests that given the way cloud computing is going, if you need an insane calculation performed, you might rent time and farm it out to a quantum mainframe in Google's backyard. The reason we're not all blasting on quantum computers now is that this quantum mojo is, at the moment, extremely fragile. And it always will be, since quantum states aren't exactly robust. We're talking about working with ions here—rather than electrons—and if you think heat is a problem with processors today, you've got no idea. In the breakthrough by Home's team at NIST—completing a full set of quantum "transport" operations, moving information from one area of the "computer" to another—they worked with a single pair of atoms, using lasers to manipulate the states of beryllium ions, storing the data and performing an operation, before transferring that information to a different location in the processor. What allowed it to work, without busting up the party and losing all the data through heat, were magnesium ions cooling the beryllium ions as they were being manipulated. And those lasers can only do so much. If you want to manipulate more ions, you have to add more lasers. Hell, quantum computing is so fragile and unwieldy that when we talked to Home, he said much of the effort goes into methods of correcting errors. In five years, he says, we'll likely be working with a mere tens of qubits. The stage it's at right now, says Blinov, is "the equivalent of building a reliable transistor" back in the day. But that's not to say those of tens of qubits won't be useful. While they won't be cracking stuff for the NSA—you'll need about 10,000 qubits for cracking high-level cryptography—that's still enough quantum computing power to calculate properties for new materials that are hard to model with a classic computer. In other words, materials scientists could be developing the case for the iPhone 10G or the building blocks for your next run-of-the-mill Intel processor using quantum computers in the next decade. Just don't expect a quantum computer on your desk in the next 10 years. Special thanks to National Institute of Standards and Technology's Jonathan Home and the University of Washington Professor Boris Blinov! Still something you wanna know? Send questions about quantum computing, quantum leaps or undead cats to email@example.com, with "Giz Explains" in the subject line.
<urn:uuid:b6ab455e-c938-425f-b15b-c5af9ec8eefe>
CC-MAIN-2017-47
https://gizmodo.com/5335901/giz-explains-why-quantum-computing-is-the-future-but-a-distant-one
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00775.warc.gz
en
0.936922
1,387
3.5625
4
Quantum internet and hybrid quantum computers, built out of subsystems that operate by means of various physical phenomena, are now becoming more than just the stuff of imagination. In an article just published in the journal Nature Photonics, physicists from the University of Warsaw’s Faculty of Physics (FUW) and the University of Oxford have unveiled a key element of such systems: an electro-optical device that enables the properties of individual photons to be modified. Unlike existing laboratory constructions, this new device works with previously unattainable efficiency and is at the same time stable, reliable, and compact. Building an efficient device for modifying the quantum state of individual photons was an exceptionally challenging task, given the fundamental differences between classical and quantum computing. Contemporary computing systems are based on the processing of groups of bits, each of which is in a specific, well-known state: either 0 or 1. Groups of such bits are continually being transferred both between different subcomponents within a single computer, and between different computers on the network. We can illustrate this figuratively by imagining a situation in which trays of coins are being moved from place to place, with each coin laying either with the heads side or the tails side facing upwards. Things are more complicated in quantum computing, which relies on the phenomenon of superposition of states. A quantum bit, known as a qubit, can be both in the 1 state and the 0 state at the same time. To continue the analogy described above, this would be like a situation in which each coin is spinning on its edge. Information processing can be described as “quantum” processing as long as this superposition of states can be retained during all operations — in other words, as long as none of the coins gets tipped out of the spinning state while the tray is being moved. “In recent years, physicists have figured out how to generate light pulses with a specific wavelength or polarization, consisting of a single quantum — or excitation — of the electromagnetic field. And so today we know how to generate precisely whatever kind of quantum ‘spinning coins’ we want,” says Dr. Michal Karpinski from the Institute of Experimental Physics (FUW), one of the authors of the publication. “But achieving one thing always leaves you wanting more! If we now have individual light quanta with specific properties, it would be useful to modify those properties. The task is therefore more or less this: take a spinning silver coin and move it from one place to another, but along the way quickly and precisely turn it into a gold coin, naturally without tipping it over. You can easily see that the problem is nontrivial.” Existing methods of modifying individual photons have utilized nonlinear optical techniques, in practice attempting to force an individual photon to interact with a very strong optical pump beam. Whether the photon so subjected actually gets modified is a matter of pure chance. Moreover, the scattering of the pump beam may contaminate the stream of individual photons. In constructing the new device, the group from the University of Warsaw and the University of Oxford decided to make use of a different physical phenomenon: the electro-optic effect occurring in certain crystals. It provides a way to alter the index of refraction for light in the crystal — by varying the intensity of an external magnetic force that is applied to it (in other words, without introducing any additional photons!). “It is quite astounding that in order to modify the quantum properties of individual photons, we can successfully apply techniques very similar to those used in standard fiber-optic telecommunications,” Dr. Karpinski says. Using the new device, the researchers managed — without disrupting the quantum superposition! — to achieve a six-fold lengthening of the duration of a single-photon pulse, which automatically means a narrowing of its spectrum. What is particularly important is that the whole operation was carried out while preserving very high conversion efficiency. Existing converters have operated only under laboratory conditions and were only able to modify one in several tens of photons. The new device works with efficiency in excess of 30%, up to even 200 times better than certain existing solutions, while retaining a low level of noise. “In essence we process every photon entering the crystal. The efficiency is less than 100% not because of the physics of the phenomenon, but on account of hard-to-avoid losses of a purely technical nature, appearing for instance when light enters of exits optical fibers,” explains PhD student Michal Jachura (FUW). The new converter is not only efficient and low-noise, but also stable and compact: the device can be contained in a box with dimension not much larger than 10 cm (4 in.), easy to install in an optical fiber system channeling individual photons. Such a device enables us to think realistically about building, for instance, a hybrid quantum computer, the individual subcomponents of which would process information a quantum way using different physical platforms and phenomena. At present, attempts are being made to build quantum computers using, among others, trapped ions, electron spins in diamond, quantum dots, superconducting electric circuits, and atomic clouds. Each such system interacts with light of different properties, which in practice rules out optical transmission of quantum information between different systems. The new converter, on the other hand, can efficiently transform single-photon pulses of light compatible with one system into pulses compatible with another. Scientists are therefore gaining at a real pathway to building quantum networks, both small ones within a single quantum computer (or subcomponent thereof), and global ones providing a way to send data completely securely between quantum computers situated in different parts of the world.
<urn:uuid:145b7106-f482-4f9c-8cc9-9812917f2f64>
CC-MAIN-2017-47
http://www.scienceandtechnologyresearchnews.com/researchers-develop-single-photon-converter-key-component-quantum-internet/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806317.75/warc/CC-MAIN-20171121055145-20171121075145-00780.warc.gz
en
0.935801
1,162
3.5
4
Using lasers for quantum information - Quantum computing - Quantum teleportation - Quantum cryptography - Sources for single or entangled photons It was only short time after the formulation and acceptance of quantum theory when scientists started to discuss possible benefits of this theory for mankind. The quantum computer, probably the most famous application of quantum theory, is expected to reach incredible computing speeds that enable calculations which were not possible before. Any coupled quantum mechanical system can be used for quantum computing. Solid state systems, trapped ions, atoms in optical lattices, and photons with linear optical elements are at the heart of quantum computer research. First quantum operations have been demonstrated with solid state systems and trapped ions but the race is still open. The basis for quantum computing is “entanglement”, a quantum mechanical property of a system in which the state of one part of the system is fully linked to the state of another part. The famous “Schrödinger cat” example tries to visualize how strange entanglement is compared to experiences in daily life. Even Einstein doubted this property so much that he and his colleagues Podolski and Rosen published an article in 1935 in which they thought to proof that quantum theory cannot be complete and would have to be substituted by another theory including variables that in quantum theory are still “hidden”. Their “EPR paradox” argument was first theoretically falsified by Bell (“Bell’s theorem”) who showed that quantum mechanics is indeed complete. Until today, Bells theorem was experimentally supported many times. No hidden variables are needed to describe the quantum nature completely. The strange property entanglement is also the basis for quantum teleportation – where one transfers a quantum mechanical state from one system at one place to another system at another place - and quantum cryptography. The goal of the latter is to send information from one place to another in a completely secure way. Obviously, a quantum cryptography apparatus would be a very powerful and important instrument. Quantum cryptography relies mostly on single on entangled photons and is already commercialized. Quantum computing is expected to allow for calculations, simulations or operations at a speed that classical computing can never reach. For example, it was theoretically shown that a quantum computer would be able to perform database searches or factorization of large numbers much faster than classical computers. The enormous calculation power of a quantum computer is a consequence of two main ingredients. First of all, the fundamental piece of information is a quantum mechanical two state system (|0> and |1>) called QuBit that – unlike a classical bit which is either 0 or 1 – can be in any superposition (a|0> + b|1>) of the two states. Second, the basic calculations are coherent operations that act on such a superposition state. This way, all possible realizations of anything between |0> and |1> can be computed simultaneously and highly parallel computation is realized. Gate operations, the fundamental operations of computing, were shown with trapped ions and with photon based quantum computers. Using solid state systems (NMR), a proof of principle for quantum computed factorization of the number 15 was demonstrated. Quantum teleportation is referring to a procedure in which the quantum mechanical state of one object is fully transferred to another object at a different place. It makes use of the non-locality of entanglement that confused not only Einstein. Using a clever sequence of measurements and entanglement operations on photons, the polarization state of one photon could be mapped to another photon completely. Just recently, quantum teleportation between distant matter QuBits was shown using two separate ion traps. Closely related to quantum teleportation and quantum computing is the so-called “quantum logic”. Here, depending on the quantum state of one object a specific state of another object is created. This controlled state preparation was used in metrology to realize one of the best atomic clocks in the world based on aluminum ions. Quantum cryptography uses quantum physics properties like entanglement and back action of the measurement process on a quantum state to achieve secure communication between a sender (Alice) and a receiver (Bob). The standard approach is that Alice and Bob perform measurements on entangled quantum systems, usually entangled photons, in order to create a key for Alice and Bob. Since they can then use this code to encrypt and decrypt the real message, the quantum cryptography method is called quantum key distribution. The real message is encrypted by Alice according to her measurement results and sent through an open channel (so anyone is allowed to “listen”) to Bob who decrypts the message according to his measurements. Any eavesdropping, so any attempt of a third party to detect the quantum key, can be detected because according to quantum physics laws each measurement influences the quantum mechanical state itself. Eavesdropping would be noticed always. Due to its obvious significance, quantum cryptography research is pushed a lot and many results have been achieved so far. Quantum key distribution over hundreds of km in fiber or over a whole city in free space was shown already while satellite-links of entangled photons between earth stations are currently explored. To proof the usability, a quantum encrypted bank transaction was undertaken. Sources for single or entangled photons are important tools for quantum computing and quantum cryptography. Single photon sources emitting exactly one photon at a triggered time can be realized in many ways incorporating e.g. color centers or ions in solids, single atoms in traps or optical cavities, trapped ions or quantum dot systems. The most common source for entangled photons is based on spontaneous parametric down conversion. A “blue” photon is converted into two red photons within a non-linear optical crystal. Polarization, momentum and energy of the two photons are strongly correlated. A lot of research on this topic is under way. Main efforts are focused on the development of efficient – ideally full deterministic – sources and realizations with mass production potential. TOPTICA’s added value TOPTICA is a highly appreciated supplier for quantum information experiments that involve trapped ions or atoms. Our lasers are successfully applied to cool, trap, optically pump or coherently manipulate ions and atoms. They are fabricated or tuned to the required wavelength such that they can be used to excite single photon emitters. To create entangled photon pairs by parametric down conversion one needs a fundamental laser at half the wavelength of the photon pair in order to initiate the conversion process. Frequently, entangled photons in the near infrared around 800 nm are used and hence violet lasers around 400 nm are required. The development and fabrication of lasers in the UV is TOPTICA’s core competence. We were the first company to produce diode laser systems in the UV and offer a variety of systems with different linewidth/coherence characteristics and power levels for scientific research and industry. No other company has a similar product portfolio. Please contact us to find the best laser for your application. - Brochure: Scientific Lasers - Brochure: iBeam smart - Article: Frequenzkonvertierte cw-Lasersysteme für Forschung und Industrie - Application Notes: Trapping and quantum computing - Book recommendation: Oliver Morsch: „Quantum Bits and Quantum Secrets“, Wiley
<urn:uuid:4c453fa4-308a-42d0-9541-49ba6280bea6>
CC-MAIN-2017-47
http://www.toptica.com/index.php?id=176&L=0
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804724.3/warc/CC-MAIN-20171118094746-20171118114746-00582.warc.gz
en
0.927737
1,482
3.6875
4
In a step that brings silicon-based quantum computers closer to reality, researchers at Princeton University have built a device in which a single electron can pass its quantum information to a particle of light. The particle of light, or photon, can then act as a messenger to carry the information to other electrons, creating connections that form the circuits of a quantum computer. The research, published in the journal Science and conducted at Princeton and HRL Laboratories in Malibu, California, represents a more than five-year effort to build a robust capability for an electron to talk to a photon, said Jason Petta, a Princeton professor of physics. A Princeton University-led team has built a device that advances silicon-based quantum computers, which when built will be able to solve problems beyond the capabilities of everyday computers. The device isolates an electron so that can pass its quantum information to a photon, which can then act as a messenger to carry the information to other electrons to form the circuits of the computer. Credit: Princeton University "Just like in human interactions, to have good communication a number of things need to work out -- it helps to speak the same language and so forth," Petta said. "We are able to bring the energy of the electronic state into resonance with the light particle, so that the two can talk to each other." The discovery will help the researchers use light to link individual electrons, which act as the bits, or smallest units of data, in a quantum computer. Quantum computers are advanced devices that, when realized, will be able to perform advanced calculations using tiny particles such as electrons, which follow quantum rules rather than the physical laws of the everyday world. Each bit in an everyday computer can have a value of a 0 or a 1. Quantum bits -- known as qubits -- can be in a state of 0, 1, or both a 0 and a 1 simultaneously. This superposition, as it is known, enables quantum computers to tackle complex questions that today's computers cannot solve. Simple quantum computers have already been made using trapped ions and superconductors, but technical challenges have slowed the development of silicon-based quantum devices. Silicon is a highly attractive material because it is inexpensive and is already widely used in today's smartphones and computers. The researchers trapped both an electron and a photon in the device, then adjusted the energy of the electron in such a way that the quantum information could transfer to the photon. This coupling enables the photon to carry the information from one qubit to another located up to a centimeter away. Quantum information is extremely fragile -- it can be lost entirely due to the slightest disturbance from the environment. Photons are more robust against disruption and can potentially carry quantum information not just from qubit to qubit in a quantum computer circuit but also between quantum chips via cables. For these two very different types of particles to talk to each other, however, researchers had to build a device that provided the right environment. First, Peter Deelman at HRL Laboratories, a corporate research-and-development laboratory owned by the Boeing Company and General Motors, fabricated the semiconductor chip from layers of silicon and silicon-germanium. This structure trapped a single layer of electrons below the surface of the chip. Next, researchers at Princeton laid tiny wires, each just a fraction of the width of a human hair, across the top of the device. These nanometer-sized wires allowed the researchers to deliver voltages that created an energy landscape capable of trapping a single electron, confining it in a region of the silicon called a double quantum dot. The researchers used those same wires to adjust the energy level of the trapped electron to match that of the photon, which is trapped in a superconducting cavity that is fabricated on top of the silicon wafer. Prior to this discovery, semiconductor qubits could only be coupled to neighboring qubits. By using light to couple qubits, it may be feasible to pass information between qubits at opposite ends of a chip. The electron's quantum information consists of nothing more than the location of the electron in one of two energy pockets in the double quantum dot. The electron can occupy one or the other pocket, or both simultaneously. By controlling the voltages applied to the device, the researchers can control which pocket the electron occupies. "We now have the ability to actually transmit the quantum state to a photon confined in the cavity," said Xiao Mi, a graduate student in Princeton's Department of Physics and first author on the paper. "This has never been done before in a semiconductor device because the quantum state was lost before it could transfer its information." The success of the device is due to a new circuit design that brings the wires closer to the qubit and reduces interference from other sources of electromagnetic radiation. To reduce this noise, the researchers put in filters that remove extraneous signals from the wires that lead to the device. The metal wires also shield the qubit. As a result, the qubits are 100 to 1000 times less noisy than the ones used in previous experiments. Eventually the researchers plan to extend the device to work with an intrinsic property of the electron known as its spin. "In the long run we want systems where spin and charge are coupled together to make a spin qubit that can be electrically controlled," Petta said. "We've shown we can coherently couple an electron to light, and that is an important step toward coupling spin to light." David DiVincenzo, a physicist at the Institute for Quantum Information in RWTH Aachen University in Germany, who was not involved in the research, is the author of an influential 1996 paper outlining five minimal requirements necessary for creating a quantum computer. Of the Princeton-HRL work, in which he was not involved, DiVincenzo said: "It has been a long struggle to find the right combination of conditions that would achieve the strong coupling condition for a single-electron qubit. I am happy to see that a region of parameter space has been found where the system can go for the first time into strong-coupling territory." Princeton Professor Jason Petta is available to comment at email@example.com. John Cramer | EurekAlert! NASA CubeSat to test miniaturized weather satellite technology 10.11.2017 | NASA/Goddard Space Flight Center New approach uses light instead of robots to assemble electronic components 08.11.2017 | The Optical Society The formation of stars in distant galaxies is still largely unexplored. For the first time, astron-omers at the University of Geneva have now been able to closely observe a star system six billion light-years away. In doing so, they are confirming earlier simulations made by the University of Zurich. One special effect is made possible by the multiple reflections of images that run through the cosmos like a snake. Today, astronomers have a pretty accurate idea of how stars were formed in the recent cosmic past. But do these laws also apply to older galaxies? For around a... Just because someone is smart and well-motivated doesn't mean he or she can learn the visual skills needed to excel at tasks like matching fingerprints, interpreting medical X-rays, keeping track of aircraft on radar displays or forensic face matching. That is the implication of a new study which shows for the first time that there is a broad range of differences in people's visual ability and that these... Computer Tomography (CT) is a standard procedure in hospitals, but so far, the technology has not been suitable for imaging extremely small objects. In PNAS, a team from the Technical University of Munich (TUM) describes a Nano-CT device that creates three-dimensional x-ray images at resolutions up to 100 nanometers. The first test application: Together with colleagues from the University of Kassel and Helmholtz-Zentrum Geesthacht the researchers analyzed the locomotory system of a velvet worm. During a CT analysis, the object under investigation is x-rayed and a detector measures the respective amount of radiation absorbed from various angles.... The quantum world is fragile; error correction codes are needed to protect the information stored in a quantum object from the deteriorating effects of noise. Quantum physicists in Innsbruck have developed a protocol to pass quantum information between differently encoded building blocks of a future quantum computer, such as processors and memories. Scientists may use this protocol in the future to build a data bus for quantum computers. The researchers have published their work in the journal Nature Communications. Future quantum computers will be able to solve problems where conventional computers fail today. We are still far away from any large-scale implementation,... Pillared graphene would transfer heat better if the theoretical material had a few asymmetric junctions that caused wrinkles, according to Rice University... 15.11.2017 | Event News 15.11.2017 | Event News 30.10.2017 | Event News 17.11.2017 | Physics and Astronomy 17.11.2017 | Health and Medicine 17.11.2017 | Studies and Analyses
<urn:uuid:1814c6cc-4d16-4206-afb5-b139059db877>
CC-MAIN-2017-47
http://www.innovations-report.com/html/reports/information-technology/electron-photon-small-talk-could-have-big-impact-on-quantum-computing.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804680.40/warc/CC-MAIN-20171118075712-20171118095712-00382.warc.gz
en
0.928863
1,866
4.25
4
New Haven, Conn. -- Two major steps toward putting quantum computers into real practice -- sending a photon signal on demand from a qubit onto wires and transmitting the signal to a second, distant qubit -- have been brought about by a team of scientists at Yale. The accomplishments are reported in sequential issues of Nature on September 20 and September 27, on which it is highlighted as the cover along with complementary work from a group at the National Institute of Standards and Technologies. Over the past several years, the research team of Professors Robert Schoelkopf in applied physics and Steven Girvin in physics has explored the use of solid-state devices resembling microchips as the basic building blocks in the design of a quantum computer. Now, for the first time, they report that superconducting qubits, or artificial atoms, have been able to communicate information not only to their nearest neighbor, but also to a distant qubit on the chip. This research now moves quantum computing from "having information" to "communicating information." In the past information had only been transferred directly from qubit to qubit in a superconducting system. Schoelkopf and Girvin's team has engineered a superconducting communication 'bus' to store and transfer information between distant quantum bits, or qubits, on a chip. This work, according to Schoelkopf, is the first step to making the fundamentals of quantum computing useful. The first breakthrough reported is the ability to produce on demand -- and control -- single, discrete microwave photons as the carriers of encoded quantum information. While microwave energy is used in cell phones and ovens, their sources do not produce just one photon. This new system creates a certainty of producing individual photons. "It is not very difficult to generate signals with one photon on average, but, it is quite difficult to generate exactly one photon each time. To encode quantum information on photons, you want there to be exactly one," according to postdoctoral associates Andrew Houck and David Schuster who are lead co-authors on the first paper. "We are reporting the first such source for producing discrete microwave photons, and the first source to generate and guide photons entirely within an electrical circuit," said Schoelkopf. In order to successfully perform these experiments, the researchers had to control electrical signals corresponding to one single photon. In comparison, a cell phone emits about 1023 (100,000,000,000,000,000,000,000) photons per second. Further, the extremely low energy of microwave photons mandates the use of highly sensitive detectors and experiment temperatures just above absolute zero. "In this work we demonstrate only the first half of quantum communication on a chip -- quantum information efficiently transferred from a stationary quantum bit to a photon or 'flying qubit,'" says Schoelkopf. "However, for on-chip quantum communication to become a reality, we need to be able to transfer information from the photon back to a qubit." This is exactly what the researchers go on to report in the second breakthrough. Postdoctoral associate Johannes Majer and graduate student Jerry Chow, lead co-authors of the second paper, added a second qubit and used the photon to transfer a quantum state from one qubit to another. This was possible because the microwave photon could be guided on wires -- similarly to the way fiber optics can guide visible light -- and carried directly to the target qubit. "A novel feature of this experiment is that the photon used is only virtual," said Majer and Chow, "winking into existence for only the briefest instant before disappearing." To allow the crucial communication between the many elements of a conventional computer, engineers wire them all together to form a data "bus," which is a key element of any computing scheme. Together the new Yale research constitutes the first demonstration of a "quantum bus" for a solid-state electronic system. This approach can in principle be extended to multiple qubits, and to connecting the parts of a future, more complex quantum computer. However, Schoelkopf likened the current stage of development of quantum computing to conventional computing in the 1950's, when individual transistors were first being built. Standard computer microprocessors are now made up of a billion transistors, but first it took decades for physicists and engineers to develop integrated circuits with transistors that could be mass produced. Schoelkopf and Girvin are members of the newly formed Yale Institute for Nanoscience and Quantum Engineering (YINQE), a broad interdisciplinary activity among faculty and students from across the university. Further information and FAQs about qubits and quantum computing are available online at http://www. Other Yale authors involved in the research are J.M. Gambetta, J.A. Schreier, J. Koch, B.R. Johnson, L. Frunzio, A. Wallraff, A. Blais and Michel Devoret. Funding for the research was from the National Security Agency under the Army Research Office, the National Science Foundation and Yale University. Citation: Nature 449, 328-331 (20 September 2007) doi:10.1038/nature06126 & Nature 499, 443-447 (27 September 2007) doi:10.1038/nature06184
<urn:uuid:27278e7c-ed3b-4799-8aed-9c9fe531b5d8>
CC-MAIN-2017-47
https://www.eurekalert.org/pub_releases/2007-09/yu-ysm092507.php
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805578.23/warc/CC-MAIN-20171119115102-20171119135102-00384.warc.gz
en
0.928669
1,094
3.78125
4
The question that intrigued the great American physicist John Archibald Wheeler in the last decades of his life was: “Are life and mind irrelevant to the structure of the universe, or are they central to it?” He suggested that the nature of reality was revealed by the bizarre laws of quantum mechanics. According to the quantum theory, before the observation is made, a subatomic particle exists in several states, called a superposition (or, as Wheeler called it, a ‘smoky dragon’). Once the particle is observed, it instantaneously collapses into a single position. Wheeler was a scientist-philosopher who introduced the concept of wormholes and coined the term “black hole”. He pioneered the theory of nuclear fission with Niels Bohr and introduced the S-matrix (the scattering matrix used in quantum mechanics). Wheeler devised a concept of quantum foam; a theory of “virtual particles” popping in and out of existence in space (similarly, he conceptualized foam as the foundation of the fabric of the universe). Wheeler inspired many aspiring young scientists, including some of the greats of the 20th century. Among his doctoral students were Richard Feynman, a Nobel Prize laureate, with whom he coauthored the “Wheeler-Feynman absorber theory”; Hugh Everett, who proposed the many worlds interpretation; Kip Thorne, who predicted the existence of red supergiant stars with neutron-star cores; Jacob Bekenstein, who formulated black hole thermodynamics; Charles Misner, who discovered a mathematical spacetime called Misner space; Arthur Wightman, the originator of Wightman axioms; and Benjamin Schumacher, who invented the term “qubit” and is known for the “Schumacher compression”. Wheeler suggested that reality is created by observers and that: “no phenomenon is a real phenomenon until it is an observed phenomenon.” He coined the term “Participatory Anthropic Principle” (PAP) from the Greek “anthropos”, or human. He went further to suggest that “we are participants in bringing into being not only the near and here, but the far away and long ago.” This claim was considered rather outlandish until his thought experiment, known as the “delayed-choice experiment,” was tested in a laboratory in 1984. This experiment was a variation on the famous “double-slit experiment” in which the dual nature of light was exposed (depending on how the experiment was measured and observed, the light behaved like a particle (a photon) or like a wave). The results of this experiment, as well as another conducted in 2007, proved what Wheeler had always suspected – observers’ consciousness is required to bring the universe into existence. This means that a pre-life Earth would have existed in an undetermined state, and a pre-life universe could only exist retroactively. Now it appears that Wheeler was a major influence on New York Times bestselling author Deepak Chopra who joined forces with physicist Menas Kafatos to explore some of the most important and baffling questions about human existence. What happens when modern science reaches a crucial turning point that challenges everything we know about reality? In the coming era, the universe will be completely redefined as a "human universe" radically unlike the cold, empty void where human life and our planet is a mere mote of dust in the cosmos. You Are the Universe literally means what it says--each of us is a co-creator of reality extending to the vastest reaches of time and space. This seemingly impossible proposition follows from the current state of science, where outside the public eye, some key mysteries cannot be solved, even though they are the very issues that define reality itself: What Came Before the Big Bang? Why Does the Universe Fit Together So Perfectly? Where Did Time Come From? What Is the Universe Made Of? Is the Quantum World Linked to Everyday Life? Do We Live in a Conscious Universe? How Did Life First Begin? “The shift into a new paradigm is happening,” the duo writes. “All of us live in a participatory universe. Once you decide that you want to participate fully with mind, body, and soul, the paradigm shift becomes personal. The reality you inhabit will be yours either to embrace or to change.” What these two great minds offer is a bold, new understanding of who we are and how we can transform the world for the better while reaching our greatest potential. The most distant galaxies billions of light years away, have no reality without you, because everything that makes any galaxy real— with the multitude of stars with their heat, emitted light, and masses, the positions of the distant galaxies in space and the velocity that carries each distant galaxy away at enormous speed—requires a human observer with a human nervous system. If no one existed to experience heat, light, mass, and so on, nothing could be real as we know it. If the qualities of Nature are a human construct arising from human experiences, the existence of the physical universe "out there" must be seriously questioned--and along with it, our participation in such a universe. Physics has had decades to process the insight of Wheeler, the eminent American physicist, general relativist and quantum physicist, who originated the notion of a participatory universe, A cosmos in which all of us are embedded as co-creators, replacing the accepted universe "out there," which is separate from us. Wheeler used the image of children with their noses pressed against a bakery window to describe the view that kept the observer separate from the thing being observed. But in a fully participatory universe, the observer and the thing observed are one. The brain isn't the seat of consciousness but acts more like a radio receiver, and perhaps emitter, translating conscious activity into physical correlates. (The radio receiver metaphor describes the feedback loop between mind and brain, which are actually not separate but part of the same complementary activity in consciousness.) To understand our true participation in the universe, we must learn much more about awareness and how it turns mind into matter and vice versa. These are difficult truths for mainstream scientists to accept, and some would react to them with skepticism, disbelief, or anger. But following the other track of explanation, beginning with physical objects "out there," fails utterly to explain how we are conscious to begin with. That's why in scattered pockets, some physicists are beginning to talk about a conscious universe, where consciousness is a given throughout Nature. In fact, the founders of quantum mechanics a century ago agreed more with this view, having understood that quantum mechanics implies observation and agency of mind. In their upcoming book You Are the Universe, they call it the human universe, emphasizing where the whole construct comes from. Image at top of page: Galaxy cluster MACS J0717, one of the most complex and distorted galaxy clusters known, is the site of a collision between four clusters. It is located about 5.4 billion light years away from Earth. X-ray: NASA/CXC/SAO/van Weeren et al.; Optical: NASA/STScI; Radio: NSF/NRAO/VLA
<urn:uuid:d9df5f15-33ff-4fd4-a4f6-766e2f925f8c>
CC-MAIN-2017-47
http://www.dailygalaxy.com/my_weblog/2017/01/the-conscious-universe-a-radical-theory-the-universe-exists-because-we-are-here-view-video.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809695.97/warc/CC-MAIN-20171125071427-20171125091427-00183.warc.gz
en
0.946298
1,510
3.515625
4
In the new quantum information technologies, fragile quantum states have to be transferred between distant quantum bits. Researchers at ETH have now realized such a quantum transmission between two solid-state qubits at the push of a button. Data transmission is the backbone of the modern information society, on both the large and small scale. On the internet, data are exchanged between computers all over the world, most often using fibre optic cables. Inside a computer, on the other hand, information has to be shuttled back and forth between different processors. A reliable exchange of data is also of great importance for the new quantum information technologies that are currently being developed – but at the same time it is also fiendishly difficult. At the ETH in Zurich, a team of physicists led by Andreas Wallraff of the Laboratory for Solid State Physics has now succeeded in transmitting quantum information, at the push of button and with high fidelity, between two quantum bits roughly a metre apart. Their results are published in the scientific journal Nature this week. Flying quantum bits The main peculiarity of quantum information technologies, such as quantum computers and quantum cryptography, is the use of quantum bits or «qubits» as the elementary unit of information. Differently from classical bits, qubits cannot just have the value 0 or 1, but also take on so-called superposition states. On the one hand, this results in the possibility to build extremely powerful computers that make use of those superposition states to perform calculations much more efficiently and faster than classical computers. On the other hand, those states are also very sensitive and cannot be transmitted simply using conventional techniques. The problem is that the state of a stationary qubit first has to be transformed into a so-called “flying” qubit, for instance a photon, and then back into another stationary qubit. A few years ago researchers were able to transmit the quantum state of an atom in this way. Wallraff and his co-workers have now succeeded in realizing such a transmission also from one superconducting solid-state qubit to another one some distance away. To do so, the physicists connected two superconducting qubits using a coaxial cable of the kind that is also used to connect to antenna terminals. The quantum state of the first qubit, which is defined by the number of superconducting electron pairs (also known as Cooper pairs) contained in it, was first transferred to a microwave photon of a resonator using very precisely controlled microwave pulses. From that resonator the photon could then fly through the coaxial cable to a second resonator, inside of which microwave pulses, once more, transferred its quantum state onto the the second qubit. Similar experiments were recently carried out at Yale University. Deterministic rather than probabilistic “The important point of our method is that the transmission of the quantum state is deterministic, which means that it works at the push of a button”, Philipp Kurpiers, a PhD student in Wallraff’s lab, emphasizes. In some earlier experiments a transfer of quantum states could already be realized, but that transmission was probabilistic: sometimes it worked, but most of the time it didn’t. A successful transmission could, for instance, be signalled by a “heralding photon”. Whenever the transmission hadn’t worked, one simply tried again. In that way, the effective quantum transmission rate was, of course, strongly reduced. For practical applications, therefore, deterministic methods such as the one now demonstrated at ETH are clearly advantageous. “Our transmission rate for quantum states is among the highest ever realized, and at 80% our transmission fidelity is very good in the first realization of the protocol”, says Andreas Wallraff. Using their technique, the researchers were also able to create a quantum mechanical entanglement between the qubits as many as 50,000 times per second. The transmission procedure itself took less than a millionth of a second, which means that there is quite a bit of room for improvement in the transmission rate. Quantum mechanical entanglement creates an intimate link between two quantum objects even across large distances, a feature that is used for cryptography or quantum teleportation. Quantum transfer for quantum computers As a next step, the researchers want to try to use two qubits each as transmitter and receiver, which makes entanglement swapping between the qubit pairs possible. Such a process is useful for larger quantum computers, which are supposed to be built in the next few years. So far, they only consist of a handful of qubits, but when trying to build larger computers, already for a few hundred qubits one will have to worry about how to connect them most effectively in order to exploit the advantages of a quantum computer in the best possible way. Much like clusters of single computers used today, quantum computer modules could then be connected together using Wallraff’s technique. The transmission distance, which is currently about a metre, could certainly be increased. Wallraff and his colleagues recently demonstrated that an extremely cold, and thus superconducting, cable could transmit photons over distances of several tens of metres with very little loss. Wiring together a quantum computing centre, therefore, seems to be quite feasible. Publication: Kurpiers P, Magnard P, Walter T, Royer B, Pechal M, Heinsoo J, Salathé Y, Akin A, Storz S, Besse J-C, Gasparinetti S, Blais B, Wallraff A. Deterministic quantum state transfer and remote entanglement using microwave photons. Nature, volume 558, pages 264–267 (2018), doi: 10.1038/s41586-018-0195-y
<urn:uuid:8ccc7675-99fb-4cbe-85ec-1a5f0ffcea3f>
CC-MAIN-2021-10
https://scitechdaily.com/quantum-transmission-between-two-solid-state-qubits-at-the-push-of-a-button/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360293.33/warc/CC-MAIN-20210228054509-20210228084509-00258.warc.gz
en
0.950542
1,184
3.609375
4
These days, losing the manual for some piece of electronics you’ve purchased is notable mostly because you had a printed document to lose in the first place. In the dead-tree dominated days of yore, of course, this was less true. Documentation loss is a major problem in the effort to understand old computer systems, and it’s part of what drives ongoing data preservation efforts across the industry. Until recently, the Zuse Z4 could have been a poster child for this sort of problem. The Z4 was the brainchild of Konrad Zuse, a German who deserves to be better known than he is for his early, groundbreaking work. Zuse had the misfortune to be making some of his biggest breakthroughs immediately prior to and during World War II. It was Zuse who designed the first high-level programming language from 1942 to 1945. This is remarkable because, as Wikipedia notes, Zuse had no training whatsoever in mechanical computing devices. He independently discovered both propositional calculus and lattice theory, calling them “combinatorics of conditionals” and “study of intervals,” respectively. The Zuse Z4 is the oldest preserved digital computer in the world and arguably* the first digital computer. The Z4 was developed through the end of the war and was moved multiple times while under construction to keep it away from the advancing Soviet army. After the war, it was expanded and became the second digital computer in the world to be sold. The preserved model is on display at the Deutsches Museum in Munich and is pictured above. Its documentation, however, was a different story. A recent blog post by the Association of Computing Machinery details how the rare documents were found. Archivist Evelyn Boesch, with ETH Zurich University, contacted Herbert Bruder of the ACM and informed him that her father, René Boesch, had kept a tranche of rare historical documents. These turned out to include a user manual for the Z4 Zuse, as well as notes on flutter calculations. Other documents, dated October 27, 1953, detail what the Z4 was working on. At the time, it was being used to perform flutter calculations on the Swiss FFA P-16 fighter aircraft, which was then in development. Details from the recovered documents show that it took the Z4 50 hours to simulate 2.4 seconds of flight time, which is slightly worse than the current version of Microsoft Flight Simulator. The ACM blog post notes that “around 100 jobs were carried out with the Z4 between 1950 and 1955,” implying an average per-job computation time of about three weeks. What We Learn From Manuals Like This The recovered Z4 manual illustrates why this type of document preservation is so important. From their earliest days, computers were upgradeable — machines like ENIAC were outfitted with the equivalent of RAM upgrades and CPU improvements. In the Z4’s case, support for conditional jump instructions was added post-manufacture. The only problem was, nobody could remember exactly how the feature worked. ACM notes: “However, in a survey a few years ago, the few surviving eyewitnesses could not remember how it was executed.” Page 8 of the manual provides this information. My German is rusty, my technical German is nonexistent, and frankly, the images are a bit tough to read, so I’m not going to try to translate exactly how the function worked. Without information like this, it would be impossible to precisely replicate or understand how the Z4 embodied or improved upon the computational capabilities of the time. *The answer to “Who invented the first computer?” is essentially arbitrary and depends entirely on how you choose to define the term “computer.” The UK’s Colossus is declared the world’s first “programmable, electronic, digital computer,” by Wikipedia, but it was programmed by switches and plugs, not a stored program. The Z4 is considered to be the first commercial digital computer but it’s not electronic. The first electronic stored-program computer is the Manchester Baby, but Konrad Zuse’s earlier Z3 could store programs on tape — it just wasn’t electronic. Other obscure machines, like the Atanasoff-Berry Computer, were not Turing-complete and couldn’t store programs, but still contributed critical ideas to the development of computing. Also, if you were taught that ENIAC was the first computer (or digital computer, or electronic digital computer, etc, ad nauseam), that’s more propaganda than fact. ENIAC was more directly based on machines like Colossus than was known at the time, because the wartime efforts of the British remained classified, while ENIAC was widely celebrated in the media. Finally, reading up on the history of early computing is a good reminder of how many people, institutions, and companies contributed various technologies and principles to the field. One reason you can subdivide the question of “Who built the first computer” to such a fine degree is that there were so many “firsts” for someone to achieve. There was a time in the 1930s and 1940s when mechanical, electromechanical, and digital systems were sharing space and serious development dollars simultaneously. We don’t have anything remotely equivalent today, and even our wildest architectural departures from the x86 “norm” are still based on digital computing. That could change in the future, if Intel’s MESO architecture comes to fruition and proves capable of replacing CMOS in the long term. But for now, the 1930s and 1940s represent a tremendously dynamic period in computing history that we don’t really have an equivalent for — though some of the quantum computing work is getting really interesting.
<urn:uuid:b2901c50-f57e-4bed-b4bf-d264ac5b151c>
CC-MAIN-2021-10
https://www.extremetech.com/computing/315396-we-just-found-the-user-manual-for-the-first-digital-computer-ever-built
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366969.45/warc/CC-MAIN-20210303134756-20210303164756-00617.warc.gz
en
0.971809
1,208
3.65625
4
Vincent van Gogh’s “Starry Night” seems to have a special appeal for scientists, who have recreated it using bacteria, among other media, in the past. Now scientists at Caltech have made their own tiny version of the painting—a dime’s width across—out of folded DNA molecules. Some day the same technique could be used to build teensy biosensors, or for targeted drug delivery. It’s called “DNA origami,” and while many different kinds of shapes have been created using it, this is the first proof of concept that it’s possible to scale up and build large numbers of DNA based devices on computer chips. The Caltech team described their work in a new paper in Nature. “Everybody thinks molecules are eventually going to be the devices of the future,” Caltech’s Paul Rothemund, DNA origami pioneer and co-author, told Gizmodo. “But how do you connect them? How do you wire them up into larger circuits? How do you do anything with them? You need an interface between the molecular and the macroscopic world, and that’s what this is.” It’s been ten years since Rothemund made the first amusing shapes by folding strands of DNA. His nanoscale smiley faces, stars, snowflakes, and a miniature map of the Western hemisphere were even displayed at the Museum of Modern Art in New York City in 2008—a true marriage of science and art. DNA takes the form of a double helix, and encodes all the genetic instructions for manufacturing proteins. It has four repeating chemical bases—known as A, T, G, and C—that are complementary, so A always pairs with T, and G always pairs with C. To create his special shapes, Rothemund folded a single long strand of DNA back and forth into whatever shape or pattern he desired (determined beforehand with computer modeling), then stuck it all together at strategic points with “staples” comprised of shorter DNA strands. Each V-shaped staple had two “arms” with a base sequence that would bind to its complementary sequence on the longer DNA strand. Then he heated the long DNA strand in a saline solution, and let the whole thing self-assemble into the desired pattern. It only takes about one week to design the pattern on the computer, and another week to synthesize the DNA, and the actual self-assembly only takes a few hours. “But then you’re stuck with a device that’s floating around in a solution,” said Rothemund. “You can’t combine it with anything else, you can’t wire it into a circuit, it’s even hard to measure its performance.” If this were ever to find any practical application, Rothemund knew he needed to figure out how to integrate his DNA origami with silicon microfabrication, and he collaborated with IBM scientists to do just that. By 2009, they had discovered that you could make sticky patches on a chip that were the same size and shape as the DNA origami. Simply pour the solution containing the DNA over the surface of the chip and the DNA molecules will stick to those matching patches. That DNA shape now acts as scaffolding, making it possible to attach other tiny components—like fluorescent molecules. Rothemund likens it to the pegboards typically found in garages to hold various tools, except this is a self-assembled pegboard where the tools find their own positions and stick there, held in place by DNA functioning like Velcro. Rothemund and his colleagues have been refining this technique ever since. Over the last six years, he and a Caltech postdoc, Ashwin Gopinath, have shown that they can position their DNA origami on pretty much any surface used to make computer chips. And their latest paper offers the first application: using the method to stick fluorescent molecules into tiny light sources, much like light bulbs screw into lamps. The “lamps” in these experiments are photonic crystal cavities tuned to a specific wavelength of light—in this case, a deep shade of red. (Manmade photonic crystals are engineered with a highly precise honeycomb structure that causes light to reflect off the surface in such a way as to block certain frequencies of light and let others through.) The injected fluorescent molecules will glow at the tuned wavelength, thereby “lighting” the lamps. But location is key: the molecules will glow more brightly at some locations within the cavity than at other locations. By fiddling with positioning, Rothemund and Gopinath found they could create checkerboard patterns of “hot” and “cold” spots. That gave them the capability to reproduce other, more elaborate patterns. Gopinath chose to recreate “Starry Night” to demonstrate the technique’s power, because he’d always liked van Gogh’s work. Besides, he had just seen that Doctor Who episode (“Vincent and the Doctor”) in which everyone’s favorite Time Lord goes back to 1890 to help a fictional van Gogh battle an alien monster. Whereas prior work in this area used just a handful of these kinds of devices, Gopinath scaled everything up and stitched together 65,536 of them to recreate van Gogh’s masterpiece. The next step is to refine this technique even further, perhaps by using different fluorescent molecules or another type of light emitter, like quantum dots, since the ones they used for these experiments tend to burn out quickly. Plus, the colors aren’t as pure as one would like for certain applications, like optical or quantum computing at the nanoscale. Physicists are likely to be more interested in the potential for doing more fundamental experiments. For instance, an upcoming set of experiments will involve placing multiple emitters inside resonators and trying to get them to sync with each other—a phenomenon called “superadiance” that was first predicted by Robert Dicke back in 1952. Gopinath likens the effect to how a bunch of metronomes on a table may start ticking out of sync, but will gradually start ticking in unison over time if the conditions are just right. In much the same way, multiple light emitters should sync up as well. “Nobody has yet done a clean experiment, because you have to position emitters at specific distances with respect to each other,” said Gopinath. This new paper provides a possible way to do that.
<urn:uuid:57d6f6f5-ab3d-492d-91a8-2c1aa504381c>
CC-MAIN-2021-10
https://gizmodo.com/heres-van-goghs-starry-night-recreated-with-dna-origami-1783358097
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355937.26/warc/CC-MAIN-20210225211435-20210226001435-00420.warc.gz
en
0.951229
1,392
3.75
4
The Google quantum computer “Sycamore” recently solved an equation in 200 seconds, a task that would have taken a supercomputer thousands of years. Photo: Google The Basics of a Quantum Computer Mar 5, 2020 by Carlos M. Gonzalez Companies like IBM, Google, and Honeywell—that has just unveiled its new quantum computer in partnership with Microsoft—are all developing quantum computing systems to tackle complex computations for both business and engineering services. Understanding how a quantum computer works and operates is an ongoing puzzle, even to its own developers. The larger question is, can quantum computers become the future of computing and what role will engineers play in their development? How Does a Quantum Computer Work? First, to understand quantum computing, we need to understand three basic principles of quantum mechanics, and how quantum computers manipulate those mechanics to store information differently. For regular computers, bits are the basic unit of information. They are binary, that is, they can be either on, represented by a “1,” or off, noted by a “0.” This binary code is the language of computer coding. Arranging the 1’s and 0’s into different configurations enables us to see an image, a video, a text, or a graphic on any computer. The basic unit of information in quantum computing is the qubit, and it has many possibilities. "Physicists often think of a qubit like a little globe, with '0' at the north pole and '1' at the south pole," said Marissa Giustina, a Google research scientist and quantum electronics engineer. "The qubit’s configuration is represented by a point on the globe. In manipulating the qubit, we can send any point on the globe to any other point on the globe." Read More About: Artificial Intelligence Transforms Manufacturing Quantum computing uses the mechanics of superposition, entanglement, and interference to create states of exponential scalability. - Superposition is a combination of states that would typically operate independently of each other. Dr. Talia Gershon, senior manager of Q Experiences at IBM Research, describes superposition as a qubit operating in both a “yes” and a “no” state, much like a coin spinning on a table that can be both heads and tales. Superposition reflects actual behavior in which an object can be in multiple states at the same time. - Entanglement creates a system of qubits. If two qubits are entangled, they will both show the same result when measured. To use the coin analogy, Dr. Gershon explains that two coins spinning at the same time, would both have equal value regardless of which face is up when they stop spinning. If one qubit is measured as open, its entangled qubit is measured open. - Interference is the act of quantum qubits operating as a wave. These waves can work in unison or opposite of each other. When the waves are in phase, their amplitudes add, creating constructive interference. When they are out of phase, their amplitudes cancel out, causing destructive interference. This is very similar to how noise-canceling headphones work. “By using interference, we can amplify the signals leading to the right answer, and cancel out the signals leading to the wrong answer,” Dr. Gershon said. Like a conventional bit, it can be either "1" or "0". But it can also be in both those states at the same time. This means that each qubit has four potential states—"1-1", "1-0", "0-1", and "0-0"—though it can only be in one of those states at a time. These grow exponentially with more qubits. So if you have 100 qubits, there are 2 to 100th power of possible states. Google’s quantum computer, for example, can perform complex test computations within 200 seconds. The most powerful supercomputers would spend years to finish the same computations. You May Also Like: AI to Predict Kidney Failure in Advance For each of these states, the value of qubit can only be measured in 1’s and 0’s. So while information can be stored in multiple states, computing those states still require a binary relationship. This is the current hurdle of quantum computing. Researchers are working on how to scale up quantum computers and how to measure the quantum processors accurately. What is Inside a Quantum Computer? The inside of a quantum computer resembles a massive fridge. The dilution refrigerator is layered in tiers that create colder and colder levels until it reaches super freezing temperatures of 10 to 15 milliKelvin, which is colder than temperatures in outer space. These temperatures allow quantum processors to create superposition and entanglement scenarios. IBM’s quantum computer can be broken down into seven areas. It starts with the qubit signal amplifier. This first is one of two amplifying stages where the cooling starts to a temperature of 4 Kelvin. The second area is the input microwave lines, where attenuation is applied to each stage of the refrigerator to protect the qubits from thermal noise while controls and signals are sent to and from the processor. The third area is the superconducting coaxial lines that direct the signals between the first and second amplifying stages. The fourth area is cryogenic isolators, which enable the qubit signals to go forward while preventing noise. The fifth area is quantum amplifiers inside a magnetic shield, which captures and amplifies readout signals while minimizes noise. The sixth area is the cryoperm shield. The shield is where the quantum processor sits, and the qubits are found within the quantum processor. It protects the quantum processor from the electromagnetic radiation to preserve its quality. Lastly, the seventh area is the mixing chamber, the lowest part of the refrigerator. It provides the necessary cooling power for the processor to function. Recommended for You: Artificial Intelligence Camera Improves Sight in Autonomous Vehicles This device is massive, and the processors are its finite resource. Google’s quantum computer Sycamore is a near intermediate noisy quantum (NISQ) device. The Sycamore has about 50 qubits and a finite lifetime. A NISQ device will perform up to a few thousand quantum operations, and then you will need to replace the quantum processor with new qubits. The limited computational lifetime forces engineers to carefully decide which computations will be performed. How Engineers Will Impact Quantum Computing? To date, quantum computer use cases are limited and have revolved around solving complex data scenarios that would be difficult for supercomputers. “It is very early for quantum computing, and we are building assembly languages so you can interchangeably program for a supercomputer or a quantum computer. We are not envisioning quantum computers replacing classical computers anytime soon,” Dr. Gershon said. “We think quantum computers are going to be used to accelerate the types of computations that are hard for classical machines. Simulating nature is something that is really hard—such as modeling atomic bonding or electronic orbital overlap. Instead of writing out a large summation over many terms, you can now simulate the system directly onto a quantum computer.” Watch our Video on: CES 2020 Highlights in Robotics, AI, and Smart Vehicles "Quantum computing will enable us to tackle complex scientific and business challenges, driving step-change improvements in computational power, operating costs, and speed," said Honeywell's Chief Executive Darius Adamczyk in a recent press release. Honeywell has partnered with two quantum software and algorithm providers, Cambridge Quantum Computing and Zapata Computing, to launch and to research use cases for their quantum computing. "Materials companies will explore new molecular structures. Transportation companies will optimize logistics. Financial institutions will need faster and more precise software applications. Pharmaceutical companies will accelerate the discovery of new drugs. Honeywell is striving to influence how quantum computing evolves and to create opportunities for our customers to benefit from this powerful new technology," said Adamczyk. A prime example of using quantum computing is for chemistry applications. Quantum computers can predict the behavior of molecules in a solar cell, because the same laws of physics govern qubits in a quantum processor. It provides engineers with a more accurate model of how their design will function. While computer processing is advanced, the systems used to create a quantum computer are still based on traditional engineering. “To build the dilution refrigerator, traditional mechanical cryogenic engineering is used to achieve the super cold temperatures,” said Yu Chen, a quantum electrical engineer at Google. “Also, the control signal used to command the qubits are based on classical micro-electrical engineering principals.” Creating new systems for quantum computers will be the task for design engineers. For example, in order to control 50 qubits, a computer will need more than 100 control channels. Determining how to create stable and scalable connections will be the task for electrical engineers. On the mechanical side, creating stable heating and cooling systems will be the main task. Vacuums are used to create the low temperatures, and the challenge is to figure out how vibration will affect the computer’s systems. Mechanical engineers will need to design a cooling system capable of the low Kelvin temperatures without damaging the equipment. “There is a lot of unknown in terms of how the current environment will interact with quantum systems. We will need a lot of help from the engineering community to reach the next stage,” Chen said. “How do we modify classical computing facilities to house quantum computers? How should we build logic facilities? What are the mechanical concerns to house a quantum computer? These are questions that the engineering community will need to answer once the quantum computer begins to scale.” Carlos M. González is special projects manager.
<urn:uuid:55f980f8-913d-4025-a12c-ae1104384b55>
CC-MAIN-2021-10
https://www.asme.org/topics-resources/content/the-basics-of-a-quantum-computer
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00260.warc.gz
en
0.919873
2,045
3.78125
4
Image credit: archy13 / Shutterstock.com Right now, in the world of computing the race is on to create a truly useful and effective quantum computer. Next generation super-computers such as these would pave the way for solving a new realm of problems that are incomprehensible to existing computers. The benefit of quantum computing is that unlike classical computing it can make good use of the unique ability possessed by subatomic particles to exist in more than one state at any given time. What this means is that whereas the current generation of computers use bits – which is a single piece of information that can exist in one of two states, one or zero – quantum computers make use of quantum bits known as ‘qubits’, instead. Therefore, they have the ability exceed traditional information storage capabilities of one or zero due to the fact they can exist in any superposition of these values. However, the coherence of a qubit, i.e. its preservation of the superposition, is in a fragile quantum state which be easily destroyed by environmental ‘noise’. Furthermore, this noise that can be generated by electronic systems, heat dissipation, or any impurities present in the qubit itself can lead to critical errors that could prove demanding to rectify. MIT and Dartmouth College researchers have successfully designed and coordinated the first set of laboratory tests that utilizes a breakthrough method that allows for effective monitoring and detection of the characteristics that troublesome environmental noise generates. This significant leap may offer new insights into microscopic noise mechanisms to further assist the engineering of state-of-the-art processes to protect the fragile qubits. This is the first concrete step toward trying to characterize more complicated types of noise processes than commonly assumed in the quantum domain. As qubit coherence properties are being constantly improved, it is important to detect non-Gaussian noise in order to build the most precise quantum systems possible. Lorenza Viola, Professor of Physics, Dartmouth The technique developed by the researchers separates non-Gaussian noise from the background Gaussian noise, then they were able to reconstruct comprehensive sets of information about these signals by using signal-processing techniques. Thus, offering researchers the ability to generate more realistic noise models, which could go some way toward further protecting qubits by enabling vigorous processes that shields them from certain noise types. This is necessitated by the fact that the development of qubits with fewer defects than previous iterations may lead to an increased presence of non-Gaussian noise. This is akin to being at a loud party where although it may be difficult to maintain a steady conversation it is still possible, however, when individual voices start to stand-out it can contribute to a breakdown in one’s own thought-process making it much more difficult to sustain continual discussion. “It can be very distracting”, stated William Oliver, an associate professor of electrical engineering and computer science, professor of the practice of physics, MIT Lincoln Laboratory Fellow, and associate director of the Research Laboratory for Electronics (RLE). For qubits with many defects, there is noise that decoheres, but we generally know how to handle that type of aggregate, usually Gaussian noise. However, as qubits improve and there are fewer defects, the individuals start to stand out, and the noise may no longer be simply of a Gaussian nature. We can find ways to handle that, too, but we first need to know the specific type of non-Gaussian noise and its statistics. Throughout their research, the team determined that qubits with superconducting capabilities can act as sensors for the noise they generate themselves. In the experiments, they introduced non-Gaussian ‘dephasing’ noise as engineered flux noise that interrupts the coherence of the qubit, this can then be utilized as a measuring tool. “Usually, we want to avoid decoherence, but in this case, how the qubit decoheres tells us something about the noise in its environment,” Oliver says. A detailed description of the process was published in a paper in the journal Nature Communications. However, while the study won’t make large-scale quantum computers manifest in the immediate future it is still considered highly valuable work as the team bridged the gap between theory and practice. “This research started on the whiteboard. We didn’t know if someone was going to be able to put it into practice, but despite significant conceptual and experimental challenges, the MIT team did it,” said Felix Beaudoin, a former Dartmouth postdoctoral student and vital part of Professor Viola’s team. The progressive impact this study could have on the future of quantum computing is far-reaching, as well as preserving the integrity of qubits it would enable the computers to be more precise, robust, and dependable. Once the gate is open it is expected quantum computing will allow machine learning to accelerate exponentially which means taking giant steps towards advanced AI systems and even reducing the time to solve a problem from hundreds of years to just a few seconds. In short, we could see quantum computing solving some of humanities most complex and difficult questions.
<urn:uuid:34f35ce2-5067-4f5a-b198-02c25b447908>
CC-MAIN-2021-10
https://www.azoquantum.com/News.aspx?newsID=6681
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00462.warc.gz
en
0.949531
1,051
3.90625
4
Through the Internet, humans have connected the world. People are closer to each other than ever while still remaining apart. The next phase of the the Internet will be about connecting things. The Internet of Things will be central to the infrastructure that we build. (The “Futurist’s Cheatsheet” series surveys technologies on the horizon: their promise, how likely they are, and when they might become part of our daily lives. This article is Part 5.) What Is It? Think of a thing. Really, it could be anything. A chair, a toaster, parts of a car, the lights in your house, the electricity meter, the security cameras in your offices, a fire hydrant, traffic lights … really, anything or everything that can exist could be connected to the Internet. Another name for the Internet of Things is a network of things. The network can monitor your home, your car, infrastructure (utilities such as electricity or water), traffic patterns and a variety of other possibilities to create a more informed and responsive system through data analysis. How It Works Do you really need an Internet-connected toaster? Probably not. But, the toaster is a good place to start when discussing the Internet of Things. What would you expect from a smart toaster? Perhaps a touch screen on which to schedule cooking. It could be connected to the coffee pot, enabling the perfect breakfast for you as soon as you wake. Your toaster could be programmed from your computer or a mobile app. Say you are laying in bed and know you are going to sleep in the next day, pull out your smartphone and reprogram the toaster to start an hour later. A toaster could have its own IP address on the Internet. In theory, you could visit your toaster’s site. Giving things a full IP address is one way to tie a thing to the Internet. Another way, and the way in which many things will be tied to the Internet, is for a thing to just have the ability to connect to the Internet, without and IP address. Now, imagine that there is no digital interface on your toaster. In this case it is just a toaster that happens to have cellular or Wi-Fi capabilities and sensors to monitor how well it performs. It sends sensor data back to the manufacturer through Internet nodes and portals without an individual IP address. The manufacturer uses this data to know how its product is working in the wild, how often it is used, and use this data to make a better toaster. Go back and replace the word toaster with anything, say, a power meter. The same concepts apply. An Internet of Things can use the Web as an interface, or just use the Internet to move data. That data can be used to interact with the network of things or just as a pipeline where data moves two ways, analyzed and used to make objects smarter and more responsive to people’s needs. There are so many ways that an Internet of Things could impact people’s lives that it is hard to describe everything. Distilling it to a few key areas helps define what the scope of an Internet of Things could be: infrastructure (buildings and utilities), consumer (cars and homes), health care and businesses (consumer products and retail locations). Weather-related sensors could help agriculture by monitoring the moisture in the air or ground and give farmer’s warning about droughts. Smart buildings can provide enhanced security for the people that enter them or warning on disasters such as earthquakes. Connected cars can improve traffic flows or allow functions to be controlled remotely. Items within the home (such as the toaster) can be controlled and monitored and even connected to each other. Health care is an interesting avenue for the Internet of Things. Certain aspects of the body could be connected to the Internet. Heart sensors could give patients and doctors data to prevent disease. Sensors that monitor white blood cells could give cancer or AIDS patients warning of a relapse. The scope and impact of the Internet of Things is almost limitless. It is just up to the innovators of the world to be creative and find ways to make it work. Much of the base technology that will enable and Internet of Things is available. The challenge now is to refine that technology and make it ubiquitous. A truly connected society involves a concerted effort from many different industry sectors such as telecommunications (the lines that would do the actual connecting), to device and appliance makers that would implant sensors and connectivity into things. Software developers would then have to create the interfaces. There are also security and privacy issues, such as keeping this mountain of data safe and away from prying eyes. Wireless standards and infrastructure also need to improve to handle all of the data that would be generated. When Will It Be Ready? Many of the innovations we have written about in The Futurist’s Cheat Sheet have seeds in today’s technology. That is the same for the Internet of Things. The technology is present, but the infrastructure and stability behind it needs to be improved. Companies specializing in machine-to-machine functions such as Numerex and KORETelematics are already in the process of designing the connected world and building business models that will help define the Internet of Things. The progression will be slow. There is no event horizon where suddenly the technology that is only a theory becomes a reality. The Internet of Things is something that must be built and refined, not something like quantum computing that is waiting for a significant technological breakthrough. In five years we will start seeing more connected cars and homes. Infrastructure like smart grids and utilities will take longer to build and we will see it evolve over the next 10 years and more. The Internet of Things will become embedded in our lives and the growth will not stop during out lifetimes. European Commission: Cluster of European Research Projects: Vision and Challenges for Realising the Internet of Things (March 2010) ReadWriteWeb: Top 5 Web Trends of 2009: Internet of Things ReadWriteWeb: Top 10 Internet of Things Developments of 2010 ReadWriteWeb: Internet of Things Explained (Video)
<urn:uuid:b9e2c45f-3f07-4204-bf5d-f484a9dfa370>
CC-MAIN-2021-10
https://readwrite.com/2012/08/31/futurists-cheat-sheet-internet-of-things/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00223.warc.gz
en
0.943829
1,268
3.625
4
A proof-of-concept published today in Nature promises warmer, cheaper and more robust quantum computing. And it can be manufactured using conventional silicon chip foundries. Most quantum computers being developed around the world will only work at fractions of a degree above absolute zero. That requires multi-million-dollar refrigeration and as soon as you plug them into conventional electronic circuits they’ll instantly overheat. But now researchers led by Professor Andrew Dzurak at UNSW Sydney have addressed this problem. “Our new results open a path from experimental devices to affordable quantum computers for real world business and government applications,” says Professor Dzurak. The researchers’ proof-of-concept quantum processor unit cell, on a silicon chip, works at 1.5 Kelvin – 15 times warmer than the main competing chip-based technology being developed by Google, IBM, and others, which uses superconducting qubits. “This is still very cold, but is a temperature that can be achieved using just a few thousand dollars’ worth of refrigeration, rather than the millions of dollars needed to cool chips to 0.1 Kelvin,” explains Dzurak. “While difficult to appreciate using our everyday concepts of temperature, this increase is extreme in the quantum world.” Quantum computers are expected to outperform conventional ones for a range of important problems, from precision drug-making to search algorithms. Designing one that can be manufactured and operated in a real-world setting, however, represents a major technical challenge. The UNSW researchers believe that they have overcome one of the hardest obstacles standing in the way of quantum computers becoming a reality. In a paper published in the journal Nature today, Dzurak’s team, together with collaborators in Canada, Finland and Japan, report a proof-of-concept quantum processor unit cell that, unlike most designs being explored worldwide, doesn’t need to operate at temperatures below one-tenth of one Kelvin. Dzurak’s team first announced their experimental results via the academic pre-print archive in February last year. Then, in October 2019, a group in the Netherlands led by a former post-doctoral researcher in Dzurak’s group, Menno Veldhorst, announced a similar result using the same silicon technology developed at UNSW in 2014. The confirmation of this ‘hot qubit’ behaviour by two groups on opposite sides of the world has led to the two papers being published ‘back-to-back’ in the same issue of Nature today. Qubit pairs are the fundamental units of quantum computing. Like its classical computing analogue – the bit – each qubit characterises two states, a 0 or a 1, to create a binary code. Unlike a bit, however, it can manifest both states simultaneously, in what is known as a “superposition”. Cheaper and easier to integrate The unit cell developed by Dzurak’s team comprises two qubits confined in a pair of quantum dots embedded in silicon. The result, scaled up, can be manufactured using existing silicon chip factories, and would operate without the need for multi-million-dollar cooling. It would also be easier to integrate with conventional silicon chips, which will be needed to control the quantum processor. A quantum computer that is able to perform the complex calculations needed to design new medicines, for example, will require millions of qubit pairs, and is generally accepted to be at least a decade away. This need for millions of qubits presents a big challenge for designers. “Every qubit pair added to the system increases the total heat generated,” explains Dzurak, “and added heat leads to errors. That’s primarily why current designs need to be kept so close to absolute zero.” The prospect of maintaining quantum computers with enough qubits to be useful at temperatures much colder than deep space is daunting, expensive and pushes refrigeration technology to the limit. The UNSW team, however, have created an elegant solution to the problem, by initialising and “reading” the qubit pairs using electrons tunnelling between the two quantum dots. The proof-of-principle experiments were performed by Dr Henry Yang from the UNSW team, who Dzurak describes as a “brilliant experimentalist”. The Latest Updates from Bing News & Google News Go deeper with Bing News on: - Superconductor experts at Fermilab lead efforts to build revolutionary quantum computerson February 26, 2021 at 3:11 pm By Shivani Majmudar and Grace Rodgers Medill Reports The Fermilab National Accelerator Laboratory, just west of Chicago, is leading one of five national centers to advance quantum computing — a move ... - UCLA Engineering Faculty Receives NSF Grant to Improve Quantum Computing Chipson February 26, 2021 at 7:56 am Kang Wang, a UCLA electrical and computer engineering professor and his colleagues received a one-year, $920,000 grant from the National ... - Qubit breakthrough is a big step towards networked quantum computers say researcherson February 26, 2021 at 7:47 am A research team used a cable to entangle qubits located in different quantum nodes. This could go a long way in creating a powerful cluster of quantum devices. - The hunt for the quantum collapseon February 26, 2021 at 5:43 am The most famous cat in science is Schrödinger's cat, the quantum mechanical mammal, which can exist in a superposition, a state that is alive as well as dead. The moment you look at it, one of both ... - Scalable distributed gate-model quantum computerson February 26, 2021 at 5:12 am A scalable model for a distributed quantum computation is a challenging problem due to the complexity of the problem space provided by the diversity of possible quantum systems, from small-scale ... Go deeper with Google Headlines on: Go deeper with Bing News on: Practical quantum computers - After a Year of Quantum Advances, the Time to Protect Is Nowon February 26, 2021 at 12:40 pm Innovations in quantum computing mean enterprise and manufacturing organizations need to start planning now to defend against new types of cybersecurity threats. - A quantum computer just solved a decades-old problem three million times faster than a classical computeron February 23, 2021 at 7:27 am Wave's researchers demonstrated that a quantum computational advantage could be achieved over classical means. - Could quantum computers fix political polls?on February 23, 2021 at 4:32 am If a quantum system can predict the locations of air molecules in a hurricane, you’d think predicting election results would be a much simpler problem. A quantum physicist and a neuroscientist tell us ... - Towards practical applications in quantum computational biologyon February 22, 2021 at 10:19 am By taking advantage of quantum phenomena, quantum computing devices allow a speedup in solving diverse tasks. In this Perspective, we discuss the potential impact of quantum computing on computational ... - Quantum computing opportunities in renewable energyon February 22, 2021 at 10:18 am There is much difficult work ahead in the translation of quantum computers from theoretical advantages over classical ones into the practical applications that will justify their production. Adding ...
<urn:uuid:f4c88448-5c3d-4011-862d-e80d15a404a0>
CC-MAIN-2021-10
https://innovationtoronto.com/2020/04/breaking-one-of-the-biggest-constraints-on-the-way-to-practical-quantum-computers/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00105.warc.gz
en
0.927738
1,530
3.78125
4
Quantum computing is one of the most interesting fields of study nowadays, both from theoretical and pragmatic point of views. A universal quantum computer can potentially solve certain problems, as factorizing numbers, much faster and with fewer resources than a classical one 1. This developing technology is based on the use of quantum mechanics as a new framework for computation. On quantum computers the information is not store, neither compute, in classical bits, but in quantum bits. The principal difference between classical and quantum bits is that qubits can be in superposition of two different states. Unfortunately, there is a general agreement about the state of the art, and it is that we are still far from having an operable universal quantum computer. Recent experimental groups have developed different architectures, but they are still small and they can operate just a few qubits. Hence, the principal concern regarding the real utility of quantum computers is the possibility of designing one that handles a few hundred qubits, at least. One milestone in the direction of quantum computing has been recently released, by the private company DWave . This company has developed an operable quantum simulator of 128 qubits, called D-Wave One. Furthermore, they are going to install a new version, called D-Wave Two, with capability for operating with 512 qubits, for a new laboratory created by Google and NASA 2. Hence, the question now is what this new machine does, and if it is really a quantum computer. What is D Wave and which problem can it solve? D-Wave is based in a technology called superconducting flux qubits. The qubits it operates are grouped in 4×4 units, with different connections between them. A scheme of the architecture is shown in Fig. 1. The first version has 128 qubits, but tot all of them can be used for computing, as some of them are disconnected from their neighbors. This new machine should be considered a quantum simulator more than a universal quantum computer. It does not perform universal quantum computation, because it cannot make arbitrary operations in all qubits. Basically the principal problem D-Wave can solve is the problem of quantum annealing. That is to find the ground state, that is the state with lowest energy, of an Ising spin glass model. This problem is known to be a non polynomial (NP) hard problem. Because of that when the dimension of the problem growths linearly the complexity of finding a solution growths in an exponential way. There are no indications that allow us to conclude that a quantum computer can solve this problem in a non-exponential way, but in any case it can give an important speedup over classical computers. A fair question regarding this, or any, new technology is which really useful problems can it solve. Directly, it can solve only annealing, but this is interesting enough, because this problem is as hard as the hardest problem in the NP class. In this class there are many popular problems, as the salesman, factoring or minimization in artificial intelligence. If any of these problems is mapped to the problem of annealing it could be solved by the use of this new technology. The way of mapping each concrete algorithm is, of course, highly non trivial. How can D-Wave be tested? Recently, a test of the quantumness of D-Wave has been performed 3. For this purpose several researchers performed computer simulations in both the quantum machine and classical computers. They compare principally three approaches, D-Wave, a classical simulation of how quantum annealing should perform in quantum simulators, and the best known classical. For obvious reasons, the last two approaches were run in classical computers. In Figure 2, the principal differences between these three approaches are displayed. For this simulation the authors selected 1000 different configurations and they launched each of them 1000 times, with different initial states. The plot shows how many configurations were found as a function of the success probability of finding the correct answer for each configuration. The interpretation of this figure is clear. Classical and quantum annealing exhibit very different behaviors. For classical algorithms the distribution presents only one maximum, and most of the problems have one half probability of being solvable. For the quantum case there are many “easy” and “hard” problems and that leads to a distribution with two maxima, closed to zero and one. These result points in the direction that really D-Wave behaves in a quantum way, as its results are closer to the simulation of a quantum system than to a classical algorithm. Finally, the authors also analyzed the scaling of the computation time with the size of the problem, but the results are inconclusive. The optimization problem for 108 qubits is quite easy, and it is difficult to see if there is a quantum speedup. This question could be potentially addressed by the next generation of quantum annealers, which are expected to have 512 qubits. Based in the research performed Boixo et al there are many indicators that D-Wave is a genuine quantum annealer. On the other hand, it is not faster than classical ones. In any case it is definitively a milestone in the field of quantum computer, because it also represents a paradigm shift. Instead of creating a universal quantum computer with a few qubits, the developers of D-Wave have focused in a quantum device with many qubits that can perform only one task. Only time can clarify if this is the beginning of a new time in quantum computing. If D-Wave Two really beats classical computers in annealing it will be the first time a quantum device can compute a general problem better than classical ones. This problem can be, at least potentially, useful to many other fields. - Quantum Computation and Information. Nielsen and Chuang. Cambridge University Press. ↩ - Google and NASA snap up quantum computer. N. Jones. Nature News. doi:10.1038/nature.2013.12999 ↩ - S. Boixo, T.F. Ronnow, S.V. Isakov, Z. Wang, D. Wecker, D.A. Lidar, J.M. Martinis and M. Troyer. arXiv 1304.4595 [quant-ph] ↩
<urn:uuid:6b9eb6f5-1224-4f0c-855e-124ae873f306>
CC-MAIN-2021-10
https://mappingignorance.org/2013/05/30/is-d-wave-a-quantum-computer/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00428.warc.gz
en
0.947238
1,287
3.875
4
I am a Software Engineer with a passion for science, technology, business, and everything in between. Just like that, virtually every single computer on earth can be considered obsolete. Even if a manufacturer decides to release a new computer model within the next six months, it’s out of date. How is this possible? The fact is, Google, IBM, and other tech giants are working around the clock to engineer technology based on quantum mechanics, which makes our current computers look like toys. This is a brand-new type of technology that has never been developed before. So just like a horse and buggy is not the same as a car, our modern-day computers are not at all like the newly developed quantum computers. They have so much more processing power and are capable of cracking problems that current technology cannot even touch. What Is a Quantum Computer? For decades, computers have always used bits and the binary system to compute information. This system consists of zeros and ones and gives an ordinary computer a precise command. The machine thus responds by performing the appropriate activity, whether it be completing a job or producing the data requested. A quantum computer uses qubits, which is a more complex quantum version of the traditional binary system leaving endless possibilities. Unlike our current computers, the jobs or instructions given to the machine can take place under a measure of uncertainty with the help of superposition, entanglement, and interference. But what does this mean? In the quantum world, superposition can be likened to a constant spin of a coin having zero and one individually on each side. Therefore, there is some probability of the outcome being zero and some probability of it being one. Also, this probability does not have to be 50/50, leaving a measure of uncertainty when transferring data. Entanglement incorporates two qubits that are in a superposition spinning between zero and one, and they are mimicking each other in movement. This means that when you alter the state of one particle, you subsequently change the stated of the other particle, no matter how far apart these particles are. The particles are joined together but not by a physical connection. And lastly, interference is the wave movement created by particles called beating, as the data is transmitted. Superposition produces patterns of interference, which at times can work in harmony or cancel each other out when processing data. If you are scratching your head, it just means that you are slowly beginning to get it. To gain a further understanding of quantum computers, you have to grasp what quantum mechanics is. Quantum mechanics is the scientific theory behind the tiniest parts of the world around us, such as molecules, atoms, and other subatomic particles. Based on extensive research, scientists have been able to engineer quantum chips in hermetically sealed glass laboratories and place them into the circuit boards of temperature-controlled quantum computers. The metal on a silicon chip, also known as superconducting qubits, is how the particles of information are transferred from point A to B. Can It Reverse Time? Within a carefully controlled environment, scientists were able to use a quantum computer to reverse a process that had previously taken place. How was this done? Within the theory of quantum mechanics, the atoms and various particles are described as a wave function. It is not the same as a tangible wave, but it’s an abstract mathematical portrayal of the position and movement of an electron. Even so, these calculations of its position are all probability, and nothing is ever exact. Nevertheless, researchers were able to take these calculations and use the law of thermodynamics to reverse a process that had taken place by two qubits. Although difficult and problematic in a standard atmosphere, in a quantum environment it has become a straightforward procedure to complete. This does not mean that a time machine has finally been created. But they still have effectively been able to write a computer program that reverses the system back to its original state 85% of the time. When a third qubit was introduced into the experiment, the success rate decreased to only 50% because the computer found it more challenging to maintain control over the environment. This break-through demonstrates how they can easily mimic occurrences that cannot be duplicated in the real world. Quantum Computers' Future Potential Uncertainty is usually not looked at in a positive light. However, quantum computers take advantage of superposition and this uncertainty in solving problems in a different manner. In the healthcare industry, diseases plague millions of people. With the help of these computers and specially designed applications, one can use uncertainty to predict the spread of a particular illness and help to find a cure. Uncertainty with the IT industry means that you can use this technology to prevent hackers from accessing private information. A hacker would never be able to decode a password key perfectly because they would have to break the laws of quantum physics to break encryption with this technology. The most shocking aspect is the development of applications for teleporting information with the use of a quantum computer. It is not the teleportation of a physical object, but the transportation of data. This is possible through the manipulation of photon particles across space and time, creating a channel for teleportation, and also making it possible for a new type of internet. Scientists were able to complete this by using entanglement with two twin photon particles, separating them by sending one to a satellite that orbits the earth, and then begin the manipulation of one photon, which in turn also manipulates the other. Why or how this happens is not entirely clear, but it has been accomplished. Due to the advanced capabilities of this new technology, artificial intelligence is on the verge of reaching an entirely new level. The human brain has always been much more sophisticated and complex than any computer could have been designed. Engineers have been trying to replicate this type of neural network for a long time. Currently, scientists are building a quantum computer that performs very similar to the human brain. The real challenge with this is designing the software program to accomplish this difficult task and uploading it to a quantum computer. So, the question still remains, will computers be calling all the shots? You tell me. Larry Slawson from North Carolina on March 19, 2019: A very interesting idea. I have never heard of this before. Thank you for sharing.
<urn:uuid:d5737336-9d25-4964-80f5-b37e23d8d475>
CC-MAIN-2021-10
https://turbofuture.com/computers/Will-Computers-Be-Calling-All-the-Shots
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00108.warc.gz
en
0.951007
1,289
3.5
4
- On Tuesday, Intel delivered something called “a 17-qubit superconducting test chip” to QuTech, Intel’s research partner in the Netherlands. - This is the world’s second 17-quantum bits, or qubits, quantum computing chip. The first was introduced by IBM in May. - This chip allows Intel to put a stake in the ground in one of the strangest, and potentially game-changing, new forms of computing that researchers are currently developing. On Tuesday, Intel came out with its state-of-the 17-qubit superconducting test chip, matching the biggest quantum computer chip ever to be produced, by IBM. Until now, quantum computing has been, in some regards, a two-horse race between IBM and Google. In April, Google showed off its research on a nine qubit computing chip and has advanced other research that could allow it to break some quantum computing records by the end of the year. In May, IBM showed off the first-ever 17 qubit chip. IBM’s work is based on research done at Yale through professor Robert Schoelkopf (the IBM team includes many of his Ph.D. and post-grad students). Google’s work is based on research from University of California at Santa Barbara under professor John Martinis, an effort that was backed and absorbed by Google in 2014. All the researchers from IBM, Intel, Google and elsewhere, like Microsoft, are all in a race to to build a 50 quibit chip. That’s the size needed to build a supercomputer which would be vastly more powerful than any of today’s supercomputers. No one even knows what kinds of problems a computer that fast and smart could solve. Quantum computers are different than today’s computers, which are digital. A digital computer thinks in two states: zero and one (or off and on). But a quantum computer uses combinations of zeroes and ones to creates multiple states, which can be a zero, a one, both at the same time or (and this is the weird part) something in between, a mysterious zero/one state that’s hard to describe or determine. These messy states are called “entanglement,” and there are already several well-known mathematical formulas (aka algorithms), that can make use of these states to calculate things that traditional computers aren’t powerful enough yet to do. For instance, quantum computers can work with billions of variables at the same time, like the interaction between molecules in chemistry. They are also great for machine-learning tasks. These computers are expected to help find new drugs, create new forms of computer security. It is also believed that this type of computing could lead to computers that can think and reason to create humanoid robots, or deliver medicine that is personalised to each human’s own, unique chemistry. If all this sounds hard to grasp, don’t worry, you aren’t alone. Microsoft is also betting big on quantum computing and yet Bill Gates admits that, despite all the physics and maths he knows, even he doesn’t really understand how quantum works. That’s how complicated it is. Colder than space For now, the challenge is simply to build bigger quantum computers. As Intel explains, qubits are tremendously fragile. Any noise or unintended distraction can cause them to lose data. They rely on superconducting metals that must be kept unbelievably cold. They must operate in a temperature known as “20 millikelvin — or 250 times colder than deep space,” Intel says. That kind of condition is hard to create and maintain. It’s not just the cold that’s a problem. As a quantum computer grows in size by adding more qubits, it can malfunction in a lot of ways. But progress is going fast. In May of 2016, IBM launched a five-qubit machine and the world’s first cloud service. Flash forward a year and these chips are already more than triple the size. Google expects that it will have created a test computer so big and powerful by the end of this year, that it will be able to perform certain calculations that traditional supercomputers cannot do, a concept called “quantum supremacy,” Martinis told Motherboard. In the meantime, Intel just threw itself headlong into the game. Here’s a closer look at what it’s up to: Get the latest IBM stock price here. Intel's 7-qubit test chip is about the size of a quarter. The gold connectors allow the chip to be connected to the world outside the quantum computer. Here's a look at the other side of the chip, as it was packaged in the box. One of the things that Intel is also working on is how to eventually mass produce these chips. Mass production is a much bigger, harder problem than creating a single experimental chip. There are still lots of variables to be perfected before this technology is ready for the factory floor. These researchers at QuTech's quantum computing lab are focused on just that. Just for comparison: IBM's quantum computer lives in that white thing. It's a special refrigerator that keeps it at almost absolute zero. Business Insider Emails & Alerts Site highlights each day to your inbox.
<urn:uuid:f7e7abd7-5140-4ad8-ad04-2f89f8e0199c>
CC-MAIN-2021-10
https://www.businessinsider.com.au/intel-just-challenged-ibm-for-the-future-of-quantum-computing-2017-10
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00071.warc.gz
en
0.949936
1,120
3.5
4
A quantum computer is any gadget for calculation that utilizes unmistakably quantum mechanical wonders, for example, superposition and ensnarement, to perform the procedure on the information. In an old-style (or regular) computer, data is put away as bits; in quantum computers, it is put away as qubits (quantum bits). The essential guideline of quantum calculation is that the quantum properties can be utilized to speak to and structure information and that quantum instruments can be formulated and worked to perform tasks with this information. Even though quantum registering google is still in its early stages, tests have been done in which quantum computational activities were executed on a few google quantum computers qubits. Research in both hypothetical and reasonable zones proceeds at a distracted pace, and numerous national governments and military subsidizing organizations bolster quantum processing google research to create google quantum computers for both regular citizen and national security purposes, for example, cryptanalysis. On the off chance that enormous scope google quantum computers can be fabricated, they will have the option to take care of specific issues exponentially quicker than any of our present traditional computers (for instance Shor's calculation). Quantum computers are unique about different computers, for example, DNA computers and customary computers dependent on transistors. Some processing structures, optical computers may utilize old-style superposition of electromagnetic waves, however without some explicitly quantum mechanical assets, for example, trap, they have less potential for computational accelerate than quantum computers. The intensity of quantum computers Integer factorization is accepted to be computationally infeasible with normal computers for huge whole numbers that are the result of just a couple of prime numbers (e.g., results of two 300-digit primes). How does Quantum Computer work? Here we will talk about how quantum computers work and on which quantum algorithm. Quantum Computers works on Grover's Algorithm. Quantum computers perform computations dependent on the likelihood of an item's state before it is estimated – rather than only 1s or 0s – which implies they can process a large amount of information as compared to classical computers. Old style computers complete sensible activities utilizing the unmistakable situation of a physical state. These are typically double, which means its activities depend on one of two positions. A solitary state -, for example, on or off, up or down, 1 or 0 – is known as a piece. In quantum registering, activities rather utilize the quantum condition of an article to create what's known as a qubit. These states are the unclear properties of an article before they' ve been distinguished, for example, the turn of an electron or the polarization of a photon. Instead of having an unmistakable position, unmeasured quantum states happen in a blended 'superposition' much the same as a coin turning through the air before it arrives in your grasp. These superpositions can be caught with those of different articles, which means their ultimate results will be numerically related regardless of whether we don't have the foggiest idea yet what they are. The intricate arithmetic behind these disrupted conditions of snared ' turning coins' can be connected to uncommon calculations to make short work of issues that would take a traditional computer quite a while to work out… if they would ever ascertain them by any stretch of the imagination. Such calculations would be valuable in taking care of complex numerical issues, delivering hard- to-break security codes, or foreseeing different molecule collaborations in substance responses. Types of Quantum Computers Processors: IBM IBM Q 53 53 QB Intel 17-Qubit Superconducting Intel Tangle Lake 49 QB Rigetti 8Q Agave 8 QB Comparison Between Quantum and Classic Computers: By comparison, a quantum computer could take care of this issue more effectively than old-style computers utilizing Grover’s algorithm calculation to discover its elements, and consequently, there is google quantum matchless quality. This capacity would permit quantum computers to "break" huge numbers of the cryptographic frameworks being used today, as in there would be a polynomial-time (in the number of bits of the whole number) calculation for taking care of the issue. Specifically, the greater part of the well known open key figures depends on the trouble of calculating whole numbers, including types of RSA. These are utilized to ensure secure Web pages, encoded email, and numerous different kinds of information. Breaking these would have noteworthy consequences for electronic protection and security. The best way to expand the security of a calculation like RSA is to increment the key size and expectation that a foe doesn't have the assets to fabricate and utilize an amazing enough quantum computer. It appears to be conceivable that it will consistently be conceivable to fabricate old-style computers that have a greater number of bits than the number of qubits in the biggest quantum computers. Why Quantum computing is the Supreme? Now, we will discuss google quantum supremacy. For the present, old-style innovation can deal with any errand tossed at a quantum computer. Quantum matchless quality depicts the capacity of quantum computers to beat their old style partners. A few organizations, for example, IBM and Google, guarantee we may be close, as they keep on packing more qubits together and fabricate increasingly exact gadgets. Not every person is persuaded that quantum computers merit the exertion. There is a term called quantum tempering which is a significant factor. Quantum annealing (QA) is a metaheuristic for finding the worldwide least of a given target work over a given arrangement of competitor arrangements (applicant states), by a procedure utilizing quantum variances (as such, a meta-system for finding a technique that finds a flat out least size/length/cost/good ways from inside a perhaps extremely huge, yet regardless limited arrangement of potential arrangements utilizing quantum change based calculation rather than old-style calculation). A few mathematicians accept some hindrances that are difficult to survive, putting quantum figuring always far off. What's Quantum Computer price? Today, a solitary qubit will interfere with you $10,000 – and that is before you consider innovative work costs. At that value, helpful all-inclusive quantum computers – equipment alone – comes in at any rate of $10bn. This for a machine whose genuine business esteem is a long way from ensuring. To make quantum computers industrially reasonable, the expense per qubit should drop significantly. However, how this will occur, nobody knows. Quantum computing Google benefits: Following are quantum computing benefits 1. Quantum computers can take care of issues that are inconceivable or would take a customary computer an unfeasible measure of time (a billion years) to fathom. 2. Quantum computers are extraordinary for taking care of streamlining issues from making sense of the most ideal approach to plan trips at an air terminal to deciding the best conveyance courses for the FedEx truck. 3. Quantum computers will change the scene of information security. Even though quantum computers would have the option to split a significant number of the present encryption methods, forecasts are that they would make hack-confirmation substitutions. Five ways Quantum Computers will change the world: Here are five different ways that show how will quantum computing change the world. Let us discuss it in brief. 1. Make life-sparing medications and comprehend a portion of science's most mind-boggling issues: Quantum registering will alter Artificial Intelligence(AI). The fundamental standard of AI is that the more input you give a computer program, the more precise it becomes. This criticism figures probability from numerous potential decisions and results in the program showing "intelligence" and improved execution. Quantum computers quickly dissect gigantic amounts of information so they can fundamentally abbreviate the AI expectation to absorb information. If innovation turns out to be increasingly instinctive it will make a colossal effect in each Industry. We' ll have the option to do things we never thought conceivable from making life-sparing meds to tackling a portion of science's most mind-boggling issues. 2. A genuine discussion with AI Quantum Computing will change man-made brainpower by giving huge figuring capacity to empower a quicker and increasingly hearty AI particularly in characteristic language handling and general AI. we have achieved an extraordinary arrangement just in the previous barely any years with the present advances in registering power. Quantum Computing is a far cry further developed than anything we have today. An AI on a Quantum Computers can hold a genuine discussion with people and comprehend what is being said. 3. Critical danger to digital security The intensity of quantum computers will predominate our present preparing abilities, introducing another time of information and revelation. On the drawback, that force postures such a critical danger to digital security that we will need to revaluate how we secure business exchanges (and every single other datum moves), or none of them will be protected. 4. No exchanging Let's be honest — quantum registering will make human exchanging outdated. We saw what high-recurrence exchanging did to the intensity of primate-slid entertainers in the money related markets. Quantum calculations are going to exploit to an unheard-of level. 5. Undermine web-based financial exchanges, every one of our interchanges, driverless vehicles, and even our races: Quantum Computing will compel us to re-examine the central ideal models of our advanced security. Known quantum computer assaults — which are simply trusting that genuine quantum computers will show up — will break quite a bit of the present broadly utilized. cryptography. For what reason do we give it a second thought? Since this crypto supports the security of pretty much all that we currently underestimate: from web-based financial exchanges to every one of our interchanges, to the trust we have in our driverless vehicles and even in our races.
<urn:uuid:141014bb-69c5-4ddf-a681-350bd6844e0b>
CC-MAIN-2021-10
https://www.insightsoftechnology.com/simple-guidance-for-you-in-what-is-quantum-computers/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00232.warc.gz
en
0.920918
2,018
3.75
4
Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory. Rosenblatt called his device a perceptron, and the New York Times reported that it was “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.” Those claims turned out to be somewhat overblown. But the device kick-started a field of research that still has huge potential today. A perceptron is a single-layer neural network. The deep-learning networks that have generated so much interest in recent years are direct descendants. Although Rosenblatt’s device never achieved its overhyped potential, there is great hope that one of its descendants might. Today, there is another information processing revolution in its infancy: quantum computing. And that raises an interesting question: is it possible to implement a perceptron on a quantum computer, and if so, how powerful can it be? Today we get an answer of sorts thanks to the work of Francesco Tacchino and colleagues at the University of Pavia in Italy. These guys have built the world’s first perceptron implemented on a quantum computer and then put it through its paces on some simple image processing tasks. In its simplest form, a perceptron takes a vector input—a set of numbers—and multiplies it by a weighting vector to produce a single-number output. If this number is above a certain threshold the output is 1, and if it is below the threshold the output is 0. That has some useful applications. Imagine a pixel array that produces a set of light intensity levels—one for each pixel—when imaging a particular pattern. When this set of numbers is fed into a perceptron, it produces a 1 or 0 output. The goal is to adjust the weighting vector and threshold so that the output is 1 when it sees, say a cat, and 0 in all other cases. Tacchino and co have repeated Rosenblatt’s early work on a quantum computer. The technology that makes this possible is IBM’s Q-5 “Tenerife” superconducting quantum processor. This is a quantum computer capable of processing five qubits and programmable over the web by anyone who can write a quantum algorithm. Tacchino and co have created an algorithm that takes a classical vector (like an image) as an input, combines it with a quantum weighting vector, and then produces a 0 or 1 output. The big advantage of quantum computing is that it allows an exponential increase in the number of dimensions it can process. While a classical perceptron can process an input of N dimensions, a quantum perceptron can process 2N dimensions. Tacchino and co demonstrate this on IBM’s Q-5 processor. Because of the small number of qubits, the processor can handle N = 2. This is equivalent to a 2×2 black-and-white image. The researchers then ask: does this image contain horizontal or vertical lines, or a checkerboard pattern? It turns out that the quantum perceptron can easily classify the patterns in these simple images. “We show that this quantum model of a perceptron can be used as an elementary nonlinear classifier of simple patterns,” say Tacchino and co. They go on to show how it could be used in more complex patterns, albeit in a way that is limited by the number of qubits the quantum processor can handle. That’s interesting work with significant potential. Rosenblatt and others soon discovered that a single perceptron can only classify very simple images, like straight lines. However, other scientists found that combining perceptrons into layers has much more potential. Various other advances and tweaks have led to machines that can recognize objects and faces as accurately as humans can, and even thrash the best human players of chess and Go. Tacchino and co’s quantum perceptron is at a similarly early stage of evolution. Future goals will be to encode the equivalent of gray-scale images and to combine quantum perceptrons into many-layered networks. This group’s work has that potential. “Our procedure is fully general and could be implemented and run on any platform capable of performing universal quantum computation,” they say. Of course, the limiting factor is the availability of more powerful quantum processors capable of handling larger numbers of qubits. But most quantum researchers agree that this kind of capability is close. Indeed, since Tacchino and co did their work, IBM has already made a 16-qubit quantum processor available via the web. It’s only a matter of time before quantum perceptrons become much more powerful. This article was originally published by: https://www.technologyreview.com/s/612435/machine-learning-meet-quantum-computing/
<urn:uuid:86285d1b-0273-415b-90d2-c23ec9b4d5a9>
CC-MAIN-2021-10
https://scienceofsingularity.com/2020/02/24/mit-technology-review-machine-learning-meet-quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00310.warc.gz
en
0.919581
1,052
3.859375
4
Graphene strips folded in similar fashion to origami paper could be used to build microchips that are up to 100 times smaller than conventional chips, found physicists – and packing phones and laptops with those tiny chips could significantly boost the performance of our devices. New research from the University of Sussex in the UK shows that changing the structure of nanomaterials like graphene can unlock electronic properties and effectively enable the material to act like a transistor. - Robots for kids: STEM kits and more tech gifts for hackers of all ages - The best VR and AR headsets for business and personal use - The best 3D printers for business and home use - What is AI? Everything you need to know - We are living a dizzying rate of technological change. Is it good for us? (ZDNet YouTube) - Free PDF: Robotics in the enterprise (TechRepublic) The scientists deliberately created kinks in a layer of graphene and found that the material could, as a result, be made to behave like an electronic component. Graphene, and its nano-scale dimensions, could therefore be leveraged to design the smallest microchips yet, which will be useful to build faster phones and laptops. SEE: Hiring Kit: Computer Hardware Engineer (TechRepublic Premium) Alan Dalton, professor at the school of mathematical and physics sciences at the University of Sussex, said: “We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami. “This kind of technology – ‘straintronics’ using nanomaterials as opposed to electronics – allows space for more chips inside any device. Everything we want to do with computers – to speed them up – can be done by crinkling graphene like this.” Discovered in 2004, graphene is an atom-thick sheet of carbon atoms, which, due to its nano-sized width, is effectively a 2D material. Graphene is best known for its exceptional strength, but also for the material’s conductivity properties, which has already generated much interest in the electronics industry including from Samsung Electronics. With our award-winning products, Network Performance Monitor (NPM) and NetFlow Traffic Analyzer (NTA), you will gain insights into traffic patterns and detect, diagnose, and resolve network performance issues. Downloads provided by SolarWinds The field of straintronics has already shown that deforming the structure of 2D nanomaterials like graphene, but also molybdenum disulfide, can unlock key electronic properties, but the exact impact of different “folds” remains poorly understood, argued the researchers. Yet the behavior of those materials offers huge potential for high-performance devices: for example, changing the structure of a strip of 2D material can change its doping properties, which correspond to electron density, and effectively convert the material into a superconductor. The researchers carried an in-depth study of the impact of structural changes on properties, such as doping in strips of graphene and of molybdenum disulfide. From kinks and wrinkles to pit-holes, they observed how the materials could be twisted and turned to eventually be used to design smaller electronic components. Manoj Tripathi, research fellow in nano-structured materials at the University of Sussex, who led the research, said: “We’ve shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate.” The findings are likely to resonate in an industry pressed to conform to Moore’s law, which holds that the number of transistors on a microchip doubles every two years, in response for growing demand for faster computing services. The problem is, engineers are struggling to find ways to fit much more processing power into tiny chips, creating a big problem for the traditional semiconducting industry. A tiny graphene-based transistor could significantly help overcome these hurdles. “Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future,” said Dalton. Since it was discovered over 15 years ago, graphene has struggled to find as many applications as was initially hoped for, and the material has often been presented as a victim of its own hype. But then, it took over a century for the first silicon chip to be created after the material was discovered in 1824. Dalton and Tripathi’s research, in that light, seems to be another step towards finding a potentially game-changing use for graphene. - Stratasys launches carbon fiber material, aims for wider adoption By bringing carbon fiber to the F123 Series printers, via FDM ABS-CF10 material, Stratasys will bring the material to more end users. - Humble hero: How RFID is helping end the pandemic A common technology takes on an uncommon mission: Distributing vaccines around the globe. - Apple will fix your Apple Watch for free if you run into this charging problem Free repair offer applies to Apple Watch Series 5 or Apple Watch SE running watch OS 7.2 and 7.3 that won’t charge after entering Power Reserve mode. - Inside the Middle East’s growing love for eSports With high internet penetration and lots of gamers, an eSports boom could be on the way across the region. - Australia’s space sector wants policies introduced to ensure satellite sovereignty.But the community wants it done in a way that will not hurt the sector’s growth. - Aussie blockchain community calls for more government support around the nascent tech.Blockchain Australia CEO said there should be active consultation while representatives from RMIT said there’s an opportunity for ‘regulatory evolution’ around blockchain and.. - AustCyber merges with Stone & Chalk to boost local capability in emerging tech Touted as combining the ‘greatest concentration of cybersecurity industry expertise’ with the ‘most developed technology commercialisation infrastructure that Australia … - Bitcoin SV node software update lifts limits and uplifts COVID-19 vaccination throughput. The Dynastic update to Bitcoin SV Node software means a lifting of limitations that were previously imposed on apps so that enterprises can increase throughput and effectively scale. … - Best security camera for businesses and home use in 2021.Storage, flexibility, quality recording, and easy installation are some of the important factors to consider when deciding on a work-safe security system. Our top picks for commercial … Graphene Processors and Quantum Gates Since the 1960s, Moore’s law has accurately predicted the evolution trend of processors as to the amount of transistor doubling every 2 years. But lately we’ve seen something odd happening, processor clocks aren’t getting any faster. This has to do with another law called Dennard Scaling and it seems that the good old days with silicon chips are over. Hello everyone, subject zero here! Thankfully the solution might have been available for quite some time now and Graphene offers something quite unique to this problem, not only for your everyday processor types, but also Quantum computing. In 2009 it was speculated that by now we would have the famous 400GHz processors, but this technology has proven itself to be a bit more complicated than previously thought however most scientists including me, believe that in the next 5 years we will see the first Graphene commercial hardware come to reality. References https://en.wikipedia.org/wiki/Quantum… https://www.nature.com/articles/s4153… https://www.hpcwire.com/2019/05/08/gr… https://en.wikipedia.org/wiki/Graphen… https://www.computerhope.com/history/… http://www.tfcbooks.com/teslafaq/q&a_… https://www.rambus.com/blogs/understa… https://www.technologyreview.com/s/51… https://arxiv.org/ftp/arxiv/papers/13… https://www.sciencedaily.com/releases… https://www.nature.com/articles/srep2… http://infowebbie.com/scienceupdate/s… https://graphene-flagship.eu/field-ef… https://github.com/karlrupp/microproc… https://aip.scitation.org/doi/full/10… https://www.theglobeandmail.com/repor…
<urn:uuid:37554561-9596-490a-ab90-cecaab18c26f>
CC-MAIN-2021-10
https://onlinemarketingscoops.com/tiny-graphene-microchips-could-make-your-phones-laptops-thousands-of-times-faster-say-scientists/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00432.warc.gz
en
0.924224
1,832
3.625
4
Author: Sarah Kearns Editors: David Mertz, Zuleirys Santana Rodriguez, and Scott Barolo In a previous post, we discussed how proteins fold into unique shapes that allow them to perform their biological functions. Through many physical and chemical properties, like hydrogen bonding and hydrophobicity, proteins are able to fold correctly. However, proteins can fold improperly, and sometimes these malformed peptides aggregate, leading to diseases like Alzheimer’s. How can we figure out when the folding process goes wrong? Can we use computers to figure out the folding/misfolding process and develop methods to prevent or undo the damage done by protein aggregates? In the late 1960s, a scientist named Cyrus Levinthal noted that protein folding is different from regular chemical reactions. Chemical reactions proceed from a reactant to a product via a set pathway of structures and intermediates. Proteins do not do this because a protein doesn’t find just one intermediate shape as it folds — it can potentially find millions. Levinthal concluded that a new protein, moving through so many intermediate structures, must take an enormously long time to find its final native state. To understand the vast number of conformational possibilities, let’s take a polypeptide of 101 amino acids. There will be a total of 100 bonds connecting amino acids, each bond having six possible conformations (see Figure 1). This means that a protein of 101 amino acids has 3100, or 5*1047, configurations—and some proteins are five or ten times longer! Even if our 101-amino acid protein were able to sample 1013 conformations per second, it would still need 1027 years to try all possible shapes. However, in reality, it takes seconds, not eons, for a protein to find its native conformation. This leads to a big question: Can humans predict how proteins will fold? Even with the help of computers, which can test each possible shape in microseconds, testing them all would require 30 years of computation just for one protein. Simplifying Structure Prediction Protein structures, such as hydrogen and ionic bonding and hydrophobic interactions, are difficult to predict rationally just based on the amino acid sequence. Instead, a database of protein structures found by x-ray crystallography, called the Protein Data Bank, has been more helpful in determining the rules of protein folding. Still, determining protein structures accurately is difficult and time-consuming. Some computational shortcuts have made the process simpler, but the predicted folds still are not exact. The biggest simplifications are made by assuming a lattice structure or using a coarse-grained representation. The former takes a globular protein that typically has variable bond lengths between each amino acid into a lattice (has uniform bond lengths) and places each residue into a 3D grid structure thus limiting the number of possibilities the possible placements of each amino acid. A coarse-grained model would simplify a protein structure by representing amino acids as a single point (see Figure 2). So far, computational prediction of protein structures is limited to these simpler models because more realistic all-atom energy diagrams are too complex and computationally heavy. In our protein of 101 amino acids, there are close to 2000 atoms to move around in 3100 configurations. With the advent of quantum computing, such problems are becoming easier to solve, but for now, they still use coarse-grained representations. How Your PC Can Help Mine Data Some researchers have turned such computational problems into citizen science projects. Perhaps the most famous of these is FoldIt, developed by the Center for Game Science and the Department of Biochemistry at the University of Washington. Foldit is an online game where players compete to create accurate protein structures by moving around the backbone chain, amino acid residues, and domains. Players score points by packing the protein, hiding hydrophobic residues, and clearing any clashes between side chains to minimize the energy of the overall structure. The lowest-energy conformations from the game are then collected and analyzed to improve real-life folding algorithms. A less hands-on folding program is Folding@home from Stanford University, which borrows unused processors on your personal computer to work on a folding algorithm. While users check their emails or listen to music, or even when the screensaver runs, their computers solve structures and compute minimization functions. All this data has gone towards the goal of figuring out both how malformed proteins aggregate and how to design drugs that will prevent misfolding. FoldIt has already produced a retrovirus structure that is being used to determine inhibitors of HIV. One of the labs behind FoldIt has been focusing on proteins involved in cancer, AIDS, and other diseases. The Folding@home project has produced about 130 peer-reviewed papers describing its accomplishments in simulating, not only protein folding but also molecular dynamics, which help determine the ability for drugs to bind. Having an idea of what the protein does and where it does it, without having to use expensive machines to do crystallography (to get the structure of a protein) or high-throughput screening (to find the substrates of a protein), saves both time and resources when developing a drug. More work has to be done before computational predictions perfectly line up with crystal structures. But when that day comes, we will be much closer to understanding how proteins work, and how to cure diseases of protein folding and function. About the author Read all posts by Sarah here. Figure 1: Sarah Kearns Figure 2: Sarah Kearns
<urn:uuid:aed15a1a-9d98-49e4-8306-17762288b69c>
CC-MAIN-2021-10
https://misciwriters.com/2017/03/14/computing-levinthals-paradox-protein-folding-part-2/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00513.warc.gz
en
0.942798
1,137
3.875
4
Quantum computers are expected to play a crucial role in Machine Learning (ML) and AI. But qubit counts and more-sophisticated algorithms alone will not deliver quantum advantage. Toolkits and a unified experience are needed to bring it to the masses. Think of Quantum computing as a new kind of computing, using the same physical rules that atoms follow in order to manipulate information. At this fundamental level, quantum computers execute quantum circuits—like a computer's logical circuits but now using the physical phenomena of superposition, entanglement, and interference to implement mathematical calculations out of the reach of even the most advanced supercomputers. All computing systems rely on a fundamental ability to store and manipulate information. Current computers manipulate individual bits, which store information as binary 0 and 1 states. Quantum computers leverage quantum mechanical phenomena to manipulate information. To do this, they rely on quantum bits, or qubits. Superposition refers to a combination of states we would ordinarily describe independently. To make a classical analogy, if you play two musical notes at once, what you will hear is a superposition of the two notes. Entanglement is a famously counterintuitive quantum phenomenon describing behavior we never see in the classical world. Entangled particles behave together as a system in ways that cannot be explained using classical logic. Finally, quantum states can undergo interference due to a phenomenon known as phase. Quantum interference can be understood similarly to wave interference; when two waves are in phase, their amplitudes add, and when they are out of phase, their amplitudes cancel. Quantum and AI There are high hopes that quantum computing’s tremendous processing power will unleash exponential advances in artificial intelligence (AI). AI systems thrive when machine-learning algorithms used to train them are given massive amounts of data to ingest, classify, and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better AI will perform. Quantum computers are expected to play a crucial role in machine learning, including the crucial aspect of accessing more computationally complex feature spaces—the fine-grain aspects of data that could lead to new insights. Machine learning is changing the way we use computers in our everyday lives and in science. It is natural to seek connections between these two emerging approaches, in the hope of reaping multiple benefits. IBM is testing quantum systems to train and run machine-learning algorithms to dramatically improve tasks such as classification of data. This could allow us to solve complex problems more quickly, potentially improving applications like disease diagnosis, fraud detection, efficient energy management, and more. Scaling Quantum Systems For commercial application, quantum computers will need to demonstrate fault tolerance, just as we expect our existing computing systems to. What does it take to create a fault-tolerant quantum system? To increase the computational power of a quantum computer, improvements are needed along two dimensions. - Qubit count: The more qubits you have, the more states can in principle be manipulated and stored. - Low error rates: These are needed to manipulate qubit states accurately and perform sequential operations that provide answers, not “noise.” A useful metric for understanding quantum capability is quantum volume. This measures the relationship between number and quality of qubits, circuit connectivity, and error rates of operations. Developing systems with larger quantum volume will lead to discovering the first instances of applications where quantum computers can offer a computational advantage for solving real problems. Organizations across a wide array of industries are partnering with IBM to explore a broad set of quantum computing applications. Carmakers, airlines, energy companies, healthcare providers, financial services firms, and world-class research organizations are considering new solutions and services that until recently were unthought of. Organizations are now able to start their quantum journey with access to advanced systems and a comprehensive software stack, supported by a large quantum development community. These projects help showcase quantum computing’s power to solve real-world problems too complex for even today’s most-powerful supercomputers. The annual IBM Quantum Summit looked at the promise of quantum computing for industry looking at several business challenges that quantum computers are well-suited to tackle. Foremost among them are the ability to help researchers create simulations of complex chemical compounds and reactions out of reach for today’s computers. Such simulations are expected to have a profound impact on the development of new materials that improve battery technology, resist corrosion, and make renewable energy more efficient and less expensive. IBM announced a roadmap at the annual IBM Quantum Summit to reach 1,000+ qubits by 2023. Qubit counts and more-sophisticated algorithms alone will not deliver Quantum Advantage (the point where certain information-processing tasks can be performed more efficiently or cost-effectively on a quantum computer versus a classical one). The roadmap aims to take the technology from today’s noisy, small-scale devices to the million-plus qubit devices of the future. This is essential if quantum computers are to help industry and research organizations tackle some of the world’s biggest challenges, across industry, government, and research. Here are five things you should know about the roadmap: - IBM quantum scientists are building a quantum computer with a 1,121-qubit processor, called Condor, inside a large dilution "super-fridge." The Condor processor-based quantum computer will be online and capable of exploring Quantum Advantage by 2023. - Condor lays the groundwork for scaling to fully error-corrected, interconnected, 1-million-plus-qubit quantum computers. These multi-million-qubit super-fridges, connected via intranets, will make the exploration of classically intractable problems possible for any number of industries, including finance, chemistry, and AI. - In 2021, IBM will debut the 127-qubit "Eagle" chip. Eagle features several upgrades to reduce qubit errors, including its unique layout, which will allow for scaling the number of qubits that work together as logical qubits—the "fault tolerant" qubits needed to reach Quantum Advantage. With the Eagle processor, IBM will also introduce concurrent real-time classical compute capabilities that will allow for execution of a broader family of quantum circuits and codes. - Eagle will be followed by the 433-qubit "Osprey" processor in 2022. Osprey continues to push the boundaries of fabrication techniques to build a smaller chip to ensure more logical qubits that don't sacrifice performance. Its more-efficient and denser controls and cryogenic infrastructure will ensure that scaling up future processors doesn’t sacrifice the performance of individual qubits, introduce further sources of noise, or take up too large a footprint. - These advances are necessary to establish a Quantum Industry. Over the next three years, IBM's multidisciplinary team of scientists will work alongside academia and industry to help solve the challenges of fabrication, cryogenics, and electronics, as well as improve software capabilities, such as error-correction coding. MIT has introduced a Quantum Computing Online Curriculum for professionals and leaders in business, government, and technology to deliver a better understanding of the business and technical implications of quantum computing. The online courses will apply the principles of quantum computing to real-world examples utilizing a state-of-the-art web-available quantum computer: IBM’s Quantum Experience. MIT’s quantum learning initiative is created in collaboration with IBM Q, and the MIT-IBM Watson AI Lab. The MIT-IBM Watson AI Lab is focused on fundamental AI research with the goal of propelling scientific breakthroughs that unlock the potential of AI. A key initiative of the lab is the intersection of quantum computing and machine learning. Currently, quantum computing researchers and enthusiasts need to know quantum programming; it’s simply a must. Soon, though, all they will need is a quantum app store and a line of code. Not an app store like in your smartphone, but similar to a code repository of today, such as GitHub—a type of digital library where software developers make the code they have written available to anyone. And in the near future, developers will be able to put in their lines of code that will call on quantum computers to deal with specific tasks a regular computer can’t. The quantum research field has undergone dramatic changes in the last few decades, but only recently have quantum scientists released easy-to-use tools to make this discipline accessible to everyone. IBM offers all the quantum programming tools you need with Qiskit and makes it easy to get started running quantum circuits on our systems with the IBM Q Experience quantum cloud platform. Users have already run over 28 million experiments and simulations. Your Next Steps Consider applying for the IBM Quantum Challenge: “Programming for the Not-So-Distant Quantum Future,” a three-week quantum computing educational challenge starting on November 8 or November 9, 2020 (depending on your time zone). More information is available here. In addition, IBM Quantum will sponsor 5,000 students to attend an eight-month intensive quantum computing course from The Coding School. Finally, if AI is an area you are interested in, you can learn more about it in the book I co-authored, Artificial Intelligence: Evolution and Revolution. Special thanks to the IBM quantum team and their blogs, a key source for this article.
<urn:uuid:db7ff258-3aad-45d4-ab68-262cd3b50c29>
CC-MAIN-2021-10
https://www.mcpressonline.com/analytics-cognitive/quantum-is-the-next-big-thing-for-artificial-intelligence
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00118.warc.gz
en
0.911705
1,927
3.734375
4
Leonard Susskind, a pioneer of string theory, the holographic principle and other big physics ideas spanning the past half-century, has proposed a solution to an important puzzle about black holes. The problem is that even though these mysterious, invisible spheres appear to stay a constant size as viewed from the outside, their interiors keep growing in volume essentially forever. How is this possible? In a series of recent papers and talks, the 78-year-old Stanford University professor and his collaborators conjecture that black holes grow in volume because they are steadily increasing in complexity — an idea that, while unproven, is fueling new thinking about the quantum nature of gravity inside black holes. Black holes are spherical regions of such extreme gravity that not even light can escape. First discovered a century ago as shocking solutions to the equations of Albert Einstein’s general theory of relativity, they’ve since been detected throughout the universe. (They typically form from the inward gravitational collapse of dead stars.) Einstein’s theory equates the force of gravity with curves in space-time, the four-dimensional fabric of the universe, but gravity becomes so strong in black holes that the space-time fabric bends toward its breaking point — the infinitely dense “singularity” at the black hole’s center. According to general relativity, the inward gravitational collapse never stops. Even though, from the outside, the black hole appears to stay a constant size, expanding slightly only when new things fall into it, its interior volume grows bigger and bigger all the time as space stretches toward the center point. For a simplified picture of this eternal growth, imagine a black hole as a funnel extending downward from a two-dimensional sheet representing the fabric of space-time. The funnel gets deeper and deeper, so that infalling things never quite reach the mysterious singularity at the bottom. In reality, a black hole is a funnel that stretches inward from all three spatial directions. A spherical boundary surrounds it called the “event horizon,” marking the point of no return. Since at least the 1970s, physicists have recognized that black holes must really be quantum systems of some kind — just like everything else in the universe. What Einstein’s theory describes as warped space-time in the interior is presumably really a collective state of vast numbers of gravity particles called “gravitons,” described by the true quantum theory of gravity. In that case, all the known properties of a black hole should trace to properties of this quantum system. Indeed, in 1972, the Israeli physicist Jacob Bekenstein figured out that the area of the spherical event horizon of a black hole corresponds to its “entropy.” This is the number of different possible microscopic arrangements of all the particles inside the black hole, or, as modern theorists would describe it, the black hole’s storage capacity for information. Bekenstein’s insight led Stephen Hawking to realize two years later that black holes have temperatures, and that they therefore radiate heat. This radiation causes black holes to slowly evaporate away, giving rise to the much-discussed “black hole information paradox,” which asks what happens to information that falls into black holes. Quantum mechanics says the universe preserves all information about the past. But how does information about infalling stuff, which seems to slide forever toward the central singularity, also evaporate out? The relationship between a black hole’s surface area and its information content has kept quantum gravity researchers busy for decades. But one might also ask: What does the growing volume of its interior correspond to, in quantum terms? “For whatever reason, nobody, including myself for a number of years, really thought very much about what that means,” said Susskind. “What is the thing which is growing? That should have been one of the leading puzzles of black hole physics.” In recent years, with the rise of quantum computing, physicists have been gaining new insights about physical systems like black holes by studying their information-processing abilities — as if they were quantum computers. This angle led Susskind and his collaborators to identify a candidate for the evolving quantum property of black holes that underlies their growing volume. What’s changing, the theorists say, is the “complexity” of the black hole — roughly a measure of the number of computations that would be needed to recover the black hole’s initial quantum state, at the moment it formed. After its formation, as particles inside the black hole interact with one another, the information about their initial state becomes ever more scrambled. Consequently, their complexity continuously grows. Using toy models that represent black holes as holograms, Susskind and his collaborators have shown that the complexity and volume of black holes both grow at the same rate, supporting the idea that the one might underlie the other. And, whereas Bekenstein calculated that black holes store the maximum possible amount of information given their surface area, Susskind’s findings suggest that they also grow in complexity at the fastest possible rate allowed by physical laws. John Preskill, a theoretical physicist at the California Institute of Technology who also studies black holes using quantum information theory, finds Susskind’s idea very interesting. “That’s really cool that this notion of computational complexity, which is very much something that a computer scientist might think of and is not part of the usual physicist’s bag of tricks,” Preskill said, “could correspond to something which is very natural for someone who knows general relativity to think about,” namely the growth of black hole interiors. Researchers are still puzzling over the implications of Susskind’s thesis. Aron Wall, a theorist at Stanford (soon moving to the University of Cambridge), said, “The proposal, while exciting, is still rather speculative and may not be correct.” One challenge is defining complexity in the context of black holes, Wall said, in order to clarify how the complexity of quantum interactions might give rise to spatial volume. A potential lesson, according to Douglas Stanford, a black hole specialist at the Institute for Advanced Study in Princeton, New Jersey, “is that black holes have a type of internal clock that keeps time for a very long time. For an ordinary quantum system,” he said, “this is the complexity of the state. For a black hole, it is the size of the region behind the horizon.” If complexity does underlie spatial volume in black holes, Susskind envisions consequences for our understanding of cosmology in general. “It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”
<urn:uuid:9aa5856d-bc68-46e6-8492-8f015bd1f550>
CC-MAIN-2021-10
https://www.quantamagazine.org/why-black-hole-interiors-grow-forever-20181206/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358064.34/warc/CC-MAIN-20210227024823-20210227054823-00039.warc.gz
en
0.938987
1,460
3.578125
4
The RSA algorithm is the basis of a cryptosystem -- a suite of cryptographic algorithms that are used for specific security services or purposes -- which enables public key encryption and is widely used to secure sensitive data, particularly when it is being sent over an insecure network such as the internet. RSA was first publicly described in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman of the Massachusetts Institute of Technology, though the 1973 creation of a public key algorithm by British mathematician Clifford Cocks was kept classified by the U.K.'s GCHQ until 1997. Public key cryptography, also known as asymmetric cryptography, uses two different but mathematically linked keys -- one public and one private. The public key can be shared with everyone, whereas the private key must be kept secret. In RSA cryptography, both the public and the private keys can encrypt a message; the opposite key from the one used to encrypt a message is used to decrypt it. This attribute is one reason why RSA has become the most widely used asymmetric algorithm: It provides a method to assure the confidentiality, integrity, authenticity, and non-repudiation of electronic communications and data storage. Many protocols like secure shell, OpenPGP, S/MIME, and SSL/TLS rely on RSA for encryption and digital signature functions. It is also used in software programs -- browsers are an obvious example, as they need to establish a secure connection over an insecure network, like the internet, or validate a digital signature. RSA signature verification is one of the most commonly performed operations in network-connected systems. Why the RSA algorithm is used RSA derives its security from the difficulty of factoring large integers that are the product of two large prime numbers. Multiplying these two numbers is easy, but determining the original prime numbers from the total -- or factoring -- is considered infeasible due to the time it would take using even today's supercomputers. The public and private key generation algorithm is the most complex part of RSA cryptography. Two large prime numbers, p and q, are generated using the Rabin-Miller primality test algorithm. A modulus, n, is calculated by multiplying p and q. This number is used by both the public and private keys and provides the link between them. Its length, usually expressed in bits, is called the key length. The public key consists of the modulus n and a public exponent, e, which is normally set at 65537, as it's a prime number that is not too large. The e figure doesn't have to be a secretly selected prime number, as the public key is shared with everyone. The private key consists of the modulus n and the private exponent d, which is calculated using the Extended Euclidean algorithm to find the multiplicative inverse with respect to the totient of n. Read on or watch the video below for a more detailed explanation of how the RSA algorithm works. How does the RSA algorithm work? Alice generates her RSA keys by selecting two primes: p=11 and q=13. The modulus is n=p×q=143. The totient is n ϕ(n)=(p−1)x(q−1)=120. She chooses 7 for her RSA public key e and calculates her RSA private key using the Extended Euclidean algorithm, which gives her 103. Bob wants to send Alice an encrypted message, M, so he obtains her RSA public key (n, e) which, in this example, is (143, 7). His plaintext message is just the number 9 and is encrypted into ciphertext, C, as follows: Me mod n = 97 mod 143 = 48 = C When Alice receives Bob's message, she decrypts it by using her RSA private key (d, n) as follows: Cd mod n = 48103 mod 143 = 9 = M To use RSA keys to digitally sign a message, Alice would need to create a hash -- a message digest of her message to Bob -- encrypt the hash value with her RSA private key, and add the key to the message. Bob can then verify that the message has been sent by Alice and has not been altered by decrypting the hash value with her public key. If this value matches the hash of the original message, then only Alice could have sent it -- authentication and non-repudiation -- and the message is exactly as she wrote it -- integrity. Alice could, of course, encrypt her message with Bob's RSA public key -- confidentiality -- before sending it to Bob. A digital certificate contains information that identifies the certificate's owner and also contains the owner's public key. Certificates are signed by the certificate authority that issues them, and they can simplify the process of obtaining public keys and verifying the owner. RSA security relies on the computational difficulty of factoring large integers. As computing power increases and more efficient factoring algorithms are discovered, the ability to factor larger and larger numbers also increases. Encryption strength is directly tied to key size, and doubling key length can deliver an exponential increase in strength, although it does impair performance. RSA keys are typically 1024- or 2048-bits long, but experts believe that 1024-bit keys are no longer fully secure against all attacks. This is why the government and some industries are moving to a minimum key length of 2048-bits. Barring an unforeseen breakthrough in quantum computing, it will be many years before longer keys are required, but elliptic curve cryptography (ECC) is gaining favor with many security experts as an alternative to RSA to implement public key cryptography. It can create faster, smaller and more efficient cryptographic keys. Modern hardware and software are ECC-ready, and its popularity is likely to grow, as it can deliver equivalent security with lower computing power and battery resource usage, making it more suitable for mobile apps than RSA. Finally, a team of researchers, which included Adi Shamir, a co-inventor of RSA, has successfully created a 4096-bit RSA key using acoustic cryptanalysis; however, any encryption algorithm is vulnerable to attack.
<urn:uuid:1dbf9b6a-384a-4cbb-9584-59f4986af53c>
CC-MAIN-2021-10
https://searchsecurity.techtarget.com/definition/RSA?ref=hackernoon.com
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00519.warc.gz
en
0.941916
1,257
4.375
4
You probably learned about nuclear fusion in high school environmental science: It was built up as a clean, high-yield, virtually limitless source of power. Then, the bell rang, you went to lunch, and it was never spoken of again. For decades, commercial fusion energy was a great idea handicapped by the limits of plasma physics. But recent advances in material science and fusion reactors could change that One very, very hot bottle Nuclear power plants, like the one Homer Simpson works at, use fission, the splitting of uranium atoms to generate energy. Fusion does the opposite, fusing hydrogen nuclei together to release helium and energy in the process. Well-known practitioners include our very own sun, other stars in the galaxy, and Matthew McConaughey in Interstellar. The science underpinning fusion is well understood, but making it happen here on Earth is quite tricky. Scientists are looking to “basically take a star and put it in a bottle,” according to fusion expert Brandon Sorbom. - Engineering hurdles include heating the plasma up to 100 million °C (which we can do), sustaining these temperatures for extended periods of time (still working on this), and building a device capable of withstanding the pummeling of a million Arizona summers all at once (also a work in progress). - Another key challenge is creating a system that generates more power that it consumes, says Dennis Whyte, MIT professor and director of the Plasma Science and Fusion Center. The good news is, once the reaction is going, as long as fuel is continuously supplied you can laissez les bon temp[erature]s rouler. Today there are two main approaches to fusion, according to the World Nuclear Association: magnetic confinement (which, you guessed it, uses magnetic fields to contain plasma) and inertial confinement (which uses lasers or particle beams). Most of the academic fusion community has focused on magnetic confinement through tokamaks (donut-shaped containment chambers), Sorbom said. - Vocab lesson: Tokamaks derive their name from “toroidalnya kamera ee magnetnaya katushka”—Russian for the no-less-confusing “torus-shaped magnetic chamber.” For obvious reasons, we’ll be sticking to the abbreviation. These devices take a lot of manpower and resources to build, so good luck getting a large-scale project off the ground without the help of government funding or billionaire philanthropists. In the south of France, scientists from around the world are forgoing romantic walks along the Riviera to build ITER, an international fusion project that will create not only the world’s largest tokamak, but (fingers crossed) the first fusion device to generate net energy. With 35 countries collaborating—including the U.S., Russia, China, India, and EU members—it might be one of the only areas of peaceful international collaboration left. - By the late 2030s, the ITER tokamak is expected to produce up to 500 megawatts of fusion power in pulses that last 400 seconds, Danas Ridikas, head of the physics at the International Atomic Energy Agency, told the Brew. - ITER may a science-driven venture, but any effort to move the R&D needle forward benefits commercial ventures as well. Other technologies, such as supercomputing, big data analysis, and 3D printing could help accelerate progress in the field, said Ridikas. Quantum computing, which we profiled earlier this week, is expected to drive breakthroughs in fields like high energy physics. Sun’s in the bottle, so what’s next? After scientists prove they can maintain plasma for extended periods of time and generate a working device, they can then work on a fusion demonstration power plant that is able to connect to the power grid, said Ridikas. And once they have that, they can work on future commercial fusion power plants. Easy peasy, right? Sorbom, who’s chief science officer at MIT-spinoff Commonwealth Fusion Systems, was recently recognized for a breakthrough in tokamak electromagnetic systems that could make tokamaks or fusion devices smaller (and cheaper) to build. With tokamaks the size of a house instead of a football field…that could open up the field for more players and speed the path to commercial fusion energy on the grid. - By 2025, CFS and MIT are trying to build a power plant prototype (called Sparc) using the new electromagnetic system. Sparc = Kitty Hawk for fusion energy, proving it can be done but only flying a few hundred feet. - Five to 10 years after Sparc is working, Sorbom and the CFS team hope to complete Arc (a demonstration power plant that can put electricity on the grid). Arc = the transatlantic flight. Transitioning the world’s current energy production to renewables is a massive undertaking, and fusion power will help not only clean up energy production, but scale it tenfold worldwide, according to Sorbom. It will be “almost like solar energy, but you control the light switch on the sun, and you also have the dimmer switch.” ` - Bonus: Fuel (hydrogen isotopes) is theoretically limitless. - Double bonus: Nuclear energy has a bad rep, and as much as we want another season of Chernobyl to binge watch, no one wants that happening in their backyard. But with fusion, “there is no risk for a meltdown accident,” Ridikas said. “If any disturbance occurs, the plasma cools within seconds and the fusion reaction stops.” Maybe an HBO special about a power outage instead? Why now? Until recently, fusion energy was dominated by plasma physics, which kept it a pretty niche field, Sorbom said. Now on the slow and steady path to #mainstream, the community needs people of all backgrounds to get involved. - Yes, this means engineers and scientists who can help build a fusion reactor. But it also means business people who can scale, commercialize, and get fusion energy out into the real world. - One of the most exciting things about fusion R&D today is its focus on the ecosystem—figuring out what a fusion-driven economy would look like as well as the economic targets and applications, said MIT’s Whyte. A healthy dose of reality: There’s a running joke that fusion is the energy of the future…and always will be. Even if projects like CFS’s hit their benchmarks on time, fusion energy is years or decades away from realization, let alone from grid integration and global reach. But Sorbom and Whyte were both optimistic that they would see functioning fusion energy in their lifetime. Because no matter how you frame it, the promise of a clean, carbon-free baseload power that could yield four times as much as fission reactors is a good deal and worth trying for. Did you get all that? Here are some refreshers just in case The promise: Clean, carbon-free, energy with a theoretically limitless source of fuel, capable of higher yields that existing fission energy. The roadblocks: Scientists can heat plasma up to 100 million °C, but they’re still working on sustaining these temperatures for extended periods of time and building reactors that can withstand the heat. The timeline: Experts believe we’ll see working fusion energy in our lifetime. They’re still trying to build better reactors today (and after that, they have to tackle demonstration then commercial power plants), but recent breakthroughs have many optimistic. The players: Governments (U.S., EU, Russia, Japan, China, Brazil, Canada, Korea), companies (Lockheed Martin, Commonwealth Fusion Systems, General Fusion, Tokamak Energy, AGNI Energy), Academia (MIT, Princeton), and billionaires (Jeff Bezos, Bill Gates, Peter Thiel).
<urn:uuid:bff771d1-499e-4b47-b304-d9ee5f4b1114>
CC-MAIN-2021-10
https://www.morningbrew.com/emerging-tech/stories/2019/08/15/fusion-giant-step-energy
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00080.warc.gz
en
0.923245
1,665
3.578125
4
The best place to start our journey through quantum computing is to recall how classical computing works and try to extend it. Since our final quantum computing model will be a circuit model, we should informally discuss circuits first. A circuit has three parts: the “inputs,” which are bits (either zero or one); the “gates,” which represent the lowest-level computations we perform on bits; and the “wires,” which connect the outputs of gates to the inputs of other gates. Typically the gates have one or two input bits and one output bit, and they correspond to some logical operation like AND, NOT, or XOR. If we want to come up with a different model of computing, we could start regular circuits and generalize some or all of these pieces. Indeed, in our motivational post we saw a glimpse of a probabilistic model of computation, where instead of the inputs being bits they were probabilities in a probability distribution, and instead of the gates being simple boolean functions they were linear maps that preserved probability distributions (we called such a matrix “stochastic”). Rather than go through that whole train of thought again let’s just jump into the definitions for the quantum setting. In case you missed last time, our goal is to avoid as much physics as possible and frame everything purely in terms of linear algebra. Qubits are Unit Vectors The generalization of a bit is simple: it’s a unit vector in . That is, our most atomic unit of data is a vector with the constraints that are complex numbers and . We call such a vector a qubit. A qubit can assume “binary” values much like a regular bit, because you could pick two distinguished unit vectors, like and , and call one “zero” and the other “one.” Obviously there are many more possible unit vectors, such as and . But before we go romping about with what qubits can do, we need to understand how we can extract information from a qubit. The definitions we make here will motivate a lot of the rest of what we do, and is in my opinion one of the major hurdles to becoming comfortable with quantum computing. A bittersweet fact of life is that bits are comforting. They can be zero or one, you can create them and change them and read them whenever you want without an existential crisis. The same is not true of qubits. This is a large part of what makes quantum computing so weird: you can’t just read the information in a qubit! Before we say why, notice that the coefficients in a qubit are complex numbers, so being able to read them exactly would potentially encode an infinite amount of information (in the infinite binary expansion)! Not only would this be an undesirably powerful property of a circuit, but physicists’ experiments tell us it’s not possible either. So as we’ll see when we get to some algorithms, the main difficulty in getting useful quantum algorithms is not necessarily figuring out how to compute what you want to compute, it’s figuring out how to tease useful information out of the qubits that otherwise directly contain what you want. And the reason it’s so hard is that when you read a qubit, most of the information in the qubit is destroyed. And what you get to see is only a small piece of the information available. Here is the simplest example of that phenomenon, which is called the measurement in the computational basis. Definition: Let be a qubit. Call the standard basis vectors the computational basis of . The process of measuring in the computational basis consists of two parts. - You observe (get as output) a random choice of or . The probability of getting is , and the probability of getting is . - As a side effect, the qubit instantaneously becomes whatever state was observed in 1. This is often called a collapse of the waveform by physicists. There are more sophisticated ways to measure, and more sophisticated ways to express the process of measurement, but we’ll cover those when we need them. For now this is it. Why is this so painful? Because if you wanted to try to estimate the probabilities or , not only would you get an estimate at best, but you’d have to repeat whatever computation prepared for measurement over and over again until you get an estimate you’re satisfied with. In fact, we’ll see situations like this, where we actually have a perfect representation of the data we need to solve our problem, but we just can’t get at it because the measurement process destroys it once we measure. Before we can talk about those algorithms we need to see how we’re allowed to manipulate qubits. As we said before, we use unitary matrices to preserve unit vectors, so let’s recall those and make everything more precise. Qubit Mappings are Unitary Matrices Suppose is a qubit. If we are to have any mapping between vector spaces, it had better be a linear map, and the linear maps that send unit vectors to unit vectors are called unitary matrices. An equivalent definition that seems a bit stronger is: Definition: A linear map is called unitary if it preserves the inner product on . Let’s remember the inner product on is defined by and has some useful properties. - The square norm of a vector is . - Swapping the coordinates of the complex inner product conjugates the result: - The complex inner product is a linear map if you fix the second coordinate, and a conjugate-linear map if you fix the first. That is, and By the first bullet, it makes sense to require unitary matrices to preserve the inner product instead of just the norm, though the two are equivalent (see the derivation on page 2 of these notes). We can obviously generalize unitary matrices to any complex vector space, and unitary matrices have some nice properties. In particular, if is a unitary matrix then the important property is that the columns (and rows) of form an orthonormal basis. As an immediate result, if we take the product , which is just the matrix of all possible inner products of columns of , we get the identity matrix. This means that unitary matrices are invertible and their inverse is . Already we have one interesting philosophical tidbit. Any unitary transformation of a qubit is reversible because all unitary matrices are invertible. Apparently the only non-reversible thing we’ve seen so far is measurement. Recall that is the conjugate transpose of the matrix, which I’ll often write as . Note that there is a way to define without appealing to matrices: it is a notion called the adjoint, which is that linear map such that for all . Also recall that “unitary matrix” for complex vector spaces means precisely the same thing as “orthogonal matrix” does for real numbers. The only difference is the inner product being used (indeed, if the complex matrix happens to have real entries, then orthogonal matrix and unitary matrix mean the same thing). Definition: A single qubit gate is a unitary matrix . So enough with the properties and definitions, let’s see some examples. For all of these examples we’ll fix the basis to the computational basis . One very important, but still very simple example of a single qubit gate is the Hadamard gate. This is the unitary map given by the matrix It’s so important because if you apply it to a basis vector, say, , you get a uniform linear combination . One simple use of this is to allow for unbiased coin flips, and as readers of this blog know unbiased coins can efficiently simulate biased coins. But it has many other uses we’ll touch on as they come. Just to give another example, the quantum NOT gate, often called a Pauli X gate, is the following matrix It’s called this because, if we consider to be the “zero” bit and to be “one,” then this mapping swaps the two. In general, it takes to . As the reader can probably imagine by the suggestive comparison with classical operations, quantum circuits can do everything that classical circuits can do. We’ll save the proof for a future post, but if we want to do some kind of “quantum AND” operation, we get an obvious question. How do you perform an operation that involves multiple qubits? The short answer is: you represent a collection of bits by their tensor product, and apply a unitary matrix to that tensor. We’ll go into more detail on this next time, and in the mean time we suggest checking out this blog’s primer on the tensor product. Until then!
<urn:uuid:09ff398a-ecd3-4cb8-93fa-ec8321c6914e>
CC-MAIN-2021-10
https://jeremykun.com/2014/12/15/the-quantum-bit/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364027.59/warc/CC-MAIN-20210302160319-20210302190319-00121.warc.gz
en
0.936935
1,869
3.515625
4
While the scientific community holds its breath for a large-scale quantum computer that could carry out useful calculations, a team of IBM researchers has approached the problem with an entirely different vision: to achieve more and better results right now, even with the limited quantum resources that exist today. By tweaking their method, the scientists successfully simulated some molecules with a higher degree of accuracy than before, with no need for more qubits. The researchers effectively managed to pack more information into the mathematical functions that were used to carry out the simulation, meaning that the outcome of the process was far more precise, and yet came at no extra computational cost. “We demonstrate that the properties for paradigmatic molecules such as hydrogen fluoride (HF) can be calculated with a higher degree of accuracy on today’s small quantum computers,” said the researchers, at the same time priding themselves on helping quantum computers “punch above their weight”. Car manufacturer Daimler, a long-term quantum research partner of IBM’s, has shown a strong interest in the results, which could go a long way in developing higher-performing, longer-lasting and less expensive batteries. Since 2015, Daimler has been working on upgrading lithium-ion batteries to lithium-sulfur ones – a non-toxic and easily available material that would increase the capacity and speed-of-charging of electric vehicles. Designing a battery based on new materials requires an exact understanding of which compounds should come together and how. The process involves accurately describing all the characteristics of all the molecules that make up the compound, as well as the particles that make up these molecules, to simulate how the compound will react in many different environments. In other words, it is an incredibly data-heavy job, with infinite molecular combinations to test before the right one is found. The classical methods that exist today fail to render these simulations with the precision that is required for a breakthrough such as the one Daimler is working towards. “This is a big problem to develop next-generation batteries,” Heike Riel, IBM Research quantum lead, told ZDNet. “Classical computers, and the models we’ve developed in physics and chemistry for many years still cannot solve those problems.” But the task could be performed at speed by quantum computers. Qubits, and their ability to encode different information at the same time, enable quantum algorithms to run several calculations at once – and are expected, one day, to enable quantum computers to tackle problems that are seemingly impossible, in a matter of minutes. To do that, physicists need quantum computers that support many qubits; but scaling qubits is no piece of cake. Most quantum computers, including IBM’s, work with less than 100 qubits, which is nowhere near enough to simulate the complex molecules that are needed for breakthroughs such as lithium-sulfur car batteries. Some of the properties of these molecules are typically represented in computer experiments with a mathematical function called a Hamiltonian, which represents particles’ spatial functions, also called orbitals. In other words, the larger the molecule, the larger the orbital, and the more qubits and quantum operations will be needed. “We currently can’t represent enough orbitals in our simulations on quantum hardware to correlate the electrons found in complex molecules in the real world,” said IBM’s team. Instead of waiting for a larger quantum computer that could take in weighty calculations, the researchers decided to see what they could do with the technology as it stands. To compensate for resource limitations, the team created a so-called “transcorrelated” Hamiltonian – one that was transformed to contain additional information about the behavior of electrons in a particular molecule. This information, which concerns the propensity of negatively charged electrons to repel each other, cannot usually fit on existing quantum computers, because it requires too much extra computation. By incorporating the behavior of electrons directly into a Hamiltonian, the researchers therefore increased the accuracy of the simulation, yet didn’t create the need for more qubits. The method is a new step towards calculating materials’ properties with accuracy on a quantum computer, despite the limited resources available to date. “The more orbitals you can simulate, the closer you can get to reproducing the results of an actual experiment,” said the scientists. “Better modelling and simulations will ultimately result in the prediction of new materials with specific properties of interest.” IBM’s findings might accelerate the timeline of events for quantum applications, therefore, with new use cases emerging even while quantum computers work with few qubits. According to the researchers, companies like Daimler are already keen to find out more about the breakthrough. This is unlikely to shift IBM’s focus on expanding the scale of its quantum computer. The company recently unveiled a roadmap to a million-qubit system, and said that it expects a fault-tolerant quantum computer to be an achievable goal for the next ten years. According to Riel, quantum simulation is likely to be one of the first applications of the technology to witness real-world impacts. “The car batteries are a good example of this,” she said. “Soon, the number of qubits will be enough to generate valuable insights with which you can develop new materials. We’ll see quantum advantage soon in the area of quantum simulation and new materials.” IBM’s roadmap announces that the company will reach 1,000 qubits in 2023, which could mark the start of early value creation in pharmaceuticals and chemicals, thanks to the simulation of small molecules.
<urn:uuid:980b5306-8637-499e-9fac-0ef50b4a9900>
CC-MAIN-2021-10
https://kansasyhec.org/2021/01/21/less-is-more-ibm-achieves-quantum-computing-simulation-for-new-materials-with-fewer-qubits/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00085.warc.gz
en
0.945283
1,175
3.84375
4
Let's get Started! So, what are Quantum computers? Understanding Quantum computers are pretty complicated and quite confusing. So I'm going to break it down in an easy way to understand it. We all know that regular computers use bits 0 and 1 for storing data and processing tasks so for example if I have four bits in a row I can represent a bunch of numbers. Quantum bits know as Qubits are either 0 or 1 but it may still be 0 & 1 at the same time. I know! I know! it sounds very strange and confusing because it's not really quite plain to grasp, for example, let's imagine that a bit is sort of like a coin, it can either be heads or tails just like 0 or 1. it can only be either heads or tails right? What is it right now? while it's in the air Is it Heads or tails ? it's heads or tails now in a strange manner, it's heads and tails at the same moment, that doesn't even make sense before it falls in my palm as I see the coin, so then I can see what it's like, and that's the theory behind the quantum bit so it should be a 0 and a 1 at the same moment. In simple words qubits are bits with two state and each state have some probability just like in the case of coins You get it right! How it is going to change the world? Well now is where the fun begins let's suppose we have 4 bits with this we have 16 possibilities( means we can have 16 numbers) let's say that I'm trying to crack a password the password I'm trying to crack is one of the numbers we can get with this bits. A normal computer will take one number at a time and put it in the normal machine one by one till we got the right answer what if we use a quantum computer so instead of putting in these four regular bits, We put in four quantum bits, Now remember each bit is both a 0 and a 1 that means these quantum bits are all the numbers all at the same time So, when I put the quantum bits into my machine to find the right password what comes off the other end of machine is saying that I'm both right and wrong because We gave it both right and wrong answers at the same time. We still want to know what the correct password is? right? Well there's a technique called a growver operator, this is a real thing where you can sweep away all the wrong answers and what you're left behind with is the right answer So that's the beauty of Quantum computing I heard some people say that it will take the age of the universe to try and crack these codes That's how secure they are but with a quantum computer you can try them all at the same time use the growver operator to sweep away all the wrong answers and what you're left with is the right answer. So instead of taking millions of years with a regular computer you can do it in seconds with quantum computers. Are you excited to write you first quantum program Lets get started # install latest version !pip install cirq import cirq import numpy as np from cirq import Circuit from cirq.devices import GridQubit # creating circuit with 5 qubit length = 5 qubits = [cirq.GridQubit(i, j) for i in range(length) for j in range(length)] print(qubits) Applying Hadamard operation on every qubit circuit = cirq.Circuit() H1 = cirq.H(qubits) H2 = cirq.H(qubits) H3 = cirq.H(qubits) H4 = cirq.H(qubits) H5 = cirq.H(qubits) Apply CNOT operation on (0, 1), (1,2), (2,3), (3,4) , swap at (0,4) , rotating X by with pi/2 C1 = cirq.CNOT(qubits,qubits) C2 = cirq.CNOT(qubits,qubits) C3 = cirq.CNOT(qubits,qubits) C4 = cirq.CNOT(qubits,qubits) #swap S1 = cirq.SWAP(qubits,qubits) #Rotation X1 = cirq.X(qubits) X2 = cirq.X(qubits) X3 = cirq.X(qubits) X4 = cirq.X(qubits) X5 = cirq.X(qubits) Creating the moment and printing the circuit moment1 = cirq.Moment([H1]) moment2 = cirq.Moment([H2]) moment3 = cirq.Moment([H3]) moment4 = cirq.Moment([H4]) moment5 = cirq.Moment([H5]) moment6 = cirq.Moment([C1]) moment7 = cirq.Moment([C2]) moment8 = cirq.Moment([C3]) moment9 = cirq.Moment([S1]) moment10 = cirq.Moment([X1]) moment11 = cirq.Moment([X2]) moment12 = cirq.Moment([X3]) moment13 = cirq.Moment([X4]) moment14 = cirq.Moment([X5]) #circuit circuit = cirq.Circuit((moment1, moment2, moment3, moment4, moment5 ,moment6 ,moment7, moment8, moment9, moment10, moment11, moment12, moment13, moment14)) print(circuit) This is the quantum circuit you get, I will recommend you to try and play with it. I hope it's helped you in some way, Thanks for reading! Create your free account to unlock your custom reading experience.
<urn:uuid:5d7c6ddd-b3fb-4e47-b2e8-865258eacaae>
CC-MAIN-2021-10
https://hackernoon.com/quantum-programming-getting-from-0-to-1-using-cirq-6v6s32g8
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369420.71/warc/CC-MAIN-20210304143817-20210304173817-00529.warc.gz
en
0.887579
1,270
3.515625
4
IBM announced last year that they had conclusively demonstrated “quantum advantage” for the first time, proving that quantum computers are better than classical computers for some types of problems. This announcement is one of many exciting developments in quantum computing over the last few years, and comes at a time when both private and public investment in the space is accelerating. Household names like IBM, Intel, Google, and Microsoft are investing around $100MM annually to develop their own quantum systems, and well-funded startups like Rigetti and IonQ are developing systems and software of their own to compete with big-name players. Governments are also getting more involved. The EU, UK, and China have all made significant investments into quantum technologies, and the US House of Representatives recently passed a measure to provide $1.3 billion to fund a National Quantum Initiative. While it is encouraging to see so much activity, it can be difficult to look past the hype surrounding the industry and understand what it all means. In this post, I will describe where quantum computing is today, and how it is likely to be used in the coming years. What is quantum computing? Before we dive in, it is important to understand what a quantum computer is. Quantum computers are devices that take advantage of quantum mechanics to perform calculations. Whereas classical computers store data in “bits” – binary units of information that are always either 1 or 0 – quantum computers store data as qubits, which have several unusual properties. For one thing, qubits can exist in superpositions, meaning that instead of being either 1 or 0, a qubit can be a mixture of both at the same time (this comic explains it in a simple and fun way). Qubits can also be entangled with each other so their individual states are perfectly correlated. This means that measuring the state of one qubit reveals information about the state of the other. It also means that performing a calculation on the first qubit will affect the state of the second qubit in a predictable way. The future vision for quantum computing – and hurdles to getting there Thanks to properties like qubit superposition and entanglement, quantum computers are really good at many things that classical computers are bad at – like handling complex algorithms. One such algorithm that gets a lot of attention due to its potential impact on data security is Shor’s algorithm. This algorithm describes how a quantum computer could be used to find the prime factors of a large number much faster than any known classical algorithm. RSA, one of the most commonly used encryption methods in the world, relies on the fact that classical computers can easily multiply very large prime numbers, but can’t do the reverse operation (find prime factors) very well without a key. This asymmetry in difficulty is what protects RSA-encrypted data from being decrypted by anyone other than the intended recipient (who already knows what key to use). A quantum computer capable of running Shor’s algorithm would be able to decode encrypted information easily, posing a significant security threat. However, quantum computing is in its early days, and many signs indicate that it will be a long time before quantum computers achieve that particular capability. The first of these signs is the fact that quantum computing companies haven’t settled on a standard qubit. Superconducting circuit qubits and trapped ion qubits are leading now, but other technologies (e.g., silicon spin qubits, topological qubits, and photonic qubits) are being explored as well. Current systems are also very small compared to what would be needed for the problems quantum computing promises to solve. Current systems have around 100 qubits, with IonQ’s 160 trapped-ion qubit system leading the field, but a quantum computer that can run Shor’s algorithm and factor numbers that are hundreds of digits long will likely require millions of qubits. Even the most optimistic industry experts don’t think quantum computers will reach this point for at least a decade. The value of quantum computing in the near term That’s not to say that quantum computers won’t be useful until they have millions of qubits. To the contrary, some of the most exciting applications for our specialty materials clients are much nearer term. Applications in materials simulation, chemical modeling, and process optimization can be addressed by systems with just a few hundred qubits – a target that may be achieved in the next 5 years. Additionally, developments in adjacent markets are making quantum computing increasingly accessible. Many major players offer software to help people familiarize themselves with writing code for quantum computers and are working to engage directly with future users. Companies like Zapata Computing and QC Ware, who base their business models on writing algorithms and software for quantum computers, have popped up in the last few years. The growth of this ecosystem is enabling companies to explore how they will use increasingly powerful quantum computers, and understand and prepare for the impact of these systems. Deciphering the hype around a topic as complicated as quantum physics can seem overwhelming. When we’re constantly bombarded by headlines promising that quantum computing is almost ready for business, or threatening imminent security breaches, it’s hard to know whether to panic or write the whole thing off as ludicrous. Our experiences in this industry indicates the threat of RSA decryption is real and deserves to be taken seriously, but is at least a decade away. Applications for smaller quantum computers, on the other hand, have the potential to meaningfully affect many companies’ business in the near future. The ecosystem is growing, investment is accelerating, and we are rapidly nearing a time when quantum computers will solve valuable problems. Now is the perfect time to start thinking about how your business can take advantage of this exciting technology.
<urn:uuid:e1e7509c-384e-4345-9334-55c9e992d27a>
CC-MAIN-2021-10
https://newrycorp.com/insights/blog/technology-readiness-of-quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00371.warc.gz
en
0.95086
1,181
3.78125
4
Locke influenced that form of government, as well. 1470 Words6 Pages The Founding Fathers of the United States relied heavily on many of the principles taught by John Locke. Firstly authority of government proceeds from the people. However, I would argue that factions should play some role in our government due to the massive size and scope of the government itself and its electorate. What characteristics of the state exemplify its legitimacy? In this revolution we saw a rising movement from the people to oppose monarchy and demand a rule by the people. John Locke was a philosophical influence in both political theory and theoretical philosophy, which was embraced among the era of 1789-1914 and, The Enlightenment thinker, John Locke, greatly influenced movements like the American and French Revolutions. John Locke’s philosophy of government and the governed influenced our founding fathers as expressed in our 3 founding documents; Declaration, Constitution and Bill of Rights (first 10 amendments). Many of the principles of Locke’s Second Treatise of Government may easily be discovered in the Declaration of Independence with some minor differences in wording and order. Thomas Jefferson was not the only founding father who subscribed to these beliefs. In this revolution we saw, “Golden Era”. Jefferson received a great deal of inspiration from Locke in writing the Declaration of Independence. Locke stressed that the role of the state is to protect each individual from the will and desires of others. rest of the founding fathers demanded from the King of England stemmed from a basic desire for rights and liberties for all people, not just the wealthy barons. He single handedly developed a political system that had a focus on liberty, his work would help influence many men from both sides of the Atlantic. Did Hamilton, Madison, Jefferson, and various other so-called Founding Fathers look to the right person for inspiration? Sometimes the majority is just plain wrong, especially when reflecting on history and various social issues that have sprung up. Jefferson had many philosophical minds to ponder when writing the document, such as Aristotle and most importantly John Locke. Leonardo da Vinci opened the door to the Renaissance and William Shakespeare treated us to the best writings and plays in the English language. Because of his past occupation, who used to persuade to become a doctor, he understood how people's lives, and what was the best form of government that they need. I would argue that this system is one of the most vital aspects of our own system of government. Enlighten Thinker: John Locke John Locke (1632-1704) is a Philosopher and Physician. While he does not specifically call out Hobbes in this work, he does seem to be responding to Hobbes, passively. He was known as one of the most affective Founding Father of Enlighten movement. Looking back centuries later, how has this influence played out? Thus, men “unite for the mutual preservation of their lives, liberties, and estates” (Locke 1690, IX, 123). Copyright 2020 | MH Newsdesk lite by MH Themes, Jimmy Fallon and Bruce Springsteen Mock Chris Christie with “Born to Run” Cover, Amy Poehler Wins Golden Globe for Best Actress in Parks and Recreation – Video, Texas Gun Show Shoots Self in Foot and Shuts Down Over Background Checks, Late Night Political Jokes of the Week – GWB Bridgegate, Cold Snap, Letterman Top 10, 2014 Political Cartoons – NJ Gov. Many of the ideas of the proper role of, Finally, towards the end of the Declaration, Jefferson wrote that they were “appealing to the Supreme Judge of the World, for the Rectitude of our Intentions…And for the support of this Declaration, with a firm Reliance on the Protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes, and our sacred Honor.” Again, the similarity to Locke is found. In contrast, Hobbes was more focused on the dangers of nature and the need to form a sovereign state to aid in the interest of man’s self-preservation. “Secondly, there wants a, Writing Devices in Romeo and Juliet Essay, Strategies to Solve Addition and Subtraction Essay, Shakespeare In Love -Combination of Romantic Comedy and Shakespearean Tragedy, Inferno as a Manifestation of the Pain of Dante Alighieri Essay, Essay on The True Villian in Frankenstein. Locke had three main philosophies, religious tolerance, all men are born a blank slate, and that the divine right to rule is incorrect. Imagine if civil rights laws in the South during the 1960s were put up to a majority vote. John Locke was born on August 29, 1632 in Wrington, England, nation. James Madison's writings were also heavily influenced by Locke. November 12, 2013. He believed that people should have a direct say in the government and that absolute monarchy should not be a factor that rules everyday people in their everyday lives. Likewise, John Locke is a man who accomplished what many men could not. I would argue that they did. Locke insists upon a separation of powers that influenced the United State’s own system of checks and balances. Much like Hobbes, Locke starts with man’s original place in nature. John Locke and his ideas contributed in a major way towards the Enlightenment. It has been said that “Locke’s justification of revolt, as based on his theory of natural rights, was the background from which the Declaration sprang.” Locke’s influence appears in countless speeches and writings of the Founding Fathers. Thomas Hobbes and John Locke had very similar views on natural law and natural right. He was a very intelligent man and grew up with a good education that led him to have many opportunities not just for himself but many others too. Essays, Moral, Political, And Literary Pdf, Psalm 130:3 Meaning, Amd Quantum Computing, Maths Picture Puzzles With Answers Pdf, Disadvantages Of Tables, Asus Rog Zephyrus Duo 15, 3 Components Of Computer System, Palo Alto Real Estate, He Was Despised Orlinski, Dark Souls 3 Remastered Walkthrough, Roast In Ninja Air Fryer, Aero 15 Classic Vs 15x, Sonic Advance Shadow, Vegan Collagen Skincare, Oversized Beach Bag,
<urn:uuid:9e9f5db0-c523-4c5b-8aa2-e6873e5554e3>
CC-MAIN-2021-10
https://flemac.com/d8ccl/how-did-john-locke-influence-the-founding-fathers-40db44
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00210.warc.gz
en
0.956635
1,285
3.84375
4
Our pocket computers – you know them as smartphones – are small, fast and powerful thanks to the billions of bits and transistors packed into the chips they use. Bits make up a computer’s memory and are the smallest building blocks of traditional computing. They can have one of two values, usually represented as either a 0 or a 1. Your phone has about 256 billion bits (or 32 gigabytes). Transistors act as switches. Think of them as a roadway with a gate. A binary signal of 1 will tell the gate to allow the cars to flow down the roadway. A binary signal of 0 will tell the gate to stop all cars from traveling on the roadway. My iPhone 11 has 8.5 billion transistors. We’re now at a crossroads. And we’re bumping up against the limits of bits and transistors. Either our ability to increase the power and speed of computers will dramatically decline or we’ll discover entirely new and revolutionary technologies. The most fascinating and game-changing of those technologies is quantum computing. First, some background. Quantum computers rely on qubits instead of bits. The qubit is a combination of “quantum” and “bit.” Qubits are based on a somewhat complicated concept called quantum superposition. It’s the idea that a quantum object can be in multiple states at the same time. A qubit doesn’t have to be only a 0 or a 1. Once it is measured, it will take one of those two forms. But until it is measured, it can exist as a range of probabilities between those two results. And that’s important. Two bits taken together can be represented as either 00 or 01 or 10 or 11. But through the lens of superposition, two qubits can store all four values simultaneously. Three qubits can hold eight different values. As the number of qubits increases, the number of values increases exponentially. So compared with bits, it takes a much smaller number of qubits to give a computer incredible power. Quantum computing is an amazing theoretical construct. But can it become a reality? Incredibly, some are saying it’s already arrived. Google recently published a paper claiming it achieved a real quantum computing breakthrough. Using a machine made up of 53 qubits, the company performed a calculation in three minutes and 20 seconds that Google claims would take the world’s best “normal” supercomputer 10,000 years to complete. Other legacy companies are in various stages of developing quantum computing capability – most notably IBM and Microsoft. IBM calls Google’s claim bogus, saying a traditional supercomputer could have completed the calculation in just 2.5 days. Still, the difference between 3.5 minutes and 2.5 days is enormous. I’d like to see more than one “demo” by a single company “prove” quantum computing. But it feels real to me. We could very well be on the precipice of a new computer age. And you can be sure that dozens of startups will be leading the charge. The day before Google’s big announcement, quantum computing startup IonQ raised $55 million. IonQ wants to make its quantum computers available to other companies via the cloud. And earlier this year, it announced its own major quantum breakthrough. Another startup, a German company called HQS Quantum Simulations, raised $2.6 million in seed financing. HQS wants to use quantum computers to run simulations that can discover new materials and substances with commercial potential. It cites batteries and more efficient solar cells as two examples. The possibilities are endless. Any industry that involves highly complex calculations and simulations could be disrupted, from finance to artificial intelligence (AI). Think of all the new medicines that could be created. On the downside, quantum computers would also make current encryption practices obsolete. A whole new world of possibility is about to open up. In the meantime, the transistor side of chip technology is also making huge strides. Untether AI just raised $13 million from Intel and others to develop a new type of chip for AI that can transfer data to different parts of the chip 1,000 times more quickly than a conventional AI chip. Untether AI uses “near-memory computing” to reduce the physical distance between memory and the processing tasks, speeding up data transfer and lowering power consumption. We’re witnessing a Kitty Hawk flight moment in computing. Just as the Wright brothers proved flight was possible, Google and Untether AI are showing us that computing power the likes of which we’ve never seen is also becoming a reality. Air transit opened up a world of unimaginable uses… many good but some not so good. It created entire industries. Changed the way we travel. Transformed warfare. And made the world a much smaller place. What will these breakthroughs in computing give us? We’re about to find out.
<urn:uuid:0835b740-21e0-4281-8bc3-b2d26f09701d>
CC-MAIN-2021-10
https://earlyinvesting.com/quantum-computing-is-almost-here/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361510.12/warc/CC-MAIN-20210228145113-20210228175113-00374.warc.gz
en
0.938732
1,032
3.796875
4
3D Map of a Quantum Dot’s Potential An electron bound to a quantum dot is a conceptually simple implementation of a quantum bit (qubit), with the electron’s spin providing the qubit’s two levels, which encode information [1, 2]. To control electron spin, researchers apply some perturbation to the electron, which rotates the spin to a desired direction. This process is easier when the electron’s location and wave function are known—parameters determinable from the dot’s confining potential. But in quantum dots, researchers often lack this information . Now Leon Camenzind at the University of Basel, Switzerland, and colleagues demonstrate a technique to measure the potential binding an electron in a gallium arsenide (GaAs) quantum dot [3, 4]. Their technique could potentially be used to characterize the confining potential of other systems, providing information that could allow optimization of the efficiency and performance of these systems in quantum devices. A quantum dot acts like an artificial atom, creating a potential that confines the electron in three dimensions. The electron’s motion is limited to a region determined by the dot’s potential. For an unperturbed quantum dot, the electron’s location and wave function can be determined if the dot’s potential is known. Having this information makes it easier to manipulate the qubit, as researchers can precisely determine the electric or magnetic field they need to force the qubit to evolve into a particular state to perform some desired operation. These manipulations might involve rotating the spin or bringing two spins together so that they interact [1, 2]. Knowing the confining potential and, in turn, the electron’s wave function also allows the time-evolution of the qubit’s state to be obtained, enabling perturbations to be more accurately established and applied. The commonly used quantum-dot model assumes that the confining potential of the dot matches that of a harmonic oscillator or some closely related expression . But that isn’t always the case. Camenzind and colleagues now demonstrate an ingenious technique for determining the potential confining the electron [3, 4]. For the demonstration, the team made single-electron quantum dots using gated GaAs/aluminum gallium arsenside (AlGaAs) heterostructures, which consisted of an undoped GaAs layer and then a doped AlGaAs layer (Fig. 1). Electrons from the doped AlGaAs layer diffused toward the GaAs layer and accumulated on the GaAs side of the interface between the two materials. There they formed a two-dimensional electron gas (2DEG)—a “slab” of electrons that are free to move in the x-y plane but tightly confined in the z direction. To define the dots, the team applied voltages to metal gates placed on the surface of the device, which generated electric fields in the 2DEG. The fields create a repulsive potential over a small region of the 2DEG that has an attractive spot at its center, confining a few electrons in the x-y plane to zero dimensions (a quantum dot). Under the right conditions, the number of electrons in the dot is limited to one. To determine the confining potential, the team applied magnetic fields to the system. The experiments were carried out in two stages. First the team applied magnetic fields of varying strength in the x and then y directions. These fields reduced the width of the confining potential in the y and x directions, respectively, changing the electron’s energy. Using pulsed gate spectroscopy, the team then measured the orbital energy of the electron as a function of magnetic field intensity and compared those energies to those theoretically predicted for different dot confining potentials. From these data, they inferred the shape of the dot in the x-y plane. Then they repeated the measurements, but this time they kept the magnitude of the field fixed while varying its angle. From these measurements, they determined the width of the potential in the z direction. Their results show that the confining potential of their quantum dots had an elongated, deformed circular shape in the x-y plane and a confinement width of around 6 nm in the z direction (Fig. 1). The analysis method employed by the team required that they make an initial guess for the confining potential, which they then used to make predictions that they compared with the experimental data. In this case, the team guessed that the potential was anisotropic, with independent harmonic shapes in x and y, and that it extended with a triangular shape in the z direction. The need to guess the potential could be interpreted as a limitation of the proposed method. Another potential issue is that different z-direction confining profiles, such as triangular wells and square wells, produce very similar spectroscopic data, making it hard to determine a dot’s exact potential. That said, the method does provide a route to tackling a difficult problem using realistic assumptions. It also provides a means to extract a large amount of information about the dots from a conceptually simple model and a well-defined sequence of measurements. For example, as well as obtaining the 3D shape of the potential, the team was able to measure and to calculate the ground- and excited-state energies for dots with different potential shapes. They were also able to measure the orientation of the dot relative to the underlying GaAs layer. The work by Camenzind and colleagues represents significant progress toward single-electron control in quantum dots. The team notes that their method should be directly applicable to quantum dots made from other materials, such as silicon/silicon oxide heterostructures , as well as multiple-quantum-dot systems, for example a triple quantum dot. The next step will likely be the mapping of a double quantum dot, which should provide insights into the effect of combining two dot potentials . Researchers are in a better position to model a quantum-dot-like qubit if they know its potential, which can be optimized to improve the qubit’s performance and efficiency as it carries out calculations or stores information. - D. Loss and D. P. DiVincenzo, “Quantum computation with quantum dots,” Phys. Rev. A 57, 120 (1998). - B. E. Kane, “A silicon-based nuclear spin quantum computer,” Nature 393, 133 (1998). - L. C. Camenzind, L. Yu, P. Stano, Zimmerman, A. C. Gossard, D. Loss, and D. M. Zumbühl, “Spectroscopy of quantum dot orbitals with in-plane magnetic fields,” Phys. Rev. Lett. 122, 207701 (2019). - P. Stano, C.-H. Hsu, L. C. Camenzind, L. Yu, Dominik Zumbühl, and D. Loss, “Orbital effects of a strong in-plane magnetic field on a gate-defined quantum dot,” Phys. Rev. B 99, 085308 (2019). - J. J. Pla, K. Y. Tan, J. P. Dehollain, W. H. Lim, J. J. L. Morton, D. N. Jamieson, A. S. Dzurak, and A. Morello, “A single-atom electron spin qubit in silicon,” Nature 489, 541 (2012). - F. H. L. Koppens, C. Buizert, K. J. Tielrooij, I. T. Vink, K. C. Nowack, T. Meunier, L. P. Kouwenhoven, and L. M. K. Vandersypen, “Driven coherent oscillations of a single electron spin in a quantum dot,” Nature 442, 766 (2006).
<urn:uuid:ef56c52f-2b60-4efd-9d8a-9f34e5c28001>
CC-MAIN-2021-10
https://physics.aps.org/articles/v12/56
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00575.warc.gz
en
0.894266
1,667
3.609375
4
Macroscopic quantum entanglement achieved at room temperature In quantum physics, the creation of a state of entanglement in particles any larger and more complex than photons usually requires temperatures close to absolute zero and the application of enormously powerful magnetic fields to achieve. Now scientists working at the University of Chicago (UChicago) and the Argonne National Laboratory claim to have created this entangled state at room temperature on a semiconductor chip, using atomic nuclei and the application of relatively small magnetic fields. When two particles, such as photons, are entangled – that is, when they interact physically and are then forcibly separated – the spin direction imparted to each is directly opposite to the other. However, when one of the entangled particles has its spin direction measured, the other particle will immediately display the reverse spin direction, no matter how great a distance they are apart. This is the "spooky action at a distance" phenomenon (as Albert Einstein put it) that has already seen the rise of applications once considered science fiction, such as ultra-safe cryptography and a new realm of quantum computing. Ordinarily, quantum entanglement is a rarely observed occurence in the natural world, as particles coupled in this way first need to be in a highly ordered state before they can be entangled. In essence, this is because thermodynamic entropy dictates that a general chaos of particles is the standard state of things at the atomic level and makes such alignments exceedingly rare. Going up a scale to the macro level, and the sheer number of particles involved makes entanglement an exceptionally difficult state to achieve. "The macroscopic world that we are used to seems very tidy, but it is completely disordered at the atomic scale," said Paul Klimov, a graduate student in the Institute for Molecular Engineering (a facility formed as a cooperation between UChicago and the Argonne National Laboratory). "The laws of thermodynamics generally prevent us from observing quantum phenomena in macroscopic objects." In standard sub-atomic quantum entanglement experiments using photons, for example, very high energy value photons are generated using a laser and then directed through a nonlinear crystal. The majority of the crystals will pass straight through unimpeded, however some will undergo a process known as spontaneous parametric down-conversion (SPDC) where, simply stated, a single high-energy photon will be split into two lower-energy photons. As a result of this SPDC, the two photons will have been created entangled, with opposing spin polarizations, because they both were spawned from a single particle. At a macroscopic level, however, things aren't quite as simple, and particles such as atoms in solids and liquids are particularly difficult to wrangle into a quantum state. This is because the difficulties of overcoming quantum decoherence (put simply, where interfering wave functions from surrounding atoms cause the collapse of quantum states) in entangling particles normally means that ultra-low temperatures (around -270° C (-454° F)) and enormous magnetic fields (about 1,000 times greater than that of an average refrigerator magnet) are required. This is to keep atomic movement close to zero and contain the entangled particles, both of which reduce the likelihood of decoherence. Given that a practical application of entanglement to macroscopic particles is to enhance quantum electronic devices in real world situations and at ambient temperatures, the researchers sought a different approach to this problem. Using an infrared laser, they coaxed into order (known in scientific circles as "preferentially aligned") the magnetic states of many thousands of electrons and nuclei and then proceeded to entangle them by bombarding them with short electromagnetic pulses, just like those used in standard magnetic resonance imaging (MRI). As a result, many entangled pairs of electrons and nuclei were created in an area equal to the size and volume of a red blood cell on a Silicon Carbide (SiC) semiconductor. "We know that the spin states of atomic nuclei associated with semiconductor defects have excellent quantum properties at room temperature," said professor David Awschalom, a senior scientist at the Argonne National Laboratory. "They are coherent, long-lived and controllable with photonics and electronics. Given these quantum 'pieces,' creating entangled quantum states seemed like an attainable goal." With the techniques demonstrated used in concert with other SiC-derived devices, quantum sensors may be constructed in the near future that use entanglement to improve the sensitivity limit over and above that found in current, non-quantum sensors. As the entanglement operates at ordinary temperatures and the SiC device is biologically inert, sensing within a living being is also a potential application. "We are excited about entanglement-enhanced magnetic resonance imaging probes, which could have important biomedical applications," said Abram Falk of IBM's Thomas J. Watson Research Center and a co-author of the research findings. Aside from the usual applications in secure communication and information processing, and high-capacity, minimal error data transfer, the research team believes that other technologies, such as synchronizing global positioning satellites could also benefit from this breakthrough. The results of this research were published in the journal Science Advances. Source: University of Chicago
<urn:uuid:2afd6fb6-50dd-423e-a2c8-efa431e450f6>
CC-MAIN-2021-10
https://newatlas.com/quantum-entanglement-nuclei-university-chicago-argonne/40884/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358064.34/warc/CC-MAIN-20210227024823-20210227054823-00055.warc.gz
en
0.933022
1,068
3.578125
4
Principio de Exclusión de Pauli – Free download as Powerpoint Presentation . ppt /.pptx), PDF File .pdf), Text File .txt) or view presentation slides online. Pauli Exclusion Principle. No two electrons in an atom can have identical quantum numbers. This is an example of a general principle which applies not only to. Translation for ‘principio de exclusión de Pauli’ in the free Spanish-English dictionary and many other English translations. |Country:||Trinidad & Tobago| |Published (Last):||7 February 2012| |PDF File Size:||4.89 Mb| |ePub File Size:||1.85 Mb| |Price:||Free* [*Free Regsitration Required]| Constrain to simple back and forward steps. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time. To account for this we must use a linear combination of the two possibilities since the determination of which electron is in which state fe not possible to determine. In one dimension, bosons, as well as fermions, can obey the exclusion principle. Tipler, Paul; Llewellyn, Ralph Copy code to clipboard. This principle was formulated by Austrian physicist Wolfgang Pauli in for electrons, xe later extended principio de exclusion de pauli all fermions with his spin—statistics theorem of In white dwarfs, which do not undergo nuclear fusion, an opposing force to gravity is provided by electron degeneracy pressure. Lewisfor example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in any given shell, and especially to hold eight electrons which are normally arranged symmetrically principio de exclusion de pauli the eight corners of a cube see: Do you really want to delete this prezi? PRINCIPIO DE EXCLUSION DE PAULI by Rebeca Anchundiia on Prezi In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more principio de exclusion de pauli stable than those with odd numbers of electrons. Or learning new words is more your df Send the link below via email or IM Copy. However, even this enormous rigidity can be overcome by the gravitational paulii of a massive star or by the pressure of a supernovaleading to the formation of a black hole. A firewall is blocking access to Prezi content. Neutrons are capable of producing an even higher degeneracy pressure, neutron degeneracy pressurealbeit over a shorter principio de exclusion de pauli. The wavefunction for the two electron system would be. Orincipio both bodies, atomic structure is disrupted by extreme pressure, but the stars are held in hydrostatic equilibrium by degeneracy pressurealso known as Fermi pressure. Classical mechanics Old quantum theory Bra—ket notation Hamiltonian Interference. Dictionary Conjugation Phrases Games More by bab. Why not have a go at them together! Modern Physics 4th ed. Check out this article to learn more or contact your system administrator. Delete comment or cancel. Send link to edit together this prezi using Prezi Meeting learn more: Phrases Speak like a native Useful phrases translated from Principio de exclusion de pauli into 28 languages. Advanced topics Quantum annealing Quantum chaos Quantum computing Density matrix Quantum field theory Fractional quantum mechanics Quantum gravity Quantum information science Quantum machine learning Perturbation theory quantum mechanics Relativistic quantum mechanics Scattering theory Spontaneous parametric down-conversion Quantum statistical mechanics. The chemical principio de exclusion de pauli of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of occupied paulli shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements. In Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in neutron starsalthough at a much higher density than in ordinary matter. Invited talk at the 12th Workshop on Nuclear Astrophysics. Creating downloadable principio de exclusion de pauli, be patient. Quantum Mechanics principio de exclusion de pauli Its Emergent Macrophysics. English from of as out of in off on than to by. The Pauli exclusion principle is part of one prkncipio our most basic observations principio de exclusion de pauli nature: See more popular or the latest prezis. The Pauli exclusion principle describes the behavior of all fermions particles with “half-integer spin “while bosons particles with “integer spin” are subject to other principles. The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive exchange interactionwhich is a short-range effect, acting simultaneously with the long-range electrostatic or Exdlusion force. English name of the letter D. This suggestion was first made in by Paul Ehrenfestwho pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells. In the case of electrons in atoms, it can be stated as follows: For this purpose he introduced a new two-valued quantum number, identified fxclusion Samuel Goudsmit and George Uhlenbeck as electron principio de exclusion de pauli. The stability of the electrons in an atom itself is unrelated to the exclusion principle, but is described by the quantum theory of the atom. Fermions include elementary particles such as quarkselectrons and neutrinos. Pauli Exclusion Principle Pauli Exclusion Principle No two electrons in an atom can have identical quantum numbers. Krane 5 November Astronomy provides a spectacular principio de exclusion de pauli of the effect of the Pauli principle, in the form of white dwarf and neutron stars. Do you really want to delete this prezi? Send link to edit together this prezi using Prezi Meeting learn more: Pauli looked for an explanation for these numbers, which were at first only empirical. In neutron starssubject to even stronger gravitational forces, electrons have merged with protons to form neutrons. This can stabilize neutron stars from further collapse, but at a smaller size and higher density than a white dwarf. Invited audience members principio de exclusion de pauli follow you as you navigate and present People invited to a presentation do not need a Prezi account This link expires 10 minutes after you close se presentation A excousion of 30 users can follow your presentation Learn more about this feature in our knowledge base article. This page was last edited on 20 Julyat
<urn:uuid:9fed345f-f393-4544-a644-379069bcc410>
CC-MAIN-2021-10
https://c-4-c.com/principio-de-exclusion-de-pauli-70/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00537.warc.gz
en
0.841182
1,427
3.625
4
Atoms are tricky to control. They can zip around, or even tunnel out of their containment. In order for new precision measurement tools and quantum devices to work—and work well—scientists need to be able to control and manipulate atoms as precisely as possible. That’s especially true for optical atomic clocks. In these clocks, a cold, excited atom’s electrons swing back and forth in what’s called a dipole, vibrating like a plucked string. Scientists rapidly count those swings with a laser, dividing a second into quadrillionths of a second. However, even the best optical atomic clocks face decoherence—the atom falls back to its ground state, the laser loses the signal, and the clock winds down. This means optical atomic clocks can only take measurements for a few seconds before the atoms need to be “reset.” Scientists are continually exploring ways to increase those coherence times. Using optical tweezers, Aaron Young, along with other members of the Kaufman and Ye groups at JILA, have reached record-setting coherence times of more than half a minute. Their findings were recently published in Nature. “The trick is to use separate sets of tweezers to prepare and measure the atoms, and to hang on to the atoms while they ring down. This makes it possible to optimize the second set of tweezers to preserve coherence for as long as possible, without having to worry about competing requirements associated with other phases of the experiment,” Young said. Optical atomic clock technology Optical atomic clocks are incredibly varied, but there are two popular means for controlling the atoms: ion traps, and optical lattices for trapping neutral atoms. Each approach has its strengths and weaknesses. Trapped ion clocks measure the oscillations of a single charged atom, or ion. That atom is pristine, well-characterized, and well-controlled, however, due to the fundamental noise associated with quantum measurements, scientists need to run the trapped ion clock many times to obtain a precise measurement. Lattice clocks, on the other hand, use standing waves of reflected lasers to form an egg carton-shaped lattice that can hold many atoms. This way, they can interrogate many thousands of atoms in parallel to obtain precise measurements in a short amount of time. But it’s difficult to control any of those thousands of atoms individually, and interactions between these atoms must be well-characterized — a rich and complicated endeavor in its own right. Controlling and preventing these interactions is where optical tweezers come in. Optical tweezers are highly-focused laser beams capable of grabbing and moving individual atoms—something the Kaufman Group has a lot of experience doing. “With the tweezers, our traps are more or less independent,” Young said. “It gives you a lot of control over what kind of traps you can make.” The group uses this extra control to preserve quantum coherence, and minimize many of the effects that can limit clocks. A hybrid clock of cigar pancakes Young and the team used lasers to create a vertical lattice of traps, like stacked pancakes. The optical tweezers pierce these pancakes, looking like little cigar-shaped tubes. This creates a two-dimensional array composed of hundreds of spherical traps that each contain a single atom. This pancake-cigar architecture allows for very quick cooling and trapping of the atoms, at which point they are easily transferred to a second set of tweezers designed specifically for clock physics. Because the atoms are well-chilled, the second set of tweezers can make very shallow traps for the clock. Shallow traps minimize the number of photons that could interfere with the atoms, and they reduce the power required for the laser, making it possible to make more traps, and trap more atoms. They can also space these traps far enough apart so the atoms cannot move around or crash into their neighbors. All of this results in record coherence times—48 seconds. To put that in perspective, if every oscillation took about a full second—like the pendulum on a grandfather clock—you would only have to wind this clock once every few billion years. “This long lifetime is related to what people call a ‘quality factor’ – it’s the number of times an oscillator swings before it rings down. The quality factor of our experiment is the highest we know of in pretty much any physical system, including, depending on how you compare them, various astronomical systems like spinning neutron stars or planetary orbits,” Young said. More than a clock “What we’ve effectively done is put 150 very coherent qubits in the same place, which serves as a really good starting point for engineering interactions,” Young said. A clock with controllable interactions could be used to engineer quantum states that allow for even more precise measurements of time. But the Kaufman and Ye Groups see potential to use this technique for another quantum device: quantum computers. With exquisite control of each high-coherence atom, the atoms can act as a qubit for the computer to perform calculations. Young and Kaufman also see this as a “zeroth order step” in physics research. Physicists are continually seeking better control over atoms to manipulate interactions between them, and study the results—and this hybrid tweezer clock is a promising means of achieving that control for long periods of time. By studying and controlling those interactions, physicists can better understand how the quantum world works, and those discoveries could lead to new advances in quantum-based technologies. Their study was published in Nature on December 17th, 2020 and was supported by a National Science Foundation Physics Frontier Center grant and a grant from the National Research Council.
<urn:uuid:d531fb94-c81e-4135-ae37-1938d19ebedf>
CC-MAIN-2021-10
https://jila-pfc.colorado.edu/highlights/tweezing-new-kind-atomic-clock
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00338.warc.gz
en
0.941953
1,198
3.78125
4
Technology companies constantly roll out their "next-generation" phones and computers with the promise that they're better than ever before. But iPhone updates can't quite compare to what a new study predicts: A next-generation computer made with an ultra-rare, never-before-seen material. After more than 30 years of searching, an international team of physicists and chemists discovered a material with a trifecta of rare and highly sought after characteristics. This material, KV3Sb5, has the potential to revolutionize the design of computer memory and processors — and is shaking up fundamental understandings of physics. This finding was published Friday in the journal Science Advances. The discovery involves the documentation of a "giant" electromagnetic effect in an already complicated and rare material. Mazhar Ali is the study's senior author and a researcher at the Max Planck Institute of Microstructure Physics. Ali tells Inverse that KV3Sb5 is something called a "metallic frustrated magnet." A material like this has been highly sought after for around 30 years because "theorists have speculated that the interplay of frustrated magnetism with traveling electrons could result in exotic properties like unconventional superconductivity and more," Ali says. "Metallic frustrated magnets are very, very rare," he explains. KV3Sb5's oddness didn't stop there: It also houses a special kind of electron called a "Dirac electron," Ali says, which means its electrons are both much faster and way lighter than your run-of-the-mill electron. Coupled with the material's malleability — it's easy to flake into individual layers and cooperative during fabrication — and it becomes one of a kind. "There is no other example material with this combination of traits," Ali remarks. How does it work — The giant electromagnetic effect that these combined characteristics make possible is something called the anomalous Hall effect (AHE.) This effect refers to a way magnets and charge interact and can either be intrinsic (meaning the structure of the material determines how electrons move through it) or extrinsic (in which certain features of the structure create more scattering). Either of these AHEs will change how electrons scatter off the material, thus changing out information is carried. "There is no other example material with this combination of traits." To explain what exactly that looks like inside this material, Ali says we can turn to soccer for an analogy. "Intrinsic is like if Christiano Ronaldo was making a curved pass around some defenders, without colliding with them, by kicking the soccer ball [the electron] in a special way," Ali says. "Extrinsic is like the ball bouncing off of a defender — aka an electron off of a magnetic scattering center —and going to the side after the collision. Most extrinsically dominated materials have a random arrangement of defenders on the field — [like the] scattering centers randomly situated throughout the crystal." Ali explains that most extrinsically arranged materials have these defenders scattered randomly through the field (or material). KV3Sb5, on the other hand, plays a tighter defense. "KV3Sb5 belongs to a class of materials known as cluster magnets," Ali says. "It has defenders grouped together and arranged on the field in a special pattern... In this scenario, the ball scatters off of the cluster of defenders, rather than a single one, and is more likely to go to the side than if just one was in the way." In KV3Sb5, this pattern is a triangle of 3 magnetic scattering centers. This is thought to underly a recently proposed spin-cluster skew scattering mechanisms linked to AHE, which, Ali says "demonstrated for the first time in this material; because it has the right ingredients to host this effect." And, because the electrons at play in this material are super-fast Dirac electrons, Ali says this is equivalent to Ronaldo kicking the ball instead of a 10-year-old. The result? A giant AHE. How can it be used — But what can you do with a material so special? One option exciting scientists is as a replacement for platinum in computing and memory technology. "The same physics that governs this AHE should also drive the spin Hall effect, where instead an electron gaining the orthogonal velocity, it is just the electron's spin," Ali explains. "Large spin Hall effects in metals are highly sought after for spintronic applications like next generation computation and memory technology." This type of technology, Ali says, is already commercially available — this IBM and Everspin — but these technologies are based on Platinum. He explains that finding a cheap and stable alternative to platinum would "be a big win" — and this finding could make that possible. Another exciting avenue for physicists to explore, says Ali, is whether or not this material could superconduct at low temperatures -- a trait that, when combined with other components, would benefit the future of quantum computing as well. Where there's one super weird, rare material, they hope to find another. By further exploring this material and those like it, scientists aim to learn more about this fundamental physics phenomena. Abstract: The anomalous Hall effect (AHE) is one of the most fundamental phenomena in physics. In the highly conductive regime, ferromagnetic metals have been the focus of past research. Here, we report a giant extrinsic AHE in KV3Sb5, an exfoliable, highly conductive semimetal with Dirac quasiparticles and a vanadium Kagome net. Even without report of long range magnetic order, the anomalous Hall conductivity reaches 15,507 Ω−1 cm−1 with an anomalous Hall ratio of ≈ 1.8%; an order of magnitude larger than Fe. Defying theoretical expectations, KV3Sb5 shows enhanced skew scattering that scales quadratically, not linearly, with the longitudinal conductivity, possibly arising from the combination of highly conductive Dirac quasiparticles with a frustrated magnetic sublattice. This allows the possibility of reaching an anomalous Hall angle of 90° in metals. This observation raises fundamental questions about AHEs and opens new frontiers for AHE and spin Hall effect exploration, particularly in metallic frustrated magnets.
<urn:uuid:801fb213-b8a0-4354-acde-715bfe9a40bc>
CC-MAIN-2021-10
https://www.inverse.com/innovation/next-gen-computer-materials
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00218.warc.gz
en
0.935141
1,318
3.578125
4
Updated: Jan 23 In today’s information society, data is one of the most valuable resources required by businesses to maintain a competitive advantage over others . Using cyberspace, data is carried across the world and into every corner of our lives. Technological advances and the expansion of cyberspace has brought data security to the forefront as the most critical problem facing the Internet in the future. As a result, businesses, as well as other actors must be able to maintain data secrecy by closely controlling who has access to it. To achieve this, data systems largely utilise cryptography, a method of protecting information through the use of codes and the scrambling of data which is only accessible to someone who can restore it to its original format. In current computer systems, cryptography is a strong inexpensive method of securing data, however, developments in the field of quantum computing threatens this but also provides an opportunity for the unconditional security offered by quantum cryptography to the Internet and data security as ever-increasing challenges arise in the future. To explain the role that quantum cryptography could play in the near future it’s best to start with role of cryptography and how quantum computing effects it. As previously mentioned, cryptography disguises data and is only accessible by someone who can restore it to its original form. In a perfect scenario that person is known by the person who encrypted the data, and they are permitting the data recipient to access it. While classic cryptography has its exploitable flaws, , cryptography is still considered to be an effective security measure. However, the development of quantum computers in the last decade threatens to render cryptography as we know it obsolete. At their core, all computers rely on their ability to store and manipulate pieces of information known as bits. These bits are stored in a binary state as zeros and ones. Quantum computers on the other hand, use quantum mechanics to manipulate information as quantum bits, known as qubits. Qubits are binary numbers like bits but have an additional state known as superposition which allows them to represent ones or zeros simultaneously. This state reduces the time it takes data to be analysed on quantum computers. This can be better understood in the following example, in 1997 IMB’s computer, Deep Blue, defeated chess champion Garry Kasparov by examining 200 million moves per second. In that second, a quantum computer would be able to calculate 1 trillion moves. So, how does this effect cryptography and the data it protects? By using quantum computers, encryptions that were previously thought to be unbreakable due to the time it would take to achieve it were cracked and proved to no longer be reliable in protecting data. In essence, quantum computers have changed the landscape of data security. The development of quantum computers and the skill they’ve demonstrated in cracking classic cryptography highlights the need for new cryptosystems which can ensure the information security of cyberspace. By using quantum computers to develop encryptions there is an increased level of information security as well as additional advantages. Firstly, it offers unconditional security. In classic cryptography two kinds of cryptosystems can be used, asymmetric key cryptosystems and symmetric key cryptosystems. Both systems encrypt data and require users to decipher the encryption using a decryption key. These key systems can resist brute force attacks that from normal computers but not from quantum computers. But, if the key system in question was generated using a quantum computer it cannot be broken. This is because of a principle of quantum mechanics called the principle of uncertainty which states that a particle’s position cannot be determined and can exist in different places with different probabilities. By using this principle, keys can be randomly generated and shared between the data sender and the recipient. But what if communication between the two is being monitored by a third person? This risk is mitigated by quantum cryptography’s second advantage, it provides sniffing detection. If information is exchanged in a public channel it is possible for an attacker to eavesdrop on that channel without detection. However, through quantum communication this isn’t possible. By using the quantum no-cloning theory any eavesdropper would be detected. This theory explains that it is impossible to replicate an identical quantum state in another system which guarantees that any attacker who attempts to delete or damage quantum information will leave a trace. These characteristics of quantum computing and cryptography, unconditional security and sniffing detection ensures data security in cyberspace unlike classic cryptography. It is clear that quantum computers are somewhat of a double-edged sword that threatens current data encryption while being its only saving grace. While the field of quantum cryptography has made substantial advances in the last decade it still faces challenges before its widespread implementation, including the need to develop more advanced hardware which would enable higher quality and longer transmission distances for quantum key exchange. However, developments in computer processing power coupled with the threat of obsolescence facing classic cryptography systems prove to be the force propelling the research and development of quantum cryptography. This technology has the potential to contribute significantly to security on a personal, commercial, and state level even if it only reaches a fraction of its expectations.
<urn:uuid:508094c0-2b61-44ee-8134-b069b5bad9a5>
CC-MAIN-2021-10
https://www.centuria-sa.org/post/quantum-cryptography-and-how-it-effects-data-security
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00459.warc.gz
en
0.950961
1,036
3.6875
4
As electronic devices using conventional materials reach their limits, research focus has shifted to the development of exotic materials with properties that can make electronic devices more efficient, lightweight, flexible, cost-effective and smart. Take a look at some promising candidates. Most of us assume that smartphones and laptops will keep getting faster and better. But that progress could come to an end in about a decade. That’s when engineers will hit the limits of cramming atom-scale circuitry onto conventional silicon chips, the brains behind every computing device today. Fortunately, chip market leaders have plenty of ideas to get around that impasse. Their plans begin with refinements to today’s technology and grow steadily more exotic. Companies are investing big in exotic forms of carbon as a way to recraft chips. Graphene, for example, is a sheet of carbon atoms just a single atomic layer thick, arranged in a hexagonal array that looks like a chickenwire fencing. Another is carbon nanotubes, which are like tiny straws made from rolled up graphene sheets. Both forms of carbon could help push miniaturisation further than what’s possible with conventional silicon. And processors could get faster even if they don’t get smaller—a big selling point. Nanotubes could become transistor building blocks, although placing them precisely is a big challenge. Researchers also envision tiny transistors made using graphene, but graphene-based chips will pose challenges. The material conducts electrical current well but doesn’t mirror silicon’s semiconductor properties. One way to keep pushing progress will involve elements drawn from other columns to either side of the group IV column—thus the term III-V materials, pronounced simply ‘three-five.’ With III-V materials, chip manufacturing stays the same but silicon will get new elements layered on top. That will help electrons flow faster, which means less voltage will be needed to get them moving. If the chips need less power, transistors can be smaller and switch faster. Researchers are creating and investigating artificial and unconventional materials with unusual electronic and magnetic properties like superconductors that transport electricity with zero losses, and very thin materials (just two or three atoms thick) that could be incorporated into transistors. The novelty of such materials makes it nearly impossible to anticipate everything that they can do. A researcher can make educated guesses about various properties, but end up seeing something entirely different. A deeper understanding of the material opens the possibility that engineers would be able to route electric currents in quantum computers much like the way they do in conventional electronics through silicon. However, creating high-quality topological insulator materials is a challenge. Since the useful properties occur on the surface, nanoscale ribbons and plates would be ideal to work with because of their large surface area. British researchers won the 2016 Nobel Prize in Physics for their theoretical explanations of strange states (topological phases) of matter in two-dimensional materials. Their work laid the foundation for predicting and explaining bizarre behaviours that experimentalists discovered at the surfaces of materials, and inside extremely thin layers. These include superconductivity—the ability to conduct electricity without resistance—and magnetism in very thin materials. Physicists are now exploring similar states of matter for potential use in a new generation of electronics including quantum computers. And the theories pioneered by the Nobel winners have been extended to develop exciting materials such as topological insulators. Topological insulators are a class of solids that conduct electricity like a metal across their surface but at the same time block the current’s flow like a rubber through their interior. Theoretical physicists predicted their existence in 2006 and experimentalists demonstrated the first such material in 2008. Engineers find a few traits of topological insulators especially exciting. One is that the electrons move in a direction determined by their spin—a quantum-mechanical property that forms the basis of magnetic data storage. Engineers hope to exploit the spin-motion connection to make superfast hard drives. Topological insulators open the door to tailoring topological electronic properties by stacking different thin sheets, or 2D materials. These exotic 2D materials could be used as a platform for energy-efficient computing (spintronics) and to solve today’s intractable challenges with quantum computing. Candidate materials for topological insulators Like graphene, the semi-metal tungsten ditelluride (WTe2) can be prepared in a single monolayer. Tellurium atoms sandwich the transition metal tungsten in each layer. These sandwiched transition metal materials are important for future electronics and photonics. Scientists have predicted that WTe2 in monolayer form has the exotic electronic properties of topological insulators. However, the surface of WTe2 oxidises in air, destroying the electronic properties. Now, researchers have made devices from WTe2 down to a single layer thick, which are air-stable and have good electrical contacts. Surprisingly, they found that in the case of a single layer, the sheet became insulating at liquid nitrogen temperatures when no gate voltage was applied. For large-enough positive or negative contact voltages, the electrical current switched on, as in a transistor. This content was originally published here.
<urn:uuid:902ce81d-2175-4e21-abe5-1eff2586b443>
CC-MAIN-2021-10
https://www.smpstroubleshooting.com/electronics-of-exotic-materials/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00100.warc.gz
en
0.931054
1,075
3.578125
4
Configuration of the security protocol: One device (center) produces the encryption key coded in the form of entangled pairs of light particles which are then transferred to the two communicating devices (Alice and Bob). Coding information in pairs of particles ensures security, as there is no third particle that can be intercepted by an “eavesdropper.” (Illustration: Department of Physics, University of Basel) How can we protect communications against “eavesdropping” if we don’t trust the devices used in the process? This is one of the main questions in quantum cryptography research. Researchers at the University of Basel and ETH Zurich have succeeded in laying the theoretical groundwork for a communication protocol that guarantees one hundred percent privacy. Hackers in possession of quantum computers represent a serious threat to today’s cryptosystems. Researchers are therefore working on new encryption methods based on the principles of quantum mechanics. However, current encryption protocols assume that the communicating devices are known, trustworthy entities. But what if this is not the case and the devices leave a back door open for eavesdropping attacks? A team of physicists led by Professor Nicolas Sangouard of the University of Basel and Professor Renato Renner of ETH Zurich have developed the theoretical foundations for a communication protocol that offers ultimate privacy protection and can be implemented experimentally. This protocol guarantees security not only against hackers with quantum computers, but also in cases where the devices used for communication are “black boxes” whose trustworthiness is a completely unknown quality. They published their results in the journal Physical Review Letters and have applied for a patent. Diluting information with noise While there are already some theoretical proposals for communication protocols with black boxes, there was one obstacle to their experimental implementation: the devices used had to be highly efficient in detecting information about the crypto key. If too many of the information units (in the form of entangled pairs of light particles) remained undetected, it was impossible to know whether they had been intercepted by a third party. The new protocol overcomes this hurdle with a trick – the researchers add artificial noise to the actual information about the crypto key. Even if many of the information units are undetected, an “eavesdropper” receives so little real information about the crypto key that the security of the protocol remains guaranteed. In this way, the researchers lowered the requirement on the detection efficiency of the devices. “Since the first small-scale quantum computers are now available, we urgently need new solutions for protecting privacy,” says Professor Sangouard. “Our work represents a significant step toward the next milestone in secure communications.” The Latest Updates from Bing News & Google News Go deeper with Bing News on: - Spirent Communications announces acquisition of octoScope to establish it as 'market leader' in the wi-fi spaceon March 4, 2021 at 11:40 pm StockMarketWire.com - Technology company Spirent Communications has acquired US-based technology company octoScope in a deal that will establish it as the 'wi-fi test leader'. The company has bought ... - Synzi Obtains Advanced Healthcare Security Measures for Patient Communications through Covax Dataon March 4, 2021 at 10:12 pm Covax Data, Inc., a visionary cybersecurity SaaS provider, today announced a data security services relationship with Synzi, LLC. - IRS Tax-Exempt Arm To Launch Secure Messaging In Summeron March 4, 2021 at 6:10 pm The Internal Revenue Service's Tax Exempt and Government Entities Division plans to roll out a secure messaging program for electronic communication with taxpayers in the summer, an acting director in ... - Cutting off stealthy interlopers: a framework for secure cyber-physical systemson March 4, 2021 at 7:26 am Cyber-physical systems (CPS), which combine modern networking with physical actuators, can be vulnerable against hackers. Recently, researchers at DGIST developed a new framework for CPSs that is ... - Globex Data CEO Interviewed on Proactive to Announce Its Multi-Currency Platform for Its Global Launch of Sekur.Com Communication Platformon March 3, 2021 at 10:00 pm Ghiai joined Steve Darling from Proactive to share news the company is launching a multi-currency platform for its secure communications solution platform Sekur in anticipation of a global mass-market ... Go deeper with Google Headlines on: Go deeper with Bing News on: - Global Quantum Computing Market (2021 to 2025) - Featuring Atos, Alphabet and Honeywell International Among Others - ResearchAndMarkets.comon March 4, 2021 at 7:58 am The "Global Quantum Computing Market 2021-2025" report has been added to ResearchAndMarkets.com's offering. The publisher has been monitoring the quantum computing market and it is poised to grow by ... - New Optical Antennas Could Overcome Data Limitson March 3, 2021 at 11:54 pm Researchers at Berkeley Lab and UC Berkeley have harnessed the properties of lightwaves that radically increase the amount of data they carry. - Quantum Cryptography Market Size to Record 38.2% CAGR During 2020-2027on March 3, 2021 at 3:46 am Selbyville, Delaware, Market Study Report LLC recently added a new title on 2020-2027 Global Quantum Cryptography Market Report from its database. The report provides study with in-depth overview, ... - Quantum Cryptography Market to Witness Massive Growth by 2025 | IBM, ID Quantique, QuintessenceLabson March 3, 2021 at 1:34 am A new business intelligence report released by Advance Market Analytics with title Global Quantum Cryptography Market Insights Forecast to 2025 This report provides a detailed overview of key factors ... - An approach for security evaluation and certification of a complete quantum communication systemon March 2, 2021 at 4:00 pm Although quantum communication systems are being deployed on a global scale, their realistic security certification is not yet available. Here we present a security evaluation and improvement protocol ...
<urn:uuid:085741b3-51c1-4940-93c7-a3eee693c25a>
CC-MAIN-2021-10
https://innovationtoronto.com/2020/06/completely-secure-communications-by-adding-noise/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00342.warc.gz
en
0.907743
1,253
3.84375
4
While quantum computers can do interesting things without dedicated memory, memory would provide a lot of flexibility in terms of the sorts of algorithms they could run and how quantum systems can interact with each other and the outside world. Building quantum memory is extremely challenging, as reading to and writing from it both have to be extremely efficient and accurate, and the memory has to do something that's very atypical of quantum systems: hold on to its state for an appreciable length of time. If we solve the problems, however, quantum memory offers some rather unusual properties. The process of writing to quantum memory is very similar to the process for quantum teleportation, meaning the memory can potentially be transmitted between different computing facilities. And since the storage device is a quantum object, there's the possibility that two qubits of memory in different locations can be entangled, essentially de-localizing the qubit's value and spreading it between two facilities. In a demonstration of that promise, Chinese researchers have entangled quantum memory at facilities over 20 kilometers apart. Separately, they have also done the entanglement with photons that have traveled through 50 kilometers of optical cable. But the process of transmitting and entangling comes with an unfortunate side-effect: it takes so long that the memory typically loses its coherence in the meantime. The basic outlines of the experiment are pretty straightforward for a process that's somewhat mind-bending. The qubits being used here are small clouds of cold atoms (about a hundred million atoms for each). They are placed in a state where the atoms are indistinguishable from a quantum perspective and thus can be treated as a single quantum object. Because a quantum state will be distributed across all the atoms simultaneously, this provides a bit more stability than other forms of quantum memory. The atom cloud's state is read and written using photons, and the atoms are placed in an optical cavity that traps these photons. This ensures that the photons have many opportunities to interact with the atom cloud, increasing the efficiency of operations. When the memory's state is set by a write photon, the atomic collective emits a second photon that indicates the success. The polarization of this photon contains information regarding the state of the atoms, so it serves as a tool for entangling the memory. Unfortunately, that photon is at a wavelength that isn't very useful, in that it tends to get lost during transmission. So the researchers sacrificed a bit of efficiency for a lot of utility. They used a device that shifts the wavelength of the photons from the near infrared to the wavelengths used in standard communications fibers. About 30 percent of the photons were lost, but the remaining ones can be transmitted at high-efficiency across existing fiber networks (provided the right hardware is put in place where the fiber ends). There are losses from filtering noise and getting photons into the fiber, but the entire process is over 30-percent efficient, end to end. In this case, the two ends were 11km apart, at the University of Science and Technology of China and the Hefei Software Park. For the entanglement, the authors created two qubits of quantum memory, generated photons from both, and sent those photons down separate cables to the Software Park. There, the photons were sent through a device that made them impossible to distinguish, entangling them. Since they, in turn, were entangled with the quantum memory that produced them, the two qubits of memory were then entangled. While they resided in the same lab, the geometry of the fibers could have been arbitrary—it was equivalent to entangling two bits of memory that were 22km apart. That's a big step up from the previous record of 1.4km. To stretch things out a bit, the researchers then turned to a long spool of cable. Two photons were sent down the cable and then manipulated so that it was impossible to determine which path they took through the cable. This again entangled them, and thus the memories that emitted the photons in the first place. The process required that the phase of the incoming photons be tracked, which is notably more difficult, and therefore dropped the overall efficiency. For a 50km-long fiber path, this led to some rather low efficiencies, on the order of 10-4. Which means the time to achieve entanglement went up—in this case to over half a second. And that's a problem, because the typical lifetime of a qubit stored in this memory is 70 microseconds, much shorter than the entanglement process. So the approach definitely falls into the "not quite ready for production" category. And that's unfortunate because the approach opens up a host of very intriguing possibilities. One is that spreading a qubit across two facilities through this delocalization could enable a single quantum calculation to be performed at remote facilities—possibly ones employing different hardware that have distinct strengths and weaknesses. And the researchers note that there's a technique called entanglement swapping that could extend the distance between memory qubits even further—provided the qubits hold on to their state. But if all of these involve some amount of error, that error will quickly pile up and make the whole thing useless. None of this should undercut the achievement demonstrated here, but it does show how far we still have to go. The inefficiencies popping up at every step of the process each represent a distinct engineering and/or physics challenge we have to tackle before any of this can be applicable to the real world.
<urn:uuid:e9e3cbbd-cf96-4c3f-853b-d6cab6fb66be>
CC-MAIN-2021-10
https://arstechnica.com/science/2020/02/researchers-entangle-quantum-memory-using-standard-optical-fiber/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00104.warc.gz
en
0.956399
1,106
3.765625
4
Quantum computing: the most transformational tech of all What is quantum computing? And what makes quantum computing applications different from ‘classical’ digital computing? As you’re probably already aware, conventional systems use a binary computer code, represented as 1 or 0. This is based on transistors that can only store information in two electrical states, On or Off. These are the binary digits, or ‘bits’, of conventional computing. It’s these binary bits that limit the kind of task regular computers can perform, and the speed at which they can do those tasks. Quantum computing is based around the strange qualities and behaviour of subatomic particles. The quantum meaning of entanglement and superposition Defying previously accepted laws of the physical world, subatomic particles can exist in two places, or two states, at the same time. This is called ‘superposition’. Even distantly separated particles can share information instantaneously, faster than the speed of light. This is ‘entanglement’. This means that, unlike bits, qubits – the basis for quantum computers – can exist in multiple states simultaneously. Transcending the 1 or 0 binary limitation, they have the potential to process exponential amounts of information. Qubits – going beyond binary A quantum machine with just a couple of qubits can process as much information as a classical 512-bit computer. Due to the exponential nature of the platform, the dynamic changes very quickly. Assuming perfect stability, 300 qubits could represent more data values than there are atoms in the observable universe. This opens the opportunity to solve highly complex problems that are well beyond the reach of any conventional computer. What is quantum computing used for presently? In 2016, IBM made a quantum computer available to the public by connecting it to the cloud, enabling outside researchers and developers to explore its possibilities. And in September 2019, IBM’s Quantum Computation Center opened. This comprises a fleet of 15 systems including the most advanced quantum computer yet available for external use. Despite these progressive steps, it’s still generally accepted that the most important quantum applications are years away. One reason is the fickleness of subatomic matter. As qubits are extremely delicate, even a small disturbance knocks particles out of a quantum state. That’s why quantum computers are kept at temperatures slightly above absolute zero, colder than outer space, since matter becomes effectively more stable the colder it gets. Even at that temperature, qubit particles typically remain in superposition for only fractions of a second. Figuring out how to keep qubits in a prolonged state of superpostition is a major challenge that scientists still need to overcome. The search is on for ‘logical qubits’ that can maintain the essential quantum state for longer. The path to fulfilling Quantum’s promise How will the arrival of the Quantum Age impact the number, categories and quality of jobs in the decades to come? Although it’s not possible right now to predict just how big an industry quantum computing will eventually be, the industry is already suffering from a major skills gap, leaving quantum computing companies struggling to find qualified recruits. The practical training of the sort made possible by IBM’s increasingly large collaborative effort, the Q Network, will be crucial to a long-term solution. This is why IBM’s previously mentioned Quantum Computation Center offers IBM clients, academic institutions, and more than 200,000 registered users access to this cutting-edge technology. A similarly innovative-minded community is rapidly growing around Qiskit, IBM’s open-source development platform for quantum computing. Educational tools such as the ‘Coding With Qiskit’ video series has already generated more than 1.5 million impressions, as well as over 10,000 hours of content consumed by users. There are also open source textbooks, written by experts in the field including several from IBM Research, as well as professors who have utilised some of the material in their own university courses. IBM Q Network partners include ExxonMobil, Daimler, JPMorgan Chase, Anthem, Delta Airlines, Los Alamos National Laboratory, Oak Ridge National Laboratory, Georgia Tech University, Keio University, Stanford University’s Q-Farm program, and Mitsubishi Chemical, among dozens of others. Last year IBM announced partnerships with the University of Tokyo and the German research company Fraunhofer-Gesellschaft, greatly expanding the company’s already broad network of quantum researchers globally. Through these efforts, IBM and others are exploring the ways quantum computing can address their most complicated problems, while training a workforce to use this technology. Quantum computing applications Once the challenges facing the full introduction of quantum computing are met, what kind of problems can we expect quantum computers to solve? Some promising applications stand out. Explore more in this video from Katie Pizzolato, Director at IBM Quantum Partners Research. Along with hyper-accurate long-term weather forecasting, new synthetic carbon-capturing materials could help reverse climate change caused by fossil fuels. By observing the way each carbon atom’s eight orbiting electrons might interact with the electrons of an almost infinite variety of other molecules, researchers hope to discover the optimum combination for binding carbon. Long-lasting batteries to store green energy Quantum computing could be utilised to effectively peer inside a batteries’ chemical reactions, leading to a better understanding of the materials and reactions that result in a more effective electrical storage. New insights into chemistry Due to the infinitely complex ways in which atoms interact with each other, almost all chemistry breakthroughs have come about through accident, intuition, or exhausting numbers of experiments. Quantum computing could make this work faster and more methodical, leading to new discoveries in energy, materials, life-saving drugs, and other fields. When balancing portfolios and pricing options, the processing of a large number of continually changing variables is complex and time intensive. Quantum computing should enable the required calculations to be performed in a matter of minutes, meaning derivatives could be bought and sold in near real time. It may all read like an ambitious wish list. But many scientists predict that the emerging era of quantum computing could lead to breakthroughs like these, while also tackling other major problems that are beyond the reach of current computing. Keeping tabs on quantum computing news Quantum computing is not a new idea. But it’s only been in recent years that a workable technology has begun to catch up with the theory. According to Gartner, “by 2023, 20% of organisations will be budgeting for quantum computing projects, up from less than 1% in 2018.” Would you like to keep up with the very latest developments in quantum computing news? The history of computing tells us that creative people around the world will find uses for these systems that no one could have predicted.
<urn:uuid:d85d3f8b-1203-478e-a9ca-a33b5c8d9d5f>
CC-MAIN-2021-10
https://www.themsphub.com/content/quantum-computing-the-most-transformational-tech-of-all/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00544.warc.gz
en
0.91725
1,458
3.875
4
A serious obstacle to evolutionary theory is the interdependent relationships between living things, called symbiosis, in which completely different forms of life depend on each other to exist. Darwin’s theory of biological change was based on competition, or survival of the fittest, among the individuals making up a species. He admitted: ‘If it could be proved that any part of the structure of any one species had been formed for the exclusive good of another species, it would annihilate my theory, for such could not have been produced through natural selection’. Symbiogenesis—the emergence of a new species through the evolutionary interdependence of two or more species—is at least as important in the history of life as survival of the fittest. Mutualism, an interaction between different species that is beneficial for all actors, is widespread throughout nature. To a large extent, mutualism has shaped, and is still shaping, life on this planet. In fact, life as we know it would not have existed without mutualistic relationships: all eukaryotic life is based on ancient endosymbiotic mutualisms between its cells and formerly independent microorganisms (e.g. mitochondria, plasmids). Other mutualisms are known to have major impact on ecosystem stability, such as specialized interactions between flowering plants and their pollinators, or seed dispersal by birds, mammals and other animals. The mutualistic relationship between humans and their agricultural crops and domesticated animals was key to the dominant role our species is now playing on our planet So, mutualistic symbiosis is a widespread phenomenon in nature. Humans have evolved to adapt our behavior to the context in which we live. However, by becoming able to change the environment to better suit our needs, humankind went a step further than simple adaptation. As a result, in the coming decades we will see that for the first time, artefacts that we have created will start to adapt themselves and their behavior based on their ecological context. In short, we will be part of their context. Hence, starting in the next decade and even more so in the further future, we will live in a dynamically changing world where we will be responding to the behavior of machines, machines will be responding to our behavior in a continuously changing fabric, and it will become progressively more difficult to distinguish cause and effect between man and machine. From symbiotic relationship to emergence of new entities: the establishment of a symbiotic relationship among (autonomous) systems as well as between them and humans. There is yet another aspect of these trends that will become apparent over the next decade. The interaction of several systems, each one independent from the others but operating in a symbiotic relationship with the others—humans included—will give rise to emergent entities that do not exist today. As an example, cities are the result of the interplay of several systems, including its citizens as a whole, as well as individuals. We can design individual systems and even attempt to design a centralized control system for a complex set of systems, such as a city. However, a city cannot be designed in a top down way, as we would do with even a very complicated system such as a manufacturing plant where everything is controlled. Just the simple fact that a city does not exist without its citizens and the impossibility of dealing or controlling each single citizen, as we would control a cog in a manufacturing plant, shows that conventional design approaches will not succeed. This emergence of novel abstract (although very concrete) entities created by these complex interactions is probably the most momentous change we are going to face in the coming decades. To steer these trends in a direction that can maximize their usefulness and minimize their drawbacks requires novel approaches in design, control, and communications that for the first time will place our tools on the same level as ourselves. The symbioses of artefacts with humans will move by little steps and has already begun. Once artefacts and systems have an autonomous intelligence they will also probably have seamless interaction capabilities that will enhance their local intelligence by making use of other entities’ intelligence. Where the sharing of intelligence will be designed, in opportunistic dynamic symbioses with other entities’ intelligence. We are already cooperating with machines. Over the coming years this cooperation will become more and more seamless to the point that we might not even perceive it; we will take it for granted. The next step is machines becoming aware (including aware of our presence and capabilities) and adapting their operation to the overall ambient. Some implants will become much smarter than today, adapting in a seamless way to the body, and conversely the body will adapt seamlessly to the implant. In the fourth decade we can expect this mutual adaptation, relying on seamless interfaces and low latency communications, to broaden beyond implants to components in an ambient that will operate in a symbiotic relationship. Intelligence will become a distributed capability giving rise to an emergent symbiotic intelligence. We are now entering into in a new era of intelligent and super-intelligent machines. No doubt, the new ear will be driven by artificial intelligence, Internet of Things, Quantum computing, Drone, Blockchain and nanotechnologies. Artificial Mutualistic symbiosis, our next evolutionary step?
<urn:uuid:5fb56b41-f68c-4c35-83c6-0aa1229f0d6c>
CC-MAIN-2021-10
http://www.arievoorburg.com/index.php/working-together/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363782.40/warc/CC-MAIN-20210302065019-20210302095019-00067.warc.gz
en
0.964045
1,059
3.875
4
Researchers have created a superconducting nanowire that would allow reliable, easy-to-use electronics. The progress could improve quantum computing, as well as magnetic sensors for brain imaging and telescope applications. Superconductors – materials that conduct electricity without resistance – are exceptional. They offer a macroscopic insight at quantum phenomena, which are typically only detectable at the atomic level. Beyond their physical peculiarity, superconductors are useful as well. They are used in medical imaging, quantum computers, and telescope cameras. New technology could boost superconducting electronics. The advance could boost quantum computing, as well as magnetic sensors for applications in brain imaging and telescopes. But the superconducting systems can be finicky. They are also costly to produce and susceptible to err from ambient noise. This may improve, thanks to studies by Karl Berggren’s group in the Department of Electrical Engineering and Computer Science. Researchers are designing a superconducting nanowire that would make more powerful superconducting electronics. The future advantages of nanowires are extracted from their simplicity, Berggren says. “At the end of the day, it’s just a wire.” Berggren will present a summary of the research at this month’s IEEE Solid-state Circuits Conference. Resistance is futile Many metals lose resistance and become superconductive at very low temperatures, typically only a few degrees above absolute zero. They are used to detect magnetic fields, particularly in highly sensitive contexts such as brain activity monitoring. They still have uses for both quantum and classical computation. Many of these superconductors are based on a system developed in the 1960s called the Josephson junction-essentially two superconductors separated by a thin insulator. “That’s what led to conventional superconducting electronics, and then ultimately to the superconducting quantum computer,” Berggren says. However, the junction of Josephson “is fundamentally quite a delicate object,” Berggren continues. This translates directly into the expense and sophistication of the manufacturing process, particularly for thin insulation later on. Josephson’s junction-based superconductors can also not play well with others: “If you try to interface it with conventional electronics, like the kinds in our phones or computers, the noise from those just swamps the Josephson junction. So, this lack of ability to control larger-scale objects is a real disadvantage when you’re trying to interact with the outside world.” To overcome these disadvantages, Berggren is developing a new technology – the superconducting nanowire – with roots older than the Josephson junction itself. In 1956, MIT electrical engineer Dudley Buck published a description of a superconductive device switch called the cryotron. The system was nothing more than two superconducting wires: one was parallel, and the other was coiled around. The cryotron functions as a lever, and when the current flows through the coiled wire, the magnetic field decreases the current flowing through the straight wire. At the time, the cryotron was significantly simpler than other types of electronic switches, such as vacuum tubes or transistors, and Buck felt that the cryotron could become the building block of computers. But in 1959, Buck died unexpectedly at the age of 32, delaying the production of the cryotron. (Since then, transistors have been scaled to microscopic sizes and are now the central logic elements of computers.) Now, Berggren is reawakening Buck’s theories about superconducting device switches. “The devices we’re making are very much like cryotrons in that they don’t require Josephson junctions,” he says. In tribute to Buck, he named his superconducting nanowire technology a nano-cryotron-though it functions a little differently from the original cryotron. The nano-cryotron uses heat to activate a transition instead of a magnetic field. In Berggren’s unit, the current passes through a superconducting, supercooled wire called the “channel.” The channel is intersected by an even smaller wire called the “choke”—like a multi-lane highway intersected by a side path. When the current is sent through the shock, the superconductivity breaks down and heats up. Once the heat travels from the choke to the main channel, it also allows the main channel to lose its superconducting state. Berggren’s group has already shown proof of concept for the use of nano-cryotrons as an electronic component. Adam McCaughan, a former Berggren student, has created a system that uses nano-cryotrons to add binary digits. And Berggren has successfully used nano-cryotrons as an interface between superconducting instruments and traditional, transistor-based electronics. Berggren says that his group’s superconducting nanowire could one day supplement or compete with Josephson’s junction-based superconducting systems. “Wires are relatively easy to make, so it may have some advantages in terms of manufacturability,” he says. He thinks that the nano-cryotron will one day find a home in superconducting quantum computers and supercooled telescope electronics. Wires have low power dissipation, but they can also be handy for energy-hungry applications, he notes. “It’s probably not going to replace the transistors in your phone, but if it could replace the transistor in a server farm or data center? That would be a huge impact.” Beyond special uses, Berggren takes a general view of his work on superconducting nanowires. “We’re doing fundamental research, here. While we’re interested in applications, we’re just also interested in: What are some different kinds of ways to do computing? As a society, we’ve really focused on semiconductors and transistors. But we want to know what else might be out there.”
<urn:uuid:b3ac72fc-8b8d-4835-a00a-cb635eccb34e>
CC-MAIN-2021-10
https://qsstudy.com/technology/nanowire-could-boost-constant-quantum-computers-and-superconducting-transistor
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00147.warc.gz
en
0.943588
1,281
4.125
4
From the moment that it was discovered that the macroscopic, classical rules that governed electricity, magnetism and light didn’t necessarily apply to the smallest, subatomic scales, a whole new view of the Universe became accessible to humanity. This quantum picture is much larger and all-encompassing than most people realize, including many professionals. Here are ten essentials of quantum mechanics that may cause you to re-examine how you picture our Universe, on the smallest scales and beyond. - Everything is quantum. It’s not like some things are quantum mechanical and others are not. Everything obeys the same laws of quantum mechanics – it’s just that quantum effects of large objects are very hard to notice. This is why quantum mechanics was a latecomer to the development of theoretical physics: it wasn’t until physicists had to explain why electrons sit on shells around the atomic nucleus that quantum mechanics became necessary to make accurate predictions. - Quantization doesn’t necessarily imply discreteness. “Quanta” are discrete chunks, by definition, but not everything becomes chunky or indivisible on short scales. Electromagnetic waves are made of quanta called “photons,” so the waves can be thought of as being discretized. And electron shells around the atomic nucleus can only have certain discrete radii. But other particle properties do not become discrete even in a quantum theory. The position of electrons in the conducting band of a metal for example is not discrete – the electron can occupy any continuous location within the band. And the energy values of the photons that make up electromagnetic waves are not discrete either. For this reason, quantizing gravity – should we finally succeed at it – does not necessarily mean that space and time have to be made discrete. (But, on the other hand, they might be.) - Entanglement not the same as superposition. A quantum superposition is the ability of a system to be in two different states at the same time, and yet, when measured, one always finds a particular state, never a superposition. Entanglement on the other hand is a correlation between two or more parts of a system – something entirely different. Superpositions are not fundamental: whether a state is or isn’t a superposition depends on what you want to measure. A state can for example be in a superposition of positions and not in a superposition of momenta – so the whole concept is ambiguous. Entanglement on the other hand is unambiguous: it is an intrinsic property of each system and the so-far best known measure of a system’s quantum-ness. (For more details, read “What is the difference between entanglement and superposition?”) - There is no spooky action at a distance. Nowhere in quantum mechanics is information ever transmitted non-locally, so that it jumps over a stretch of space without having to go through all places in between. Entanglement is itself non-local, but it doesn’t do any action – it is a correlation that is not connected to non-local transfer of information or any other observable. When you see a study where two entangled photons are separated by a great distance and then the spin of each one is measured, there is no information being transferred faster than the speed of light. In fact, if you attempt to bring the results of two observations together (which is information transmission), that information can only travel at the speed of light, no faster! What constitutes “information” was a great source confusion in the early days of quantum mechanics, but we know today that the theory can be made perfectly compatible with Einstein’s theory of Special Relativity in which information cannot be transferred faster than the speed of light. - Quantum physics an active research area. It’s not like quantum mechanics is yesterday’s news. True, the theory originated more than a century ago. But many aspects of it became testable only with modern technology. Quantum optics, quantum information, quantum computing, quantum cryptography, quantum thermodynamics, and quantum metrology are all recently formed and presently very active research areas. With the new capabilities brought about by these technologies, interest in the foundations of quantum mechanics has been reignited. - Einstein didn’t deny it. Contrary to popular opinion, Einstein was not a quantum mechanics denier. He couldn’t possibly be – the theory was so successful early on that no serious scientist could dismiss it. (In fact, it was his Nobel-winning discovery of the photoelectric effect, proving that photons acted as particles as well as waves, that was one of the foundational discoveries of quantum mechanics.) Einstein instead argued that the theory was incomplete, and believed the inherent randomness of quantum processes must have a deeper explanation. It was not that he thought the randomness was wrong, he just thought that this wasn’t the end of the story. For an excellent clarification of Einstein’s views on quantum mechanics, I recommend George Musser’s article “What Einstein Really Thought about Quantum Mechanics” (paywalled, sorry). - It’s all about uncertainty. The central postulate of quantum mechanics is that there are pairs of observables that cannot simultaneously be measured, like for example the position and momentum of a particle. These pairs are called “conjugate variables,” and the impossibility to measure both their values precisely is what makes all the difference between a quantized and a non-quantized theory. In quantum mechanics, this uncertainty is fundamental, not due to experimental shortcomings. One of the most bizarre manifestations of this is the uncertainty between energy and time, which means that unstable particles (with a short lifetime) have inherently uncertain masses, thanks to Einstein’s E=mc2. Particles like the Higgs boson, the W-and-Z bosons and the top quarks all have masses that are intrinsically uncertain by 1-10% because of their short lifetimes. - Quantum effects are not necessarily small… We do not normally observe quantum effects on long distances because the necessary correlations are very fragile. Treat them carefully enough however, and quantum effects can persist over long distances. Photons have for example been entangled over separations as much as several hundreds of kilometers. In Bose-Einstein condensates, a degenerate state of matter found at cold temperatures, up to several million of atoms have been brought into one coherent quantum state. And finally, some researchers even believe that dark matter may have quantum effects which span across entire galaxies. - …but they dominate the small scales. In quantum mechanics, every particle is also a wave and every wave is also a particle. The effects of quantum mechanics become very pronounced once one observes a particle on distances that are comparable to the associated wavelength. This is why atomic and subatomic physics cannot be understood without quantum mechanics, whereas planetary orbits are effectively unchanged by quantum behavior. - Schrödinger’s cat is dead. Or alive. But not both. It was not well-understood in the early days of quantum mechanics, but the quantum behavior of macroscopic objects decays very rapidly. This “decoherence” is due to constant interactions with the environment which are, in relatively warm and dense places like those necessary for life, impossible to avoid. This explains that what we think of as a measurement doesn’t require a human; simply interacting with the environment counts. It also explains why bringing large objects into superpositions of two different states is therefore extremely difficult and the superposition fades rapidly. The heaviest object that has so far been brought into a superposition of locations is a carbon-60 molecule, while the more ambitious have proposed to do this experiment for viruses or even heavier creatures like bacteria. Thus, the paradox that Schrödinger’s cat once raised – the transfer of a quantum superposition (the decaying atom) to a large object (the cat) – has been resolved. We now understand that while small things like atoms can exist in superpositions for extended amounts of time, a large object would settle extremely rapidly in one particular state. That’s why we never see cats that are both dead and alive. Post written by Sabine is a theoretical physicist specialized in quantum gravity and high energy physics. She also freelance writes about science.
<urn:uuid:f603c08f-d977-4a3e-82ab-ca7a0b72d515>
CC-MAIN-2021-10
https://qubitsnews.com/2016/03/23/quantum-truths-about-our-universe/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00469.warc.gz
en
0.946135
1,737
3.578125
4
Tech giants like IBM, Google, Intel, and numerous startups are racing to develop the new machine that utilizes quantum mechanical phenomena, like superposition and entanglement. Quantum computing will be extremely useful to the next generation of computing and communication technology. However, quantum computing is not going to come easily, and we don’t know anything for sure – when it will arrive and what exactly it will look like. At present, dozens of companies and research institutes are trying to use different techniques to create the most powerful computer world has ever witnessed. It will be able to efficiently solve problems that aren’t possible on existing supercomputers. The development of an actual quantum machine is in its infancy, but tons of experiments have been conducted, in which quantum operations were performed on a small scale [small no. of qubits]. To learn more, you can read all the interesting facts and the latest researches on quantum computing. Below, we’ve listed all advanced quantum processor chips developed so far. Although a fully functional quantum computer is a long term goal, these chips represent major milestones in efforts to the development of future computing technologies. 5. Rigetti 19Q Rigetti 19Q superconducting processor Rigetti Computing develops quantum integrated circuits, and the “Forest” cloud platform to help coders write quantum algorithms. It’s a full-stack company that fabricates quantum chips, builds controlling architecture, and develops algorithms for the processors. Their latest superconducting quantum processor has 19 fully programmable qubits. Recently, they demonstrated unsupervised machine learning using 19Q. They did this with their own classical/quantum hybrid algorithm for clustering. The 19Q chip is currently available as a configurable backend in Forest. You can apply for access. 4. Google Bristlecone Bristlecone processor | Qubits with nearest neighbor connectivity are represented by symbol X Bristlecone is a new quantum processor developed by Google. It’s a gate-based superconducting system that provides a testbed for research related to qubit technology, machine learning, quantum simulation, and optimization. In the next 5 years, Google is intended to achieve something they call “quantum supremacy” and facilitate the development of quantum algorithms on actual hardware. Bristlecone is scaled to a square array of 72 qubits, and it follows the physics of Google’s previous 9 qubits linear array technology. 3. Intel Tangle Lake 49-qubit quantum computing test chip | Tangle Lake In January 2018, Intel announced a 49-qubit superconducting quantum chip, named Tangle Lake. It’s a 3* 3-inch chip that will let scientists improve error correction methods and simulate computational problems. In addition, Intel also unveiled a neuromorphic research chip, named Loihi, which mimics the operations performed in the human brain. The chip is developed with the objective of making deep learning more efficient. Intel is also working on spin qubits in silicon. They are smaller than superconducting quantum bits and thus have a scaling advantage. They have already developed a spin qubit fabrication flow on 300-millimeter process technology. 2. IBM Q IBM Q was launched in 2017 as an initiative to develop commercial quantum computers for science and business. So far they’ve built and tested 2 machines – - 20-qubits superconducting quantum chip - 50-qubits prototype that will be the basis of upcoming IBM Q systems. Compared to previous quantum machines, the 20-qubits processor has nearly twice the coherence time. It has an average of 90 microseconds, whereas the previous generation quantum processor had an average coherence time of 50 microseconds. The system is developed to scale; a 50-qubits prototype yields similar performance. They have also developed the Quantum Information Software Kit (QISkit) open for public use. It allows you to execute quantum circuit-based experimental programs on a quantum circuit simulator running on the Cloud or a laptop. 1. D-Wave 2000Q Image credit: D-Wave In 2017, D-Wave announced 2000Q quantum computer and open-source software, Obsolv, that solves quadratic unconstrained binary optimization problem on both 2000Q and conventional hardware architectures 2000Q is the company’s follow up to the 1000-qubits 2X. The jump from 1000 to 2048-qubits enables researchers to deal with larger quantities of data and more complex problems. According to the company, 2000Q can outperform conventional servers by factors of 1,000 – 10,000. Temporal Defense Systems Inc. purchased 2000Q to solve critical and complex cybersecurity problems. Although they didn’t reveal the price, the machine is valued at $15 million. While D-Wave’s computers are using quantum mechanics for calculations, it is not clear if they will ever be able to solve real-world problems. For now, they are only suitable for solving optimization problems. Considering D-Wave’s pattern of doubling performance every 2 years, the company may release a 4000-qubits quantum machine in 2019.
<urn:uuid:bba09e4c-6e78-42b4-a8f9-74f7d28244b5>
CC-MAIN-2021-10
https://www.rankred.com/quantum-processors/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356140.5/warc/CC-MAIN-20210226030728-20210226060728-00190.warc.gz
en
0.91028
1,076
3.640625
4
There's been a lot of industry buzz lately around quantum and it seems to be progressing at almost immeasurable speeds. While these are exciting times for advancements in this technology, there is still a long way to go before we’ll be able to witness a quantum computer fully optimizing its capabilities. What is Quantum Computing? The Short Version Modern computers use binary digits or bits to calculate and determine solutions. A bit has a value of either a one or zero; representing either a true/false, on/off, yes/no, or +/- scenario. The bit has a defined measurable state. This form of computation works great when there are clear decisions, clear calculations, or clear answers to problems. In certain computing situations where physics, chemistry, or biology is involved, there are uncertain or no clear defined answers. This mixture of these uncertain sciences is referred to as the field of quantum mechanics. Instead of bits, a quantum computer uses quantum bits or qubits. Rather than being just a one or zero, a qubit can be a one, a zero or some unknown state all at the same time. This occurrence is called, “quantum superposition.” Many explain the difference in relation to a coin. If you flip a coin, when it lands, it's either heads or tails like a bit. If you spin a coin it has a chance of being a head or tail, but while its spinning its similar to a “superposition” state. A great way to understand the difference between computing and quantum computing is to use the analogy of solving a maze. If a computer would be tasked with solving a maze it would simply try each and every branch and turn, switching bits between left turns and right turns, eliminating options sequentially until it finds a result. The faster the computer, the faster a result would be found. If a quantum computer were tasked with solving a maze, it would explore all paths of the maze at one time keeping turns / qubits in the superposition state, analyzing all of the data, and solving the maze in one try. In the maze analogy, it may not seem like a significant time savings over a standard computer. However, the larger the maze gets or the larger the problem, the easier it is to understand the benefit of a quantum computer. The ability to solve massive chemistry problems in the medical or energy industries in unique ways at super-fast speeds is game changing. Other highly complicated systems such as weather forecasting, financial market predictions, or cryptography are also potential applications. Quantum Today and Tomorrow Major players in the quantum computing arena appear to be making progress. Both Google and IBM claim to have working 50 qubit quantum computers. Although that is a great achievement, the computers most likely are experiencing very high error rates and are somewhat unpredictable. Technical implementation challenges unique to high data processing speeds and the related power required in quantum computing include electrical interference, heat displacement, and high data rate communication via photonics. These challenges will need to be addressed as this market grows. For quantum computers to operate, the qubits must remain in a stable state, and to be stable means to be very cold. How cold? Try absolute zero or -460 degrees Fahrenheit, which can possibly be achieved with the use of liquid helium in a closed cycle system. With the challenges of interference and thermal management, among others, quantum computing solutions may at first be limited to remote access or a hybrid configuration, where some amount of qubits will be combined with a super computing solution for experimental quantum applications. Quantum Information Science (QIS) News One of the significant events that occurred recently was the implementation of the National Quantum Initiative Act, which the President of the United States of America signed into law, Dec. 21, 2018. The purpose of the Act, as stated in the document, is "To provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States". There are a number of provisions and directives of the Act, but one of particular interest instructs the National Institute of Standards and Technology (NIST) to convene a "consortium" of stakeholders to discuss the measurement, standards, and cyber security needs of the emerging Quantum Information Science (QIS) industry. Which has resulted in the forming of the QED-C. What is QED-C? The Quantum Economic Development Consortium (QED-C) is a consortium of participants focused on enablement and growth of the U.S. quantum industry in computing, communications, and sensing. A diverse set of companies and academic participants are working together to identify challenges in technology, standards, and the workforce, and to address those gaps through collaboration. This is an exciting consortium of great U.S.-based organizations advancing quantum science. Why did Benchmark join? Benchmark has significant expertise in the technologies needed to advance quantum computing. Thermal management, photonics, high speed circuit design, and control electronics are key engineering and manufacturing capabilities within Benchmark’s design and product development services. The benefits and potential of quantum computing are tremendous, and Benchmark has invested in advancing our technologies to help support our customers’ next generation solutions, the consortium, and the computing industry. Quantum computing has many challenges to still overcome and is likely many years away from widespread commercial implementation. Although the development uncertainty is unmistakable, the reality is quantum computing could change the world as we know it today.
<urn:uuid:6bce2a4f-cbbd-4020-ba59-8d1764807ea0>
CC-MAIN-2021-10
https://www.bench.com/test-blog/benchmark-joins-quantum-computing-consortium-0
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00552.warc.gz
en
0.937154
1,118
3.796875
4
Scientists Achieve Direct Counterfactual Quantum Communication For The First Time Communication without particle transmission. Quantum communication is a strange beast, but one of the weirdest proposed forms of it is called counterfactual communication – a type of quantum communication where no particles travel between two recipients. Theoretical physicists have long proposed that such a form of communication would be possible, but now, for the first time, researchers have been able to experimentally achieve it – transferring a black and white bitmap image from one location to another without sending any physical particles. If that sounds a little too out-there for you, don’t worry, this is quantum mechanics, after all. It’s meant to be complicated. But once you break it down, counterfactual quantum communication actually isn’t as bizarre as it sounds. First up, let’s talk about how this differs from regular quantum communication, also known as quantum teleportation, because isn’t that also a form of particle-less information transfer? Well, not quite. Regular quantum teleportation is based on the principle of entanglement – two particles that become inextricably linked so that whatever happens to one will automatically affect the other, no matter how far apart they are. But that form of quantum teleportation still relies on particle transmission in some form or another. The two particles usually need to be together when they’re entangled before being sent to the people on either end of the message (so, they start in one place, and need to be transmitted to another before communication can occur between them). Alternatively, particles can be entangled at a distance, but it usually requires another particle, such as photons (particles of light), to travel between the two. Direct counterfactual quantum communication on the other hands relies on something other than quantum entanglement. Instead, it uses a phenomenon called the quantum Zeno effect. Very simply, the quantum Zeno effect occurs when an unstable quantum system is repeatedly measured. In the quantum world, whenever you look at a system, or measure it, the system changes. And in this case, unstable particles can never decay while they’re being measured (just like the proverbial watched kettle that will never boil), so the quantum Zeno effect creates a system that’s effectively frozen with a very high probability. If you want to delve a little deeper, the video below gives a great explanation: Counterfactual quantum communication is based on this quantum Zeno effect, and is defined as the transfer of a quantum state from one site to another without any quantum or classical particle being transmitted between them. This requires a quantum channel to run between two sites, which means there’s always a small probability that a quantum particle will cross the channel. If that happens, the system is discarded and a new one is set up. To set up such a complex system, researchers from the University of Science and Technology of China placed two single-photon detectors in the output ports of the last of an array of beam splitters. Because of the quantum Zeno effect, the system is frozen in a certain state, so it’s possible to predict which of the detectors would ‘click’ whenever photons passed through. A series of nested interferometers measure the state of the system to make sure it doesn’t change. It works based on the fact that, in the quantum world, all light particles can be fully described by wave functions, rather than as particles. So by embedding messages in light the researchers were able to transmit this message without ever directly sending a particle. The team explains that the basic idea for this set up came from holography technology. “In the 1940s, a new imaging technique – holography – was developed to record not only light intensity but also the phase of light,” the researchers write in the journal Proceedings of the National Academy of Sciences. “One may then pose the question: Can the phase of light itself be used for imaging? The answer is yes.” The basic idea is this – someone wants to send an image to Alice using only light (which acts as a wave, not a particle, in the quantum realm). Alice transfers a single photon to the nested interferometer, where it can be detected by three single-photon detectors: D0, D1, and Df. If D0 or D1 ‘click’, Alice can conclude a logic result of one or zero. If Df clicks, the result is considered inconclusive. “After the communication of all bits, the researchers were able to reassemble the image – a monochrome bitmap of a Chinese knot. Black pixels were defined as logic 0, while white pixels were defined as logic 1 … In the experiment, the phase of light itself became the carrier of information, and the intensity of the light was irrelevant to the experiment.” Not only is this a big step forward for quantum communication, the team explains it’s technology that could also be used for imaging sensitive ancient artefacts that couldn’t surprise direct light shined on them. The results will now need to be verified by external researchers to make sure what the researchers saw was a true example of counterfactual quantum communication. Either way, it’s a pretty cool demonstration of just how bizarre and unexplored the quantum world is. The research has been published in the journal Proceedings of the National Academy of Sciences.
<urn:uuid:e7a5e87a-5290-45c0-8b2d-7de330c0e7dc>
CC-MAIN-2021-10
http://thecosmicview.com/counterfactual-quantum-communication/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00553.warc.gz
en
0.929815
1,143
3.59375
4
As the amount of data in the world is rapidly increasing, so is the time required for machines to process it. Augmented Reality, Virtual Reality, Artificial Intelligence, Robotics, Real-Time Analytics, and Machine Learning algorithms are needing the cloud to be infinitely faster as well as to require unlimited computing power and an endless amount of storage. Interestingly, this is happening on heels of the slow down of Moore’s law. Chip maker Intel has signaled a slowing of Moore’s Law, a technological phenomenon that has played a role in almost every significant advancement in engineering and technology for decades. We are no longer able to cram transistors in the circuits at the velocity we have been doing. By 2025, the needs for traditional compute functionality in the cloud will be so large that it can never be built. Quantum computing’s arrival promises to revolutionize the cloud. What quantum computing provides is massively parallel processing, atomic-level storage, and security using the laws of physics rather than external cryptographic methods. If you have not begun looking at it, the time is now. The cloud will soon be powered by quantum computing, and software will be written in another way. IBM, Microsoft, Google, Intel, D-Wave have made tremendous advances this year. It is now here to push the bounds of computer performance further forward. What is Quantum Computing? Quantum computing is about making use of quantum states of subatomic particles to perform memory and processing tasks. Classical computers switch transistors encode information as bits which represent either a ONE or a ZERO. In contrast, quantum computers use the fundamental building blocks of atoms (such as electrons, protons, and photons) themselves. These subatomic particles spin, so if the spin is in one direction — up, for example — that could be the equivalent of the ONE in a conventional computer, while a particle with a down spin could be a ZERO. As per the laws of quantum physics, it may not be clear whether a particle has an up or a down spin. Or, perhaps something in between. These subatomic particles possess all of those properties all at the same time. This is called superposition. ONE qubit (a new term to refer to a quantum bit unlike classical bit) can exist simultaneously as a ZERO or a ONE. Two qubits can exist simultaneously as the four possible two-bit numbers (00, 01, 10, and 11). These superpositions allow qubits to perform multiple calculations at once rather than in sequence like a traditional machine. For example, you can compute four calculations with two qubits. What quantum computing gives you is massively parallel processing! An understandable example is Grover Search Algorithm. Think about a game where an prize is hidden behind one of four doors and you have to find the prize while opening as few doors as possible. A traditional computer will need to do, on average, a little over two operations to find the prize as it has to open each door in succession. The quantum computer, however, can locate the prize with one action because it can open all four doors at once! You can perform eight calculations with the three qubits. The number of such computations double for each additional qubit, leading to an exponential speed-up. A quantum computer comprised of 500 qubits would have a potential to do 2^500 calculations (much larger than the Shannon Number) in one operation. Top five things you should know about it - We will write programs differently. New programming paradigms and languages, new algorithms, as well as a new way of writing logic! - Quantum computer is “thousands of times” faster than a conventional computer. Google announced it has a Quantum computer that is 100 million times faster than any classical computer in its lab. - Quantum computing revolutionizes the way that we approach machine learning and artificial intelligence. It will accelerate machine learning remarkably. Quantum computers will reduce power consumption anywhere from 100 to 1000 times because quantum computers use quantum tunneling. - Quantum computing will destroy the internet security as it is known today. It would be able to crack several of today’s encryption techniques such as RSA and ECC within days if not hours. In this regards, Quantum computing is like a deja vu of discovering the use of enormous energy locked in an atom. Nuclear fission was found in 1938, nine months before the beginning of the Second World War, and it changed the world. Quantum computing could be the IT equivalent of an Atomic Bomb. We are now in a race against time to prepare modern cryptographic techniques before they get broken. New security methods that will allow us to secure data using the laws of physics rather than using external cryptographic methods. - Quantum computing is not for every problem. Classical computers are still better than Quantum computers at some tasks such as spreadsheets or desktop publishing. However, Quantum computing will be incredibly useful for solving notable chemistry issues, self driving cars coordination, financial modeling, weather forecasting, and particle physics, etc.Have you written your first quantum program yet? In the next few articles, I will go over how how to program using Quantum computing, how to determine which problems are best to be addressed between Quantum computing vs. Classical computing, how it impacts Machine Learning, and how you will develop killer apps such as self-driving car coordination as well as the challenges/solutions in the world where cryptography and quantum computing intersect. Quantum computing revolutionizes the way we approach computer science and logic. A lot of algorithms will need to be redesigned/rewritten using quantum computing paradigms – looking forward to it!PS: Isn’t this a remarkable pic? The heading picture is from the October 1927 Fifth Solvay International Conference on Electrons and Photons, where the world’s most notable physicists met to discuss the newly formulated quantum theory. Seventeen of the 29 attendees were or became Nobel Prize winners!
<urn:uuid:0fe553fb-54c8-448f-95e0-187521312509>
CC-MAIN-2021-10
https://thetechfool.com/satisfaction-lies-in-the-effort/
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00031.warc.gz
en
0.939473
1,204
3.609375
4
(Source: Ozz Design/Shutterstock.com) Quantum technologies are an area, once manifested, that could change the face of many technology-based applications. Although quantum technologies are not quite there yet, scientists have already managed to create devices that can transmit data using quantum networks, albeit for a matter of nanoseconds at low temperatures. Nevertheless, gains are being made—with semiconductors currently leading the way as the fundamental building blocks—and if you look at the huge advances made in classical computing technologies over the past few decades, then quantum technologies might not be as far away as many think. Quantum technology will be valuable for many reasons, especially for anything that uses a computer chip, as it will enable more operations to be performed simultaneously—and at a greater speed than modern-day computers—while providing an extra layer of encryption that is much needed in today’s online world. Behind any quantum technology is the quantum bit—otherwise known as a qubit—and is similar, yet so very different, to a classic computing bit. Qubits are the building blocks of quantum networks, much like classical bits are in classical networks. Classical computing bits—known to many as binary bits– can take one of two forms. These are a 1 and 0. Qubits can also take the form of a 1 or 0, but there is a third form that is not possible with classical bits, and that is a superimposable form that can take the form of either a 1 or a 0. Because the superimposable form can take either form, operations can be performed in both values simultaneously—something not possible with classical networks. It is one of the fundamental reasons why quantum networks will be able to process multiple operations at much higher speeds than classical networks. Figure 1: Qubits, the building blocks of quantum networks, can come in three forms and possess infinite value. (Source: Production Perig/Shutterstock.com) Each qubit can possess an infinite value within each of the three forms. This leads to a continuum of states where each qubit becomes one and indistinguishable from each other. Although the individual qubit uses the spin of electrons and polarization of photons to store data, they can become entangled, which makes them act as a unified system. This means that each quantum network is described and used as a complete system, rather than a series of qubits. Quantum entanglement is an important phenomenon in quantum networks. Electrons, photons, atoms, and molecules can all become entangled in these networks. The entanglement within a quantum network also extends over long distances. When one part of the quantum network is measured, the properties of the corresponding entangled qubit(s) within that specific network can be deduced as a definitive value. This enables many networks to be built up, all of which have different values and properties, but where all the qubits in a single network share the same information. Quantum teleportation is another phenomenon that enables quantum technologies to function, and is similar in nature to quantum entanglement. Quantum teleportation is the process where the data and/or information held in the qubit—which is held there by the electrons spinning up or down, and by polarizing the photons in a vertical or horizontal orientation—is transported from one location to another without transporting the qubit itself. Most qubits become entangled in these networks; however, if doubt exists that they haven’t become entangled, they can be tested using coincidence correlation. Coincidence correlation assumes that an entangled network can only emit one photon at a time. You can use multiple photodetectors to see how many photons are emitted by a single network. If more than one photon is recorded at any one time, then you can assume that the quantum network is not a single-photon system, and therefore not entangled. The materials that make up the qubits are an essential part of establishing a quantum network. The quantum system is formed by manipulating physical materials, so the properties and characteristics of the materials used to build a quantum network is a major consideration. For any material to be considered as the building block of a quantum technology, it needs to possess long-lived spin states, which it can control, and be able to operate parallel qubit networks. Many physical parts also go into designing a quantum network. One of the key features the quantum system requires is an arrangement of interconnected communication lines between each network. Just like in classical computing, these communication lines run between end nodes. These nodes are representative of the information held within an individual quantum network, and this becomes more important for larger and/or complex quantum networks where a lot of different types of information are held within the quantum system. These end nodes can take many forms, although the most popular choices at the moment are: Two other physical components are crucial if a quantum network is to function as it should. These are the communication lines and quantum repeaters. The physical communication lines currently take two main forms, which are fiber-optic networks and free-space networks, and both work differently. Physical communication lines made from fiber-optic cables send a single photon by attenuating a telecommunication laser, and the path of the photon is controlled by a series of interferometers and beam splitters before it is detected and received by a photodetector. Free-space networks, on the other hand, rely on the line of sight between both ends of the communication pathway. As it stands, both can be used over long distances, but free-space networks suffer from less interference, have higher transmission rates, and are faster than fiber-optic networks. The other important component is the repeater, which ensures that the quantum network does not lose its signal or become compromised because of decoherence—which is the loss of information due to environmental noise. It is a straight-forward process in classical networks, because an amplifier simply boosts the signal. For quantum networks, it is much trickier. Quantum networks need to employ a series of trusted repeaters, quantum repeaters, error correctors, and entanglement purifying mechanisms to test the infrastructure, to keep the qubits entangled, to detect any short-range communication errors, and to minimize the degree of decoherence in the network. An extra layer of security can be incorporated into quantum networks through quantum key distribution, which utilizes the principles of quantum mechanics to perform cryptographic operations. This will be a particularly useful tool for when two people are communicating via a quantum network, or data is being transmitted from location to another. The encryption process will utilize randomly polarized photons to transmit a random number sequence. These sequences then act as keys in the cryptographic system. The theory behind these cryptographic systems is that they will use two networks—a classical channel and a quantum channel—between two different communication points, where both channels play specific roles. The classical channel is there to perform classical operations and is a way of seeing if anyone is trying to hack into the network. However, the qubits containing the data will be sent over the quantum channel, which means that the classical system can be hacked, but the hackers will not obtain any information—as no information would exist in that channel. The way that these systems will be able to tell if a network has been hacked is down to the correlation of the signal. Classical networks are highly correlated, and if any imperfections occur between the source and the receiver in the channel, then the system will know if a hack has been attempted. Although the realization of quantum technologies in everyday systems might be a while off yet, the potential is there for these technologies to revolutionize the computing and communication spaces. The ability for quantum networks to become one and be transmitted over long distances has many advantages over classical systems, which include the potential for faster data transmission types, the ability to perform multiple operations simultaneously, and for highly encrypted data communication channels. Liam Critchley is a writer, journalist and communicator who specializes in chemistry and nanotechnology and how fundamental principles at the molecular level can be applied to many different application areas. Liam is perhaps best known for his informative approach and explaining complex scientific topics to both scientists and non-scientists. Liam has over 350 articles published across various scientific areas and industries that crossover with both chemistry and nanotechnology. Liam is Senior Science Communications Officer at the Nanotechnology Industries Association (NIA) in Europe and has spent the past few years writing for companies, associations and media websites around the globe. Before becoming a writer, Liam completed master’s degrees in chemistry with nanotechnology and chemical engineering. Aside from writing, Liam is also an advisory board member for the National Graphene Association (NGA) in the U.S., the global organization Nanotechnology World Network (NWN), and a Board of Trustees member for GlamSci–A UK-based science Charity. Liam is also a member of the British Society for Nanomedicine (BSNM) and the International Association of Advanced Materials (IAAM), as well as a peer-reviewer for multiple academic journals. Privacy Centre | Terms and Conditions Copyright ©2021 Mouser Electronics, Inc. Mouser® and Mouser Electronics® are trademarks of Mouser Electronics, Inc. in the U.S. and/or other countries. All other trademarks are the property of their respective owners. Corporate headquarters and logistics centre in Mansfield, Texas USA.
<urn:uuid:f8976f04-186b-498a-b25d-21291af1b4ec>
CC-MAIN-2021-10
https://www.mouser.in/blog/understanding-quantum-technologies
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00514.warc.gz
en
0.941849
1,936
3.828125
4
Nobody has built a quantum computer much more powerful than a pocket calculator but that hasn’t stopped people worrying about the implications of the post-quantum computing world. Most worried are the people who rely on cryptographic codes to protect sensitive information. When the first decent-sized quantum computer is switched on, previously secure codes such as the commonly used RSA algorithm will become instantly breakable. Which is why cryptographers are scurrying about looking for codes that will be secure in the post-quantum world. Today, Hang Dinh at the University of Connecticut and a couple of pals show that cryptographers have been staring at one all along. They say that a little-used code developed by the CalTech mathematician Robert McEliece in 1978 can resist all known attacks by quantum computers. First, let’s a make a distinction between symmetric and asymmetric codes. Symmetric codes use identical keys for encrypting and decrypting a message. Quantum computers can dramatically speed up an attack against these kinds of codes. However, symmetric codes have some protection. Doubling the size of the key counteracts this speed up. So it is possible for code makers to stay ahead of the breakers, at least in theory. (Although in practice, the safe money would be on the predator in this cat and mouse game. ) Asymmetric codes use different keys for encrypting and decrypting messages. In so-called public key encryption systems such as the popular RSA algorithm, a public key is available to anyone who can use it to encrypt a message. But only those with a private key can decrypt the messages and this, of course, is kept secret. The security of these systems relies on so-called trap door functions: mathematical steps that are easy to make in one direction but hard to do in the other. The most famous example is multiplication. It is easy to multiply two numbers together to get a third but hard to start with the third number and work out which two generated it, a process called factorisation. But in 1994, the mathematician Peter Shor dreamt up a quantum algorithm that could factorise much faster than any classical counterpart. Such an algorithm running on a decent quantum computer could break all known public key encryption systems like a 4-year old running amok in Legoland. Here’s a sense of how it works. The problem of factorisation is to find a number that divides exactly into another. Mathematicians do this using the idea of periodicity: a mathematical object with exactly the right periodicity should divide the number exactly, any others will not. One way to study periodicity in the classical world is to use fourier analysis, which can break down a signal into its component waves. The quantum analogue to this is the quantum fourier sampling and Shor’s triumph was to find a way to use this idea to find the periodicity of the mathematical object that reveals the factors. Thanks to Shor, any code that relies on this kind of asymmetry (ie almost all popular public key encryption systems) can be cracked using a quantum fourier attack. The McEliese cryptosystem is different. It too is asymmetric but its security is based not on factorisation but on a version of a conundrum that mathematicians call the hidden supgroup problem. What Dinh and buddies have shown is that this problem cannot be solved using quantum fourier analysis. In other words it is immune to attack by Shor’s algorithm. In fact, it is immune to any attack based on quantum fourier sampling. That’s a big deal. It means that anything encoded in this way will be safe when the next generation of quantum computers start chomping away at the more conventional public key cryptosystems. One such system is Entropy, a peer-to-peer communications network designed to resist censorship based on the McEliese cryptosystem. But Entropy is little used and there are good reasons why others have resisted the McEliese encryption system. The main problem is that both the public and private keys are somewhat unwieldy: a standard public key is a large matrix described by no fewer than 2^19 bits. That may seem less of a problem now. It’s possible that the McEleise system will suddenly become the focus of much more attention more than 30 years after its invention. However, it’s worth pointing out that while the new work guanratees safety against all known quantum attacks, it does nothing of the sort for future quantum attacks. It’s perfectly possible that somebody will develop a quantum algorithm that will tear it apart as easily as Shor’s can with the RSA algorithm. “Our results do not rule out other quantum (or classical) attacks,” says Dinh and co. So s more likely scenario for future research is that crytpographers will renew their efforts in one of the several other directions that are looking fruitful, such as lattice-based algorithms and multivariate cryptography. Either way, expect to hear a lot more about post quantum cryptography–provided the powers that be allow. Ref: arxiv.org/abs/1008.2390 : The McEliece Cryptosystem Resists Quantum Fourier Sampling Attacks
<urn:uuid:cad2d58b-13c2-441e-a4cf-b1b54113f9a4>
CC-MAIN-2015-32
http://www.technologyreview.com/view/420287/1978-cryptosystem-resists-quantum-attack/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988250.59/warc/CC-MAIN-20150728002308-00224-ip-10-236-191-2.ec2.internal.warc.gz
en
0.944111
1,092
3.828125
4
Raman scattering or the Raman effect (pronounced: Template:IPA —) is the inelastic scattering of a photon. Discovered By Dr. C.V. Raman in liquids and by Grigory Landsberg and Leonid Mandelstam in crystals. When light is scattered from an atom or molecule, most photons are elastically scattered (Rayleigh scattering). The scattered photons have the same energy (frequency) and wavelength as the incident photons. However, a small fraction of the scattered light (approximately 1 in 10 million photons) is scattered by an excitation, with the scattered photons having a frequency different from, and usually lower than, the frequency of the incident photons. In a gas, Raman scattering can occur with a change in vibrational, rotational or electronic energy of a molecule (see energy level). Chemists are concerned primarily with the vibrational Raman effect. In 1922, Indian physicist Chandrasekhara Venkata Raman published his work on the "Molecular Diffraction of Light," the first of a series of investigations with his collaborators which ultimately led to his discovery (on 28 February 1928) of the radiation effect which bears his name. The Raman effect was first reported by C. V. Raman and K. S. Krishnan, and independently by Grigory Landsberg and Leonid Mandelstam, in 1928. Raman received the Nobel Prize in 1930 for his work on the scattering of light. In 1998 the Raman Effect was designated an ACS National Historical Chemical Landmark in recognition of its significance as a tool for analyzing the composition of liquids, gases, and solids. Raman scattering: Stokes and anti-Stokes Edit The interaction of light with matter in a linear regime allows the absorption or simultaneous emission of light precisely matching the difference in energy levels of the interacting electrons. The Raman effect corresponds, in perturbation theory, to the absorption and subsequent emission of a photon via an intermediate electron state, having a virtual energy level (see also: Feynman diagram). There are three possibilities: - no energy exchange between the incident photons and the molecules (and hence no Raman effect) - energy exchanges occur between the incident photons and the molecules. The energy differences are equal to the differences of the vibrational and rotational energy-levels of the molecule. In crystals only specific photons are allowed (solutions of the wave equations which do not cancel themselves) by the periodic structure, so Raman scattering can only appear at certain frequencies. In amorphous materials like glasses, more photons are allowed and thereby the discrete spectral lines become broad. - molecule absorbs energy: Stokes scattering. The resulting photon of lower energy generates a Stokes line on the red side of the incident spectrum. - molecule loses energy: anti-Stokes scattering. Incident photons are shifted to the blue side of the spectrum, thus generating an anti-Stokes line. These differences in energy are measured by subtracting the energy of the mono-energetic laser light from the energy of the scattered photons. The absolute value, however, doesn't depend on the process (Stokes or anti-Stokes scattering), because only the energy of the different vibrational levels is of importance. Therefore, the Raman spectrum is symmetric relative to the Rayleigh band. In addition, the intensities of the Raman bands are only dependent on the number of molecules occupying the different vibrational states, when the process began. If the sample is in thermal equilibrium, the relative numbers of molecules in states of different energy will be given by the Boltzmann distribution: Thus lower energy states will have more molecules in them than will higher (excited) energy states. Therefore, the Stokes spectrum will be more intense than the anti-Stokes spectrum. Distinction with fluorescenceEdit The Raman effect differs from the process of fluorescence. For the latter, the incident light is completely absorbed and the system is transferred to an excited state from which it can go to various lower states only after a certain resonance lifetime. The result of both processes is essentially the same: A photon with the frequency different from that of the incident photon is produced and the molecule is brought to a higher or lower energy level. But the major difference is that the Raman effect can take place for any frequency of the incident light. In contrast to the fluorescence effect, the Raman effect is therefore not a resonant effect. Selection rules Edit A Raman transition from one state to another, and therefore a Raman shift, can be activated optically only in the presence of non-zero polarizability derivative with respect to the normal coordinate (that is, the vibration or rotation): Raman-active vibrations/rotations can be identified by using almost any textbook that treats quantum mechanics or group theory for chemistry. Then, Raman-active modes can be found for molecules or crystals that show symmetry by using the appropriate character table for that symmetry group. Stimulated Raman scattering and Raman amplificationEdit Raman amplification can be obtained by using Stimulated Raman Scattering (SRS), which actually is a combination between a Raman process with stimulated emission. It is interesting for application in telecommunication fibers to amplify inside the standard material with low noise for the amplification process. However, the process requires significant power and thus imposes more stringent limits on the material. The amplification band can be up to 100 nm broad, depending on the availability of allowed photon states. Raman spectrum generation Edit For high intensity CW (continuous wave) lasers, SRS can be used to produce broad bandwidth spectra. This process can also be seen as a special case of four wave mixing, where the frequencies of the two incident photons are equal and the emitted spectra are found in two bands separated from the incident light by the phonon energies. The initial Raman spectrum is built up with spontaneous emission and is amplified later on. At high pumping levels in long fibers, higher order Raman spectra can be generated by using the Raman spectrum as a new starting point, thereby building a chain of new spectra with decreasing amplitude. The disadvantage of intrinsic noise due to the initial spontaneous process can be overcome by seeding a spectrum at the beginning, or even using a feedback loop like in a resonator to stabilize the process. Since this technology easily fits into the fast evolving fiber laser field and there is demand for transversal coherent high intensity light sources (i.e. broadband telecommunication, imaging applications), Raman amplification and spectrum generation might be widely used in the near future. Raman spectroscopy employs the Raman effect for materials analysis. The frequency of light scattered from a molecule may be changed based on the structural characteristics of the molecular bonds. A monochromatic light source (laser) is required for illumination, and a spectrogram of the scattered light then shows the deviations caused by state changes in the molecule. Raman spectroscopy is also used in combustion diagnostics. Being a completely non-intrusive technique, it permits the detection of the major species and temperature distribution inside combustors and in flames without any perturbation of the (mainly fluid dynamic and reactive) processes examined. Stimulated Raman transitions are also widely used for manipulating a trapped ion's energy levels, and thus basis qubit states. - Brillouin scattering - Raman spectroscopy - nonlinear optics - fiber amplifier - List of surface analysis methods - Raman laser - Surface Enhanced Raman Spectroscopy (SERS) - "A new radiation", Indian J. Phys., 2 (1928) 387 - http://www.uky.edu/~holler/raman.html - Herzberg, Spectra of Diatomic Molecules, Litton Educational Publishing, 1950, ISBN 0-442-03385-0, pp. 61ff and 66ff
<urn:uuid:334ec7cc-dd1f-4c1b-96c9-0d52590fb22a>
CC-MAIN-2015-32
http://physics.wikia.com/wiki/Raman_scattering
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989142.82/warc/CC-MAIN-20150728002309-00307-ip-10-236-191-2.ec2.internal.warc.gz
en
0.913375
1,635
3.703125
4
Ultraclean carbon nanotubes hold promise for advances in optical fiber communications, solar cells and LEDs Carbon atoms can assemble in numerous structural forms called allotropes, e.g. a diamond, graphite, or graphene. These forms can result in distinct properties for materials that consist of the same element. One such allotrope, a cylindrically structured molecule known as a carbon nanotube, has been the subject of much scientific research for the past twenty years because of its extraordinary tensile strength, unique electrical properties, and efficient heat conduction. It has well-established applications in nanoelectronics and more recently has attracted tremendous interest as a nanomaterial for next-generation optoelectronics (electronic devices that source, detect and control light for optical fiber communications, solar cells and LEDs) and quantum photonic devices that have the potential to revolutionize information processing, telecommunications, sensing and measurement. Despite the promise of this innovative material, its light emission has generally been dimmer than theorists had expected. The majority of experiments on carbon nanotubes to date reveal low quantum efficiencies as well as dependence on the environment and chemical processing. This is detrimental to their usefulness in devices and other applications. According to Dr. Stefan Strauf, Professor in the Department of Physics and Engineering Physics and Director of the NanoPhotonics Laboratory at Stevens Institute of Technology, “Understanding the intrinsic photophysical properties of carbon nanotubes is very interesting scientifically and also essential to realizing efficient devices.” To address these inefficiencies, Dr. Strauf and collaborators James Hone and Chee Wei Wong from Columbia University have devised an improved fabrication process for carbon nanotubes, potentially leading to brighter light sources and more effective solar cells based on the material. They were able to increase the spontaneous light emission from an individual carbon nanotube by two orders of magnitude compared to previously reported experiments. They were also able to achieve a fourfold prolonged coherence time of the light emission. The results of their work, titled “Prolonged Spontaneous Emission and Dephasing of Quantum Dot Excitons in Air-Bridged Carbon Nanotubes,” were published in the July 11 edition of Nature Communications (Issue 4, Article Number 2152, doi:10.1038/ncomms3152). “Dr. Strauf’s groundbreaking advances with carbon nanotubes represent a significant scientific breakthrough that could herald technological innovation in numerous important industries such as quantum computing and solar energy,” says Dr. Michael Bruno, Dean of the Charles V. Schaefer, Jr. School of Engineering and Science. Previous experiments have reported carbon nanotubes with spontaneous light emission times on the picosecond scale, while theorists had predicted intrinsic optical lifetimes of several nanoseconds. Dr. Strauf and his collaborators surmised that this disparity was due to masking caused by impurities in the material, which are the result of contamination from the substrate (material upon which the experimental processes take place) and surfactant (a chemical that works like a detergent to separate and disperse nanotubes in order to prevent clumping). The researchers therefore used sophisticated techniques to grow and arrange carbon nanotubes in order to mitigate unintentional impurities and reveal the true extent of the material’s optical capabilities. They prepared about 1,000 pairs of pillar posts, each 3 micrometers apart, in a silicon wafer and topped them with a metal catalyst. They then deposited the carbon in the form of an ambient chemical vapor and intensely heated the preparation, creating many carbon nanotubes that bridged the pillars. The growth suspended in air prevents the substrate and surfactant from blending into the nanotubes and diminishing their effectiveness. They also heated the nanotubes for shorter periods (2-10 minutes) than previous experiments. The shorter heating times meant less residual amorphous carbon, resulting in ultraclean nanotubes that emit a much brighter. Carbon nanotubes have attracted great interest for optoelectronics because of the unique ability of the material to maintain the stability of electron states called excitons even at room temperature, as opposed to the extreme cold usually required. An exciton comes about when a (negatively charged) electron in a carbon nanotube is excited (raised to a higher energy level) but remains bound to a positively charged “hole” in the lower energy level. The exciton thus carries energy but not a net electric charge. Photons are absorbed when the electron enters the exciton state and light is emitted when the electron recombines with the hole. The emission can be used to create devices like LEDs, lasers, and quantum light sources, while the absorption can be used to create solar cells or photodetectors. While the prolonged radiative emission is promising for device applications, the researchers also were able to maintain a longer coherence time of the emitted light from the exciton recombination in these individual carbon nanotubes, finding four-fold prolonged values compared to previous ensemble measurements. This discovery could spark new discussions about the nature of the underlying mechanism which causes the dephasing that makes it difficult to sustain quantum effects long enough to allow for practical quantum information processing. A breakthrough in preserving coherence could lead to quantum computers with unprecedented power, allowing researchers to approach unwieldy problems and rendering most cryptography obsolete. According to Dr. Rainer Martini, Director of the Department of Physics and Engineering Physics, “This work constitutes a major advance in carbon-nanotube based photonics and will generate even more interdisciplinary inquiry in this field.” Find out how to become a part of groundbreaking scientific research and innovative technologies in the Department of Physics and Engineering Physics at Stevens, or apply at Undergraduate Admissions or Graduate Admissions.
<urn:uuid:7528fec0-f955-4665-b0bb-9f66c1931eb4>
CC-MAIN-2015-32
http://research.stevens.edu/strauf-ultraclean-carbon-nanotubes
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987228.91/warc/CC-MAIN-20150728002307-00004-ip-10-236-191-2.ec2.internal.warc.gz
en
0.923875
1,197
3.65625
4
INTRO VIDEOS CLOUD COMPUTING DIRECTORY GLOSSARY ABOUT THE AUTHOR PRESS CONTACT SITE MAP Quantum computing may well be the future of most high-end data centres. This is because, as the demand to intelligently process a growing volume of online data grows, so the limits of silicon chip microprocessors are increasingly going to be reached. Sooner or later it will also become impossible to miniaturize traditional computing components further and hence to continue to achieve year-on-year increases in computer power. Today, Intel's latest microprocessors are based on an industrial process that can produce transistors only 22 nanometres wide. Further advancements in this technology are still possible. But at some point miniaturization will hit a physical limit as transistors only a few atoms in size will simply not be able to function. Enter quantum computing -- an emerging science that quite literally goes beyond the laws of conventional physics. Over the next few decades, quantum computing could be the next-wave development to deliver computer power well beyond current comprehension. Today, all of us increasingly cast digital data shadows each time we use the Internet, or even when we pass a CCTV or other camera linked into a vision recognition system. At present there is simply no way to process all of the data that every person on the planet produces. But as quantum computers arrive, the opportunity to do this may well arrive. Read on to learn more about quantum computing -- and/or watch my Explaining Quantum Computing video. Conventional computers are built from silicon chips that contain millions or billions of miniature transistors. Each of these can be turned "on" or "off" to represent a value of either "1" or "0". Conventional computers subsequently store and process data using "binary digits" or "bits". In contrast, quantum computers work with "quantum bits" or "qubits". These are represented in hardware using quantum mechanical states rather than transistors that are turned "on" or "off". For example, quantum computers may use the spin direction of a single atom to represent each qubit, or alternatively the spin direction of a single electron or the polarization orientation of a photon. Yet other quantum computing designs supercool rare metals to allow qubits to be represented by the quantum spin of a tiny magnetic field. Due to the peculiar laws of quantum mechanics, individual qubits can represent a value of "1", "0" or both numbers simultaneously. This is because the sub-atomic particles used as qubits can exist in more than one state -- or "superposition" -- at exactly the same point in time. By attaching a probability to each of these states, a single qubit can therefore process a wide range of values. In turn, this allows quantum computers to be orders of magnitude more powerful than their conventional, purely digital counterparts. The fact that qubits are more "smears of probability" than definitive, black-and-white certainties is exceptionally weird. Flip a coin and it cannot come up both heads and tails simultaneously, and yet the quantum state of a qubit can in some senses do just that. It is therefore hardly surprising that renowned nuclear physicist Niels Bohr once stated that "anyone who is not shocked by quantum theory has not understood it!" Another very bizarre thing is that the process of directly observing a qubit will actually cause its state to "collapse" to one or other of its superpositions. In practice this means that, when data is read from a qubit, the result will be either a "1" or a "0". When used to store potentially infinite amounts of "hidden" quantum data, qubits can therefore never be directly measured. This means that quantum computers need to use some of their qubits as "quantum gates" that will in turn manipulate the information stored and processed in other hidden qubits that are never directly measured or otherwise observed. Because qubits can be used to store and process not just the digital values of "1" and "0", but also many shades of grey in between, quantum computers have the potential to perform massively parallel processing. This means that quantum computers will be very effective at performing tasks -- like vision recognition, medical diagnosis, and other forms of artificial intelligence processing -- that can depend on very complex pattern matching activities way beyond the capabilities of both traditional computers and most human beings. OK, so quantum computing may sound all very theoretical (and indeed at present a lot of it actually is!). However, practical quantum computing research is now very much under way. Perhaps most notably, back in 2007 a Canadian company called D-Wave announced what it described as "the world's first commercially viable quantum computer". This was based on a 16 qubit processor -- the Rainer R4.7 -- made from the rare metal niobium supercooled into a superconducting state. Back in 2007, D-Wave demonstrated their quantum computer performing several tasks including playing Sudoku and creating a complex seating plan. Many people at the time were somewhat sceptical of D-Wave's claims. However, in December 2009, Google revealed that it had been working with D-Wave to develop quantum computing algorithms for image recognition purposes. Experiments had included using a D-Wave quantum computer to recognise cars in photographs faster than possible using any conventional computer in a Google data centre. Around this time, there was also an announcement from IBM that it was rededicating resources to quantum computing research in the "hope that a five-year push [would] produce tangible and profound improvements". In 2011, D-Wave launched a fully-commercial, 128-qubit quantum computer. Called the D-Wave One, this is described by the company as a "high performance computing system designed for industrial problems encountered by fortune 500 companies, government and academia". The D-Wave One's super-cooled 128 qubit processor is housed inside a cryogenics system within a 10 square meter shielded room. Just look at the picture here and you will see the sheer size of the thing relative to a human being. At launch, the D-Wave One cost $10 million. The first D-Wave One was sold to US aerospace, security and military giant Lockheed Martin in May 2011. D-Wave aside, other research teams are also making startling quantum computing advances. For example, in September 2010, the Centre for Quantum Photonics in Bristol in the United Kingdom reported that it had created a new photonic quantum chip. This is able to operate at normal temperatures and pressures, rather than under the extreme conditions required by the D-Wave One and most other quantum computing hardware. According to the guy in charge -- Jeremy O’Brien -- his team’s new chip may be used as the basis of a quantum computer capable of outperforming a conventional computer "within five years". Another significant quantum computing milestone was reported in January 2011 by a team from Oxford University. Here strong magnetic fields and low temperatures were used to link -- or "quantumly entangle" -- the electrons and nuclei of a great many phosphorous atoms inside a highly purified silicon crystal. Each entangled electron and nucleus was then able to function as a qubit. Most startlingly, ten billion quantumly entangled qubits were created simultaneously. If a way an be found to link these together, the foundation will have been laid for an incredibly powerful computing machine. In comparison to the 128 qubit D-Wave One, a future computer with even a fraction of a 10 billion qubit capacity could clearly possess a quite literally incomprehensible level of processing power. Quantum computing is a highly complex and bewildering field with incredible potential (though so too was microelectronics in the 1970s and we all now take that for granted!). For a far more technical overview of the topic, try reading this overview from Stanford University. You may also want to look at IBM's Quantum Computing pages, visit the Australian Centre of Excellence for Quantum Computation and Communication Technology, or browse-on-over to D-Wave's Technology Overview. Do be aware, however, that delving into any and all of these resources may well make your head hurt! Ultimately, few companies and individuals will ever own a quantum computer. Nevertheless, within a decade or two most companies and individuals are very likely to be regularly accessing quantum computers from the cloud. Not least this is because one of the first mainstream applications of quantum computing will be in online security and data encryption. Today, all online security systems rely on prime number calculations that quantum computers are potentially very good at indeed. Fairly soon, anybody with a quantum computer will therefore theoretically be able to use it to crack the security on any bank account or cloud computing resource. The only way to prevent this will be to protect and encrypt all online resources with quantum security gateways. The demand for every bank and cloud provider to invest in a quantum computer -- if only for encryption purposes -- is therefore likely to skyrocket once the technology moves beyond its currently rather costly and cumbersome experimental phase. Almost certainly signalling the potential significance of quantum computing in code-making and code-breaking, in March 2012 the National Security Centre in the United States announced that it is spending $2bn on a highly-fortified data centre with a 512 qubit quantum computer. Another major application area for quantum computing will be in the processing of Big Data. As the volume of digital data produced on Planet Earth continues to grow expotentially, so a significant potential exists to generate business and social value via its insightful interlinkage. While technologies like Hadoop are currently permitting advancements in the processing of vast data sets, it may well be the development of quantum computers that really pushes large-scale Big Data analysis into the mainstream. For more information on quantum computing, you may like to watch my Quantum Computing Video. Information on a range of other future technologies can also be found on our sister site ExplainingTheFuture.com.
<urn:uuid:c7ae2976-f01a-4d46-ba2b-48a0b38cecc4>
CC-MAIN-2015-32
http://explainingcomputers.com/quantum.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042982013.25/warc/CC-MAIN-20150728002302-00206-ip-10-236-191-2.ec2.internal.warc.gz
en
0.932113
2,023
3.671875
4
Mar 18, 2012 Into to Quantum Computing Score: 4.2/5 (189 votes) This is by no means comprehensive, I was limited to 5 pages when writing this. But hopefully you'll understand a bit about the physics behind quantum computing, and some of the applications of it. At the very least, it'll be a reference point if you want to do further research. NOTE: There wasn't a good category for this, so it is now a how-to article :D Works cited is in the file attached. “Classical” computers all follow a similar design. They are based on the von Neumann architecture, and all exhibit what is known as the “von Neumann Bottleneck”. This basically means that current computers, without the use of the parallel programming, can only perform one action at a time. With current processors, this is works fine for most tasks. Processors can currently execute millions of instructions every second, and that number is constantly rising. Why is there a need for an alternate technology then? With Moore's Law stating that the density of integrated chips will double about every eighteen months, there does not seem to be much need to change. Eventually engineers are going to hit a limit as to how much they can put onto these chips. Transistors can only get so small. Not to mention Rock's Law, which states that the cost to build the plants to produce these chips will double every four year. So with the rising price of production and chip density reaching its limit, where do computers go next? There are numerous alternatives that have been theorized, the most promising and interesting of these is quantum computing. History of Quantum Computation The theory of quantum mechanics has been around for almost 200 years, but it is only been the last 30 years that these principles have been thought to be applied to computing. Many people credit Peter Shor as being the father of quantum computation. He was the first person to bring quantum computing theory to more of a reality with his algorithm, known as Shor's algorithm, but he was not the one to have the initial idea of quantum computers. The man who is credited with being the first to mention a quantum computer was Richard Feynman. He addressed the issue of classical computers not being well suited at simulating real world physics. His idea on how to fix this issue was to create a computer primarily of quantum mechanical elements which would obey quantum laws. A couple of other key figures in the development of quantum computation were Steve Wiesner and David Deutsch. Wiesner was a leading figure in looking at quantum cryptography and applied the uncertainty principle to this. David Deutsch showed that any physical property could be modeled perfectly in a quantum computer. Quantum Physics for Computation The main concept behind quantum computing is the qubit. A qubit is similar to a bit yet very different. A bit is restricted values of 0 and 1 only, and a qubit can represent also 0 and 1, but it can also represent or a superposition of both 0 and 1. The idea of superposition states that a quantum particle can exist partly in all of its possible states at once. This is what makes quantum computation so interesting. Superposition is what gives a quantum computer its parallelism, or ability to work on many computations at a single time. For example, according to Deutch, a 30 qubit computer would theoretically be able to operate at ten teraflops, or ten trillion floating-point operations every second! Classical computers today operate at gigaflop speed, or billions of floating-point operations per second. But this also happens on much more than 30 bits. A problem with this theory is that the moment a qubit is looked at in superposition, it will assume the value of either 1 or 0, essentially making it a fancy bit. It is also possible to accidentally “bump” a qubit when trying to look at it and change its value. The fix to these issues is another important concept in quantum theory known as quantum entanglement. Entanglement states that if some outside force is applied to two atoms, they will become entangled and one atom will assume the properties of the other atom. This allows physicists to look at an atom by looking at its entangled counterpart, removing the risk of “bumping” the atom holding your value. Without entanglement, quantum computers would be nothing but a really expensive and complicated digital computer. Implications for Computer Scientists Perhaps the most interesting aspect of quantum computing to computer scientists is how it affects algorithms and algorithm design. Quantum algorithms could solve some problems exponentially faster than any current technology algorithm and could solve any other problem at least as fast as current technology. One example of what quantum computers could solve much faster is number factorization. With current technology, factoring large numbers is computationally infeasible and this is off what RSA encryption is based. The ability of quantum computing to factor numbers much faster than classical computing has been demonstrated with Shor's Algorithm. Shor's Algorithm was demonstrated the first time in 2001 by a group at IBM. They used a quantum computer with seven qubits to factor out 15, which is the smallest number able to be factored by this algorithm. A reason why some calculations are much faster on a quantum computer is parallelism. Currently, parallel computing happens through the use of extra hardware and multiple computers, but with quantum computing it could all be done in a single processor. For example, if one takes one qubit in superposition and performs some calculation with yet another qubit in the same superposition, one would have four results. It would output 0/0, 0/1, 1/0, and 1/1 results. If one takes two qubits and perform an operation on two other qubits, one would get 00/00, 01/00, 10/00, 11/00, 00/01, 00/10, and so on results. Pros and Cons of Quantum Computing Quantum computing can have an interesting impact on the technology world. The most intriguing benefit is likely to be the inherit parallelism that quantum computing brings. Being able to perform exponentially more calculations at any given time compared to classical computers is important. The quantum parallelism can be considered both beneficial and consequential. Because of this parallelism, a quantum computer would be able to factor large numbers in a reasonably short amount of time, a task that is currently infeasible. The issue here is that some encryption techniques that keep a lot of important information safe, are based on the fact that factoring large numbers is currently infeasible. A quantum computer could very easily break any encryption protocol that relies on large numbers being extremely difficult to factor. This could, in theory, make all of this information no longer protected. Another benefit that comes from the quantum parallelism is being able to search large databases much faster than today. One other benefit that comes from quantum computing is true randomness. Currently in computers, random numbers are generated through a complex algorithm, so only pseudo-random numbers can be produced. One of the core concepts of quantum mechanics is the inherit randomness. For example, if a single photon is shot at a beam splitter, that photon will go one of two ways at an equal 50/50 chance. There is no way of determining exactly which way it will go, only an educated guess can be made. Aside from the benefits of quantum computing, there are some downsides to it. The obvious con being the sheer cost and complexity of developing them. In 2005, the research team of Rainer Blatt succeeded in creating a computer with a fourteen qubit register. So there is still a lot of time needed to see a practical quantum computer be made. There is just too much that can go wrong in the quantum world. Quantum computing is very interesting to think about because the possible advantages to it, but currently it is just too complex and error prone to become a practical system, however physicists are making a lot of progress in solving these issues. So what might a future of quantum computing look like? Quantum computers will not become the new PC, at least for quite a long time. But there will likely be large quantum computers that act as a server system. The personal computer world and Internet world have already become one, so having a series of “cloud” quantum computers spread out that people can connect to in order to perform some complex calculations is very believable. It is too early to say how much the average user will gain from advancements in this field, but it will definitely help large corporations and the science world. It would lead to advancements in the understanding of the quantum world in general. A quantum computer acting as a server would be near immune to denial of service attacks due to the amount of traffic a quantum computer with just a few hundred qubits could handle. This would also mean less servers would be needed to handle the world's traffic. Current technology can only advance so far. Transistors have been made smaller and smaller each year since their inception, but they can only be so small before they reach the atomic size. Once this scale is hit, transistor technology will have hit its maximum potential. There are researchers working on possible solutions to this, but quantum computing has generated most of the attention. The sheer computing power that could be gained from using a relatively small amount of quantum bits is astounding. All this power can also lead to problems. A lot of the world's financial information is encrypted using a technique of generating large numbers that are not plausible to factor back down using current technology. With this new power our encryption system could be broken in a day. There may be some consequences of quantum computing, but the benefits can be seen to outweigh them. The next generation of computing is being made right now.
<urn:uuid:bc080953-bc62-4d87-a9ff-e7b37ae49e0f>
CC-MAIN-2015-32
http://www.cplusplus.com/articles/3A07M4Gy/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988924.75/warc/CC-MAIN-20150728002308-00055-ip-10-236-191-2.ec2.internal.warc.gz
en
0.962206
2,002
3.5
4
July 4, 2012 Scientists Make Strides Toward Quantum Computing Lee Rannals for redOrbit.com - Your Universe Online Harvard scientists claim that they have solved a problem faced in quantum computing by using diamonds. One challenge quantum computing has faced is creating quantum bits that exist in a solid-state system at room temperature. Most systems rely on complex and expensive equipment designed to trap an atom or electron in a vacuum, and then cool the entire system to nearly absolute zero, or –459.67° Fahrenheit. The Harvard team used a pair of impurities in laboratory-grown diamonds to create quantum bits, or qubits, and store information in them for nearly two-seconds. Although two-seconds doesn't seem like a long time, it is actually an increase of nearly six orders of magnitude over the life span of earlier systems. The scientists wrote in the journal Science that this is a first step in the eventual construction of a functional quantum computer. "What we´ve been able to achieve in terms of control is quite unprecedented," Professor of Physics Mikhail Lukin, leader of the research, said. "We have a qubit, at room temperature, that we can measure with very high efficiency and fidelity." He said the work is limited only by technical issues, so it would be feasible to increase the life span into the range of hours. "At that point, a host of real-world applications become possible," Lukin said. He said he envisions the system being used in applications that include "quantum cash," which is a theoretical payment system for bank transactions and credit cards that rely on coding of quantum bits to keep counterfeiters at bay. Another application, according to Lukin, would be for "quantum networks," which is a highly secure communications method that uses quantum bits to transmit data. "This research is an important step forward in research toward one day building a practical quantum computer,” graduate student Georg Kucskoo, who works in Lukin´s lab and is one of two first authors of the paper, said. "For the first time, we have a system that has a reasonable timescale for memory and simplicity, so this is now something we can pursue." During the initial experiments, the team used diamonds that contained 99 percent carbon-12 atoms, which have no spin. However, the remainder was made up of carbon-13 atoms, which is a tricky isotope that contains a spin in the atom's nucleus. “The nuclear spin of the carbon-13 makes an ideal quantum bit, because they are very isolated,” Lukin said. “Because they interact with so few outside forces, they have relatively long coherence times. Of course, the same properties that make them ideal qubits also make them difficult to measure and manipulate.” The team decided that rather than trying to find a way to measure the spin of the carbon atoms, they would use the nitrogen-vacancy (NV) centers, which are atomic-scale impurities in lab-grown diamonds, to do it for them. They developed a new technique to create crystals that were even more pure, and then bombarded the crystal with nitrogen to create the NV center. The interaction resulted in the NV center mirroring the state of the carbon atom, which means the researchers can encode a bit of information into the spin of the atom, then "read" that data by monitoring the NV center. “The system we´ve developed uses this very local probe, the NV center, to allow us to monitor that spin,” Lukin said. “As a result, for the first time, we can encode a bit of information into that spin, and use this system to read it out.” However, encoding information into the spin of the carbon-13 atom and reading it using the NV center is only a first step. The team had to determine how to take advantage of the atom's quantum properties as well. Being able to be in two states at the same time is a key principle in quantum computers. Traditional computers encode bits of information as either zero or one, while quantum computers rely on atomic-scale quantum mechanics to give quantum bits both values at once. That property allows quantum computers to perform multiple computations in parallel, making them more powerful than traditional computers. The first step, according to Lukin, is to cut the connection between the NV center and the carbon atom by using massive amounts of laser light. The second step is that the diamond crystal is bombarded with a specific set of radio frequency pulses, which suppresses the interaction between the carbon-13 atom and nearby atoms. “By limiting interactions with the carbon-13 atom, we can extend the life of the qubit and hold the data for longer,” Lukin said. “The end result is that we´re able to push the coherence time from a millisecond to nearly two seconds.”
<urn:uuid:d6189381-1231-4c56-96b1-d8f90d500ec3>
CC-MAIN-2015-32
http://www.redorbit.com/news/technology/1112650526/scientists-make-strides-toward-quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990445.44/warc/CC-MAIN-20150728002310-00144-ip-10-236-191-2.ec2.internal.warc.gz
en
0.953565
1,025
3.65625
4
It might seem like something straight from the Star Trek universe, but two new research experiments—one involving a photon and the other involving a super-conducting circuit—have successfully demonstrated the teleportation of quantum bits. If that sounds like gobbledygook, don't worry. We got in touch with one of the researchers, physicist Andreas Wallraff, of the Quantum Device Lab at the Swiss Federal Institute of Technology Zurich, to explain how his team and a team based at the University of Tokyo were able to reliably teleport quantum states from one place to another. People have done this before but it hasn't necessarily been reliable. The new complementary research, which comes out in Nature today, is reliable—and therefore may have widespread applications in computing and cryptography. Before we talk about the nitty-gritty part of teleportation, we need to define a few key words. Let's start with a regular, classical bit of information, which has two possible states: 1 or 0. This binary system is used by basically all computing and computing-based devices. Information can be stored as a 1 or a 0, but not as both simultaneously. (Related: "The Physics Behind Schrodinger's Cat.") But a quantum bit of information—called a qubit—can have two values at the same time. "With the qubit, you can store more information because you have information in all of its possible states," Wallraff says. "Whereas in the classical memory system, only one can be stored." (More physics: "The Physics Behind Waterslides.") Quantum teleportation relies on something called an entangled state. An entangled state, in the words of Wallraff, is a "state of two quantum bits that share correlations." In other words, it's a state that can't be separated. If you have a classical 1 and a 0, for example, you can separate them into a 1 and a 0. But if you have qubits, the bits can be assigned both a 1 and a 0 at the same time—meaning they can't be separated into their individual components and must be described relative to each other. (If you'd like to know more about this, I recommend delving into "Quantum Entanglement" on the Caltech website.) Diving Into Teleportation Now that we have a small working vocabulary, we can delve into what Wallraff and team actually did. Let's go back to Star Trek. "People automatically think about Star Trek when they hear teleportation," says Wallraff. "In Star Trek, it's the idea of moving people from point A to B without having the person travel that distance. They disappear and then reappear." What happens in quantum teleportation is a little bit different. The bits themselves don't disappear, but the information about them does. "That's where the relation to Star Trek comes in," says Wallraff. "You can make the information disappear and then reappear at another point in space." So how does this work? Remember, we're talking about quantum bits—which can hold two possible states at the same time. "You can ask yourself, 'How can I transport the information about this bit from one place to another?'" says Wallace. "If you want to send the information about the qubit from point A to B, the information at point A [contains] 0 and 1 simultaneously." It's impossible using classical bits to transmit this information because, as we learned earlier, the information can be stored as 1s or 0s but not both. Quantum teleportation gets around this problem. (Related: "Physicists Increasingly Confident They've Found the Higgs Boson.") This is where those entangled states I mentioned earlier come into play. In quantum teleportation, a pair of quanta in an entangled state is sent to both a sender—which I'll call A—and a receiver—which I'll call B. A and B then share the entangled pair. "The sender takes one of the bits of the entangled pair, and the receiver takes the other," says Wallraff. "The sender can run a quantum computing program measuring his part of the entangled pair as well as what he wants to transport, which is a qubit in an unknown state." Let's untangle what he said: The sender—A—makes a measurement between his part of the entangled pair and what he wants to transport. Back to you, Wallraff. "So we have this measurement, and that's what is sent to the receiver via a classical bit," he says. The receiver—B—receives the measurement between A's part of the entangled pair and the unknown qubit that A wants to send. After B receives this measurement, he runs a quantum computing algorithm to manipulate his part of the entangled pair in the same way. In the process, B re-creates the unknown qubit that A sent over—without receiving the qubit itself. I realize this is confusing. But Why Is It Useful? The advances these two research groups have made may improve the way quantum bits are sent, leading to faster processors and larger-scale encryption technologies. Encryption technology—which is used by everyone from credit card companies to the NSA—is based on the fact that it's really, really hard to find factors of very large prime numbers. And quantum computing is extremely useful for factoring very large prime numbers. Dividing or multiplying numbers is fairly easy for any computer, but determining the factors of a really large 500- or 600-digit number is next to impossible for classical computers. But quantum computers can process these numbers easily and simultaneously. Credit card companies, for instance, assign users a public key to encode credit card information. The key is the product of two large prime numbers, which only the website seller knows. Without a quantum computer, it would be impossible to figure out the two prime numbers that are multiplied together to make the key-which protects your information from being shared. (For more info, read this really useful guide about the basics of quantum computing from the University of Waterloo.) "If you wanted to use classical bits to do this, it wouldn't be efficient," says Wallraff. In other words, classical computers—the ones we use now for most stuff—can't do any of the things quantum computers can do on a large scale. So while we might not be beaming Scotty up just yet, our computers, it appears, are one step closer to doing so.
<urn:uuid:dbc626b9-d44d-4c27-8e96-ca4612d7a7d5>
CC-MAIN-2015-32
http://news.nationalgeographic.com/news/2013/08/130814-physics-quantum-computing-teleportation-star-trek-qubit-science/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989826.86/warc/CC-MAIN-20150728002309-00228-ip-10-236-191-2.ec2.internal.warc.gz
en
0.955111
1,346
3.78125
4
A New Era for Atomic Clocks (page 2) NIST's Atomic Clocks All clocks must have a regular, constant or repetitive process or action to mark off equal increments of time. Examples include the daily movement of the sun across the sky, a swinging pendulum or vibrating crystal. In the case of atomic clocks, the beat is kept by a transition between two energy levels in an atom. NIST-F1 and NIST-F2 are microwave clocks, based on a particular vibration in cesium atoms of about 9 billion cycles per second. Optical atomic clocks are based on ions or atoms vibrating at optical frequencies (visible, ultraviolet or infrared light), which are about 100,000 times higher than microwave frequencies. Because optical clocks divide time into smaller units—like a ruler with finer tick marks—they ultimately could be perhaps 100 times more accurate and stable than microwave clocks. Higher frequency is one of the features enabling improved accuracy and stability. One key advance making optical atomic clocks possible was the development of frequency combs at JILA, NIST and elsewhere. Frequency combs link optical frequencies to lower frequencies that can be correlated with microwave standards and counted. NIST's first all-optical atomic clock, and the best in the world for several years, was based on a single mercury ion. Its performance was then surpassed by NIST's quantum logic clock, based on a single aluminum ion. This clock got its nickname because it borrows techniques from experimental quantum computing. Aluminum is insensitive to changes in magnetic and electric fields and temperature, making it a great ion for atomic clocks, but it wasn't practical until NIST developed new quantum computing technologies. NIST and JILA are leaders in the development of so-called optical lattice clocks. These clocks trap thousands of heavy metal atoms in an "optical lattice" formed by intersecting laser beams. Research clocks at NIST use ytterbium atoms and JILA research clocks use strontium atoms. Thanks to the presence of so many atoms, these clocks offer the advantages of strong signals and parallel processing. In addition, the atoms are held virtually still in the lattice, reducing errors from atomic motion and collisions that otherwise would need to be corrected. Optical lattice clocks are rapidly improving, and continue to set new performance records so often that it is difficult to keep track of the latest records. Both the JILA strontium and NIST ytterbium optical lattice clocks are rapidly advancing in stability. And now, for the first time in decades, a single type of atomic clock, an optical lattice clock, simultaneously holds the records for both precision and stability – and it is likely optical lattice clock performance will continue to significantly improve. This rapid improvement in optical lattice clocks at JILA and NIST results from key scientific breakthroughs. One has been the development of extremely stable lasers, including the world's most stable laser at JILA. Another key breakthrough has been development of new theories about how atoms trapped in the optical lattices interact, and application of these theories to significantly reduce the uncertainties in optical lattice clocks. And much of the improvement results from the hard and creative work of many scientists, students and postdoctoral fellows to continually find new ways to make a series of many small improvements in clock performance. NIST also has demonstrated a calcium atomic clock that is extremely stable for short time periods. This clock has the potential to be made portable, making it attractive for commercial applications. Evaluating Atomic Clock Performance Accuracy refers to a clock's capability to measure the accepted value of the frequency at which the clock atoms vibrate, or resonate. Accuracy is crucial for time measurements that must be traced to primary standards such as NIST-F1 and NIST-F2. Technical terms for accuracy include "systematic uncertainty" or "fractional frequency uncertainty"—that is, how well scientists can define shifts from the true frequency of an atom with confidence. Cesium standards like NIST-F1 and NIST-F2 are the ultimate "rulers" for time because the definition of the SI second is based on the cesium atom. More specifically, the SI unit of frequency, the Hertz, is defined internationally by the oscillations of a cesium atom. Officially, no atomic clock can be more accurate than the best cesium clock by definition. That is, only a direct measurement of the particular cesium transition can be considered the ultimate measurement of accuracy, and all other (non-cesium) clocks can only be compared to the accuracy of a cesium clock. This is partly a semantic issue. If after further development and testing the definition of the second (or Hertz) were changed to be based on the strontium atom transition, for example, the NIST/JILA strontium atom lattice clock would become the most accurate clock in the world. To get around this measurement hurdle, NIST scientists evaluate optical atomic clocks by comparing them to each other (to obtain a ratio, or relative frequency, for which there is no official unit), and by measuring all deviations from the true resonant frequency of the atom involved, carefully accounting for all possible perturbations such as magnetic fields in the environment. The optical clock performance is also directly compared to the NIST-F1 standard. For several years both NIST ion clocks have had measured relative uncertainties much smaller than NIST-F1's. (In general literature, NIST sometimes uses the term "precise" to describe the performance of optical clocks, because it less technical and has a more positive connotation than uncertainty. Precision implies that repeated measurements fall within a particular error spread around a given value. In everyday definitions of precision, this value is not necessarily the "correct" one—you can be precise without necessarily being accurate. However, in the context of optical clocks, NIST uses precision specifically to mean the spread around the true or accepted value for the atom's resonant frequency.) Stability is another important metric for evaluating atomic clocks. NIST defines stability as how precisely the duration of each clock tick matches every other tick. Because the ticks of any atomic clock must be averaged for some period to provide the best results, a key benefit of high stability is that optimal results can be achieved very quickly. Stability is not traceable to a time standard, but in many applications stability is more important than absolute accuracy. For example, most communications and GPS positioning applications depend on synchronization of different clocks, requiring stability but not necessarily the greatest accuracy. (Other common terms for stability include precision.) The optical lattice clocks at NIST and JILA are much more stable than NIST-F1. NIST-F1 must be averaged for about 400,000 seconds (about five days) to achieve its best performance of about 1 second in 100 million years. In contrast, the ytterbium and strontium lattice clocks reach that level of performance in a few seconds of averaging, and after a few hours of averaging are about 100 times more stable than NIST-F1. NIST scientists are also working to improve the portability of next-generation atomic clocks for applications outside the laboratory.
<urn:uuid:697feae0-3723-4b16-9c14-1e70e976cbfc>
CC-MAIN-2015-32
http://www.nist.gov/pml/div688/2013_1_17_newera_atomicclocks_2.cfm
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987228.91/warc/CC-MAIN-20150728002307-00011-ip-10-236-191-2.ec2.internal.warc.gz
en
0.936922
1,483
3.765625
4
A Nov. 5, 2013 Vienna University of Technology press release (also available on EurekAlert) describes research that may make quantum optical switches possible, With just a single atom, light can be switched between two fibre optic cables at the Vienna University of Technology. Such a switch enables quantum phenomena to be used for information and communication technology. The press release goes on to describe a ‘light in a bottle’ technique which leads, the researchers hope, that they may have discovered how to create a quantum light switch, Professor Arno Rauschenbeutel and his team at the Vienna University of Technology capture light in so-called “bottle resonators”. At the surface of these bulgy glass objects, light runs in circles. If such a resonator is brought into the vicinity of a glass fibre which is carrying light, the two systems couple and light can cross over from the glass fibre into the bottle resonator. “When the circumference of the resonator matches the wavelength of the light, we can make one hundred percent of the light from the glass fibre go into the bottle resonator – and from there it can move on into a second glass fibre”, explains Arno Rauschenbeutel. A Rubidium Atom as a Light Switch This system, consisting of the incoming fibre, the resonator and the outgoing fibre, is extremely sensitive: “When we take a single Rubidium atom and bring it into contact with the resonator, the behaviour of the system can change dramatically”, says Rauschenbeutel. If the light is in resonance with the atom, it is even possible to keep all the light in the original glass fibre, and none of it transfers to the bottle resonator and the outgoing glass fibre. The atom thus acts as a switch which redirects light one or the other fibre. Both Settings at Once: The Quantum Switch In the next step, the scientists plan to make use of the fact that the Rubidium atom can occupy different quantum states, only one of which interacts with the resonator. If the atom occupies the non-interacting quantum state, the light behaves as if the atom was not there. Thus, depending on the quantum state of the atom, light is sent into either of the two glass fibres. This opens up the possibility to exploit some of the most remarkable properties of quantum mechanics: “In quantum physics, objects can occupy different states at the same time”, says Arno Rauschenbeutel. The atom can be prepared in such a way that it occupies both switch states at once. As a consequence, the states “light” and “no light” are simultaneously present in each of the two glass fibre cables. [emphasis mine] For the classical light switch at home, this would be plain impossible, but for a “quantum light switch”, occupying both states at once is not a problem. “It will be exciting to test, whether such superpositions are also possible with stronger light pulses. Somewhere we are bound to encounter a crossover between quantum physics and classical physics”, says Rauschenbeutel. This light switch is a very powerful new tool for quantum information and quantum communication. “We are planning to deterministically create quantum entanglement between light and matter”, says Arno Rauschenbeutel. “For that, we will no longer need any exotic machinery which is only found in laboratories. Instead, we can now do it with conventional glass fibre cables which are available everywhere.” Darrick Chang offers a good introduction (i.e., it’s challenging but you don’t need a physics degree to read it) and some analysis of this work in his Nov. 4, 2013 article for Physics (6, 121 (2013) DOI: 10.1103/Physics.6.121) titled: Viewpoint: A Single-Atom Optical Switch. Quantum scientists over the past two decades have dreamt of realizing powerful new information technologies that exploit the laws of quantum mechanics in their operation. While many approaches are being pursued, a prevailing choice consists of using single atoms and particles of light—single photons—as the fundamental building blocks of these technologies . In this paradigm, one envisions that single atoms naturally act as quantum processors that produce and interface with single photons, while the photons naturally act as wires to carry information between processors. Reporting in Physical Review Letters, researchers at the Vienna University of Technology, Austria, have taken an important step forward in this pursuit, by experimentally demonstrating a microphotonic optical switch that is regulated by just a single atom . This article is open access. For those willing to tackle a more challenging paper, here’s a link to and a citation for the Vienna University of Technology researchers’ paper, Fiber-Optical Switch Controlled by a Single Atom by Danny O’Shea, Christian Junge, Jürgen Volz, and Arno Rauschenbeute. Phys. Rev. Lett. 111, 193601 (2013) [5 pages] This work is behind a paywall. Minutes after publishing: here’s an image that illustrates superpositioning in a quantum switch,
<urn:uuid:da30a40d-505d-4369-8c87-f77bb28fabfe>
CC-MAIN-2015-32
http://www.frogheart.ca/?tag=superpositions
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990611.52/warc/CC-MAIN-20150728002310-00134-ip-10-236-191-2.ec2.internal.warc.gz
en
0.898674
1,095
3.578125
4
It's a machine that could calculate solutions to problems so impossibly time-consuming that even the most powerful supercomputers could never handle them. And it would do so in an instant. This is the quantum computer, made possible by the bizarre nature of quantum mechanics. And though the idea is still in its infancy, it's no fantasy. Two research teams, at Harvard University and the Max Planck Institute of Quantum Optics in Germany, have just announced that they have independently forged the building blocks for tomorrow's quantum computers. As they published today in the journal Nature (1, 2), the scientists discovered a way to hook up atoms and particles of light to create a new type of switch and logic-gate‚ quantum versions of the connecting structures that link bits of data in modern computers. When you dive down into the circuits, all modern computers are basically the same: a huge collection of data arranged with simple rules. Each piece of data is called a bit and shows just one fragment of information‚ a 0 or a 1. You can think of a bit as a lightbulb that's either shining or not. But quantum theory‚ the physics that rules the tiny world of atoms and particles‚ tells us that there are certain circumstances in which a piece of matter can be two things at the same time. It's possible to have an atom that's spinning in two opposite directions at once, or even to have your lightbulb both shining and not shining. Items with this wacky dual state are said to be in "superposition." (Physicist Niels Bohr once said, "Those who are not shocked when they first come across quantum theory cannot possibly have understood it." So don't worry if you're confused‚ Bohr was one of the founders of quantum theory.) The most important catch (there are plenty) is that this superposition state is fragile and possible only for incredibly tiny bits of matter. But for computers, this very idea poses an interesting prospect. If you could somehow harness this odd state of matter to put individual bits of information into superposition, then suddenly you've packed more data into the tiniest package possible. Your bits can now show a 0, a 1, or a combo of both. This is called a quantum bit, or a qubit. And if qubits were linked together like normal bits are linked in a computer, then you'd have a machine could calculate at insane speeds. "At this point, very small-scale quantum computers already exist," says Mikhail Lukin, the head of the Harvard research team. "We're able to link, roughly, up to a dozen qubits together. But a major challenge facing this community is scaling these systems up to include more and more qubits." The problem of adding more qubits, Lukin explains, is tied to the fragility of the superposition state. Unless the entire quantum computer is kept at extremely cold temperatures and free of any interfering particles or other noise, the superposition state will entirely collapse for all the qubits, ruining the computer. What makes this even harder is that today's qubits must be close to one another to be connected, and it takes a massive apparatus of machinery, lab equipment, and lasers to support the superposition state of just a single fleck of matter. That dumps an increasing amount of grit into the system, increasing the chance that the entire quantum computer will fail. "It's just very difficult to address one qubit without interfering with all the rest of them; to take a laser beam and shine it one particular qubit and not another," says Gerhard Rempe, the head of the Max Planck Institute of Quantum Optics research team. "And if, for example, you want to use 10,000 qubits, well, that's 10,000 lasers you have to worry about." The Ol' Gate and Switch The new quantum logic gate and switch unveiled today promise to ameliorate some of these problems. Both use a new method: They harness trapped atoms (in both cases, rubidium) that can transfer information through photons, the particles that make up light. Photons, which can be directed through fiber-optic cable, are the prime candidate for sending information at great distances and keeping qubits apart. Here is how it works: The scientists trap a heavy rubidium atom between two mirror-like sheets using a laser technique that keeps the atom relatively immobile. The scientists then send a photon straight at this atom sandwich. Normally, the photon would hit the first mirror and bounce right back where it came from. But if the atom is put in a specific energetic state, the photon will go straight through that first mirror, hang out with the atom for a moment, and then exit where it came from. As a going-away present, the photon also has a slight change in polarization. This is pretty much how any switch in a computer works. If something is "on," then one thing happens. If it's "off," then another thing happens. But here's the tricky part. The scientists can put the rubidium atom in superposition, so that it is simultaneously in that energetic state and not in the energetic state. It's on and off. Because of this, the photon both does and does not enter the mirror, mingle, and gain its polarization change. And the photon, by virtue of having both changed and not changed, carries that superposition information and can bring it to a different atom-based qubit. A similar process happens with the quantum logic gate. A normal logic gate is just a series of switches set up in a way that together, they perform a logical operation when given multiple inputs. The German team created a quantum version by having multiple photons repeatedly bounce off the mirror-trapped and superpositioned rubidium atom. Then, using another funky attribute of quantum physics called entanglement swapping, the scientists made it so that the photons share the same information. These entangled photons can become the multiple inputs required for any logic gate. Even with this new advancement, we're still a long way from building large-scale quantum computers, with thousands of qubits linked together. "We're not going to see quantum computers being built for the average American consumer in ten years, or anything like that," says Jeff Thompson, a physicist with the Harvard research team. Rempe says that while this technology seems promising for solving the qubit-closeness issue, neither team is actually attempting to link multiple qubits. And that endeavor will probably open up a new world of unknowns. Nonetheless, "It's exciting to see this [photon-based] technology is coming into its own," says Jacob Taylor, a physicist at the University of Maryland who was not involved with the projects. Whatever future difficulties arise, he says, scientists are learning valuable information about one of the most fundamental aspects of physics. Everything we know about quantum mechanics would lead us to believe that large-scale quantum computers should be theoretically possible. But even if "you couldn't build a large-scale quantum computer," he says, "that's somewhat exciting, too. That tells us that our theory of quantum mechanics might be breaking down somewhere, that we still have much to learn."
<urn:uuid:6cb68a0a-e19f-4b3c-9c80-d7ad736d1d28>
CC-MAIN-2015-32
http://www.popularmechanics.com/science/a10425/two-big-steps-toward-the-quantum-computer-16682595/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987034.19/warc/CC-MAIN-20150728002307-00088-ip-10-236-191-2.ec2.internal.warc.gz
en
0.947583
1,486
3.90625
4
Physics student Brian Vlastakis GRD ’15 works in the lab of Yale applied physics professor Robert Schoelkopf, associate director of the Yale Institute for Nanoscience and Quantum Engineering. Vlastakis sat down with the News on Monday to discuss quantum computing. Q: Can you briefly summarize the importance of quantum computing? A: The idea for our field of quantum information and quantum computation is trying to manipulate quantum mechanics in order to perform very complicated computation algorithms. A classical computer is made up of very many digital bits that have a 0 or a 1 state. In a quantum computer, the bit is now acting quantum mechanically. A quantum mechanical bit — we call it a qubit — is forced to obey the laws of quantum mechanics, in the sense that it’s not just in one place at once. It can be both 0 and 1 at the same time, and the idea here is that you’re performing multiple calculations at once. A nice analogy that people like to use for quantum computers is that you’re kind of essentially doing the ultimate “parallel processing.” The quantum processor is sort of like having many classical processors all performing a calculation in parallel, doing separate smaller calculations and then putting them together. Q: How does the lab that you’re working in contribute to quantum information? A: We’re trying to build quantum computers, but there are many ways you can implement them. One way, what we do, is called superconducting qubits. In quantum information, you want to be able to create quantum bits, which are essentially a system with ground-state energy, representing 0, and some excited-state energy level, representing 1. You want to be able to address the transition between these states. We’re trying to create these “two-level systems,” which is just another word for a quantum bit, and we’re creating them with superconducting circuits. There are other crazy ways to make quantum bits, but what’s really nice about the way that we’re making these quantum bits is that we’re able to print them out on a circuit board. This is actually the same technique that big companies use to make regular computers. This makes the field that we’re in very exciting, because a lot of these companies say that if you guys can figure out how to control them and understand them, then we can make them. Now, we’re slowly trying to put all these components together in order to perform very rudimentary quantum algorithms. What’s exciting is that these really have a great potential to scale up and become powerful quantum computers. Q: How do you hope to expand to scaling up? A: When you have only a few quantum bits, it’s okay if they mess up every once in a while, because the probability of only one messing up is pretty slim. But if you had a million of those bits, there’s a very good chance that one of them will mess up when you’re doing your algorithm. This is actually a very difficult thing for quantum algorithms, because quantum bits are extremely sensitive to errors that might occur to them. Unfortunately for these quantum bits, any fluctuation between the 0 and 1 states actually corresponds to a completely different quantum state. So, we need to know precisely what state our bit is actually in. What this requires is something called quantum error correction. This is what almost everyone in quantum computation is striving to achieve. Being able to do quantum error correction will be the biggest stepping stone is scaling up to these very large scales of quantum bits. We’ll forever be stuck in these few qubit systems until we can sort out quantum error correction. So the big five-year goal in the field is to try to be able to perform rudimentary quantum error correction schemes. Q: How will the work of your lab contribute to the quantum error correction? A: What’s really great about using superconducting qubits is that they are circuits, so if we want to have one qubit interact with another qubit, we can just design a system where’s there’s just a wire that attaches them. This has a lot of really big advantages if we want to implement a type of quantum error correction. We can design a system where different qubits will only interact with other certain qubits. That’s one of the things we’re actively exploring right now. The thing that I’m actually looking into is seeing if we can go beyond just using a quantum bit for these sorts of error correction schemes and regular quantum algorithms. So, something that I’m looking into is using a resonator. In quantum mechanics you have these two-level systems, and then what you call “harmonic oscillators” — or “resonators.” I’m trying to look if we can use cavity resonators as a resource for some sort of quantum memory. There are many ways you can think of a cavity. Typically when we say “cavity” you think of photons, so a cavity resonator is just a box that’s trapping photons, and they’re forced to bounce back and forth inside this cavity. A typical one that most people think of is just two mirrors facing each other — if you send in light, you just get light that’s stuck bouncing back and forth. We can essentially create the same thing with these superconducting circuits.
<urn:uuid:de4cb3a4-d456-4599-87f9-63f1bd66eee2>
CC-MAIN-2015-32
http://yaledailynews.com/blog/2013/04/16/qa-quantum-computing/
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988051.33/warc/CC-MAIN-20150728002308-00288-ip-10-236-191-2.ec2.internal.warc.gz
en
0.94291
1,149
3.5
4
pairs power quantum plan Technology Research News The shortest route to practical quantum computers, which promise to be phenomenally powerful, may be through proven manufacturing processes, namely the semiconductor technology of today's computer chips. It wouldn't hurt if the machines also used aspects of quantum physics that are relatively easy to control. Researchers from Hewlett-Packard Laboratories and Qinetiq plc in England have mapped out a way to manipulate a pair of very cold electrons that could eventually lead to practical quantum computers made from quantum dots, or tiny specks of the type of semiconductor material used in electronics. The researchers showed that at low temperatures, a pair of trapped electrons operate relatively simply and can be manipulated using electric and magnetic fields. "For... two electrons in a square-shaped quantum dot, there are just two states," John Jefferson, a senior fellow at Qinetiq. The electrons repel each other to diagonally-opposite corners of the quantum dot, leaving the two electrons in one of two possible configurations: upper right corner and lower left corner, or upper left corner and lower These two states can represent the 1s and 0s of digital information; the quantum dots, or qubits, that contain them are the quantum computing equivalent of today's computer transistors, which use the presence or absence of electricity to represent 1s and 0s. Quantum computers have the potential to solve very large problems fantastically fast. The weird rules that quantum particles like atoms and electrons follow allow them to be in some mix of states at once, so a qubit can be a mix of both 1 and 0. This means that a single string of qubits can represent every possible answer to a problem at once. This allows a quantum computer to use one set of operations to check every potential answer to a problem. Today's electronic computers are much slower, in contrast, because they must check answers one at a time. Key to the researchers method is the square shape of the microscopic quantum dot -- a speck of the semiconductor gallium arsenide measuring 800 nanometers a side -- that they used to trap the electrons. A nanometer is one millionth of a millimeter. "Two electrons in a square quantum dot repel each other [to the corners] due to the usual Coulomb repulsion force between them," The Coulomb force kicks in when particles carry a charge. Particles of the same charge, like electrons, which are negatively charged, repel each Due to the weird nature of quantum particles, however, the electron pair may also jump, or tunnel, from one position, or state, to the other, said Jefferson. "This happens periodically... and the system can also be in a strange superposition state where it is partly in one state and partly in the other," he said. "This is the basis of our two-electron semiconductor The researchers showed that they could use voltage pulses and magnetic fields to take this type of qubit through all the necessary operations needed to compute, said Jefferson. This was tricky because it is not possible to turn the Coulomb force on and off, said Jefferson. "A severe potential problem with the Coulomb interaction is that it is always there," he said. The researchers showed, however, that it is possible to control the effects of the force, and thus harness it to do computing. The researchers scheme differs from many other quantum dot quantum computing designs because it uses the positions of two electrons rather than their spin, which is a quality that can be likened to a top spinning clockwise or counterclockwise. The electrons' positions determine the charge states of the quantum dot, meaning if an electron is in one corner of the quantum dot that corner has a charge. "It is often easier to manipulate charge states compared to spin states," said Jefferson. In addition, "it is... certainly easier to measure charge states compared to spin states," he To turn this building block into a practical computing device, however, the qubits must be stable. This requires "some means of preparing the qubits in a specific state, after which they have to [be affected only] according to the basic laws of quantum mechanics," said Jefferson. This includes isolating them from other interactions, he said. Practical quantum computers would require hundreds or thousands of connected qubits. "It should be possible to add more qubits," said Jefferson. There must also be a way to measure the final results when the computation has taken place, he said. The researchers showed that these requirements can theoretically be satisfied using the two-electron qubits, said Jefferson. "In principle, these criteria may be met, though to do so in a practical device would be technologically very challenging," he said. Researchers generally agree that practical quantum computing of any type is one to two decades away. "Ten to 20 years is more realistic than 2 to 5," for a practical application of the two-electronic quantum dots, Rather than using semiconductor quantum dots, the researchers' basic method could possibly be achieved more quickly and effectively using a series of individual molecules, said Jefferson. "The energy and temperature scales [for molecules] are higher and thus less prone to random errors," he added. This could address one of the main hurdles to using qubits practically, Jefferson said. "One of the main challenges is to reduce the interaction of a quantum system with its environment -- the so-called decoherence problem," he said. The other main technical challenge to using the system practically would be to produce quantum dots containing precisely two electrons, and to coax the electrons to switch states with acceptable error rates, he said. Jefferson's research colleagues were M. Fearn and D. L. J. Tipton of Qinetiq and Timothy P. Spiller of Hewlett-Packard Laboratories. They published the research in the October 30, 2002 issue of the journal Physical Review A. The research was funded by the British Ministry of Defense, the European Union, Hewlett-Packard and Qinetiq. Timeline: 10-20 years Funding: Corporate, Government TRN Categories: Physics; Quantum Computing and Communications Story Type: News Related Elements: Technical paper, "Two-Electron Quantum Dots as Scalable Qubits," Physical Review A, October 30, 2002. Interface gets the point orders metal bits Hubs increase Net risk Electron pairs power could speed storage Research News Roundup Research Watch blog View from the High Ground Q&A How It Works News | Blog Buy an ad link
<urn:uuid:0b93bcd3-749d-4636-bc13-408f8f748fb4>
CC-MAIN-2015-32
http://www.trnmag.com/Stories/2003/010103/Electron_pairs_power_quantum_plan_122502.html
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986444.39/warc/CC-MAIN-20150728002306-00150-ip-10-236-191-2.ec2.internal.warc.gz
en
0.914821
1,440
4.21875
4