Over 400 million transistors are packed on dual-core chips manufactured using Intel's 45nm process. That'll double soon, per Moore's Law. And it'll still be like computing with pebbles compared to quantum computing.
Quantum computing is a pretty complicated subject—uh, hello, quantum mechanics plus computers. I'm gonna keep it kinda basic, but recent breakthroughs like this one prove that you should definitely start paying attention to it. Some day, in the future, quantum computing will be cracking codes, powering web searches, and maybe, just maybe, lighting up our Star Trek-style holodecks.
Before we get to the quantum part, let's start with just "computing." It's about bits. They're the basic building block of computing information. They've got two states—0 or 1, on or off, true or false, you get the idea. But two defined states is key. When you add a bunch of bits together, usually 8 of 'em, you get a byte. As in kilobytes, megabytes, gigabytes and so on. Your digital photos, music, documents, they're all just long strings of 1s and 0s, segmented into 8-digit strands. Because of that binary setup, a classical computer operates by a certain kind of logic that makes it good at some kinds of computing—the general stuff you do everyday—but not so great at others, like finding ginormous prime factors (those things from math class), which are a big part of cracking codes.
Quantum computing operates by a different kind of logic—it actually uses the rules of quantum mechanics to compute. Quantum bits, called qubits, are different from regular bits, because they don't just have two states. They can have multiple states, superpositions—they can be 0 or 1 or 0-1 or 0+1 or 0 and 1, all at the same time. It's a lot deeper than a regular old bit. A qubit's ability to exist in multiple states—the combo of all those being a superposition—opens up a big freakin' door of possibility for computational powah, because it can factor numbers at much more insanely fast speeds than standard computers.
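In standard textbook notation (my gloss, not the article's), a qubit's state is a weighted combination of the two basis states:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

and a register of n qubits can sit in a superposition of all \(2^n\) bit strings at once, \(|\Psi\rangle = \sum_x c_x |x\rangle\). That exponentially large state space is the formal sense in which a quantum computer can work on a huge number of inputs simultaneously.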
Entanglement—a quantum state that's all about tight correlations between systems—is the key to that. It's a pretty hard thing to describe, so I asked for some help from Boris Blinov, a professor at the University of Washington's Trapped Ion Quantum Computing Group. He turned to a take on Schrödinger's cat to explain it: Basically, you have a cat in a closed box, and poisonous gas is released. The cat is either dead, 0, or alive, 1. Until I open the box to find out, it exists in both states—a superposition. That superposition is destroyed when I measure it. But suppose I have two cats in two boxes that are correlated, and you go through the same thing. If I open one box and the cat's alive, it means the other cat is too, even if I never open its box. It's a quantum phenomenon that's a stronger correlation than you can get in classical physics, and because of that you can do something like this with quantum algorithms—change one part of the system, and the rest of it will respond accordingly, without changing the rest of the operation. That's part of the reason it's faster at certain kinds of calculations.
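Blinov's two-cat picture corresponds to what's called a Bell state (again standard notation, my gloss rather than the article's):

\[
|\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}},
\]

that is, "both dead" and "both alive" held in superposition. Measuring either qubit as 1 ("alive") instantly fixes the other at 1 as well, and Bell's theorem shows that no classical scheme of pre-agreed values can reproduce all the correlations such states exhibit under different measurement settings.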
The other, explains Blinov, is that you can achieve true parallelism in computing—actually process a lot of information in parallel, "not like Windows" or even other types of classic computers that profess parallelism.
So what's that good for? For example, a password that might take years to crack via brute force using today's computers could take mere seconds with a quantum computer, so there's plenty of crazy stuff that Uncle Sam might want to put it to use for in cryptography. And it might be useful to search engineers at Google, Microsoft and other companies, since you can search and index databases much, much faster. And let's not forget scientific applications—no surprise, classic computers really suck at modeling quantum mechanics. The National Institute of Standards and Technology's Jonathan Home suggests that given the way cloud computing is going, if you need an insane calculation performed, you might rent time and farm it out to a quantum mainframe in Google's backyard.
The reason we're not all blasting on quantum computers now is that this quantum mojo is, at the moment, extremely fragile. And it always will be, since quantum states aren't exactly robust. We're talking about working with ions here—rather than electrons—and if you think heat is a problem with processors today, you've got no idea. In the breakthrough by Home's team at NIST—completing a full set of quantum "transport" operations, moving information from one area of the "computer" to another—they worked with a single pair of atoms, using lasers to manipulate the states of beryllium ions, storing the data and performing an operation, before transferring that information to a different location in the processor. What allowed it to work, without busting up the party and losing all the data through heat, were magnesium ions cooling the beryllium ions as they were being manipulated. And those lasers can only do so much. If you want to manipulate more ions, you have to add more lasers.
Hell, quantum computing is so fragile and unwieldy that when we talked to Home, he said much of the effort goes into methods of correcting errors. In five years, he says, we'll likely be working with a mere tens of qubits. The stage it's at right now, says Blinov, is "the equivalent of building a reliable transistor" back in the day. But that's not to say those tens of qubits won't be useful. While they won't be cracking stuff for the NSA—you'll need about 10,000 qubits for cracking high-level cryptography—that's still enough quantum computing power to calculate properties for new materials that are hard to model with a classic computer. In other words, materials scientists could be developing the case for the iPhone 10G or the building blocks for your next run-of-the-mill Intel processor using quantum computers in the next decade. Just don't expect a quantum computer on your desk in the next 10 years.
Special thanks to the National Institute of Standards and Technology's Jonathan Home and the University of Washington's Professor Boris Blinov!
Still something you wanna know? Send questions about quantum computing, quantum leaps or undead cats to email@example.com, with "Giz Explains" in the subject line.
Quantum entanglement is a state where two particles have correlated properties: when you make a measurement on one, it constrains the outcome of the measurement on the second, even if the two particles are widely separated. It's also possible to entangle more than two particles, and even to spread out the entanglements over time, so that a system that was only partly entangled at the start is made fully entangled later on.
This sequential process goes under the clunky name of "delayed-choice entanglement swapping." And, as described in a Nature Physics article by Xiao-song Ma et al., it has a rather counterintuitive consequence. You can take a measurement before the final entanglement takes place, but the measurement's results depend on whether or not you subsequently perform the entanglement.
Delayed-choice entanglement swapping consists of the following steps. (I use the same names for the fictional experimenters as in the paper for convenience, but note that they represent acts of measurement, not literal people.)
- Two independent sources (labeled I and II) produce pairs of photons such that their polarization states are entangled. One photon from I goes to Alice, while one photon from II is sent to Bob. The second photon from each source goes to Victor. (I'm not sure why the third party is named "Victor".)
- Alice and Bob independently perform polarization measurements; no communication passes between them during the experiment—they set the orientation of their polarization filters without knowing what the other is doing.
- At some time after Alice and Bob perform their measurements, Victor makes a choice (the "delayed choice" in the name). He either allows his two photons from I and II to travel on without doing anything, or he combines them so that their polarization states are entangled. A final measurement determines the polarization state of those two photons.
The results of all four measurements are then compared. If Victor did not entangle his two photons, the photons received by Alice and Bob are uncorrelated with each other: the outcomes of their measurements are consistent with random chance. If Victor entangled the photons, then Alice and Bob's photons have correlated polarizations—even though they were not part of the same system and never interacted. (This is the "entanglement swapping" portion of the name.)
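The algebra behind that correlation is the standard entanglement-swapping identity (a textbook decomposition, not taken from the paper itself). Writing A and B for Alice's and Bob's photons and 1 and 2 for Victor's, a product of two Bell pairs can be re-expressed in the Bell basis of the other pairing:

\[
|\Phi^{+}\rangle_{A1}\,|\Phi^{+}\rangle_{B2}
= \tfrac{1}{2}\Big( |\Phi^{+}\rangle_{AB}|\Phi^{+}\rangle_{12}
+ |\Phi^{-}\rangle_{AB}|\Phi^{-}\rangle_{12}
+ |\Psi^{+}\rangle_{AB}|\Psi^{+}\rangle_{12}
+ |\Psi^{-}\rangle_{AB}|\Psi^{-}\rangle_{12} \Big).
\]

If Victor projects photons 1 and 2 onto a Bell state, Alice's and Bob's photons are left in the matching Bell state; if he instead measures his photons separately, no correlation between A and B appears.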
The practicalities of delayed-choice entanglement swapping bear many similarities to other entanglement experiments. Ma et al. sent pulsed light from an ultraviolet laser through two separate beta-barium borate (BBO) crystals, which respond by emitting two photons with entangled polarizations but equal wavelength. The BBO crystals acted as the sources labeled I and II above; the oppositely polarized photons they produced were sent down separate paths. One path for each BBO crystal led to a polarization detector ("Alice" and "Bob"), while the other passed through a fiber-optic cable 104 meters long before arriving at the "Victor" apparatus.
That little bit of cabling was enough to ensure that anything that happened at Victor occurred after Alice and Bob had done their measurements.
The choice about entangling the photons at the Victor apparatus was made by a random-number generator, and passed through a tunable bipartite state analyzer (BiSA). The BiSA contained two beam-splitters that select photons' paths depending on their polarization, along with a device that rotated the polarization of the photons. Depending on the "choice" to entangle or not, the polarization of the photons from I and II were made to correlate or left alone. Finally, the polarization of both photons at Victor were measured, and compared with the results from Alice and Bob.
Due to the 104-meter fiber-optic cable, Victor's measurements occurred at least 14 billionths of a second after those of Alice and Bob, precluding the idea that the setting of the BiSA caused the polarization results to change. Comparatively few photons made it all the way through every step of the experiment, but that reflects the difficulty of working with so few photons rather than a problem with the results.
Ma et al. found to a high degree of confidence that when Victor selected entanglement, Alice and Bob found correlated photon polarizations. This didn't happen when Victor left the photons alone.
Suffice it to say that facile explanations about information passing between Alice's and Bob's photons lead to violations of causality, since Alice and Bob perform their polarization measurement before Victor makes his choice about whether to entangle his photons or not. (Similarly, if you think that all the photons come from a single laser source, they must be correlated from the start, and you must answer how they "know" what Victor is going to do before he does it.)
The picture certainly looks like future events influence the past, a view any right-minded physicist would reject. The authors conclude with some strong statements about the nature of physical reality that I'm not willing to delve into (the nature of physical reality is a bit above my pay grade).
As always with entanglement, it's important to note that no information is passing between Alice, Bob, and Victor: the settings on the detectors and the BiSA are set independently, and there's no way to communicate faster than the speed of light. Nevertheless, this experiment provides a realization of one of the fundamental paradoxes of quantum mechanics: that measurements taken at different points in space and time appear to affect each other, even though there is no mechanism that allows information to travel between them.
Monads in Haskell can be thought of as composable computation descriptions. The essence of monad is thus separation of composition timeline from the composed computation's execution timeline, as well as the ability of computation to implicitly carry extra data as pertaining to the computation itself in addition to its one (hence the name) output. This lends monads to supplementing pure calculations with features like I/O, common environment or state, and to preprocessing of computations (simplification, optimization etc.).
Each monad, or computation type, provides a means of (a) creating a description of a computation that will produce a given value, (b) running a computation description (CD) and returning its output to Haskell, and (c) combining a CD with a Haskell function that consumes its output and returns another CD, to create a combined one. It might also define additional primitives to provide access to and/or enable manipulation of data it implicitly carries, specific to its nature.
Thus in Haskell, though it is a purely-functional language, side effects that will be performed by a computation can be dealt with and combined purely at the monad's composition time. Monads thus resemble programs in a particular DSL. While programs may describe impure effects and actions outside Haskell, they can still be combined and processed ("compiled") purely inside Haskell, creating a pure Haskell value - a CD that describes an impure calculation. The combined computations don't have to be impure and can be pure themselves as well.
Because they are very useful in practice but rather mind-twisting for the beginners, numerous tutorials that deal exclusively with monads were created (see monad tutorials).
1 Common monads
Most common applications of monads include:
- Representing failure using the Maybe monad
- Nondeterminism through backtracking using the List monad
- State using the State monad
- Read-only environment using the Reader monad
- I/O using the IO monad
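As a small illustration of the first item (my own sketch, not from this page): the Maybe monad threads possible failure through a chain of steps, and a Nothing at any point short-circuits the rest.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Each step may fail; a Nothing anywhere makes the whole result Nothing.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  r1 <- safeDiv a b
  r2 <- safeDiv r1 c
  return (r2 + 1)

-- calc 100 5 2 == Just 11
-- calc 100 0 2 == Nothing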
2 Monad class

Monads can be viewed as a standard programming interface to various data or control structures, which is captured by the Monad class:

class Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b
  return :: a -> m a
  fail   :: String -> m a
In addition to implementing the class functions, all instances of Monad should obey the following equations:
return a >>= k           =  k a
m >>= return             =  m
m >>= (\x -> k x >>= h)  =  (m >>= k) >>= h
See this intuitive explanation of why they should obey the Monad laws.
Any Monad can be made a Functor by defining
fmap ab ma = ma >>= (return . ab)
However, the Functor class is not a superclass of the Monad class. See Functor hierarchy proposal.
3 Special notation

In order to improve the look of code that uses monads, Haskell provides a special syntactic sugar called do-notation. For example, the expression:

thing1 >>= (\x -> func1 x >>= (\y -> thing2 >>= (\_ -> func2 y >>= (\z -> return z))))
which can be written more clearly by breaking it into several lines and omitting parentheses:
thing1  >>= \x ->
func1 x >>= \y ->
thing2  >>= \_ ->
func2 y >>= \z ->
return z

or, equivalently, using do-notation:

do x <- thing1
   y <- func1 x
   thing2
   z <- func2 y
   return z
4 Commutative monads
Commutative monads are monads for which the order of actions makes no difference (they commute); that is, the following code:
do a <- f x
   b <- g y
   m a b
is the same as:
do b <- g y
   a <- f x
   m a b
Examples of commutative monads include:
- the Reader monad
- the Maybe monad
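A quick way to see commutativity concretely (my own sketch, not from this page): in the Maybe monad the two orderings above compute the same value, whereas in the IO monad swapping two actions, such as print statements, visibly changes behavior.

p1, p2 :: Maybe Int
p1 = do { a <- Just 2; b <- Just 3; return (a * b) }
p2 = do { b <- Just 3; a <- Just 2; return (a * b) }
-- p1 == p2 == Just 6: the order of the two independent binds does not matter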
5 Monad tutorials
Monads are known for being deeply confusing to lots of people, so there are plenty of tutorials specifically related to monads. Each takes a different approach to Monads, and hopefully everyone will find something useful.
See Monad tutorials.
6 Monad reference guides
An explanation of the basic Monad functions, with examples, can be found in the reference guide A tour of the Haskell Monad functions, by Henk-Jan van Tuyl.
7 Monad research
A collection of research papers about monads.
8 Monads in other languages
Implementations of monads in other languages.
- C++, doc
- CML.event ?
- Clean State monad
- Java (tar.gz)
- LINQ, more, C#, VB
- Perl6 ?
- The Unix Shell
- More monads by Oleg
- CLL: a concurrent language based on a first-order intuitionistic linear logic where all right synchronous connectives are restricted to a monad.
And possibly there exist:
- Standard ML (via modules?)
Please add them if you know of other implementations.
9 Interesting monads
A list of monads for various evaluation strategies and games:
- Identity monad
- Optional results
- Random values
- Read only state
- Writable state
- Unique supply
- ST - memory-only effects
- Global state
- Undoable state effects
- Function application
- Functions which may error
- Atomic memory transactions
- IO - unrestricted side effects
- Non-deterministic evaluation
- List monad: computations with multiple choices
- Concurrent threads
- Backtracking computations
- Region allocation effects
- LogicT: backtracking monad transformer with fair operations and pruning
- Pi calculus as a monad
- Halfs, uses a read-only and write-only monad for filesystem work.
- House's H monad for safe hardware access
- Commutable monads for parallel programming
- The Quantum computing monad
- Simple, Fair and Terminating Backtracking Monad
- Typed exceptions with call traces as a monad
- Breadth first list monad
- Continuation-based queues as monads
- Typed network protocol monad
- Non-Determinism Monad for Level-Wise Search
- Transactional state monad
- A constraint programming monad
- A probability distribution monad
There are many more interesting instances of the monad abstraction out there. Please add them as you come across each species.
- If you are tired of monads, you can easily get rid of them.
The Next Big Scientific Breakthrough: Sun-Earth Interactions
Quantum computing, nanotechnology and genetic engineering are exciting fields. But understanding the interaction between the Sun and Earth is at least as important as a scientific frontier.
The Sun Affects Clouds and Ozone, Which In Turn Affect Climate
For example, one of the world’s most prestigious science labs has just demonstrated that cosmic rays affect cloud formation – which in turn affects climate – on Earth. Because the sun’s output directly determines the amount of cosmic rays which reach the Earth, the sun is an important driver of the Earth’s climate.
And as I noted last year:
Intense solar activity can destroy ozone in the Earth’s atmosphere, thus affecting climatic temperatures. See this, this, this and this. Indeed, the effects of solar energy on ozone may be one of the main ways in which the sun influences Earth’s climate.
The Sun’s Output Changes the Rate of Radioactive Decay On Earth
Believe it or not, Stanford University News reported Tuesday that solar flares change the rate of radioactive decay of elements on Earth:
When researchers found an unusual linkage between solar flares and the inner life of radioactive elements on Earth, it touched off a scientific detective investigation that could end up protecting the lives of space-walking astronauts and maybe rewriting some of the assumptions of physics.
The radioactive decay of some elements sitting quietly in laboratories on Earth seemed to be influenced by activities inside the sun, 93 million miles away.
Is this possible?
Researchers from Stanford and Purdue University believe it is. But their explanation of how it happens opens the door to yet another mystery.
There is even an outside chance that this unexpected effect is brought about by a previously unknown particle emitted by the sun. “That would be truly remarkable,” said Peter Sturrock, Stanford professor emeritus of applied physics and an expert on the inner workings of the sun.
The story begins, in a sense, in classrooms around the world, where students are taught that the rate of decay of a specific radioactive material is a constant. This concept is relied upon, for example, when anthropologists use carbon-14 to date ancient artifacts and when doctors determine the proper dose of radioactivity to treat a cancer patient.
As the researchers pored through published data on specific isotopes, they found disagreement in the measured decay rates – odd for supposed physical constants.
Checking data collected at Brookhaven National Laboratory on Long Island and the Federal Physical and Technical Institute in Germany, they came across something even more surprising: long-term observation of the decay rate of silicon-32 and radium-226 seemed to show a small seasonal variation. The decay rate was ever so slightly faster in winter than in summer.
On Dec 13, 2006, the sun itself provided a crucial clue, when a solar flare sent a stream of particles and radiation toward Earth. Purdue nuclear engineer Jere Jenkins, while measuring the decay rate of manganese-54, a short-lived isotope used in medical diagnostics, noticed that the rate dropped slightly during the flare, a decrease that started about a day and a half before the flare.
If this apparent relationship between flares and decay rates proves true, it could lead to a method of predicting solar flares prior to their occurrence, which could help prevent damage to satellites and electric grids, as well as save the lives of astronauts in space.
The decay-rate aberrations that Jenkins noticed occurred during the middle of the night in Indiana – meaning that something produced by the sun had traveled all the way through the Earth to reach Jenkins’ detectors. What could the flare send forth that could have such an effect?
Jenkins and Fischbach guessed that the culprits in this bit of decay-rate mischief were probably solar neutrinos, the almost weightless particles famous for flying at almost the speed of light through the physical world – humans, rocks, oceans or planets – with virtually no interaction with anything.
Going back to take another look at the decay data from the Brookhaven lab, the researchers found a recurring pattern of 33 days. It was a bit of a surprise, given that most solar observations show a pattern of about 28 days – the rotation rate of the surface of the sun.
The explanation? The core of the sun – where nuclear reactions produce neutrinos – apparently spins more slowly than the surface we see. “It may seem counter-intuitive, but it looks as if the core rotates more slowly than the rest of the sun,” Sturrock said.
All of the evidence points toward a conclusion that the sun is “communicating” with radioactive isotopes on Earth, said Fischbach.
“It doesn’t make sense according to conventional ideas,” Fischbach said. Jenkins whimsically added, “What we’re suggesting is that something that doesn’t really interact with anything is changing something that can’t be changed.”
“It’s an effect that no one yet understands,” agreed Sturrock. “Theorists are starting to say, ‘What’s going on?’ But that’s what the evidence points to. It’s a challenge for the physicists and a challenge for the solar people too.”
If the mystery particle is not a neutrino, “It would have to be something we don’t know about, an unknown particle that is also emitted by the sun and has this effect, and that would be even more remarkable,” Sturrock said.
The Sun Interacts With the Earth In Numerous Other Ways
I pointed out last year that the sun affects the Earth in many more ways than scientists knew:
The sun itself also affects the Earth more than previously understood. For example, according to the European Space Agency:
Scientists … have proven that sounds generated deep inside the Sun cause the Earth to shake and vibrate in sympathy. They have found that Earth’s magnetic field, atmosphere and terrestrial systems, all take part in this cosmic sing-along.
And NASA has just discovered that “space weather” causes “spacequakes” on Earth:
Researchers using NASA’s fleet of five THEMIS spacecraft have discovered a form of space weather that packs the punch of an earthquake and plays a key role in sparking bright Northern Lights. They call it “the spacequake.”
A spacequake is a temblor in Earth’s magnetic field. It is felt most strongly in Earth orbit, but is not exclusive to space. The effects can reach all the way down to the surface of Earth itself.
“Magnetic reverberations have been detected at ground stations all around the globe, much like seismic detectors measure a large earthquake,” says THEMIS principal investigator Vassilis Angelopoulos of UCLA.
It’s an apt analogy because “the total energy in a spacequake can rival that of a magnitude 5 or 6 earthquake,” according to Evgeny Panov of the Space Research Institute in Austria.
“Now we know,” says THEMIS project scientist David Sibeck of the Goddard Space Flight Center. “Plasma jets trigger spacequakes.”
According to THEMIS, the jets crash into the geomagnetic field some 30,000 km above Earth’s equator. The impact sets off a rebounding process, in which the incoming plasma actually bounces up and down on the reverberating magnetic field. Researchers call it “repetitive flow rebuffing.” It’s akin to a tennis ball bouncing up and down on a carpeted floor. The first bounce is a big one, followed by bounces of decreasing amplitude as energy is dissipated in the carpet.
“When plasma jets hit the inner magnetosphere, vortices with opposite sense of rotation appear and reappear on either side of the plasma jet,” explains Rumi Nakamura of the Space Research Institute in Austria, a co-author of the study. “We believe the vortices can generate substantial electrical currents in the near-Earth environment.”
Acting together, vortices and spacequakes could have a noticeable effect on Earth. The tails of vortices may funnel particles into Earth’s atmosphere, sparking auroras and making waves of ionization that disturb radio communications and GPS. By tugging on surface magnetic fields, spacequakes generate currents in the very ground we walk on. Ground current surges can have profound consequences, in extreme cases bringing down power grids over a wide area.
What does this mean?
Some allege that spacequakes cause actual, physical earthquakes on Earth. I have no idea whether or not that is true.
The above-quoted NASA article concludes with a poem which implies such a connection:
a magnitude six
The poem may use artistic license rather than scientific rigor. However, some scientists do believe that the sun’s activity can even cause earthquakes, volcanic eruptions and extreme weather.
What is certain is that the science of the effect of space events on Earth is in its infancy, and that there are many fascinating discoveries in our future.
When scientists understand all of the ways that the Sun and Earth interact, we will know a lot more about the Earth and our place in the universe than we do today.
by Paige Brown
Popular television shows such as “Doctor Who” have brought the idea of time travel into the vernacular of popular culture. But the problem of time travel is even more complicated than one might think. LSU’s Mark Wilde has shown that it would theoretically be possible for time travelers to copy quantum data from the past.
It all started when David Deutsch, a pioneer of quantum computing and a physicist at Oxford, came up with a simplified model of time travel to deal with the paradoxes that would occur if one could travel back in time. For example, would it be possible to travel back in time to kill one’s grandfather? In the Grandfather paradox, a time traveler faces the problem that if he kills his grandfather back in time, then he himself is never born, and consequently is unable to travel through time to kill his grandfather, and so on. Some theorists have used this paradox to argue that it is actually impossible to change the past.
“The question is, how would you have existed in the first place to go back in time and kill your grandfather?” said Mark Wilde, an LSU assistant professor with a joint appointment in the Department of Physics and Astronomy and with the Center for Computation and Technology, or CCT.
Deutsch solved the Grandfather paradox originally using a slight change to quantum theory, proposing that you could change the past as long as you did so in a self-consistent manner.
“Meaning that, if you kill your grandfather, you do it with only probability one-half,” Wilde said. “Then, he’s dead with probability one-half, and you are not born with probability one-half, but the opposite is a fair chance. You could have existed with probability one-half to go back and kill your grandfather.”
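Deutsch's self-consistency requirement has a compact textbook form (my summary, not drawn from the LSU paper): the state of the system traversing the closed timelike curve must be a fixed point of the interaction,

\[
\rho_{\mathrm{CTC}} = \mathrm{Tr}_{\mathrm{sys}}\!\left[ U \left( \rho_{\mathrm{sys}} \otimes \rho_{\mathrm{CTC}} \right) U^{\dagger} \right].
\]

For the simplest grandfather-paradox circuit, where the unitary U flips "alive" to "dead," a consistent solution is the maximally mixed state \( \rho_{\mathrm{CTC}} = I/2 \): alive and dead each with probability one-half, exactly the resolution Wilde describes.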
But the Grandfather paradox is not the only complication with time travel. Another problem is the no-cloning theorem, or the no “subatomic Xerox-machine” theorem, known since 1982. This theorem, which is related to the fact that one cannot copy quantum data at will, is a consequence of Heisenberg’s famous Uncertainty Principle, by which one can measure either the position of a particle or its momentum, but not both with unlimited accuracy. According to the Uncertainty Principle, it is thus impossible to have a subatomic Xerox-machine that would take one particle and spit out two particles with the same position and momentum – because then you would know too much about both particles at once.
“We can always look at a paper, and then copy the words on it. That’s what we call copying classical data,” Wilde said. “But you can’t arbitrarily copy quantum data, unless it takes the special form of classical data. This no-cloning theorem is a fundamental part of quantum mechanics – it helps us reason how to process quantum data. If you can’t copy data, then you have to think of everything in a very different way.”
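The standard one-line argument for the no-cloning theorem (a textbook sketch, not specific to this research): suppose a single unitary U could copy any state onto a blank register,

\[
U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle
\quad\text{and}\quad
U\big(|\phi\rangle \otimes |0\rangle\big) = |\phi\rangle \otimes |\phi\rangle .
\]

Because U preserves inner products, \(\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2}\), which forces \(\langle\phi|\psi\rangle\) to be 0 or 1. So only mutually orthogonal states (effectively classical data, as Wilde notes) can be copied by one machine.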
But what if a Deutschian closed timelike curve did allow for copying of quantum data to many different points in space? According to Wilde, Deutsch suggested in his late 20th century paper that it should be possible to violate the fundamental no-cloning theorem of quantum mechanics. Now, Wilde and collaborators at the University of Southern California and the Autonomous University of Barcelona have advanced Deutsch’s 1991 work with a recent paper in Physical Review Letters (DOI: 10.1103/PhysRevLett.111.190401). The new approach allows for a particle, or a time traveler, to make multiple loops back in time – something like Bruce Willis’ travels in the Hollywood film “Looper.”
“That is, at certain locations in spacetime, there are wormholes such that, if you jump in, you’ll emerge at some point in the past,” Wilde said. “To the best of our knowledge, these time loops are not ruled out by the laws of physics. But there are strange consequences for quantum information processing if their behavior is dictated by Deutsch’s model.”
A single looping path back in time, a time spiral of sorts, behaving according to Deutsch’s model, for example, would have to allow for a particle entering the loop to remain the same each time it passed through a particular point in time. In other words, the particle would need to maintain self-consistency as it looped back in time.
“In some sense, this already allows for copying of the particle’s data at many different points in space,” Wilde said, “because you are sending the particle back many times. It’s like you have multiple versions of the particle available at the same time. You can then attempt to read out more copies of the particle, but the thing is, if you try to do so as the particle loops back in time, then you change the past.”
To be consistent with Deutsch’s model, which holds that you can only change the past as long as you can do it in a self-consistent manner, Wilde and colleagues had to come up with a solution that would allow for a looping curve back in time, and copying of quantum data based on a time traveling particle, without disturbing the past.
“That was the major breakthrough, to figure out what could happen at the beginning of this time loop to enable us to effectively read out many copies of the data without disturbing the past,” Wilde said. “It just worked.”
However, there is still some controversy over interpretations of the new approach, Wilde said. In one instance, the new approach may actually point to problems in Deutsch’s original closed timelike curve model.
“If quantum mechanics gets modified in such a way that we’ve never observed should happen, it may be evidence that we should question Deutsch’s model,” Wilde said. “We really believe that quantum mechanics is true, at this point. And most people believe in a principle called Unitarity in quantum mechanics. But with our new model, we’ve shown that you can essentially violate something that is a direct consequence of Unitarity. To me, this is an indication that something weird is going on with Deutsch’s model. However, there might be some way of modifying the model in such a way that we don’t violate the no-cloning theorem.”
Other researchers argue that Wilde’s approach wouldn’t actually allow for copying quantum data from an unknown particle state entering the time loop because nature would already “know” what the particle looked like, as it had traveled back in time many times before.
But whether or not the no-cloning theorem can truly be violated as Wilde’s new approach suggests, the consequences of being able to copy quantum data from the past are significant. Systems for secure Internet communications, for example, will likely soon rely on quantum security protocols that could be broken or “hacked” if Wilde’s looping time travel methods were correct.
“If an adversary, if a malicious person, were to have access to these time loops, then they could break the security of quantum key distribution,” Wilde said. “That’s one way of interpreting it. But it’s a very strong practical implication because the big push of quantum communication is this secure way of communicating. We believe that this is the strongest form of encryption that is out there because it’s based on physical principles.”
Today, when you log into your Gmail or Facebook, your password and information encryption is not based on physical principles of quantum mechanical security, but rather on the computational assumption that it is very difficult for “hackers” to factor mathematical products of prime numbers, for example. But physicists and computer scientists are working on securing critical and sensitive communications using the principles of quantum mechanics. Such encryption is believed to be unbreakable – that is, as long as hackers don’t have access to Wilde’s looping closed timelike curves.
“This ability to copy quantum information freely would turn quantum theory into an effectively classical theory in which, for example, classical data thought to be secured by quantum cryptography would no longer be safe,” Wilde said. “It seems like there should be a revision to Deutsch’s model which would simultaneously resolve the various time travel paradoxes but not lead to such striking consequences for quantum information processing. However, no one yet has offered a model that meets these two requirements. This is the subject of open research.”
Pixels speed quantum crypto
Technology Research News

Researchers working to develop ultra powerful quantum computers and ultra secure quantum cryptography systems generally use subtle aspects of particles like photons and atoms to represent the 1s and 0s of computer information.

When these systems use photons, for example, they tend to tap polarization, phase, or angular momentum -- aspects of light that have to do with the orientation of a lightwave or its electric field.

Researchers from the University of Rochester are using photons to represent data in a simpler way: a photon's position within an array of pixels. The approach also packs more information per photon than standard methods.

The researchers' pixel entanglement method could be used to increase the speed of quantum cryptography systems. Quantum cryptography promises potentially perfect security because the laws of quantum physics make it theoretically impossible for someone eavesdropping on information transmitted this way to go undetected. Today's systems are relatively slow, however.

The researchers' method involves sending each photon of a quantum mechanically linked, or entangled, pair of photons into identical arrays of pixels and observing which pixels light up. Entangled photons have one or more properties that are linked regardless of the distance between them. Measuring one photon instantly causes the other to mirror it.

Standard ways of encoding data into photons use properties of a photon that can be set one of two ways to represent a 1 or a 0. The researchers' scheme packs more information per photon because the number of pixels is the number of possible states. "[Pixel entanglement] allows us to impress more information on the photon pairs, which... in communication schemes can translate into higher bit rates," said Malcolm O'Sullivan-Hale, a researcher at the University of Rochester.

The researchers' scheme works by generating pairs of entangled photons using the standard parametric downconversion method. When ultraviolet photons are fired into a special crystal, some are split into a pair of entangled infrared photons. The researchers then channel the entangled photons separately through a series of lenses into identical arrays of pixels. The entangled pairs occupy the same positions in the two arrays, which, in turn, causes those positions, or pixels, to become entangled. The pixels that are entangled are determined at random, and the random numbers resulting from a series of entangled pixels make up the secret key for encrypting information.

The researchers demonstrated their system using three-pixel arrays, and they also showed that the method works for six-pixel arrays, said O'Sullivan-Hale. A six-pixel array would allow a pair of entangled photons to represent three bits of information.

Pixel entanglement could theoretically be used with much higher numbers of pixels, and the researchers estimated that their system could be used in 16-pixel arrays, meaning each photon pair could represent eight bits of information. "With the possibility of using entangled states with more [than two] levels, we foresee pixel entanglement being useful for distributing quantum keys at high bit rates," said O'Sullivan-Hale.

Today's optical fiber does not preserve lightwaves well enough to allow the method to work over optical networks, said O'Sullivan-Hale. "The most readily imaginable application [of pixel entanglement] is free-space quantum key distribution for the secure transmission of information," he said.

Another important advantage of pixel entanglement for quantum cryptography is that the higher number of possible states for each photon pair makes it harder for an eavesdropper to fool the system, said O'Sullivan-Hale.

Using the technique for practical quantum cryptography will require preserving the entanglement over long distances, minimizing losses and detecting photon positions with adequate resolution, said O'Sullivan-Hale.

Practical applications of pixel entanglement could be realized in five to ten years, said O'Sullivan-Hale.

O'Sullivan-Hale's research colleagues were Irfan Ali Khan, Robert W. Boyd and John C. Howell. They published the research in the June 7, 2005 issue of Physical Review Letters. The research was funded by the National Science Foundation (NSF), the Army Research Office (ARO), the Office of Naval Research (ONR), the Research Corporation, and the University of Rochester.
Timeline: 5-10 years
Funding: Government; Private; University
TRN Categories: Quantum Computing and Communications; Optical Computing, Optoelectronics and Photonics; Physics
Story Type: News
Related Elements: Technical paper, "Pixel Entanglement: Experimental Realization of Optically Entangled d=3 and d=6 Qudits," Physical Review Letters, June 7, 2005
Focus: Nobel Prize—Tools for Quantum Tinkering
To understand the quantum world, researchers have developed lab-scale tools to manipulate microscopic objects without disturbing them. The 2012 Nobel Prize in Physics recognizes two of these quantum tinkerers: David Wineland, of the National Institute of Standards and Technology and the University of Colorado in Boulder, and Serge Haroche, of the Collège de France and the Ecole Normale Supérieure in Paris. Two of their papers, published in 1995 and ‘96 in Physical Review Letters, exemplify their contributions. The one by Wineland and collaborators showed how to use atomic states to make a quantum logic gate, the first step toward a superfast quantum computer. The other, by Haroche and his colleagues, demonstrated one of the strange predictions of quantum mechanics—that measuring a quantum system can pull the measuring device into a weird quantum state which then dissipates over time.
A quantum system can exist in two distinct states at the same time. The challenge in studying this so-called superposition of quantum states is that any nudge from the environment can quickly push the system into one state or the other. Wineland and Haroche both designed experiments that isolate particles—ions or photons—from the environment, so that they can be carefully controlled without losing their quantum character.
Since the 1980s, Haroche has been one of the pioneers in the field of cavity quantum electrodynamics, where researchers observe a single atom interacting with a few photons inside a reflective cavity. Haroche and his colleagues can keep a photon bouncing back and forth in a centimeter-sized cavity billions of times before it escapes. But only photons of specific wavelengths determined by the cavity size can survive. Haroche’s group was one of the first to show that this wavelength selectivity could amplify or suppress the emission from an atom inside the cavity [1, 2]. Haroche was later able to tune a cavity so that the allowed wavelengths were close to, but not equal to, those associated with transitions in an atom, so that the photons and atom did not exchange energy. Instead, they incurred a phase change that could carry information about, for example, the number of photons in the cavity [3].
In 1996, Haroche’s group used such a system to study the process by which a quantum superposition settles into a single state. The researchers placed a highly excited rubidium atom in a superposition of two energy states and then sent it through a cavity containing about ten photons. The matter-light interaction “entangled” the photons and atom together, so that the photons entered their own superposition of two states (a “Schrödinger cat” state, in the team’s language), which acted as a “measurement” of the atom’s superposition state. Measuring devices don’t ordinarily remain in two states; instead, they give up their quantum nature almost immediately through interactions with the environment. However, this so-called decoherence process was expected to take longer for a “small” device with only a few particles (photons in this case).
To see this effect, the team arranged for a second atom to enter the cavity shortly after the first. Separate observations of the atoms after each passed through the cavity showed that the superposition in the photons survived for several microseconds. This was the first experimental exploration of the quantum measurement process at the so-called “mesoscopic” boundary between the macroscopic and the microscopic world, says coauthor Jean-Michel Raimond of the Pierre and Marie Curie University in Paris. “The experiment is even now described in a few standard quantum mechanics textbooks,” he says.
Wineland performed similar sorts of quantum-probing experiments through his own pioneering work with trapped ions [4, 5]. The tight confinement of ions in these electric field traps causes ion motion to be restricted to distinct quantum states, each of which represents a different frequency of bouncing back-and-forth between the electric field “walls.” These motional, or “vibrational,” states are typically independent of the internal, electronic energy states of the ion, but Wineland and others showed that laser light could transfer energy from one set of states to the other. The researchers used this laser coupling to cool an ion to the state with the slowest motion [6] and to make the world’s most precise clocks [7].
In their 1995 paper, Wineland and his colleagues demonstrated the first quantum logic gate, the basic building block of a quantum computer. They trapped a single beryllium ion and prepared it with two quantum bits (quantum two-state systems, or “qubits”): one corresponding to the two lowest vibrational states and the other to a pair of electronic states. A series of laser pulses would either have no effect on the electronic qubit or would switch its value—say, from the lower- to the higher-energy state—depending on the vibrational qubit’s state. This “controlled NOT” operation did not measure either qubit, so the quantum nature of the states was preserved. “It was a simple gate, but it was interesting because it was clear how to scale the system up,” says coauthor Chris Monroe of the University of Maryland in College Park. Since then, researchers have succeeded in performing more complicated logic operations with as many as 14 ions.
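For reference, the gate they demonstrated has a simple textbook definition (standard notation, not specific to the ion implementation): a controlled NOT flips the target qubit exactly when the control qubit is 1, without measuring either,

\[
\mathrm{CNOT}\,|c\rangle|t\rangle = |c\rangle\,|t \oplus c\rangle,
\qquad
\mathrm{CNOT} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}.
\]

Applied to a control in superposition, it produces entanglement: \( \mathrm{CNOT}\big(\tfrac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes|0\rangle\big) = \tfrac{1}{\sqrt{2}}(|00\rangle+|11\rangle) \). In the 1995 experiment, the vibrational qubit served as the control and the electronic qubit as the target.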
“There is a beautiful duality between the two techniques,” Raimond says. Wineland traps matter particles (ions) and studies them with laser beams, while Haroche traps photons and studies them with a matter beam. “I think the match by the Nobel committee is quite perfect: Same generation, similar achievements, same global objectives,” says Raimond, “and two excellent friends.”
Michael Schirber is a freelance science writer in Lyon, France.
- P. Goy, J. M. Raimond, M. Gross, and S. Haroche, “Observation of Cavity-Enhanced Single-Atom Spontaneous Emission,” Phys. Rev. Lett. 50, 1903 (1983)
- W. Jhe, A. Anderson, E. A. Hinds, D. Meschede, L. Moi, and S. Haroche, “Suppression of Spontaneous Decay at Optical Frequencies: Test of Vacuum-Field Anisotropy in Confined Space,” Phys. Rev. Lett. 58, 666 (1987)
- S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. Busk Hoff, M. Brune, J. M. Raimond, and S. Haroche, “Quantum Jumps of Light Recording the Birth and Death of a Photon in a Cavity,” Nature 446, 297 (2007)
- D. J. Wineland, R. E. Drullinger, and F. L. Walls, “Radiation-Pressure Cooling of Bound Resonant Absorbers,” Phys. Rev. Lett. 40, 1639 (1978)
- D.J. Wineland and Wayne M. Itano, “Spectroscopy of a Single Mg+ Ion,” Phys. Lett. A 82, 75 (1981)
- F. Diedrich, J. C. Bergquist, W. M. Itano, and D. J. Wineland, “Laser Cooling to the Zero-Point Energy of Motion,” Phys. Rev. Lett. 62, 403 (1989)
- Synopsis: Better timing with aluminum ions, http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.104.070802
SANTA FE, N.M. Researchers at Los Alamos National Laboratories claim to have originated a blueprint for room-temperature quantum computers using such optical components as beam splitters, phase shifters and photodetectors. While some scientists contend that new kinds of nonlinear optical components must be invented before economical quantum computers can be realized, the Los Alamos team counters that artful use of feedback makes it possible to use existing optical components instead.
The new approach, currently at the simulation stage, suggests that a more practical route can be followed to build effective quantum computers. Current methods use bulky and expensive equipment such as nuclear magnetic-resonance imaging systems, and the quantum states used to encode quantum bits, or "qubits,"are maintained at temperatures close to absolute zero.
However, at room temperature, photons exhibit quantum behavior, and a lot of known technology can manipulate them. "The double-slit experiment, where a single photon goes through whichever parallel slit you put a photodetector behind, clearly demonstrates the quantum-mechanical aspects of photons," said Los Alamos National Laboratories researcher Emanuel Knill. "Others thought you needed a new kind of nonlinear optical component to make quantum computers with photons. We have shown that all you need is feedback."
Knill's work was done with another Los Alamos researcher, Raymond Laflamme, and with professor Gerard Milburn of the University of Queensland, St. Lucia, Australia.
Photons can act as the data in quantum computers by virtue of their dual wave/particle nature. The famous double-slit experiment sends a single photon toward two parallel slits and locates a single photodetector behind first one slit and then the other. No matter which slit the photodetector is put behind, it always detects the single photon.
How does the photon "know"which slit to go through? The answer is that it is acting as a wave instead of a particle, and thus goes through both until it is measured by the photodetector. The act of measurement instantaneously localizes the "particle" aspect of the photon essentially causing it to "condense" behind whichever slit the measurement is made.
In the labs' optical quantum-computer blueprint, photons represent 1s and 0s through their polarization state, which can be either vertical or horizontal. As with all quantum bits, a photon can simultaneously represent both 1 and 0, since its state remains a superposition of the two until the exact moment it is measured. Afterward that is no longer possible; the polarization has become fixed as one or the other by the very act of measurement.
"Until our work, it was thought that the only way to get photons to interact with each other was with nonlinear optics, which is very difficult to implement,"said Knill. "Nonlinear media work fine if you send laser beams through them, but if you only send single photons through, essentially nothing happens."
To provide the necessary nonlinear coupling among qubits, using photons, the team of Knill, Laflamme and Milburn fell back on one of the most useful engineering techniques ever invented feedback.
By employing feedback from the outputs of the photodetectors, they were able to simulate the effect of nonlinear media without the disadvantages of actually using them. Essentially, the optical components capable of handling single photons were bent to the service of nonlinear couplings through feedback.
"People never thought to use feedback from the result of a photodetector, but that is where our nonlinearity comes from it was there all along," Knill explained. This technique was not tried because researchers assumed they could not reuse measurements in quantum computations.
"We discovered that you can use feedback, and that you can replace a nonlinear component with it," said Laflamme.
As in all quantum-mechanical systems, the most important principle has been to preserve "coherence" that is, to make sure that the qubits remain "unobserved" in their nebulous superposition of both 1 and 0 during a calculation. Once a measurement is made of a quantum-mechanical state, the system reverts to a normal digital system and the advantage of quantum computations is lost. That was why it was thought that feedback could not work because it would destroy the quantum coherence that forms the basis for quantum algorithms.
However, Knill, Laflamme and Milburn have shown that systems that combine qubits with ordinary bits in the feedback loop can simulate nonlinear optical components. "What we do essentially is destroy coherence in one place and manage to indirectly reintroduce it elsewhere so that only the coherence we don't care about gets lost in the measurement," said Knill.
The basic idea is that the original qubits to be used in a calculation can be prepared ahead of time by entangling them with what the researchers call "helper" qubits. Entangling ensures that the helper bits maintain the same state as the originals, even after they have gone through a quantum calculation. The helper qubits can then be independently processed with standard optical components, and after the calculation, they can be measured without destroying the coherence of the originals.
The results of measuring the helper qubits are introduced into the feedback loop, which then simulates a nonlinear optical component for a single photon. There is a price for the destroyed coherence of the helper bits, however. According to the researchers, the labs' quantum computer blueprint will make more errors than the already error-prone quantum computers designed elsewhere. To compensate, the team carefully architected their design to use built-in error correction in two subsequent stages.
"The most important discovery in quantum computing in the last five years has been quantum error correction," said Laflamme. "Using quantum error correction, we can mitigate the effect of the errors we introduce with our measurements."
The resulting architecture uses three distinct stages. In stage one, helper photons are generated by entanglement and teleported to a circuit running in parallel with the main calculation. Measurement of the helper bits, after the main calculation, is then introduced into the feedback loop to simulate the effect of a nonlinear coupling between two photons.
"We know when it succeeds by measuring the helper qubit. If the outcome is good, then we go on with whatever else we are going to do in the calculations, but if it fails then we forget about what we just did and start over," said Knill.
But calculations made in this way are successful only with a quantum probability of 1/4, which necessitates the second stage of the architecture.
In stage two, the success probability of stage one can be tuned arbitrarily close to 1. Unfortunately, however, the computing resources needed to achieve 100 percent accuracy can grow exponentially. To solve this problem, the researchers used a third error-correction stage drawing on the recent work of other scientists.
By freely providing the blueprint to the research community, they hope to interest engineers in setting up real-world experiments.
The one thing everyone knows about quantum mechanics is its legendary weirdness: the basic tenets of the world it describes seem alien to the world we live in. Superposition, where things can be in two states simultaneously, a switch both on and off, a cat both dead and alive. Or entanglement, what Einstein called “spooky action at a distance”, in which objects are invisibly linked, even when separated by huge distances.
But weird or not, quantum theory is approaching a century old and has found many applications in daily life. As John von Neumann once said: “You don’t understand quantum mechanics, you just get used to it.” Much of electronics is based on quantum physics, and the application of quantum theory to computing could open up huge possibilities for the complex calculations and data processing we see today.
Imagine a computer processor able to harness superposition, to calculate the result of an arbitrarily large number of permutations of a complex problem simultaneously. Imagine how entanglement could be used to allow systems on different sides of the world to be linked and their efforts combined, despite their physical separation. Quantum computing has immense potential, making light work of some of the most difficult tasks, such as simulating the body’s response to drugs, predicting weather patterns, or analysing big datasets.
Such processing possibilities are needed. The first transistors could only just be held in the hand, while today they measure just 14 nm – 500 times smaller than a red blood cell. This relentless shrinking, predicted by Intel founder Gordon Moore as Moore’s law, has held true for 50 years, but cannot hold indefinitely. Silicon can only be shrunk so far, and if we are to continue benefiting from the performance gains we have become used to, we need a different approach.
Advances in semiconductor fabrication have made it possible to mass-produce quantum-scale semiconductors – electronic circuits that exhibit quantum effects such as superposition and entanglement.
The image, captured at the atomic scale, shows a cross-section through one potential candidate for the building blocks of a quantum computer, a semiconductor nano-ring. Electrons trapped in these rings exhibit the strange properties of quantum mechanics, and semiconductor fabrication processes are poised to integrate these elements required to build a quantum computer. While we may be able to construct a quantum computer using structures like these, there are still major challenges involved.
In a classical computer processor a huge number of transistors interact conditionally and predictably with one another. But quantum behaviour is highly fragile; for example, under quantum physics even measuring the state of the system, such as checking whether the switch is on or off, actually changes what is being observed. Conducting an orchestra of quantum systems to produce useful output that couldn’t easily be handled by a classical computer is extremely difficult.
But there have been huge investments: the UK government announced £270m funding for quantum technologies in 2014 for example, and the likes of Google, NASA and Lockheed Martin are also working in the field. It’s difficult to predict the pace of progress, but a useful quantum computer could be ten years away.
The basic element of quantum computing is known as a qubit, the quantum equivalent to the bits used in traditional computers. To date, scientists have harnessed quantum systems to represent qubits in many different ways, ranging from defects in diamonds, to semiconductor nano-structures or tiny superconducting circuits. Each of these has its own advantages and disadvantages, but none yet has met all the requirements for a quantum computer, known as the DiVincenzo criteria.
The most impressive progress has come from D-Wave Systems, a firm that has managed to pack hundreds of qubits on to a small chip similar in appearance to a traditional processor.
The benefits of harnessing quantum technologies aren’t limited to computing, however. Whether or not quantum computing will extend or augment digital computing, the same quantum effects can be harnessed for other means. The most mature example is quantum communications.
Quantum physics has been proposed as a means to prevent forgery of valuable objects, such as a banknote or diamond, as illustrated in the image below. Here, the unusual negative rules embedded within quantum physics prove useful; perfect copies of unknown states cannot be made and measurements change the systems they are measuring. These two limitations are combined in this quantum anti-counterfeiting scheme, making it impossible to copy the identity of the object they are stored in.
The concept of quantum money is, unfortunately, highly impractical, but the same idea has been successfully extended to communications. The idea is straightforward: the act of measuring quantum superposition states alters what you try to measure, so it’s possible to detect the presence of an eavesdropper making such measurements. With the correct protocol, such as BB84, it is possible to communicate privately, with that privacy guaranteed by fundamental laws of physics.
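To see why measurement leaves fingerprints, here is a toy Python simulation in the spirit of BB84 (a deliberate simplification: no channel noise, error correction or privacy amplification):

```python
import random

rng = random.Random(1)

def bb84_error_rate(n=4000, eavesdrop=False):
    """Toy intercept-resend attack on BB84: Eve's measurements
    show up as ~25% errors on the basis-matched ('sifted') bits."""
    errors = kept = 0
    for _ in range(n):
        alice_bit = rng.randint(0, 1)
        alice_basis = rng.choice("+x")          # rectilinear or diagonal
        bit, basis = alice_bit, alice_basis     # the photon in flight

        if eavesdrop:                           # Eve measures and resends
            eve_basis = rng.choice("+x")
            if eve_basis != basis:
                bit = rng.randint(0, 1)         # wrong basis randomizes it
            basis = eve_basis

        bob_basis = rng.choice("+x")
        bob_bit = bit if bob_basis == basis else rng.randint(0, 1)

        if bob_basis == alice_basis:            # sifting: bases must match
            kept += 1
            errors += bob_bit != alice_bit
    return errors / kept

print(f"quiet channel: {bb84_error_rate():.1%}")               # ~0%
print(f"eavesdropper : {bb84_error_rate(eavesdrop=True):.1%}")  # ~25%
```

An error rate jumping towards 25% on the sifted bits is the tell-tale sign that someone has been measuring the photons in transit.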
Quantum communication systems are commercially available today from firms such as Toshiba and ID Quantique. While the implementation is clunky and expensive now, it will become more streamlined and miniaturised, just as transistors have miniaturised over the last 60 years.
Improvements to nanoscale fabrication techniques will greatly accelerate the development of quantum-based technologies. And while useful quantum computing still appears to be some way off, its future is very exciting indeed.
Scientists at the University of Darmstadt, in Germany, have trapped a pulse of light inside a crystal for a minute, and used it to store an image, raising the possibility of light-based computers that could work faster than today’s electronic processors and transistors.
The results could have practical significance in future computer systems that operate using light, and could pave the way for quantum computing and communications.
‘We are reaching the principal limits of conventional electronic data processing,’ said Professor Thomas Halfmann, who coordinates the EU-funded project Marie Curie Initial Training Network - Coherent Information Processing in Rare-Earth Ion Doped Solids (CIPRIS).
Light usually travels at a speed of just under 300 million metres per second, making it the fastest thing in the universe, and computer scientists and physicists believe that computers in the future need to be optical to achieve faster processing speeds.
Currently, optical technology is mainly confined to communication networks, where light carries information through optical fibres. However, at the ends of the fibres the light signals have to be converted to and from the electrical signals that computers use to process information.
In the future, it is hoped that optical quantum computers will be able to process information using light, and the first step towards this is being able to store optical data in quantum systems.
‘We need media to store light and this is what is called an optical or quantum memory,’ said Prof. Halfmann. ‘What we have done is demonstrate an optical memory in a solid-state quantum system that can store light for one minute.’
They did it by using a control laser to manipulate the speed of the light in the crystal, which contained a low concentration of ions – electrically charged atoms – of the element praseodymium. When the light source then came into contact with the crystal, it rapidly decelerated. The scientists then switched off the laser beam and the light came to a complete halt.
Technically, the light wasn’t stopped, but it was converted into the atomic medium, Prof. Halfmann explained. ‘What happens is we convert the light pulse into something called an atomic oscillator.
‘In simple terms, you transfer energy from one oscillator, the light field, into the other oscillator, the atom, and there it stays and then you can retrieve it afterwards.’
Storing an image
The researchers imprinted an image consisting of three stripes onto the light pulse, demonstrating that they could store the image inside the crystal for a minute and then retrieve it, smashing the previous image storage record, which was less than ten microseconds.
The fact they stored an image is significant for developments in computing because, while a single light pulse contains one ‘bit’ of data, an image contains many.
‘These stripes are just one simple image you can store, but essentially you can store any image,’ Prof. Halfmann said.
[Image: Professor Halfmann in the laboratory. © Katrin Binner/TU Darmstadt]
Usually light storage times are very short because ‘perturbing environments’ disrupt the oscillation. However, the team managed to achieve their record-breaking storage time by protecting the oscillator with magnetic fields and high-frequency pulses.
Using complex algorithms, they were able to optimise the laser beams, magnetic field and high-frequency pulses so that the oscillation lasted almost as long as theoretically possible in the crystal.
Prof. Halfmann likened the process of trapping the light to a person running through a crowd at a funfair with a briefcase full of papers. ‘When you run, you collide with people and if you collide often enough you lose your suitcase, your information, your papers.
‘So, what we do is we shield the information with these magnetic fields and we protect it somehow. It is like running through the funfair with bodyguards, big tough guys, around you. They protect you on your way through the crowd and nothing happens to you and your suitcase with your papers.’
The scientists have almost reached the theoretical storage limit of the crystal they used in this research, which is 100 seconds. But, they already have a different type of crystal set up in their laboratory and have started working with it.
Although there is some debate over the exact timeframe, the new crystal is theoretically capable of storing a light pulse for between a few hours and a week, and Prof. Halfmann believes that in two to three years they will again be very close to the limit of that crystal.
Quantum computers could revolutionise science by offering a new way of solving complex problems beyond the scope of standard transistor-based computers.
While normal computers are limited to ‘bits’, short for binary digits (0 or 1), quantum computers could be much more powerful because they could store information in qubits, short for quantum bits, using photons or atoms.
That’s because an atom can simultaneously have different energy states, or a photon of light may have multiple polarisations.
It means that a quantum computer would be able to solve many types of data encryption that are used today.
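Concretely, a qubit can be written as a two-component vector of complex amplitudes. A minimal numpy sketch, not tied to any particular hardware:

```python
import numpy as np

# A qubit is a 2-component complex vector: amplitudes for |0> and |1>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Equal superposition: "0 and 1 at the same time" until measured.
psi = (zero + one) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitude magnitudes.
print(np.abs(psi) ** 2)   # [0.5 0.5]

# n qubits require 2**n amplitudes -- the root of the promised power.
n = 30
print(f"{n} qubits -> {2**n:,} complex amplitudes")
```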
However, functioning quantum computers are still five or 10 years in the future, many researchers say, and it could take a couple more decades to reach the stage where quantum computers harness enough qubits to perform significant mathematical tasks.
LED fires one photon at a time
Technology Research News
The weird nature of quantum physics makes perfectly secure communications possible. The technology has existed in the laboratory for several years -- all that remains is figuring out how to make it practical.
Scientists at Toshiba Research and the University of Cambridge have taken an important step in that direction by making an electronic device that emits single photons on demand. The device could boost the transmission rates of secret communications and would be smaller and easier to use than similar light sources.
Quantum cryptography uses strings of individual photons, which are the indivisible particles of light, to make the mathematical keys that scramble secret messages. The keys are long, random numbers in the form of bits, the ones and zeros of digital communications. Ordinary communications transmits bits as light pulses, but because each light pulse contains many photons an eavesdropper could siphon some of them off to record the series of pulses and get a copy of the key without being detected.
However, when the keys' bits are encoded in the quantum states of individual photons -- like how they are polarized -- eavesdroppers can't escape detection. Because a photon cannot be split, an eavesdropper can't look at it without stopping it from reaching the intended receiver. And an eavesdropper can't cover his tracks by making copies of the photons he intercepts because he cannot reliably recreate their quantum states, which means the sender and receiver can compare notes to see that some of the photons have been disturbed.

When a sender and receiver know they have an uncompromised key, the sender can use it to encrypt messages that only the receiver can unscramble.
Making practical quantum cryptographic systems requires light sources that produce one photon at a time. A candle flame emits about one hundred thousand trillion photons per second, many at the same time.
Even the dimmest possible ordinary light source occasionally emits two photons at once. "We can control and trigger the emission time of the photons," said Andrew Shields, a group leader at Toshiba Research Europe in Cambridge, England.
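Photon statistics show why simply dimming a laser is not enough. Assuming the pulses are Poissonian, as faint laser pulses are, a short Python sketch:

```python
from math import exp

def multiphoton_fraction(mu):
    """Among non-empty Poissonian pulses with mean photon number mu,
    the fraction carrying two or more photons (leakable to a spy)."""
    p0 = exp(-mu)            # empty pulse
    p1 = mu * exp(-mu)       # exactly one photon
    return (1 - p0 - p1) / (1 - p0)

for mu in (1.0, 0.5, 0.1):
    print(f"mu = {mu}: {multiphoton_fraction(mu):.1%}")
# Even at mu = 0.1 about 5% of usable pulses carry extra photons,
# and dimming further sacrifices transmission rate.
```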
Single-photon light sources are not new, but previous devices have all been triggered by lasers. "This is a cumbersome and expensive arrangement that would be difficult to achieve outside the laboratory," said Shields. "The new device is driven by a voltage so [it] is more robust, compact and would be cheaper to manufacture."
The researchers' single-photon source, a special type of light emitting diode (LED), contains a layer of quantum dots surrounded by layers of semiconductor material. Each quantum dot, which is a speck of semiconductor material about 20 nanometers in diameter, holds a single electron when a voltage is applied to the device. When the negatively-charged electron combines with a positively-charged hole in the quantum dot, it releases the energy as a single photon. A nanometer is one millionth of a millimeter.

The diode is capped by a metal layer with a series of small openings that block all but a single quantum dot per opening. By pulsing electrical current through the device, the researchers cause the quantum dots to emit a photon per pulse.
The device can theoretically emit a photon every half a nanosecond, said Shields. A nanosecond is one billionth of a second. But in practice the researchers' diode does not emit a photon with every pulse.

"The efficiency has not been optimized in this prototype, so [it] is quite low," said Shields. "If we use a cavity structure to direct more of the light out of the device in a certain direction, we can expect efficiencies exceeding 10 percent."
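Together, those two numbers set a rough ceiling on the usable photon rate. A back-of-envelope sketch that ignores detector losses and protocol overhead:

```python
pulse_period_s = 0.5e-9        # a trigger pulse every half nanosecond
efficiency = 0.10              # the ~10 percent hoped for with a cavity

photons_per_s = efficiency / pulse_period_s
print(f"{photons_per_s:.1e} usable single photons per second")  # 2.0e+08
```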
Ten percent efficiency could be good enough for practical devices. A potentially bigger hurdle is the cold temperatures needed to run the diode. The researchers' prototype operates at five degrees Kelvin, or -268 degrees Celsius.

"We have already seen efficient emission from quantum dots at temperatures exceeding [-73 degrees Celsius], for which cryogen-free thermal-electric cooling can be used," said Shields. "We hope to be able to push this further to room temperature."
A single-photon source that is triggered by an electrical current would be much more practical than an optically triggered single-photon source, said Gerard Milburn, a physics professor at the University of Queensland in Australia. "The control circuits could be integrated into the device producing the photons and processing their detection."
Without single-photon sources, researchers have to use privacy amplification techniques to ensure that transmitted bits remain secret, which results in less efficient transmission rates, said Richard Hughes, a physicist at Los Alamos National Laboratory.

This new light source technology could lead to higher secret bit rates if it could be made into a practical device, he said. Making an electrically-driven device is a big step in that direction, "but it would also be important for a practical device to operate at a temperature that would not require the user to deal with cryogens."
The researchers' next steps are to increase the efficiency and raise the operating temperature of the single-photon diode, said Shields. "There are technological challenges to overcome, but we think we know the solutions. We think we can make a useful device within three years," he said.
Shields' research colleagues were Zhiliang Yuan, Beata E. Kardynal and R. Mark Stevenson of Toshiba Research, Charlene J. Lobo, Ken Cooper and David A. Ritchie of the University of Cambridge, and Neil S. Beattie and Michael Pepper of both institutions. They published the research in the December 13, 2001 online issue of the journal Science. The research was funded by Toshiba Corporation and the Engineering and Physical Sciences Research Council of the UK.
Timeline: < 3 years
Funding: Corporate; Government
TRN Categories: Optical Computing, Optoelectronics and Photonics; Quantum Computing; Semiconductors
Story Type: News
Related Elements: Technical paper, "Electrically Driven Single-Photon Source," Science, online December 13, 2001
Researchers at Washington State University have used a super-cold cloud of atoms that behaves like a single atom to see a phenomenon predicted 60 years ago and witnessed only once since.
The phenomenon takes place in the seemingly otherworldly realm of quantum physics and opens a new experimental path to potentially powerful quantum computing.
Working out of a lab in WSU's Webster Hall, physicist Peter Engels and his colleagues cooled about one million atoms of rubidium to 100 billionths of a degree above absolute zero. There was no colder place in the universe, said Engels, unless someone was doing a similar experiment elsewhere on Earth or on another planet.
At that point, the cluster of atoms formed a Bose-Einstein condensate - a rare physical state predicted by Albert Einstein and Indian theorist Satyendra Nath Bose - after undergoing a phase change similar to a gas becoming a liquid or a liquid becoming a solid. Once the atoms acted in unison, they could be induced to exhibit coherent "superradiant" behavior predicted by Princeton University physicist Robert Dicke in 1954.
"This large group of atoms does not behave like a bunch of balls in a bucket," said Engels. "It behaves as one big super-atom. Therefore it magnifies the effects of quantum mechanics."
Engels' findings appear in the journal Nature Communications. Co-author and collaborator Chuanwei Zhang, a former WSU physicist now at the University of Texas at Dallas, led the theoretical aspects of the work.
Funders include the National Science Foundation, the Army Research Office and the Defense Advanced Research Projects Agency, the cutting-edge research agency known as DARPA.
Researchers using these super-cold dilute gases have created the superradiant state in only one other situation, said Engels, using a far more complicated experiment involving coupling to photon fields. Because the coupling of atoms and photons is usually very weak, their behavior was extremely hard to observe, he said.
"What our colleague Chuanwei Zhang realized is, if you replaced the light with the motion of the particles, you got exactly the same physics," said Engels. Moreover, it's easier to observe. So while their cloud of atoms measures less than half a millimeter across, it is large enough to be photographed and measured. This gives experimenters a key tool for testing assumptions and changes in the atomic realm of quantum physics.
"We have found an implementation of the system that allows us to go in the lab and actually test the predictions of the Dicke model, and some extensions of it as well, in a system that is not nearly as complicated as people always thought it has to be for the Dicke physics," Engels said.
Ordinary physical properties change so dramatically in quantum mechanics that it can seem like a drawing by M.C. Escher. Photons can be both waves and particles. A particle can go through two spaces at the same time and, paradoxically, interfere with itself. Electrons can be oriented up or down at the same time.
This concurrent duality can be exploited by quantum computing. So where a conventional computer uses 1s and 0s to make calculations, the fundamental units of a quantum computer could be 1s and 0s at the same time. As Wired magazine recently noted, "It's a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds."
Peter Engels | EurekAlert!
20 Most Impressive Science Fair Projects of All Time
While science fair projects still typically consist of papier-mâché volcanoes, LEGO robots, and crystals grown in a jar, many students these days are going above and beyond the staples, taking on projects that would be awe-inspiring even as a college thesis. From exploring the effectiveness of cancer treatments to revolutionizing the disposal of plastics, these students prove you don't have to be an adult to have amazing, world-changing ideas about science.
1. Nuclear Fusion Reactor — Thiago Olsen
With a budget of only $3,500, Michigan high school student Thiago Olsen built a nuclear fusion reactor in his garage when he was only 15 years old. How did he do it? He studied physics textbooks, used vacuum pump manuals, and surfed the Web for the best deals on parts. While his device is not self-sustaining and produces fusion only on a small scale, it's a pretty impressive feat for any teenager.
2. Diesel Hybrid Car — West Philadelphia High School
Working as a team at West Philadelphia High School, students constructed a diesel-hybrid race car that can go from zero to 60 in just four seconds. If that speed wasn't already impressive enough, the vehicle also gets more than 60 miles to the gallon. The students constructed it for entry into the Automotive X contest, with a grand prize of $10 million — the only high schoolers in the nation to do so. They are reworking their design to improve their chances of winning, and hope to get the car up to 100 mpg.
3. Chemical-Sniffing LEGO Robot — Anna Simpson
Many a science fair project involves LEGOs, but few on the level that Anna Simpson's does. Her robot, built of the plastic blocks, is capable of sniffing out toxic chemicals and other hazards, keeping humans at a safe distance. Simpson's work won her the California State Science Fair and could have a number of industrial and public safety applications if adapted.
4. Reducing CO2 Emissions — Jun Bing and Alec Wang
Using a process known as acid-base neutralization, Bing and Wang developed a device capable of sequestering carbon dioxide gas released from cars (and other sources) that burn fossil fuel. Not only does it remove the harmful substance from the air, it also collects the gas so that it can be stored, used or sold.
5. Plastic-Eating Microbe — Daniel Burd
Plastic that is simply dumped into landfills can take centuries to decompose, if it ever really does, but this young thinker came up with a better way. Burd beat out leading scientists by discovering a microbe that eats plastic, increasing the rate of decomposition by more than 40 percent. This project won him the Canada-Wide Science Fair and garnered a fair amount of international media attention as well.
6. Space Exploration Balloon — IES La Bisbal School
The students at this Spanish school produced a science fair project that was out of this world — literally. A team of four students sent a camera-operated weather balloon into the stratosphere, snagging atmospheric readings and stunning photographs more than 20 miles above Earth's surface.
7. Cancer And Chicken Marinades — Lauren Hodge
At just 14 years old, Lauren Hodge is getting a jumpstart on a science career with this amazing project, which won her an award at the international Google Science Fair competition. So what did she find? Some chicken marinades block carcinogenic compounds from forming when chicken is grilled — a process known to raise the level of carcinogens in meat. Among the marinades she tested, lemon juice was the most successful, so consider these stellar findings the next time you're hosting a backyard BBQ.
8. Image-Based Search Engine — David Liu
While most search engines work at dissecting the Web's textual information, David Liu's pet project is all about creating one that looks at images instead. While he is still working to perfect his software, Liu's search engine is already being used in the real world, analyzing satellite images and making relevant Web searches much more effective. An impressive feat for a 17-year-old.
9. Problems With Ovarian Cancer Treatment — Shree Bose
Taking top prize at the Google Science Fair, Bose will get to spend several weeks studying marine life in the Galapagos Islands. The work that netted her this prize is awe-inspiring, especially coming from a teenager. Bose uncovered a number of problems with popular ovarian cancer treatments and drugs, producing a report that would be more at home in a medical journal than a high school classroom. Hopefully, this will influence some changes in how treatment is doled out to suffering patients.
10. Computer Speed Enhancing Software — Kevin Ellis
Slow computers are the bane of every office worker's existence, but with the work of Kevin Ellis, an unresponsive machine may be a thing of the past. Rather than upgrading computers with more memory, Ellis has developed software that analyzes how programs are running and spreads their workload across all the CPUs so that everything runs more quickly. His amazing software netted him $50,000 and gave the rest of the world a way to speed up computers that may have otherwise been tossed out.
11. Quantum Computing For Difficult Computational Problems — Yale Fan
Despite his name, this young genius chose Harvard over Yale to continue working on his education. Part of what got him there, undoubtedly, was this impressive bit of science. Yale's research project, titled "Adiabatic Quantum Algorithms for Boolean Satisfiability" analyzed the applications of quantum computing for solving some of the most complex and difficult computational problems. Most adults don't have half an idea what that even means, so it's all the more impressive that this teen was already studying it in high school.
12. Photodynamic Cancer Therapy — Amy Chyao
The definitive cure for cancer is still undoubtedly a long way off, but young researchers like Amy Chyao are certainly helping in the fight with innovative new ideas. Amy's science project used photodynamic therapy to target and kill cancer cells. The project was so promising, it garnered her the Intel International Science and Engineering Fair award in 2010.
13. Antarctic Submersible — Ryan Garner and Amanda Wilson
These two teens have come up with an amazing way to do research on climate change. With a budget of $5,000, the pair built an underwater rover designed to take on the challenges of some of the harshest conditions in the world — like those at the Antarctic Circle. Equipped with a camera, the device can explore and take measurements, and is currently being used by the University of California-Santa Barbara to study marine life.
14. Nuclear Weapon Detector — Taylor Wilson
16-year-old Taylor Wilson began his nuclear detection project at the age of only 11. Supported by his parents and a grant from Homeland Security, he eventually created a device that can reliably detect nuclear weapons and explosive materials as vehicles pass through his drive-through sensor.
15. Teaching Robots To Speak English — Luke Taylor
South African Luke Taylor submitted this amazing project to Google's Science Fair, which lets humans communicate more easily with robots. His software translates the English language into code that the robot can then understand and execute — allowing just about anyone, anywhere to program one to perform a variety of functions. Even more impressive? Taylor is just 13 years old.
16. Better Password Technology — Jacob Buckman
How many of your online passwords are truly secure? If you're like most people, probably not many. This young man may have come up with a solution, monitoring the biometrics of how people type to create a more secure way of gaining online account access. He discovered that passwords using the length of time between keystrokes and the length of time keys were held down could be just as accurate and potentially more secure than traditional passwords.
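The core idea fits in a few lines. The features and the matching rule below are hypothetical simplifications for illustration, not Buckman's actual method:

```python
def features(timestamps_down, timestamps_up):
    """Hold times and inter-key gaps from key press/release times (s)."""
    holds = [u - d for d, u in zip(timestamps_down, timestamps_up)]
    gaps = [d2 - d1 for d1, d2 in zip(timestamps_down, timestamps_down[1:])]
    return holds + gaps

def matches(profile, sample, tolerance=0.05):
    """Accept only if every timing feature is within `tolerance` seconds
    of the enrolled profile -- a deliberately crude rule."""
    return all(abs(p - s) < tolerance for p, s in zip(profile, sample))

# Enrolled user vs. an impostor typing the same password.
enrolled = features([0.00, 0.31, 0.55, 0.90], [0.09, 0.40, 0.66, 1.01])
impostor = features([0.00, 0.18, 0.70, 1.30], [0.12, 0.33, 0.85, 1.42])
print(matches(enrolled, enrolled))  # True
print(matches(enrolled, impostor))  # False
```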
17. Asthma And Air Quality — Naomi Shah
Taking home top prize in her age group at the Google Science Fair, Shah's work takes a critical look at the air quality in the world today — and the impact it can have on those suffering from breathing disorders like asthma. She created a mathematical model that helps quantify the effects of air quality on symptoms. And had a few harsh words about the U.S. Clean Air Act as well, based on her findings.
18. Mind-controlled Prosthetic Limbs — Anand Srinivasan
It's hard to believe that this awe-inspiring science project came from the mind of a 14-year-old. Hooking his brain up to an EEG scanner, Srinivasan worked to test out a new method of improving mind-controlled prosthetic limbs. He found that data from the EEG could help with data classification and signal processing when using them, providing a better and more efficient user experience.
19. Managing The Power Of Household Devices — Ankush Gupta
You likely have a lot of vampires in your home, and not the sexy Hollywood kind either. These are energy vampires, and they're sucking up and wasting energy that you're paying for. Gupta has come up with a solution with this amazing science project using home-automation (domotic) technology. By monitoring energy use around the home, Gupta's system allows users to manage the power states of computers and other devices to reduce energy usage and save money. The potential savings are easy to estimate, as sketched below.
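The standby wattages in this sketch are hypothetical placeholders, not measurements from Gupta's project:

```python
# Hypothetical standby ("vampire") loads in watts -- illustrative only.
standby_watts = {"TV": 3.0, "game console": 10.0, "desktop PC": 4.0,
                 "printer": 2.5, "microwave": 2.0}

hours_per_year = 24 * 365
kwh = sum(standby_watts.values()) * hours_per_year / 1000
print(f"{kwh:.0f} kWh/year wasted on standby")       # ~188 kWh
print(f"~${kwh * 0.13:.0f}/year at $0.13 per kWh")   # rough cost
```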
20. Spacecraft Navigation Software — Erika DeBenedictis
This bright, young rising star in the scientific community came up with some ingenious software for helping spacecraft move faster and use less fuel while navigating many obstacles in the vacuum of space. Her amazing software won a substantial prize.
Like the one in your car, Johannes Roßnagel's engine is a four-stroke. In four steps it compresses and heats, then expands and cools. And as with any other engine, this cycle is repeated over and over again—transforming the changing temperature into mechanical energy.
But Roßnagel's engine is no V-8. And it doesn't use internal combustion. Roßnagel, an experimental physicist at the University of Mainz in Germany, has conceived of and is in the process of building the world's tiniest engine, less than a micrometer in length. It is a machine so small it runs on a single atom. And in a recent paper in the journal Physical Review Letters, its inventors argue that, because of an interesting anomaly of quantum physics, this is also far and away the most efficient engine.
The nano engine works like this: First, using tiny electrodes, the physicists trap a single atom in a cone of electromagnetic energy. "We're using a calcium-40 ion," Roßnagel says, "but in principle the engine could be built with just about any ion at all." This electromagnetic cone is essentially the engine's housing, and squeezes tightly over the atom. The physicists then focus two lasers on each end of the cone: one at the pointy end, which heats the atom, and another at the base of the cone, which uses a process called Doppler cooling to cool the atom back down.
Because this heating and cooling slightly changes the size of the atom (more exactly, it alters the fuzzy smear of probability of where the atom exists), and the cone fits the atom so snugly, the temperature change forces the atom to race back and forth along the length of the cone as the atom expands and contracts. For maximum efficiency, the physicists set the lasers to heat and cool at the same resonance at which the atom naturally vibrates from side to side.
The result is that, like sound waves that build upon one another, the atom's oscillation between the two ends of the cone "gets accumulated, and becomes stronger and stronger," which can be harnessed, Roßnagel says. "If you imagine that you put a second ion by the cooler side, it could absorb the mechanical energy of our engine, much like a flywheel [in a car engine]."
And the nano engine has one additional feature, one that, Roßnagel argues, increases the efficiency of the machine so much that it actually surpasses the Carnot Limit—the maximum efficiency any engine can have according to the laws of thermodynamics.
As the racing atom reaches the hot end of the cone, the researchers slightly contract and expand the sides of the cone a single time. Done at the right frequency, this action puts the moving atom into a quantum mechanical condition called a squeezed state. This means that now, as the atom continues to race to the cold end of the cone, it's also slightly pulsating.
Although forcing the atom into a squeezed state doesn't actually transfer any energy, it does mean that the pulsating atom is (because of a quantum mechanical quirk) on average slightly bigger when it hits the cold end of the cone. And while the cooling phase knocks the atom out of this squeezed state, the momentary extra size gives the entire engine a boost. "You can think of it sort of like a supercharger," says Jacob Taylor, a quantum physics researcher at the University of Maryland, who was not involved in the experiment. According to Roßnagel, if you calculate the energy efficiency of this supercharged system, it's four times as efficient as it would be without the squeezing—surpassing the Carnot Limit by a large margin. This would make it the most efficient engine ever built.
However, any claim that an engine can break the laws of thermodynamics deserves extra scrutiny and skepticism. According to Taylor, this ultrahigh efficiency is only a matter of perspective. "There's no free lunch here," he says. Despite the fact that the squeezing process doesn't transfer any energy to the atom's side-to-side movement, "you still have to consider the energy that goes into the squeezing process. You're essentially taking energy from the squeezing process to turbo-boost the engine." And calculating in that squeezing energy, the engine is safely below the Carnot Limit.
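The disagreement is really about which energy flows belong in the efficiency ratio. A tiny sketch with made-up numbers makes the bookkeeping explicit:

```python
def efficiency(work_out, heat_in, squeeze_in=0.0):
    """Work delivered over energy supplied; whether the squeezing
    energy counts as 'supplied' is exactly the dispute above."""
    return work_out / (heat_in + squeeze_in)

# Purely illustrative numbers in arbitrary units, not measured values.
work, heat, squeeze = 0.9, 1.0, 0.5
print(f"squeezing left off the books: {efficiency(work, heat):.0%}")          # 90%
print(f"squeezing counted as input:  {efficiency(work, heat, squeeze):.0%}")  # 60%
```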
Hartmut Häffner, a theoretical physicist at the University of California, Berkeley, who was not involved in the experiment, agrees. "I wouldn't accept this efficiency is just from 'the weirdness of quantum mechanics,'" says Häffner, but he adds that the proposed nano engine itself "is very interesting and very well-described. It's trying to push the boundaries of what we know about thermodynamics into a new regime."
Roßnagel argues that because the squeezing process doesn't actually transfer any energy to the atom's side-to-side movement along the cone, including it in the efficiency calculation for his nano engine is a bit arbitrary. It's like looking at the energy efficiency of a gasoline engine and incorporating the millions of years of energy it took to create the fossil fuels, he says, or the energy it took to pump the oil out of the ground. He is generally in agreement with Taylor, though, that it all depends on how you look at it. "In general it's kind of a semantic problem," Roßnagel says. "It's where you put your camera and decide what is part of the system and what isn't part of the system."
The sheer amount of laboratory space and equipment these nano engines require means that we won't see them outside a lab anytime soon. (Or perhaps ever. Sorry, nanobots!) But Taylor says the insight we'll gain from this type of experiment can be incredibly helpful in other realms—chiefly, quantum computing. The pursuit of building computers that manipulate the funky physics of quantum mechanics to process information has already captured some of the brightest minds in theoretical physics. "And in quantum computation you really need the ability to efficiently move heat around," Taylor says, "and in so far as we can better understand these heat engines, it may improve our ability in developing quantum computers down the road."
Are we alone?
1. We have strong evidence that our solar system is not the only one; we know there are many other Suns with planets orbiting them.
Improved telescopes and detectors have led to the detection of dozens of new planetary systems within the past decade, including several systems containing multiple planets.
One giant leap for bug-kind
2. Some organisms can survive in space without any kind of protective enclosure.
In a European Space Agency experiment conducted in 2005, two species of lichen were carried aboard a Russian Soyuz rocket and exposed to the space environment for nearly 15 days. They were then resealed in a capsule and returned to Earth, where they were found in exactly the same shape as before the flight. The lichen survived exposure to the vacuum of space as well as the glaring ultraviolet radiation of the Sun.
Hot real estate
3. Organisms have been found living happily in scalding water with temperatures as high as 235 degrees F.
More than 50 heat-loving microorganisms, or hyperthermophiles, have been found thriving at very high temperatures in such locations as hot springs in Wyoming's Yellowstone National Park and on the walls of deep-sea hydrothermal vents. Some of these species multiply best at 221 degrees F, and can reproduce at up to 235 degrees F.
Has E.T. already phoned home?
4. We now have evidence that some form of life exists beyond Earth, at least in primitive form.
While many scientists speculate that extraterrestrial life exists, so far there is no conclusive evidence to prove it. Future missions to Mars, the Jovian moon Europa and future space telescopes such as the Terrestrial Planet Finder will search for definitive answers to this ageless question.
To infinity, and beyond!
5. We currently have the technology necessary to send astronauts to another star system within a reasonable timespan. The only problem is that such a mission would be overwhelmingly expensive.
Even the unmanned Voyager spacecraft, which left our solar system years ago at a breathtaking 37,000 miles per hour, would take 76,000 years to reach the nearest star. Because the distances involved are so vast, interstellar travel to another star within a practical timescale would require, among other things, the ability to move a vehicle at or near the speed of light. This is beyond the reach of today's spacecraft -- regardless of funding.
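The 76,000-year figure checks out with simple arithmetic, taking the nearest star system to be about 4.24 light-years away:

```python
LIGHT_YEAR_MILES = 5.879e12
distance = 4.24 * LIGHT_YEAR_MILES   # Proxima Centauri, in miles
speed_mph = 37_000                   # Voyager's quoted speed

years = distance / speed_mph / (24 * 365.25)
print(f"{years:,.0f} years")         # ~77,000, matching the claim
```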
Fellowship of the rings
6. All of the gas giant planets in our solar system (Jupiter, Saturn, Uranus and Neptune) have rings.
Saturn's rings are the most pronounced and visible, but they aren't the only ones.
May the force be with you
7. In the "Star Wars" films, the Imperial TIE Fighters are propelled by ion engines (TIE stands for Twin Ion Engine). While these spacecraft are fictional, real ion engines power some of today's spacecraft.

Ion propulsion has long been a staple of science fiction novels, but in recent years it has been successfully tested on a number of unmanned spacecraft, most notably NASA's Deep Space 1. Launched in 1998, Deep Space 1 rendezvoused with a distant asteroid and then with a comet, proving that ion propulsion could be used for interplanetary travel.
A question of gravity
8. There is no gravity in deep space.
If this were true, the moon would float away from the Earth, and our entire solar system would drift apart. While it's true that gravity gets weaker with distance, it can never be escaped completely, no matter how far you travel in space. Astronauts appear to experience "zero gravity" because they are in continuous free-fall around the Earth.
9. The basic premise of teleportation -- made famous in TV's "Star Trek" -- is theoretically sound. In fact, scientists have already "teleported" the quantum state of individual atoms from one location to another.
As early as the late 1990s, scientists proved they could teleport data using photons, but the photons were absorbed by whatever surface they struck. More recently, physicists at the University of Innsbruck in Austria and at the National Institute of Standards and Technology in Boulder, Colorado, for the first time teleported individual atoms using the principle of quantum entanglement.
Experts say this technology eventually could enable the invention of superfast "quantum computers." But the bad news, at least for sci-fi fans, is that experts don't foresee being able to teleport people in this manner.
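The correlation at the heart of entanglement can be mimicked classically, though the mock-up below reproduces only the agreement of outcomes, none of the quantum features that make real entanglement useful for teleportation:

```python
import random

rng = random.Random(7)

def measure_pair():
    """Both members of a maximally correlated pair give the same
    (individually random) outcome. NOTE: this classical stand-in
    captures only the correlation, not genuine quantum behavior."""
    shared = rng.randint(0, 1)   # the correlation, fixed at creation
    return shared, shared

results = [measure_pair() for _ in range(6)]
print(results)
print(all(a == b for a, b in results))  # True: perfectly correlated
```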
Good day, Suns-shine
10. Tatooine, Luke Skywalker's home planet in the "Star Wars" films, has two Suns -- what astronomers would call a binary star system. Scientists have discovered recently that planets really can form within such systems.
Double stars, or binary systems, are common in our Milky Way galaxy. Among the more than 100 new planets discovered in recent years, some have been found in binary systems, including 16 Cygni B and 55 Cancri A. (But so far, no one has found a habitable planet like Luke Skywalker's Tatooine.)
If quantum computers are ever going to perform all those expected feats of code-breaking and number crunching, then their component qubits---tiny ephemeral quantum cells held in a superposition of internal states---will have to be protected from intervention by the outside world. In other words, decoherence, the loss of the qubits' quantum integrity, has to be postponed. Now theoretical physicists at the Joint Quantum Institute (JQI) and the University of Maryland have taken an important step forward in understanding qubits in a real-world setup. In a new study they show, for the first time, that qubits can successfully exist in a so-called topological superconductor material even in the presence of impurities in the material and strong interactions among participating electrons. To see how qubits can enter into their special coherence-protection program, courtesy of "Majorana particles," an exotic form of excitation, some groundwork has to be laid.
Most designs for qubits involve materials where quantum effects are important. In one such material, superconductors (SC), electrons pair up and can then enter into a large ensemble, a supercurrent, which flows through the material without suffering energy loss. Another material is a sandwich of semiconductors which support the quantum Hall effect (QHE). Here, very low temperatures and a powerful external magnetic field force electrons in a thin boundary layer to execute tiny circular (cyclotron) orbits. At the edge of these layers, the electrons, unable to trace out a complete circular path, will creep along the edge, where they constitute a net electrical current.
One of the most interesting and useful facts about these electrons at the edge is that they move in one direction. They cannot scatter backwards no matter how many impurities (which in ordinary conductors can lead to energy dissipation) may be in the material. If, furthermore, the electrons can be oriented according to their spin---their intrinsic angular momentum---then we get what is called the quantum spin Hall effect (QSH). In this case all electrons with spin up will circulate around the material (at the edge) in one direction, while electrons with spin down will circulate around in the opposite direction.
The QHE state is depicted in figure 1.
In some materials the underlying magnetism of the nuclei in the atoms making up the material is so strong that no external magnet is needed to create the Hall effects. Mercury-cadmium-telluride compounds are examples of materials called topological insulators. They are insulators in the sense that, even as electrons move around the edge of the material with very little loss of energy, the interior of these 3-dimensional structures is an insulator; no current flows. The "topological" is a bit harder to explain. Partly the flow of current on the outside is a matter of geometry: the electrons flow only at the edge and are prevented (owing to quantum interactions) from scattering backwards if they meet an impediment.
But topology in this case has more to do with the way in which the motion of the electrons in these materials are described in terms of "dispersion relations." Just as waves of white light will be dispersed into a spectrum of colors when the waves strike the oblique side of a prism, so electron waves (electrons considered as quantum waves) will be "dispersed," in the sense that electrons with the same energy might have different momenta, depending on how the electrons move through the material in question.
The idea of electron dispersal is often depicted in the form of an energy-level diagram. In insulators (the left panel of Figure 2) electrons remain in a valence band; they don't have enough energy to visit the conduction band of energies; hence the electrons do not move; the material is an insulator against electricity. In a conductor (middle part) the conduction and valence bands overlap. In the QHE (right panel) electrons in the interior of the material also do not move along; the bulk of the material is an insulator. But for electrons at the edge there is a chance for movement into the conduction band.
Now for the topology: just as a coffee cup is equivalent to a donut topologically---either can be transformed into the other by stretching but not by any tearing---so here the valence band can be transformed into a conduction band (at least for edge states) no matter what impurities might be present in the underlying material. In other words, the "topological" nature of the material offers some protection for the flow of electrons against the otherwise-dissipating effects of impurities.
The marvelous properties of superconductors and topological materials can be combined. If a one-dimensional topological specimen---a nanowire made from indium and arsenic---is draped across a superconductor (niobium, say) then the superconductivity can extend into the wire (proximity effect). And in this conjunction of materials, still another hotly-pursued effect can come into play.
One last concept is needed here---Majorana particles---named for the Italian physicist Ettore Majorana, who predicted in 1937 the existence of a class of particle that would serve as its own antiparticle. Probably this object would not exist usefully in the form of a single real particle but would, rather, appear in a material as a quasiparticle, an ensemble excitation of many electrons.
Some scientists believe that qubits made from Majorana excitations in topological materials (and benefitting from the same sort of topological protection that benefits, say, electrons in QHE materials) would be much more immune from decoherence than other qubits based on conventional particles. Specifically, Sankar Das Sarma and his colleagues at the University of Maryland (JQI and the Condensed Matter Theory Center) predicted that Majorana particles would appear in topological quantum nanowires. In fact, part of the Majorana excitation would appear at each end of the wire. These predictions were borne out. It is precisely the separation of these two parts (each of which constitutes a sort of "half electron") that confers some of the anticipated coherence-protection: a qubit made of that Majorana excitation would not be disrupted by merely a local irregularity in the wire.
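The textbook toy model behind such wires is the Kitaev chain, and its end-localized zero modes can be found numerically. A minimal numpy sketch of the model (not of the actual indium-arsenide device):

```python
import numpy as np

def kitaev_bdg(n, t=1.0, delta=1.0, mu=0.5):
    """Bogoliubov-de Gennes matrix of an n-site Kitaev chain,
    the standard toy model of a 1D topological superconductor."""
    h = np.diag([-mu] * n)                        # chemical potential
    d = np.zeros((n, n))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -t            # hopping
        d[j, j + 1], d[j + 1, j] = delta, -delta  # p-wave pairing
    return np.block([[h, d], [-d, -h]])

n = 40
energies, modes = np.linalg.eigh(kitaev_bdg(n))
zero = modes[:, np.argmin(np.abs(energies))]      # the near-zero mode

print(f"smallest |E| = {np.abs(energies).min():.1e}")   # ~0: the Majorana pair
weight = zero[:n] ** 2 + zero[n:] ** 2            # weight per lattice site
print(f"weight on 3 sites at each end: {weight[:3].sum() + weight[-3:].sum():.2f}")
# ~1.0: the mode lives at the two ends of the wire, split between
# the two "half electron" Majorana pieces described above.
```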
A recent experiment in Holland provides preliminary evidence for exactly this occurrence (***).
ROBUST QUBITS AMID DISORDER
One of the authors of the new study, Alejandro Lobos, said that the earlier Maryland prediction, useful as it was, was still somewhat idealistic in that it didn't fully grapple with the presence of impurities, a fact of life which all engineers of actual computers must confront. This is what the new paper, which appears in the journal Physical Review Letters, addresses.
The problem of impurities or defects (which flowing electrons encounter as a form of disorder) is especially important for components which are two or even one dimensional in nature. The same is true for the repulsive force among electrons. "In 3-dimensional materials," said Lobos, "electrons (and their screening clouds of surrounding holes) can avoid each other thanks to the availability of space. They can just go around each other. In 1-D materials, this is not possible, since electrons cannot pass each other. In 1D, if one electron wants to move, it has to move all the other electrons! This ensures that excitations in a 1D metal are necessarily collective, as opposed to the single-particle excitations existing in a 3D metal."
So, in summary, the new Maryland work shows that disorder and electron interactions, two things that normally work to disrupt superconductivity, can be overcome with careful engineering of the material.
"A number of important theoretical studies before ours have focused on the destabilizing effects of either disorder or interaction on topological superconductors," said Lobos. "These studies showed the extent to which a topological superconductor could survive under these effects separately. But to make contact with real materials, disorder and interactions have to be considered on equal footing and simultaneously, a particular requirement imposed by the one-dimensional geometry of the system. It was then an important question to determine if it was possible to stabilize a topological superconductor under their simultaneous presence. The good news is that the answer is yes: despite their detrimental effect, there is still a sizable range of parameters where topological superconductors hosting Majorana excitations can exist. That's the main result of our study, which will be useful to understand and characterize topological superconductors in more realistic situations."
(*) The Joint Quantum Institute is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.
(**) "Interplay of disorder and interaction in Majorana quantum wires," Alejandro M. Lobos, Roman M. Lutchyn, and S. Das Sarma, Physical Review Letters, 5 October 2012, http://prl.
(***) Link to earlier Majorana JQI press release and several pertinent research papers: http://www.
Alejandro M. Lobos, (301)405-0603, firstname.lastname@example.org | <urn:uuid:4df74d0c-32d3-4927-aa2f-79f6e3c3e791> | CC-MAIN-2016-22 | http://www.eurekalert.org/pub_releases/2012-10/jqi-ts100912.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051035374.76/warc/CC-MAIN-20160524005035-00003-ip-10-185-217-139.ec2.internal.warc.gz | en | 0.935444 | 1,936 | 3.578125 | 4 |
Quantum Computers Are a Quantum Leap Closer
Source Newsroom: Purdue University
Newswise — A new breed of faster, more powerful computers based on quantum mechanics may be a step closer to reality, report scientists from Purdue and Duke universities.
By linking a pair of tiny "puddles" of a few dozen electrons sandwiched inside a semiconductor, researchers have enabled these two so-called "quantum dots" to become parts of a transistor - the vital switching component in computer chips. Future computers that use quantum dots to store and process digital information might outperform conventional computer circuits because of both the new transistors' smaller size and their potential to solve problems that would take centuries on today's machines.
"This is a very promising candidate for quantum computation," said Albert M. Chang, who is an adjunct professor of physics in Purdue's School of Science. "We believe this research will allow large numbers of quantum-dot switches to work together as a group, which will be necessary if they are ever to function as a computer's brain, or memory.
"For the market, quantum computers mean better encryption methods and heightened data security. For science, our research may help address the longstanding mystery of the relationship between the classical physics of the world we see every day, and the peculiar world of quantum physics that governs the tiny particles inside atoms."
The research will appear in the current (April 30) issue of Physical Review Letters. The lead author is Jeng-Chung Chen, who received his doctorate at Purdue and is now at the University of Tokyo. Co-authors are Chang, who in 2003 relocated from Purdue to Duke University, where he is a professor of physics, and Michael R. Melloch, a professor in Purdue's School of Electrical and Computer Engineering.
As computer circuits grow ever smaller, manufacturers draw nearer to the time when their chips' tiny on-off switches - representing the 1's and 0's of binary information, or bits - can be made comparable in size to a single molecule. At smaller scales, the laws of classical physics will no longer apply to the switches, but will be replaced by the laws of the subatomic world. These laws, described by quantum physics, can appear strange to the uninitiated.
"An electron, for example, can behave like a particle or a wave at times, and it has the odd ability to seemingly be in two different states at once," Chang said. "Physicists need a different set of words and concepts to describe the behavior of objects that can do such counterintuitive things. One concept we use is the 'spin' of an electron, which we loosely imagine as being similar to the way the Earth spins each day on its axis. But it also describes a sort of ordering electrons must obey in one another's presence: When two electrons occupy the same space, they must pair with opposite spins, one electron with 'up' spin, the other 'down.'"
Spin is one property that physicists seek to harness for memory storage. After collecting 40 to 60 paired electrons in a puddle within a semiconductor wafer of gallium arsenide and aluminum gallium arsenide, the team then added a single additional unpaired electron to the puddle. This extra electron imparted a net spin of up or down to the entire puddle, which they call a quantum dot. The team also built a second quantum dot nearby with the same net spin.
"When isolated from one another, the two net spins would not seek to pair with each other," Chang said. "But we have a special method of 'tuning' the two-dot system so that, despite the similar spins, the two unpaired electrons became 'entangled' - they begin to interact with one another."
The team used eight tiny converging wires, or "gates," to deposit the electrons in the dots one by one and then electronically fine-tune the dots' properties so they would become entangled. With these gates, the team was able to slowly tune the interacting dots so they are able to exist in a mixed, down-up and up-down configuration simultaneously. In each dot, an up or down configuration would represent a 1 or 0 in a quantum bit, or "qubit," for possible use in memory chips.
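For readers who want to see what such a "mixed, down-up and up-down" state looks like on paper, here is a minimal NumPy sketch (an illustration of the textbook two-qubit state, not code from the Purdue experiment):

```python
import numpy as np

# Single-spin basis states: |up> = [1, 0], |down> = [0, 1]
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Product states of the two dots: |up,down> and |down,up>
up_down = np.kron(up, down)
down_up = np.kron(down, up)

# An equal superposition of the two configurations -- the entangled
# "down-up and up-down at the same time" state described above.
psi = (up_down + down_up) / np.sqrt(2)

# Signature of entanglement: tracing out one dot leaves the other
# in a maximally mixed state (0.5 * identity), i.e. totally random.
rho = np.outer(psi, psi.conj())
rho_dot1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_dot1)
```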
"Entanglement is a key property that would help give a quantum computer its power," Chang said. "Because each system exists in this mixed, down-up configuration, it may allow us to create switches that are both on and off at the same time. That's something current computer switches can't do."
Large groups of qubits could be used to solve problems that have myriad potential solutions that must be winnowed down quickly, such as factoring the very large numbers used in data encryption.
"A desktop computer performs single operations one after another in series," Chang said. "It's fast, but if you could do all those operations together, in parallel rather than in series, it can be exponentially faster. In the encryption world, solving some problems could take centuries with a conventional computer."
But for a quantum computer, whose bits can be in two quantum states at once - both on and off at the same time - many solutions could, in theory, be explored simultaneously, allowing for a solution in hours rather than lifetimes.
"These computers would have massive parallelism built right in, allowing for the solution of many tough problems," Chang said. "But for us physicists, the possibilities of quantum computers extend beyond any single application. There also exists the potential to explore why there seem to be two kinds of reality in the universe - one of which, in everyday language, is said to stop when you cross the border 'into the interior of the atom.'"
Because a quantum computer would require all its qubits to behave according to quantum rules, its processor could itself serve as a laboratory for exploring the quantum world.
"Such a computer would have to exhibit 'quantum coherence,' meaning its innards would be a large-scale system with quantum properties rather than classical ones," Chang said. "When quantum systems interact with the classical world, they tend to lose their coherence and decay into classical behavior, but the quantum-dot system we have built exhibits naturally long-lasting coherence. As an entire large-scale system that can behave like a wave or a particle, it may provide windows into the nature of the universe we cannot otherwise easily explore."
The system would not have to be large; each dot has a width of only about 200 nanometers, or billionths of a meter. About 5,000 of them placed end to end would stretch across the diameter of a grain of sand. But Chang said that his group's system had an advantage even greater than its minuscule size.
"Qubits have been created before using other methods," he said. "But ours have a potential advantage. It seems possible to scale them up into large systems that can work together because we can control their behavior more effectively. Many systems are limited to a handful of qubits at most, far too few to be useful in real-world computers."
For now, though, the team's qubit works too slowly to be used as the basis of a marketable device. Chang said the team would next concentrate on improving the speed at which they can manipulate the spin of the electrons.
"Essentially, what we've done is just a physics experiment, no more," he said. "In the future, we'll need to manipulate the spin at very fast rates. But for the moment, we have, for the first time, demonstrated the entanglement of two quantum dots and shown that we can control its properties with great precision. It offers hope that we can reach that future within a decade or so."
This research was funded in part by the National Science Foundation.
As part of an effort to make superpowerful quantum computers, Purdue University researchers have created "quantum dots" in a semiconducting material known as gallium arsenide. The quantum dots (the two small circular areas shown adjacent to one another in the center of the image) are puddles of about 40-60 electrons. Together the dots can form part of transistors in which the electrons' spin, a quantum mechanical property, could be harnessed to make logic gates for next-generation computer chips. Each dot measures only about 180 nanometers (billionths of a meter) in diameter - about 5,000 of them could stretch across the width of a grain of sand. (Illustration by Albert Chang, Duke University Department of Physics)
A publication-quality illustration is available at http://ftp.purdue.edu/pub/uns/+2004/chang-parallel.jpg
Transition Between Quantum States in a Parallel-Coupled Double-Quantum-Dot
J.C. Chen, A.M. Chang, and M.R. Melloch* - Department of Physics, Purdue University; *Electrical and Computer Engineering
Strong electron and spin correlations in a double-quantum-dot (DQD) can give rise to different quantum states. We observe a continuous transition from a Kondo state exhibiting a single-peak Kondo resonance to another exhibiting a double peak by increasing the inter-dot-coupling (t) in a parallel-coupled DQD. The transition into the double-peak state provides evidence for spin entanglement between the excess-electron on each dot. Toward the transition, the peak splitting merges and becomes substantially smaller than t because of strong Coulomb effects. Our device tunability bodes well for future quantum computation applications. | <urn:uuid:c71d1d29-1b9a-4ae5-b788-672517757924> | CC-MAIN-2016-22 | http://www.newswise.com/articles/view/504684/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276543.81/warc/CC-MAIN-20160524002116-00051-ip-10-185-217-139.ec2.internal.warc.gz | en | 0.945188 | 1,996 | 3.5625 | 4 |
Spooky Atomic Clocks

NASA-supported researchers hope to improve high-precision clocks by entangling their atoms.
January 23, 2004: Einstein called it "spooky action at a distance." Now NASA-funded researchers are using an astonishing property of quantum mechanics called "entanglement" to improve atomic clocks--humanity's most precise way to measure time. Entangled clocks could be as much as 1000 times more stable than their non-entangled counterparts.
This improvement would benefit pilots, farmers, hikers--in short, anyone who uses the Global Positioning System (GPS). Each of the 24+ GPS satellites carries four atomic clocks on board. By triangulating time signals broadcast from orbit, GPS receivers on the ground can pinpoint their own location on Earth.
Right: Quantum entanglement does some mind-bending things. In this laser experiment entangled photons are teleported from one place to another.
NASA uses atomic clocks for spacecraft navigation. Geologists use them to monitor continental drift and the slowly changing spin of our planet. Physicists use them to check theories of gravity. An entangled atomic clock might keep time precisely enough to test the value of the Fine Structure Constant, one of the fundamental constants of physics.
Through its office of Biological and Physical Research, NASA recently awarded a grant to Alex Kuzmich, a professor of physics at the Georgia Institute of Technology, and his colleagues to support their research. Kuzmich has studied quantum entanglement for the last 10 years and has recently turned to exploring how it can be applied to atomic clocks.
Einstein never liked entanglement. It seemed to run counter to a central tenet of his theory of relativity: nothing, not even information, can travel faster than the speed of light. In quantum mechanics, all the forces of nature are mediated by the exchange of particles such as photons, and these particles must obey this cosmic speed limit. So an action "here" can cause no effect "over there" any sooner than it would take light to travel there in a vacuum.
But two entangled particles can appear to influence one another instantaneously, whether they're in the same room or at opposite ends of the Universe. Pretty spooky indeed.
Quantum entanglement occurs when two or more particles interact in a way that causes their fates to become linked: It becomes impossible to consider (or mathematically describe) each particle's condition independently of the others'. Collectively they constitute a single quantum state.
Left: Making a measurement on one entangled particle affects the properties of the other instantaneously. Image by Patrick L. Barry.
Two entangled particles often must have opposite values for a property -- for example, if one is spinning in the "up" direction, the other must be spinning in the "down" direction. Suppose you measure one of the entangled particles and, by doing so, you nudge it "up." This causes the entangled partner to spin "down." Making the measurement "here" affected the other particle "over there" instantaneously, even if the other particle was a million miles away.
While physicists and philosophers grapple with the implications for the nature of causation and the structure of the Universe, some physicists are busy putting entanglement to work in applications such as "teleporting" atoms and producing uncrackable encryption.
At the heart of every atomic clock lies a cloud of atoms, usually cesium or rubidium. The natural resonances of these atoms serve the same purpose as the pendulum in a grandfather clock. Tick-tock-tick-tock. A laser beam piercing the cloud can count the oscillations and use them to keep time. This is how an atomic clock works.
Right: Lasers are a key ingredient of atomic clocks--both the ordinary and entangled variety.
"The best atomic clocks on Earth today are stable to about one part in 1015," notes Kuzmich. That means an observer would have to watch the clock for 1015 seconds or 30 million years to see it gain or lose a single second.
The precision of an atomic clock depends on a few things, including the number of atoms being used. The more atoms, the better. In a normal atomic clock, the precision is proportional to the square-root of the number of atoms. So having, say, 4 times as many atoms would only double the precision. In an entangled atomic clock, however, the improvement is directly proportional to the number of atoms. Four times more atoms makes a 4-times better clock.
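The difference between the two scaling laws is easy to tabulate. In this sketch the 1-part-in-10^15 baseline echoes the figure Kuzmich quotes; the atom counts and the assignment of that baseline to a single atom are purely illustrative:

```python
import numpy as np

baseline = 1e-15                 # illustrative starting instability
atoms = np.array([1, 4, 100, 10_000])

standard = baseline / np.sqrt(atoms)   # unentangled: improves as sqrt(N)
entangled = baseline / atoms           # entangled: improves in proportion to N

for n, s, e in zip(atoms, standard, entangled):
    print(f"{n:>6} atoms:  standard {s:.1e}   entangled {e:.1e}")
```

Note how quadrupling the atoms only doubles the standard clock's stability, exactly as described above.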
Using plenty of atoms, it might be possible to build a "maximally entangled clock stable to about one part in 10^18," says Kuzmich. You would have to watch that clock for 10^18 seconds or 30 billion years to catch it losing a single second.
Kuzmich plans to use the lasers already built-in to atomic clocks to create the entanglement.
"We will measure the phase of the laser light passing through the cloud of atoms," he explains. Measuring the phase "tweaks the laser beam," and if the frequency of the laser has been chosen properly, tweaking the beam causes the atoms to become entangled. Or, as one quantum physicist might say to another, "such a procedure amounts to a quantum non-demolition (QND) measurement on the atoms, and results in preparation of a Squeezed Spin State."
Above: Georgia Institute of Technology professor of physics Alex Kuzmich.
How soon an entangled clock could be built--much less launched into space aboard a hypothetical new generation of GPS satellites--is difficult to predict, cautions Kuzmich. The research is still at the stage of just demonstrating the principle. Building a working prototype is probably several years away.
But thanks to research such as this, having still-better atomic clocks available to benefit science and technology is only a matter of time.
Tick-Tock Atomic Clock -- (Science@NASA) Scientists are building atomic clocks that keep time with mind-boggling precision. Such devices will help farmers, physicists, and interstellar travelers alike.
NASA's Office of Biological and Physical Research supports studies of fundamental physics for the benefit of people on Earth and in space.
What is an atomic second? In an atomic clock, the "tick" of an electronic oscillator is kept steady by comparing it to the natural frequency of an atom -- usually cesium-133. When a cesium atom drops from one particular energy level to another, a microwave photon emerges. The wave-like photon oscillates like a pendulum in an old-style clock. When it has oscillated precisely 9,192,631,770 times -- by decree of the Thirteenth General Conference on Weights and Measures in 1967 -- we know that one "atomic second" has elapsed.
From the UPSC perspective, the following things are important:
Prelims level: Qubit, superposition.
Mains level: Paper 3 - What do you understand by quantum technology? What are its applications? How is it different from classical computer technology?
The article suggests that the coronavirus crisis could speed up research in the field of quantum computing. The tremendous speed offered by quantum computers could help us find cures for diseases like Covid-19 in a much shorter time. This article explains the limitations of classical computers, the workings of quantum technology, and how quantum computers overcome those limitations.
Use of supercomputers to find a cure for Covid-19
- The whole world is under pressure to quickly discover a vaccine and a cure for Covid-19.
- IBM’s Summit, the world’s fastest supercomputer, was used for running numerous simulations and computations.
- These simulations and computations help scientists find promising molecules to fight the pandemic.
- The latest update says the Summit has been able to identify 77 candidate molecules that researchers can use in trials.
- This was achieved in just two days, while, traditionally, it has taken months to make such progress.
Computing capacity as a limit on molecular discoveries
- Today, faster molecular discoveries are limited by computing capacity.
- Molecular discoveries are also limited by the need for scientists to write codes for harnessing the computing power.
- It is no secret that classical computing power is plateauing (i.e. it is not growing anymore).
- And until we have scalable artificial intelligence (AI) and machine learning (ML), scientists will have to write code not only for different scenarios but also for different computing platforms.
- So, what we need today is more computing power.
The following points explain the limits of classical computers. Pay attention to Moore's law and how it explains the development of semiconductor technologies, and in turn of computers as a whole.
What is the solution to the limits of classical computers?
- Given that we have already neared the peak of classical computing, the solution probably is quantum computing.
- Not just vaccines, quantum computing can accelerate many innovations, such as hyper-individualized medicines, 3-D printed organs, search engines for the physical world etc.
- All innovations currently constrained by the size of transistors used in classical computing chips can be unleashed through quantum computing.
- Moore’s law: In 1965, Gordon Moore had said the number of transistors that can be packed into a given unit of space will double about every two years.
- Subsequently, in an interview in 2005, he himself admitted that this law can’t continue forever.
- He had said: “It is the nature of exponential functions, they eventually hit a wall.”
- Over the last 60 years, we reaped the benefits of Moore’s law in many ways.
- For instance, compared to the early days of the Intel 4004, modern 14nm processors deliver a far bigger impact—3,500 times better performance and 90,000 times improved efficiency, at 1/60,000th the cost!
- Yet, we are also seeing his 2005 statement coming true. All the experts agree that the ‘wall’ is very near.
- So, what next? The answer again is probably the same—quantum computing.
Quantum technology is one of the emerging and revolutionary technologies; you should be aware of the terms and general principles that lie at the heart of such technology. Terms like superposition, qubit, and binary are important if you want to answer questions related to this technology.
Quantum computing and its applications
- It is no more a concept, there are working models available on the cloud.
- How it works: Quantum computing uses the ability of subatomic particles to exist in multiple states simultaneously, until they are observed.
- The concept of qubits: Unlike classical computers, which can store information in just two values, that is 1 or 0, quantum computing uses qubits that can exist in any superposition of these values.
- This superposition enables quantum computers to solve in seconds problems that a classical computer would take thousands of years to crack (see the sketch after this list).
- Applications: The applications of this technology are enormous; to cite just a few, it can help with the discovery of new molecules and optimize financial portfolios for different risk scenarios.
- It can also crack RSA encryption keys, detect stealth aircraft, search massive databases in a split second and truly enable AI.
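A quick sketch of why that superposition is so powerful: describing n qubits classically takes 2^n complex amplitudes, while n ordinary bits take just n values (illustrative, not from the article):

```python
# Classical cost of writing down a general n-qubit state: 2**n amplitudes.
for n in (1, 2, 10, 50, 300):
    print(f"{n:>3} qubits -> {2**n:.3e} amplitudes vs {n} classical bit values")
```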
Investment in the development of technology
- In the Union budget this year, the Indian government announced investments of ₹8,000 crores for developing quantum technologies and applications.
- Globally, too, countries and organizations are rushing to develop this technology and have already invested enormous capital towards its research.
Historically, unprecedented crises have always created more innovations than routine challenges or systematic investments. Coincidentally, current times pose similar opportunities in disguise for the development of quantum technologies.
Back2Basics: Difference between bit and qubit
- A binary digit, characterized as 0 and 1, is used to represent information in classical computers.
- A binary digit can represent up to one bit of information, where a bit is the basic unit of information.
- In classical computer technologies, a processed bit is implemented by one of two levels of low DC voltage.
- And whilst switching from one of these two levels to the other, a so-called forbidden zone must be passed as fast as possible, as electrical voltage cannot change from one level to another instantaneously.
- There are two possible outcomes for the measurement of a qubit—usually taken to have the values "0" and "1", like a bit or binary digit (see the sketch after this list).
- However, whereas the state of a bit can only be either 0 or 1, the general state of a qubit according to quantum mechanics can be a coherent superposition of both.
- Moreover, whereas a measurement of a classical bit would not disturb its state, a measurement of a qubit would destroy its coherence and irrevocably disturb the superposition state.
- It is possible to fully encode one bit in one qubit.
- However, a qubit can hold more information, e.g. up to two bits using superdense coding.
- For a system of n components, a complete description of its state in classical physics requires only n bits, whereas in quantum physics it requires 2^n − 1 complex numbers.
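A minimal sketch of the points above, assuming NumPy: amplitudes rather than probabilities, the Born rule, and the randomness of each measurement (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

psi = np.array([1, 1]) / np.sqrt(2)   # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2              # Born rule: |amplitude|^2 per outcome

# Each measurement returns 0 or 1 at random and destroys the superposition;
# only the statistics over many freshly prepared qubits reveal the state.
print(rng.choice([0, 1], size=12, p=probs))
```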
A decade ago, quantum computing was still something of a parlor game. Quantum-computer advocates could make bold claims about one promising technology or another because no one had yet figured out how to string together more than a handful of quantum bits (qubits).
Times have changed. IBM now has a 50-qubit machine, Intel is at 49 qubits, and Google has developed a 72-qubit device. And in September, Pennsylvania State University researchers announced they’d built the framework for a 125-qubit compute engine.
However, unlike the more mature devices from IBM, Intel, and Google, the foundational element for the proof-of-concept Penn State system is not the computer chip but rather the atomic clock.
The neutral-atom quantum computer, proposed by the Penn State group and other researchers around the world, uses a cesium atom in a laser trap (the gold standard of precision timekeeping) as the quantum bit on which the compute engine is based.
“There’s no quantum-mechanical system we understand better than an atom,” says David Weiss, a professor of physics at Penn State. His group published a paper in Nature announcing that they’d used lasers to suspend and cool 125 cesium atoms in the shape of a cube, with each atom held 5 micrometers from its nearest neighbors. (The qubits can be loaded, cooled, and shielded from interference. But the group hasn’t yet developed the logic gates or error correction necessary to make it run.)
Atomic clocks use a well-studied characteristic of these ultracooled and stabilized atoms as the basis for a tick to mark the passage of time. Called the hyperfine split, it involves the spin of each atom’s outermost electron. (One second is universally defined today as 9,192,631,770 periods of the radiation given off from the hyperfine split in cesium.)
For a quantum computer, the idea is to use the same set of cesium quantum states used by an atomic clock. But the cesium atoms, as part of a quantum computer, rely on a quantum property not used in the atomic clock. Like all qubits, those in the cesium-atom quantum computer can occupy one hyperfine state (call it 0) or a slightly higher energy state (call it 1) or, at the core of quantum computing, an in-between state that’s a little bit 0 and a little bit 1, called quantum superposition.
To perform quantum computations using an array of atoms, the atoms must be entangled. To achieve this, Weiss explains, lasers carefully kick an individual atom inside the 125-qubit 3D array into a highly excited electronic state and then cool it back down. The entire system is so sensitive, he says, that cesium atoms near the target atom sense its excitation and de-excitation, which is enough to entangle at least a portion of the atoms in the array.
Atomic Order: These images show various configurations of cesium atoms held by lasers in a grid. The presence of an illuminated dot indicates an atom is trapped in place. The absence of a dot indicates an empty parking space. Image: Weiss Laboratory/Penn State
Mark Saffman, a physics professor at the University of Wisconsin–Madison, says his group’s 2D arrays of trapped cesium atoms can maintain their delicate quantum states for 10 seconds or more (Saffman notes that this figure comes from Weiss’s research team). By contrast, a typical operation (say, multiplying one set of qubits by another) might take a microsecond or less. So the potential is inherent in the system, Saffman says, to run many operations before its quantum states collapse due to noise. “By exciting these atomic qubits to highly excited states using laser beams, we can turn on, at will, very strong interactions,” he says.
There are still trade-offs that make neutral-atom quantum computing a challenge, says William Phillips, a physics professor at the University of Maryland and cowinner of the 1997 Nobel Prize in Physics for his work on laser atom traps.
“The lack of long-range, strong Coulomb interactions means that it is easier to put lots of atoms into a small volume, but it also means that it is harder to manipulate the atoms—that is, to perform quantum gates rapidly,” Phillips says.
Yet, says Dana Anderson, CEO of Boulder, Colo.–based ColdQuanta, now that individual atoms can be reliably stabilized and cooled to below 100 nanokelvins, much of the fundamental science is in place. Anderson says ColdQuanta is working to realize Saffman and Weiss’s vision of neutral atoms as the basis for quantum computers or simulators.
“Once you can get atoms down that cold, we have line of sight to a lot of quantum technologies,” Anderson says. “Whether we’re doing a quantum clock or quantum computing, it’s the same stuff that goes inside.”
Weiss says his 3D array could possibly scale up to 1,728 qubits, arranged in a 12-by-12-by-12 cube, with current technology. However, little could be done with so many qubits until his group and others develop stronger error-correction measures.
And whether Weiss’s 3D arrays or the 2D arrays preferred by Saffman and ColdQuanta are more feasible in the long term remains an open question. For now, “I recognize these problems to be solvable,” Anderson says. “It’s very much an engineering challenge.”
This article appears in the December 2018 print issue as “Atomic Clocks Inspire New Qubits.” | <urn:uuid:0de3c302-87f3-48dd-9ebe-5ca1573a0729> | CC-MAIN-2022-05 | https://spectrum.ieee.org/quantum-computing-atomic-clocks-make-for-longerlasting-qubits | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00374.warc.gz | en | 0.928218 | 1,206 | 3.53125 | 4 |
Physicists at JILA have for the first time observed chemical reactions near absolute zero, demonstrating that chemistry is possible at ultralow temperatures and that reaction rates can be controlled using quantum mechanics, the peculiar rules of submicroscopic physics.
The new results and techniques, described in the Feb. 12 issue of Science,* will help scientists understand previously unknown aspects of how molecules interact, a key to advancing biology, creating new materials, producing energy and other research areas. The new JILA work also will aid studies of quantum gases (in which particles behave like waves) and exotic physics spanning the quantum and macroscopic worlds. It may provide practical tools for "designer chemistry" and other applications such as precision measurements and quantum computing.
Scientists have long known how to control the internal states of molecules, such as their rotational and vibrational energy levels. In addition, the field of quantum chemistry has existed for decades to study the effects of the quantum behavior of electrons and nuclei—constituents of molecules. But until now scientists have been unable to observe direct consequences of quantum mechanical motions of whole molecules on the chemical reaction process. Creating simple molecules and chilling them almost to a standstill makes this possible by presenting a simpler and more placid environment that can reveal subtle, previously unobserved chemical phenomena.
By precisely controlling the ultracold molecules’ internal states—electronic energy levels, vibrations, rotations and nuclear spin (or angular momentum)—while also controlling the molecular motions at the quantum level, JILA scientists can study how the molecules scatter or interact with each other quantum mechanically. They were able to observe how the quantum effects of the molecule as a whole dictate reactivity. This new window into molecular behavior has allowed the observation of long-range interactions in which quantum mechanics determines whether two molecules should come together to react or stay apart. Thus the JILA work pushes the field in new directions and expands the standard conception of chemistry.
The JILA quantum chemistry experiments were performed with a gas containing up to 1 trillion molecules per cubic centimeter at temperatures of a few hundred billionths of a Kelvin (nanokelvins) above absolute zero (minus 273 degrees Celsius or minus 459 degrees Fahrenheit). Each molecule consists of one potassium atom and one rubidium atom. The molecules have a negative electric charge on the potassium side and a positive charge on the rubidium side, so they can be controlled with electric fields.
By measuring how many molecules are lost over time from a gas confined inside a laser-based optical trap, at different temperatures and under various other conditions, the JILA team found evidence of heat-producing chemical reactions in which the molecules must have exchanged atoms, broken chemical bonds, and forged new bonds. Theoretical calculations of long-range quantum effects agree with the experimental observations.
In conventional chemistry at room temperature, molecules may collide and react to form different compounds, releasing heat. In JILA’s ultracold experiments, quantum mechanics reigns and the molecules spread out as ethereal rippling waves instead of acting as barbell-like solid particles. They do not collide in the conventional sense. Rather, as their quantum mechanical wave properties overlap, the molecules sense each other from as much as 100 times farther apart than would be expected under ordinary conditions. At this distance the molecules either scatter from one another or, if quantum conditions are right, swap atoms. Scientists expect to be able to control long-range interactions by creating molecules with specific internal states and “tuning” their reaction energies with electric and magnetic fields.
The JILA team produced a highly dense molecular gas and found that, although molecules move slowly at ultralow temperatures, reactions can occur very quickly. However, reactions can be suppressed using quantum mechanics. For instance, a cloud of molecules in the lowest-energy electronic, vibrational and rotational states reacts differently if the nuclear spins of some molecules are flipped. If a cloud of molecules is divided 50/50 into two different nuclear spin states, reactions proceed 10 to 100 times faster than if all molecules possess the same spin state. Thus, by purifying the gas (by preparing all molecules in the same spin state), scientists can deliberately suppress reactions.
The JILA experimental team attributes these results to the fact the molecules are fermions, one of two types of quantum particles found in nature. (Bosons are the second type.) Two identical fermions cannot be in the same place at the same time. This quantum behavior of fermions manifests as a suppression of the chemical reaction rate in the ultralow temperature gas. That is, molecules with identical nuclear spins are less likely to approach each other and react than are particles with opposite spins.
Hue-ing to quantum computing
By Eric Smalley, Technology Research News
The starting gun has sounded in the marathon of developing solid-state quantum computers, and one lead team jockeying for position is betting that shining different color lasers on impure diamonds will get them across the finish line.
The researchers are building their quantum computer using spectral hole burning, which tunes atoms or molecules trapped in a transparent solid to specific light wavelengths, or colors.
The researchers have tuned nitrogen atoms embedded in diamond to a range of slightly different wavelengths, said Selim M. Shahriar, a research scientist in the Research Laboratory of Electronics at the Massachusetts Institute of Technology. The differences in color are imperceptible to humans, he added.
Each atom is tuned to two wavelengths. If a laser beam of one of the wavelengths hits it, the atom will emit light of the other wavelength, Shahriar said. In addition, a pair of atoms each tuned to two wavelengths can be linked to each other. For example, if atom A is tuned to wavelengths 1 and 2 and atom B is tuned to wavelengths 2 and 3 and the atoms are hit with lasers tuned to wavelengths 1 and 3, both atoms emit light of wavelength 2, he said.
This allows the atoms to be coupled by quantum entanglement. When two atoms are entangled, a change in the state of one is immediately reflected by a corresponding change in the other regardless of the physical distances between the atoms.
An atom can serve as a quantum bit, or qubit, because it spins in one of two directions, and its spins can represent the ones and zeros of binary computing. Because isolated bits are of little use, linking atoms is a prerequisite for quantum computing.
The researchers expect their spectral hole burning technique to yield 300 or more qubits, Shahriar said. That number is significant because a 300-qubit quantum computer would be able to factor numbers larger than any conventional computer will likely ever be able to handle.
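To see why roughly 300 qubits marks that threshold, compare the size of a 300-qubit state space with the commonly quoted estimate of about 10^80 atoms in the observable universe (a back-of-the-envelope sketch):

```python
amplitudes = 2 ** 300
print(f"2^300 = {amplitudes:.2e} basis states")   # ~2.04e+90

# Roughly ten orders of magnitude more basis states than there are
# atoms in the observable universe (~1e80 by common estimates).
print(f"ratio to 1e80 atoms: {amplitudes / 1e80:.1e}")
```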
"The experiment is already in progress. We have already demonstrated that each atom has the two-color response that we need. We have already demonstrated how we can line [the atoms] all up to be spinning in the same direction. That's the starting point of the quantum computer," Shahriar said.
How long the qubits last is as important as the number of qubits. Qubits are fragile because the slightest influence from the outside environment can knock the atoms out of their quantum state. The nitrogen-infused diamond spectral hole burning technique would probably last long enough to yield 40,000 quantum operations, Shahriar said.
"You need to be able to do more operations, but there are ways to increase that number," he said.
The other early favorites in the race for solid-state quantum computing are techniques based on superconductors, electron spins in quantum dots and nuclear spins in semiconductors.
"It's very important to pursue a lot of different things at this stage because it's very unclear exactly what type of hardware is going to be useful in the long run," said John Preskill, professor of theoretical physics and director of the Institute for Quantum Information at the California Institute of Technology. "So it's a healthy thing that there are a lot of different ideas floating around, spectral hole burning being one of them."
The first step toward solid-state quantum computers is demonstrating good control over a qubit in a system "which has at least the potential to be scaled up," Preskill said.
Other researchers have demonstrated seven-qubit systems using nuclear magnetic resonance (NMR). However, NMR techniques are not expected to scale up significantly, hence the race to develop solid-state quantum computing. Solid-state devices are based on semiconductors or other crystalline solids.
Schemes that are good candidates for quantum computing should support reliably readable results, reliable preparation of the initial states of their qubits, and logic gates with good fidelity, Preskill said. NEC researchers in Japan have gone the furthest in solid-state quantum computing with a superconducting implementation in which they have established a qubit, he said.
The nitrogen-diamond spectral hole team is in the last year of a three-year project to establish the viability of the technique, Shahriar said.
"We expect to demonstrate quantum entanglement within nine months," he said. "At the end of the next three-year [period] we expect to have at least 10 of these atoms coupled to one another. And that'll be a pretty significant step."
Though useful quantum computers are at least 20 years away, quantum information processing could be used for secure communications in five to ten years, Shahriar said.
Shahriar's colleagues are Philip R. Hemmer of the U.S. Air Force, Seth Lloyd and Jeffery A. Bowers of MIT, and Alan E. Craig of Montana State University. The research is funded by the Air Force Office of Scientific Research, the Army Research Office and the National Security Agency.
Timeline: 5-10 years; >20 years
TRN Categories: Quantum Computing
Story Type: News
Related Elements: Technical paper "Solid State Quantum Computing Using Spectral Holes" posted on CoRR
September 20, 2000
Quantum computer keeps it simple
Technology Research News
Quantum computers promise to be fantastically fast at solving certain problems like cracking codes and searching large databases, which provides plenty of incentive for overcoming the tremendous obstacles involved in building them.
The basic component of quantum computers, the qubit, is made from an atom or subatomic particle, and quantum computers require that qubits exchange information, which means the interactions between these absurdly tiny objects must be precisely controlled.
Researchers from the University of Oxford and University College London in England have proposed a type of quantum computer that could greatly simplify the way qubits interact.
The scheme allows qubits to be constantly connected to each other instead of repeatedly connected and disconnected, and it allows a computer's qubits to be controlled all at once, said Simon Benjamin, a senior research fellow at the University of Oxford in England. Global control is a fairly unconventional idea that "allows you to send control signals to all the elements of the device at once instead of having to separately wire up each element," he said.
The scheme can be implemented with different types of qubits. A common type uses the spin of an electron. Electrons can be oriented in one of two directions, spin up and spin down. These are analogous to the poles of a kitchen magnet and can represent the 1s and 0s of computer information.
Key to the potential power of quantum computers is a weird trait of quantum particles like electrons. When an electron is isolated from its environment, it enters into superposition, meaning it is in some mix of both spin up and spin down.
Linking two qubits that are in superposition makes it possible for a quantum computer to examine all of the possible solutions to a problem at once. But controlling how two qubits interact is extremely challenging, said Benjamin. Qubits "must be made to talk to each other, and when the operation is over they must be made to stop talking," he said.
In traditional quantum computing schemes that use electron spins, pairs of qubits have a metal electrode between them. When the electrode is negatively charged, it repels the negatively charged electrons that make up the qubits, keeping them separated. But giving the electrode a positive charge draws the electrons toward each other, allowing them to interact by exchanging energy. Allowing the qubits to interact for half the time it takes to completely swap energy is the basis of two-qubit logic gates.
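That half-swap is the entangling square-root-of-SWAP gate familiar from spin-qubit proposals. A small NumPy sketch (an illustration, not code from the paper) checks that two half-swaps compose to a full swap:

```python
import numpy as np

# SWAP exchanges the states of two spins (basis order: 00, 01, 10, 11).
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Running the exchange interaction for half the swap time gives sqrt(SWAP).
a, b = (1 + 1j) / 2, (1 - 1j) / 2
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, a, b, 0],
                      [0, b, a, 0],
                      [0, 0, 0, 1]])

assert np.allclose(SQRT_SWAP @ SQRT_SWAP, SWAP)  # two half-swaps = one swap

# A single half-swap entangles: |01> becomes a superposition of |01> and |10>.
print(SQRT_SWAP @ np.array([0, 1, 0, 0], dtype=complex))
```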
The energy of the two qubits has to be resonant or errors can arise, but off-resonant energy can also be harnessed, said Benjamin. Particles resonate at specific energies in the same way that larger objects vibrate more readily at certain frequencies. Different energies can be more or less resonant with each other, much like certain musical notes sounding better together than others. "Something that we were used to thinking of as a source of error could in fact be a means of controlling the computer," he said.
The researchers' proposal replaces the electrode with a third electron. These three electrons are constantly interacting, but they don't always exchange energy. When the middle electron is off resonant, the qubits are blocked from exchanging energy. This way, the interaction "is always on, but we can effectively negate it by ensuring that the energies of neighboring spins are completely incompatible," said Benjamin.
Avoiding electrodes is useful for several reasons. Fabricating qubits with electrodes between them "will require a fantastic degree of control," said Benjamin. "If a particular pair of electrons are too close, then the interaction will be jammed on, and if they are too far away then the interaction will be jammed off," he said.
Electrodes can also knock qubits out of superposition. "Each electrode can act as an [antenna], channeling electromagnetic noise from the room-temperature world right down to the qubits," said Benjamin.
The researchers took their proposal a step further by removing the need to control electrons individually. Every change to the energy of the electrons is applied to the whole device. The researchers divide a string of qubits into two groups, odd and even, with every other qubit in one group. A set of six specific changes to the energies of the electrons covers all of the logic gates required for quantum computing, according to the researchers. Quantum programs would consist of timed sequences of the changes.
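A toy sketch of that global-control idea: one broadcast pulse reaches every qubit, but only the group whose energy matches responds. The pulse names and data structures here are invented for illustration; the six actual operations are defined in the paper:

```python
# Eight qubits assigned alternately to groups A and B, as in the proposal.
qubits = [{"group": "A" if i % 2 == 0 else "B", "history": []}
          for i in range(8)]

def broadcast(pulse, register):
    """Send one global pulse; only the energy-matched group responds."""
    for q in register:
        if q["group"] == pulse["target"]:
            q["history"].append(pulse["name"])

# A "program" is just a timed sequence of global pulses.
program = [{"name": "tilt-A", "target": "A"},
           {"name": "tilt-B", "target": "B"},
           {"name": "tilt-A", "target": "A"}]

for pulse in program:
    broadcast(pulse, qubits)

print(qubits[0]["history"], qubits[1]["history"])
# -> ['tilt-A', 'tilt-A'] ['tilt-B']
```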
The main disadvantage of the researchers' proposal is that it could require as many as two spins per qubit rather than the usual single spin, which would make for a larger device, said Benjamin. "Right now experimentalists are struggling to make even two qubits in solid-state systems," he said.
The researchers' work is valuable because it extends the range of candidates for quantum computing, said Barry Sanders, a professor of quantum information science at the University of Calgary in Canada. The work is "stoking the fires of creativity so that we physicists can dream up other quantum computing realizations that lead to easier control and less experimental complexity," he said.
There is a growing realization that there are many ways to perform qubit operations, said Robert Joynt, a physics professor at the University of Wisconsin at Madison. The Oxford and University College London work is significant for people trying to make a real machine, because it means that the constraints on the hardware are a lot looser than people thought at first, he said. This research "is particularly nice since it gets rid of the usual need to precisely tune two-qubit operations."
The researchers are currently exploring how the method would work in a two- or three-dimensional array of qubits, said Benjamin. "We'd also like to build up a more detailed description of how to implement our scheme with specific technologies like... electron spin," he said.
Researchers generally agree that practical quantum computers are two decades away. It is possible that quantum computers capable of computations that are impossible on conventional computers could be built within ten years, said Benjamin.
Such systems "will be mainly of interest to the scientific community
because they will involve using quantum computers to simulate other quantum
systems, such as fundamental biological processes," said Benjamin. "These
first quantum computers may require an entire lab built around them, and
may be treated as a national or international resource for research --
a bit like today's supercomputers or... particle accelerators."
However, it is also possible that quantum computing research could stall if there's not enough experimental progress in the next few years, said Benjamin. "It's possible that quantum computing is an idea born before its time. Our technology may simply be too crude to achieve it," he said.
Benjamin's research colleague was Sougato Bose. The work appeared in the June 20, 2003 issue of Physical Review Letters. The research was funded by the Royal Society, the Oxford-Cambridge-Hitachi "Nanoelectronics at the Quantum Edge" project in England, and the National Science Foundation.
Timeline: 10-20 years
Funding: Corporate, Government, University
TRN Categories: Quantum Computing and Communications
Story Type: News
Related Elements: Technical paper, "Quantum Computing with
an Always-On Heisenberg Interaction," Physical Review Letters, June 20,
August 13/20, 2003
Science tells us that it is impossible for an object to travel at light speed, let alone faster than that. But so many of our favorite science-fiction movies, games, and TV shows rely on faster-than-light travel to craft their interplanetary adventures.
Let’s take a look at five means of FTL found in sci-fi that don’t break the rules of relativity and examine how plausible they are based on the science behind them.
1. Hyperdrive

Popularized by Star Wars and used extensively in fiction, a hyperdrive enables a spaceship to travel at FTL speeds by entering another dimension known as "hyperspace." The spaceship isn't actually traveling faster than the speed of light, but rather is making use of hyperspace as a shortcut, and the hyperdrive is the mechanism that shunts the spaceship into and out of this parallel dimension.
Specific coordinates within hyperspace have corresponding coordinates in normal space, but the distance between those two points will be shorter in hyperspace, allowing for a faster journey. Before making a “hyperspace jump,” calculations must be made to find the matching coordinates between hyperspace and normal space in order to know when and where to exit hyperspace at the desired normal space destination.
Is it plausible?
Physicist Burkhard Heim proposed a theory in 1977 that FTL travel may be possible by using magnetic fields to enter higher-dimensional space. The theory uses a mathematical model that calls upon six or more dimensions in an attempt to resolve incompatibilities between quantum mechanics and general relativity, but Heim's ideas have not been accepted in mainstream science. Still, the fact that a theoretical physicist devoted a large portion of his life in pursuit of a theory that could lead to a means of space travel lends the concept of hyperspace a little more credibility than if it were simply the fancy of a sci-fi writer.
2. Jump Drive
Seen in such works as Battlestar Galactica, a jump drive allows for instantaneous teleportation between two points. Similar to a hyperdrive, coordinates must be calculated to ensure a safe jump; the longer the desired travel distance, the more complex the calculation. In theory, there is no limit to how far a jump can take a ship, but an incorrect calculation may result in a catastrophic collision with a planet or space debris.
The Dune universe’s FTL, based on the fictional “Holtzman effect,” can also be considered a jump drive.
Is it plausible?
Master of hard sci-fi Isaac Asimov was the first to suggest the idea of a jump drive in the Foundation series, which lends some credibility to the idea. However, most fiction doesn’t clearly explain the principles of physics that allow for this teleportation, making it impossible to claim a jump drive as plausible. However, if it functions by opening a wormhole…
3. Wormholes

A wormhole, as seen in the Stargate franchise, allows for near-instantaneous travel across vast distances. Wormholes may be naturally-occurring or man-made, but are almost always temporary and serve as tunnels through spacetime.
Imagine our universe as a piece of paper, and an ant walking on that piece of paper as a spaceship. If the ant wants to walk from one end of that piece of paper to the other, the fastest way to do so would be to travel in a straight line. But paper, like space, bends. If you bend the paper into a U shape, the ant’s journey goes largely undisturbed – it still has to traverse the same distance along that line. However, in 3D space, the two ends of the paper are very close to each other now. Cut off a piece of a drinking straw and let the ant use it as a bridge or tunnel between the two ends of the paper, and the journey is suddenly much shorter.
Is it plausible?
While we have never directly observed any evidence for one, wormholes are theoretically possible. Albert Einstein and his colleague Nathan Rosen first discovered wormholes in 1935 as solutions to equations within Einstein’s general theory of relativity – the math says they can exist.
Since then, other scientists, including Stephen Hawking, have argued that it may be possible to traverse a wormhole, under the right circumstances. The debate surrounding wormholes isn’t about their plausibility, but rather how they may be created and sustained.
4. Slipstream

The concept of slipstream can be found in such works as Star Trek, Doctor Who, and the Halo video game franchise, but there is no widely-agreed upon definition of what slipstream is or how it works beyond it being a means of FTL. We'll consider the slipstream seen in Gene Roddenberry's Andromeda, where it is "not the best way to travel faster than light, it's just the only way," as per the show's protagonist.
Slipstream is a form of interdimensional highway in which ships ride a series of slipstream "strings" – the unseen connections between all objects in the universe. These strings are in constant flux and form a tangled mess of intersections and divergent paths. Any time a pilot reaches a fork in the road, he has to guess which is the correct path to take to continue along toward his desired destination. Before the pilot makes that decision, both paths are simultaneously the correct and incorrect route, and it is the act of choosing a path that forces one to be correct and the other to be incorrect – if this made you think of Schrödinger's cat, that does seem to be the basis for this concept. A computer selects the "correct" path 50% of the time, but due to intuition, a human picks the correct path 99.9% of the time.
Is it plausible?
There are no mainstream scientific theories that support this idea of slipstream. Reading the “lore” of this means of FTL evokes fantastical interpretations of string theory, quantum entanglement, and other concepts in modern physics, but the ideas are supported only through their internal consistency rather than actual fact, much like a well-explained magic system that allows fictional wizards to cast spells.
5. Warp Drive
Popularized by Star Trek, a warp drive distorts space around a ship while leaving the ship itself inside a “bubble” of normal space. The space in front of the ship is contracted, while the space behind it is expanded, and the ship “rides” the distortion wave at FTL speeds. Technically, it is not the ship that is moving, but rather space itself, which is how we avoid breaking any laws of physics.
Imagine a surfer slowly paddling back to shore. When a wave comes, it will lower the water level in front of him and raise the water level behind him, and he can ride the downward slope all the way to shore. Relative to the wave, the surfer isn’t moving – he’s staying between the crest and the trough, and it is instead the wave that is moving.
Surfing doesn’t quite work like that, but it’s a simplification that we can all visualize. In a similar manner to how a wave will distort water to propel a surfer, a warp drive will distort space to propel a ship.
Is it plausible?
In 1994, the Alcubierre drive was proposed as a theoretical means of FTL travel and is based on a mathematical solution to equations within Einstein’s general theory of relativity. Just like a warp drive, the Alcubierre drive would contract space in front of a spaceship and expand space behind it.
NASA has been actively researching this technology since 2012, and the lead researcher even worked with a 3D artist to develop a model of what a warp-capable ship might look like. As far as real-life FTL goes, warp is the current front-runner to becoming reality.
As far as real-life FTL travel goes, the fictional favorites can be found in Star Trek and Stargate: the warp drive, and wormholes. Both are theoretically possible; however, both require further scientific breakthroughs before practical testing can begin. In either case, we need to discover “exotic matter” – hypothetical particles with negative mass – to get these mechanisms to work. “Element zero” from the Mass Effect series, the rare material that is essential to FTL travel in that universe, doesn’t quite fit the description, but the lore is at least scientifically sound in suggesting that some new, rare form of matter is required to make this technological leap.
The good news is that scientists don’t believe this is a matter of if, but rather when. There will be a time in the future when a stately, bald man in uniform will sit back in a command chair and relay the order, “Engage.”
Quantum Hoverboards on Superconducting Circuits
Building a quantum computer or quantum simulation device is a multidisciplinary undertaking that has driven a lot of cutting-edge research. But there is still a long way to go before a fully operational quantum machine becomes a reality. The basic recipe for achieving this goal may sound quite simple. First, identify a set of suitable quantum systems that can be well isolated from the environment to protect their “quantumness.” Second, assemble them together in a controlled and scalable way. The problem is, however, that in nature, isolation does not come along easily with control and scalability. Ge Yang from the University of Chicago, Illinois, and his colleagues have demonstrated a device that could potentially lead to robust yet controllable qubit architectures. In the new scheme, electrons floating on top of a superfluid-helium film (which could encode quantum bits) are combined with a high-quality superconducting circuit (which could enable the readout and control of the qubits).
Since atoms and molecules tend to either stick to solid surfaces or sink into a liquid, it might at first seem surprising that electrons could stably float on top of a liquid-helium film. This long-studied phenomenon arises from two competing effects. On the one hand, an effect known as Pauli blocking prevents two electrons from occupying the same quantum state. This makes the densely packed fluid of closed-shell helium atoms impenetrable for an additional incoming electron. On the other hand, the electrons are still attracted towards the helium by the “image charge” they induce, similarly to a charge attracted towards a metallic surface. The combination of the two effects results in a potential that traps the electrons and localizes them within a 2D sheet floating at a distance of a few nanometers above the helium film.
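For readers who want the potential itself, a minimal sketch can be written down (Gaussian units). The helium dielectric constant of roughly 1.057 and the roughly 1 eV barrier height are standard literature values, assumed here rather than taken from this article:

```latex
% Trapping potential for an electron at height z above the helium surface.
% V0 ~ 1 eV is the Pauli barrier that keeps the electron out of the liquid;
% eps ~ 1.057 is the dielectric constant of liquid helium (assumed values).
V(z) = -\frac{\Lambda e^{2}}{z} + V_{0}\,\Theta(-z),
\qquad
\Lambda = \frac{\varepsilon - 1}{4\,(\varepsilon + 1)}
```

The attractive 1/z tail supports a hydrogen-like ladder of bound states with binding energies around a millielectronvolt, which is why the electrons hover nanometers above the film rather than escaping or sinking.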
The key word here is “above,” meaning that the electrons are well separated from all the “dirt” (crystal impurities, phonons, nuclear spins, and the like) that usually quickly destroys electronic quantum coherence inside a solid. The record-high electron mobilities that have been measured for such electron “hoverboards” are direct evidence of their exceptional degree of isolation. One of the few remaining, yet small, sources of decoherence for the electron motion is the coupling of the electrons to tiny ripples on the helium surface (so-called ripplons). Theoretical studies suggest that such isolation from the environment would lead to quantum-coherence times of spin-superposition states exceeding hundreds of seconds. Using electrodes, the floating electrons can also be confined horizontally, and at sufficiently high densities they are predicted to self-organize and form a triangular Wigner crystal—a neat way to obtain a whole lattice of single-electron quantum systems.
Such a crystal of electrons on top of superfluid helium might sound like an ideal starting point for building quantum devices. However, a major obstacle is the lack of reliable techniques to detect the quantum state or even the presence of individual electrons. The floating electron gas cannot be easily accessed by direct electrical contacts or by optical means. Over 15 years have passed since the first ideas for exploiting liquid helium electrons in quantum computing were put forward [4, 5], but the experimental progress in this direction has been modest. Now, Yang and his colleagues have successfully demonstrated a new readout technique that allows fast and nondestructive detection of electrons on liquid helium thanks to their effect on a nearby high-quality-factor superconducting circuit. This could be just the missing ingredient needed to drive this field forward.
The authors confined the helium film and the surface electrons within a narrow gap between the ground and the center electrodes of a planar superconducting microwave resonator (see Fig. 1). Being superconducting, the resonator (which can be thought of as a centimeter-long planar version of a coaxial cable) can exhibit sharp electromagnetic resonances at GHz frequencies. These resonances depend very sensitively on the dielectric properties of the surrounding environment. Therefore, tiny changes in the electron configuration—in principle, as small as the addition or loss of a single electron—can be monitored in situ and nondestructively by looking at the drift of the resonance frequency. Over the past years, similar readout schemes have found widespread use for quantum state detection in superconducting quantum computation architectures.
Yang et al. have successfully applied such ideas to electrons on liquid helium by trapping the floating electrons in the vicinity of such a circuit, where their coupling to the electric field around the resonator is strongest. First, they used the readout technique to measure and adjust the thickness of the helium film with subnanometer resolution—an important parameter defining the trapping conditions. They then sprayed a bunch of electrons emitted from a tungsten filament onto the helium surface, generating a big jump in the circuit’s resonance frequency as these electrons got trapped. Finally, by expelling the electrons a fraction at a time with a negative voltage, they were able to determine the relationship between the number of trapped electrons and the shift of the resonance. A key figure of merit extracted from those measurements (performed with thousands of electrons) is the coupling strength per electron, which quantifies the maximal resonance shift that can be induced by the addition of one electron. Such coupling strength was found to exceed the linewidth of the resonance. This means that in a setup with smaller traps containing only a few electrons, the measurement resolution would be sufficient not only to count individual electrons but also to detect which quantized vibrational state they are in.
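To make the counting logic concrete, here is a minimal numerical sketch of this kind of dispersive readout. All the numbers (a 5 GHz resonance, a 200 kHz linewidth, a 300 kHz shift per electron) are invented for illustration; they are not the paper's measured values.

```python
# Toy dispersive readout: each trapped electron shifts the resonance by g.
# Electrons are individually countable when g exceeds the linewidth kappa.
f0 = 5.0e9      # bare resonance frequency in Hz (illustrative)
kappa = 200e3   # resonance linewidth in Hz (illustrative)
g = 300e3       # frequency shift per trapped electron in Hz (illustrative)

def electrons_from_shift(f_measured: float) -> float:
    """Infer the number of trapped electrons from the resonance drift."""
    return (f0 - f_measured) / g

f_measured = f0 - 3 * g  # simulate trapping three electrons
print(f"inferred electrons: {electrons_from_shift(f_measured):.1f}")
print(f"single-electron resolution: {g > kappa}")
```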
What’s next? To realize the full potential of the new scheme, researchers will now need to bring the hybrid systems into a regime in which the electron trapping frequency matches the circuit’s resonance. Under such conditions, a quantum superposition of two microwave photons can be converted into a quantum superposition of two vibrational states, and vice versa. The microwave resonator could then serve as a quantum “bus” that mediates interactions between distant electrons or interfaces the electrons with other quantum systems, like superconducting qubits. Beyond quantum computing applications, such control possibilities may help realize new quantum states of matter in an electron lattice whose constituents can be individually observed and controlled by quantum circuits.
This research is published in Physical Review X.
- G. Yang, A. Fragner, G. Koolstra, L. Ocola, D. A. Czaplewski, R. J. Schoelkopf, and D. I. Schuster, “Coupling an Ensemble of Electrons on Superfluid Helium to a Superconducting Circuit,” Phys. Rev. X 6, 011031 (2016).
- M. W. Cole and M. H. Cohen, “Image-Potential-Induced Surface Bands in Insulators,” Phys. Rev. Lett. 23, 1238 (1969).
- K. Shirahama, S. Ito, H. Suto, and K. Kono, “Surface Study of Liquid Using Surface State Electrons,” J. of Low Temp. Phys. 101, 439 (1995).
- P. M. Platzman and M. I. Dykman, “Quantum Computing with Electrons Floating on Liquid Helium,” Science 284, 1967 (1999).
- S. A. Lyon, “Spin-Based Quantum Computing Using Electrons on Liquid Helium,” Phys. Rev. A 74, 052338 (2006).
- C. C. Grimes and G. Adams, “Evidence for a Liquid-to-Crystal Phase Transition in a Classical, Two-Dimensional Sheet of Electrons,” Phys. Rev. Lett. 42, 795 (1979).
- R. J. Schoelkopf and S. M. Girvin, “Wiring up Quantum Systems,” Nature 451, 664 (2008).
- G. Papageorgiou, P. Glasson, K. Harrabi, V. Antonov, E. Collin, P. Fozooni, P. G. Frayne, M. J. Lea, D. G. Rees, and Y. Mukharsky, “Counting Individual Trapped Electrons on Liquid Helium,” Appl. Phys. Lett. 86, 153106 (2005).
- D. I. Schuster, A. Fragner, M. I. Dykman, S. A. Lyon, and R. J. Schoelkopf, “Proposal for Manipulating and Detecting Spin and Orbital States of Trapped Electrons on Helium Using Cavity Quantum Electrodynamics,” Phys. Rev. Lett. 105, 040503 (2010).
New Superconducting Current Found Traveling Along the Outer Edges of a Superconductor
For the first time, scientists at Princeton University believe that they have spotted a superconducting current travelling along the edge of a material without straying into the middle.
A discovery that has eluded physicists for decades has reportedly been detected for the first time in a laboratory at Princeton University.
A team of physicists at the university found that superconducting currents were flowing along the exterior edge of a superconducting material.
Superconducting Currents Detected Along the Exterior Edge of a Material
Nai Phuan Ong, the senior author of the team’s study, published in the journal Science on May 1, said, “Our motivating question was, what happens when the interior of the material is not an insulator but a superconductor? What novel features arise when superconductivity occurs in a topological material?”
To investigate superconductivity in topological materials, the team used molybdenum ditelluride, a crystalline material that has topological properties and becomes a superconductor below 100 millikelvin (−459 degrees Fahrenheit).
Normally, superconducting currents, where electricity flows without losing energy, would permeate an entire material. However, in a thin sheet of molybdenum ditelluride chilled to near absolute zero, the interior and edge make up two superconductors that are distinct from one another. In the material, the two superconductors are “basically ignoring each other,” added Ong.
The distinction between exterior and interior makes molybdenum ditelluride an example of a topological material. These materials exhibit behaviour that is closely tied to topology, a mathematical field, and can be used as topological insulators where electric currents can flow on the surface of a material but not the interior.
Topological insulators are crystals with an insulating interior and a conducting surface. In contrast to conducting materials, where electrons can hop from one atom to another, the electrons in an insulator cannot move. Topological insulators, however, allow electrons to move on their conducting surface.
Graphic illustrating superconductivity and its resistance to current flow. The jagged pattern in the diagram represents oscillation of the superconductivity which varies with the strength of an applied magnetic field. Image credited to Stephan Kim, Princeton University
Pushing the Superconducting State to Its Limit
Stephan Kim, a graduate student in electrical engineering, who conducted many of the project’s experiments, said, “Most of the experiments done so far have involved trying to ‘inject’ superconductivity into topological materials by putting the one material close to the other. What is different about our measurement is we did not inject superconductivity, and yet we were able to show the signatures of edge states.”
Initially, the team grew crystals in the lab and then cooled them down to a temperature where superconductivity occurs. They then applied a weak magnetic field to the crystal; as the field is increased, the current displays oscillations. In their experiment, Kim and colleagues gradually increased the magnetic field on the material and measured how much they could increase it before the superconducting state was lost, a value known as the ‘critical current.’
As the magnetic field grew, the critical current oscillated in a repeating pattern—a tell-tale sign of an edge superconductor. This oscillation is caused by the physics of superconductors in which electrons form Cooper pairs. The pairs act as a unified whole, all taking on the same quantum state or wave function.
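The periodicity itself is a fingerprint of fluxoid quantization: a superconducting loop (here, the edge circuit) admits magnetic flux only in units of the flux quantum h/2e, the 2e reflecting the Cooper pairing just described. Below is a minimal sketch of such a SQUID-like modulation, with an arbitrary 1-square-micron enclosed area and a simple |cos| form standing in for the real device physics.

```python
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum h/(2e), in webers

def critical_current(field_tesla: float, area_m2: float, ic0: float = 1e-6) -> float:
    """SQUID-like oscillation of the critical current with applied field."""
    flux = field_tesla * area_m2
    return ic0 * abs(math.cos(math.pi * flux / PHI0))

AREA = 1.0e-12  # enclosed area of 1 square micron (illustrative)
for step in range(6):
    b = step * 0.5e-3  # sweep the field in 0.5 mT steps
    print(f"B = {b * 1e3:4.1f} mT -> Ic = {critical_current(b, AREA) * 1e6:.3f} uA")
```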
What Could This Mean for Quantum Computing?
Molybdenum ditelluride is a metal-like compound known as a Weyl semimetal. Due to its unusual properties, scientists believe that it could host Majorana fermions, disturbances within a material that hold promise for better quantum computers. Computers based on quantum topology are expected to resist the jitter that can impair quantum calculations.
The next big challenge for scientists is to take these Majorana fermions and make them into qubits, or individual computational units, which would be a huge leap forward towards practical quantum computing.
Theoretically, a qubit would be made of combinations of pairs of Majorana fermions, each of which would be separated from its partner. If one member of the pair is disrupted by noise errors, the other should remain unaffected and thereby preserve the integrity of the qubit, enabling it to correctly carry out a computation.
The Difficulty with Developing Qubits
To date, semiconductor-based setups with Majorana fermions have been difficult to scale up. This is because a practical quantum computer requires thousands or millions of qubits, and these require growing very precise crystals of semiconducting material which are difficult to turn into high-quality superconductors. This is where topological insulators come in.
The first time a person hears of the Rutherford Model, it’s likely to cause a flurry of excited conversation.
For some, the name means something akin to the idea that you can build a computer with the power of atoms, a kind of super-computer that can do more than what the human mind can.
But for others, the model’s origins lie with a pair of scientists who are building the next-generation of supercomputers – a pair whose first major milestone is already well underway.
In a new paper published in the journal Nature Physics, the pair, Calvin Klein and Thomas Bohr, describe a way of modelling the human cortex using the atomic model.
As the name suggests, the researchers’ method is based on the work of Nobel laureate Calvin Klein, a physicist at MIT who first described the theory of quantum entanglement in 1959.
As part of his work, Klein showed how the atomic models could allow for much more detailed understanding of the structure of a human brain.
In the 1980s, a team led by James Oakes at the University of New South Wales (UNSW) used the model to build an atomic computer, which was used to test the properties of a computer chip.
The team used a version of the model called the CERN-EPSY-Klein model to run simulations of the structures of the cortex.
They then built a computer that could perform the task of calculating the position of atoms in the cortex and of measuring the electric charge of individual atoms.
Using this computer, the team was able to map the structures and dynamics of the cells of the cerebral cortex.
The model has become so widely used because of its remarkable properties.
But its success is not limited to neuroscience.
It also works in other areas of physics, including particle physics, and could be used to design computers and other devices that perform calculations in a variety of fields.
A big challenge for researchers in the field of quantum computing is to get the models to perform the work they require.
For instance, it may be impossible to build computers that perform computations that can’t be done by humans, or that cannot be performed by computers at scale.
In this sense, the models are a powerful tool.
But if you want to build something that can perform calculations that are computationally feasible on a human scale, the challenge becomes scaling.
A better way of getting them to perform
The scientists’ work on the Rutherford model is the result of decades of work by several groups around the world, including at the Universities of Washington, Princeton, and Oxford.
The groups’ initial effort involved using the Rutherford models to build quantum computers, which could perform calculations with a fraction of the power and accuracy of a conventional computer.
The first computer to perform calculations using the models was constructed by researchers at Princeton in 1989, and it was called the SDSS-A.
However, in the early 2000s, the two groups behind the Rutherford computers decided to move on to a new direction.
They began to build the model based on a different quantum field theory, which is known as the quantum field theories of relativity.
The work led to the design of the SDP-10 (for Spinozium-doped) model, which they also used to build their model of human brains.
But the researchers said they were also trying to make their work accessible to a wider audience.
“Our goal is to make the model accessible to the scientific community, so that the model can be built into any computing platform,” Klein said.
“The goal is also to use the model as a reference for building other models of the brain, for example, models of synaptic activity or of the dynamics of neurons.
And we hope that the models can be used as reference for the computation of the neural network models.”
The Rutherford models’ ability to simulate the structure and dynamics not only of neurons but also of synapses in the brain is important for understanding the brain’s underlying principles.
“What we are doing is going to build on what is known about synaptic connections, which are fundamental processes in the way that the brain works,” Klein explained.
“So the models of synapse structure can help us to understand how neurons and synapses work.”
The models have also been used to understand the evolution of the synapse, which Klein said was a key step towards understanding the structure-function relationship of the neocortex.
But it wasn’t until last year that they had a chance to build another model of synaptosomes, which serve as the core building blocks of neurons and are involved in the process of synapsis, the process by which new connections are established between neighbouring neurons.
“That was the first time we could build a model that has this level of detail,” Klein noted.
The SDP 10 model is one of a number of models currently being built using the theory.
The Rutherford model has been used by researchers in several
For most quantum computers, heat is the enemy. Heat creates error in the qubits that make a quantum computer tick, scuttling the operations the computer is carrying out. So quantum computers need to be kept very cold, just a tad above absolute zero.
“But to operate a computer, you need some interface with the non-quantum world,” says Jan Cranickx, a research scientist at imec. Today, that means a lot of bulky backend electronics that sit at room temperature. To make better quantum computers, scientists and engineers are looking to bring more of those electronics into the dilution refrigerator that houses the qubits themselves.
At December’s IEEE International Electron Devices Meeting (IEDM), researchers from more than a half dozen companies and universities presented new ways to run circuits at cryogenic temperatures. Here are three such efforts:
Google’s cryogenic control circuit could start shrinking quantum computers
Google’s cryo-CMOS integrated circuit, ready to control a single qubit. Photo: Google
At Google, researchers have developed a cryogenic integrated circuit for controlling the qubits, connecting them with other electronics. The Google team actually first unveiled their work back in 2019, but they’re continuing to scale up the technology, with an eye for building larger quantum computers.
This cryo-CMOS circuit isn’t much different from its room-temperature counterparts, says Joseph Bardin, a research scientist with Google Quantum AI and a professor at the University of Massachusetts, Amherst. But designing it isn’t so straightforward. Existing simulations and models of components aren’t tailored for cryogenic operation. Much of the researchers’ challenge comes in adapting those models for cold temperatures.
Google’s device operates at 4 kelvins inside the refrigerator, just slightly warmer than the qubits that are about 50 centimeters away. That could drastically shrink what are now room-sized racks of electronics. Bardin claims that their cryo-IC approach “could also eventually bring the cost of the control electronics way down.” Efficiently controlling quantum computers, he says, is crucial as they reach 100 qubits or more.
Cryogenic low-noise amplifiers make reading qubits easier
A key part of a quantum computer are the electronics to read out the qubits. On their own, those qubits emit weak RF signals. Enter the low-noise amplifier (LNA), which can boost those signals and make the qubits far easier to read. It’s not just quantum computers that benefit from cryogenic LNAs; radio telescopes and deep-space communications networks use them, too.
Researchers at Chalmers University of Technology in Gothenburg, Sweden, are among those trying to make cryo-LNAs. Their circuit uses high-electron-mobility transistors (HEMTs), which are especially useful for rapidly switching and amplifying current. The Chalmers researchers use transistors made from indium phosphide (InP), a familiar material for LNAs, though gallium arsenide is more common commercially. Jan Grahn, a professor at Chalmers University of Technology, states that InP HEMTs are ideal for the deep freeze, because the material does an even better job of conducting electrons at low temperatures than at room temperature.
Researchers have tinkered with InP HEMTs in LNAs for some time, but the Chalmers group are pushing their circuits to run at lower temperatures and to use less power than ever. Their devices operate as low as 4 kelvins, a temperature which makes them at home in the upper reaches of a quantum computer’s dilution refrigerator.
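The reason the first, coldest amplifier matters so much follows from the Friis formula for cascaded stages: every later stage's noise is divided by the gain in front of it. The sketch below uses generic textbook numbers (2 K noise and 40 dB gain for a cryo-LNA, 300 K noise for a room-temperature amplifier), not Chalmers' measured figures.

```python
def cascaded_noise_temperature(stages):
    """Friis formula: T_total = T1 + T2/G1 + T3/(G1*G2) + ..."""
    total, gain_so_far = 0.0, 1.0
    for noise_kelvin, linear_gain in stages:
        total += noise_kelvin / gain_so_far
        gain_so_far *= linear_gain
    return total

cryo_lna = (2.0, 10 ** (40 / 10))    # 2 K noise, 40 dB gain (illustrative)
room_amp = (300.0, 10 ** (30 / 10))  # room-temperature stage (illustrative)

print(cascaded_noise_temperature([cryo_lna, room_amp]))  # ~2.03 K
print(cascaded_noise_temperature([room_amp, cryo_lna]))  # ~300 K: order matters
```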
imec researchers are pruning those cables
Any image of a quantum computer is dominated by the byzantine cabling. Those cables connect the qubits to their control electronics, reading out the states of the qubits and feeding inputs back. Some of those cables can be weeded out by an RF multiplexer (RF MUX), a circuit which can control the signals to and from multiple qubits. And researchers at imec have developed an RF MUX that can join the qubits in the fridge.
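The payoff is easy to quantify with back-of-the-envelope counting; the 8-to-1 multiplexing factor below is an arbitrary illustration, not imec's specification.

```python
def readout_lines(n_qubits: int, mux_factor: int = 1) -> int:
    """Lines leaving the fridge, with ceiling division for partial groups."""
    return -(-n_qubits // mux_factor)

for n in (16, 128, 1024):
    print(f"{n:5d} qubits: {readout_lines(n):5d} lines unmultiplexed, "
          f"{readout_lines(n, 8):4d} with 8-to-1 multiplexing")
```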
Unlike many experimental cryogenic circuits, which work at 4 kelvins, imec’s RF MUX can operate down to millikelvins. Jan Cranickx says that getting an RF MUX to work at that temperature meant entering a world where the researchers and device physicists had no models to work from. He describes fabricating the device as a process of “trial and error,” of cooling components down to millikelvins and seeing how well they still work. “It’s totally unknown territory,” he says. “Nobody’s ever done that.”
This circuit sits right next to the qubits, deep in the cold heart of the dilution refrigerator. Further up and away, researchers can connect other devices, such as LNAs, and other control circuits. This setup could make it less necessary for each individual qubit to have its own complex readout circuit, and make it much easier to build complex quantum computers with much larger numbers of qubits—perhaps even thousands.
Pyramid pixels promise sharp pictures
By Kimberly Patch, Technology Research News
Pyramids may be the key to sharper, cheaper electronic displays.
The color flatscreens used in electronic devices like laptops, cellphones and miniature television screens are made up of many tiny red, green and blue light emitting diodes (LEDs) that produce the tiny dots, or pixels, of light that make up the picture. One focus of flatscreen research has been cramming more pixels on the screen, because this makes for a higher resolution picture.
Researchers from the University of California at Los Angeles have come up with a different angle on the problem. They have devised a way to coax light from three colored LEDs through a single, tiny plastic pyramid. Effectively, the three types of pixels are stacked into one space, tripling resolution in one fell swoop.
"We built the red, green and blue [LEDs] in a vertical structure," said Yang Yang, an associate professor at UCLA. "They mix the light to give you any color that you want, [and] they do not take the real estate" of separate LEDs, he said.
Because the pyramids mix light at the pixel level, a screen made with this technology will continue to produce a range of colors close up. In contrast, taking a magnifying glass to a conventional screen will reveal the separate red, blue, and green dots that give the illusion of many colors.
The pyramid pixel method may also prove cheaper than traditional flatscreens because it does not require shadow masking. Today's LED displays are manufactured using sheets of metal containing many tiny holes to guide the separate dots of red, green and blue organic materials as they are deposited on the screen. "The holes are so small it requires [a] very thin metal sheet for the shadow mask. It's not easy to fabricate a large [shadow mask and] it's not easy to maintain," said Yang.
In practice, the pyramid shape acts like its own shadow mask, shielding the different color LEDs from each other.
"Permanent shadow masks have been used ... but not in the way Yang has been using them here," said Mark Thompson, a chemistry professor at the University of Southern California. "I don't know that anybody else has looked at building structures and using those as sort of in situ shadow masks -- using the shadowing of the pyramid," Thompson said. "It's an interesting approach that could have a lot of interesting applications," he added.
Some of the pyramid pixel's potential advantages are also shared by a pixel stacking scheme under development by Universal Display Corp. Thompson contributed to the basic research behind that scheme, which literally stacks red, blue, and green elements like pancakes into one pixel using a standard manufacturing process that includes shadow masking. The stacked pixels emit mixed light that changes color as the ratio of currents in the three pixels is varied.
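A toy model shows the idea of tuning color through current ratios; real emitter spectra and efficiencies are more complicated, so treat this purely as an illustration.

```python
def mixed_color(i_red: float, i_green: float, i_blue: float):
    """Normalize three drive currents into an (R, G, B) fraction triple."""
    total = i_red + i_green + i_blue
    return tuple(round(i / total, 2) for i in (i_red, i_green, i_blue))

print(mixed_color(1.0, 1.0, 1.0))  # equal currents -> roughly white
print(mixed_color(2.0, 1.0, 0.0))  # red-heavy mix  -> orange-ish
```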
Like the pyramids, the stacked approach produces true color pixels that are effectively higher resolution and can be looked at closely without breaking up. The tricky part of the pancake pixel scheme was working out how to connect all the pixel elements, something Yang has not yet reported on, said Thompson.
In theory, the pyramid pixel displays could cost 30 percent less to manufacture than screens that use sheets of metal for shadow masking, Yang said. The manufacturing process for depositing the pyramid pixels has yet to be worked out, but it will be similar to a process used by a type of 3M film, Yang said.
Yang has implemented his scheme in a prototype pyramid about ten times the size needed. The next step is to shrink the prototype down to about 100 microns, he said.
According to Yang, the technology could be ready for practical use in about two years.
The pyramid pixel research was funded by UCLA and by a corporate partner who did not want to be named. Yang Yang's research partner was Shun-Chi Chang, also from UCLA. They published a technical paper on their research in the August 14, 2000 issue of Applied Physics Letters.
The research behind Universal Display's stacked pixel scheme was published in Science June 27, 1997 and Applied Physics Letters, November 11, 1996.
Timeline: 2 years
Funding: Corporate, University
TRN Categories: Semiconductors and Materials
Story Type: News
Related Elements: Photo 1, Photo 2; Technical paper, "Pyramid-Shaped Pixels for Full-Color Organic Emissive Displays," Applied Physics Letters, August 14, 2000; Technical paper "Three Color Tunable Organic Light Emitting Devices," Science, June 27, 1997; Technical paper "Color-tunable organic light-emitting devices," Applied Physics Letters November 11, 1996.
The applications running on the Internet today rely on a combination of symmetric and asymmetric encryption for security.
The asymmetric protocols are typically used for authentication and key establishment. Examples of such protocols include RSA, DSA, ECDSA, DH, and ECDH.
The security of these protocols relies on the assumption that it would take even the most powerful classical computers thousands of years to solve certain mathematical problems (e.g. factoring large numbers or computing a discrete logarithm).
Shor’s Algorithm: Challenging classical assumptions
The assumption that these protocols were difficult to crack was held with confidence until 1994, when MIT professor Peter Shor showed that a quantum computer could break the encryption with ease. Using Shor’s algorithm, a large-scale quantum computer can solve the mathematical problems underlying existing encryption protocols in minutes.
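The heart of Shor's algorithm is period finding; the surrounding arithmetic is entirely classical. The sketch below brute-forces the period r of a^x mod N (the one step a quantum computer performs exponentially faster) and then extracts factors, here splitting 15 with base 7.

```python
from math import gcd

def factor_via_order(n: int, a: int):
    """Shor's classical reduction: find the period r of a^x mod n, then
    read factors of n from gcd(a**(r//2) +/- 1, n)."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g  # lucky guess: a already shares a factor with n
    # Brute-force period finding -- the step a quantum computer speeds up.
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    if r % 2 == 1:
        return None  # odd period: retry with a different base a
    x = pow(a, r // 2, n)
    for candidate in (gcd(x - 1, n), gcd(x + 1, n)):
        if 1 < candidate < n:
            return candidate, n // candidate
    return None

print(factor_via_order(15, 7))  # period r = 4, factors (3, 5)
```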
Once a sufficiently large and reliable (fault-tolerant) quantum computer exists that can run Shor's algorithm, security as it is deployed on the internet today will be broken. The quantum computer will be able to decrypt all traffic without needing the keys.
How QKD works
Quantum key distribution (QKD) offers a solution to this problem by relying on the laws of quantum physics, rather than on the difficulty of mathematical problems, to distribute keys.
Two communicating parties can use QKD to agree on a secret key, which can then be used for standard encryption algorithms like AES. The secret key bits are encoded as quantum states into individual photons that are sent over optical fibers or across free space (e.g. satellites).
There are many different QKD protocols, each with their own pros and cons. But they all rely on a quantum phenomenon that is called the collapse of the wave function. If an attacker tries to steal the key by observing photons as they fly across the fiber, the laws of quantum physics dictate that this will inevitably cause the photons to change. These changes, and hence the presence of an attacker, can be detected. Once the presence of an attacker is detected, the key is not used since it is deemed unsafe.
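A toy simulation of a BB84-style exchange makes the detection principle concrete: an intercept-and-resend attacker who guesses measurement bases at random corrupts about 25% of the sifted key, which the two parties can spot by sacrificing and comparing a sample of bits. This is a pedagogical sketch, not a model of any commercial QKD product.

```python
import random

def bb84_qber(n_photons: int, eavesdrop: bool) -> float:
    """Sifted-key error rate of a toy BB84 run: ~0 ideally, ~0.25 with Eve."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)    # Alice's raw key bit
        basis = random.randint(0, 1)  # Alice's encoding basis
        photon_bit, photon_basis = bit, basis
        if eavesdrop:                 # Eve measures and re-sends the photon
            eve_basis = random.randint(0, 1)
            if eve_basis != photon_basis:  # wrong basis collapses the state
                photon_bit = random.randint(0, 1)
            photon_basis = eve_basis
        bob_basis = random.randint(0, 1)
        bob_bit = photon_bit if bob_basis == photon_basis else random.randint(0, 1)
        if bob_basis == basis:        # sifting: keep matching-basis rounds
            sifted += 1
            errors += bob_bit != bit
    return errors / sifted

random.seed(1)
print(f"error rate without Eve: {bb84_qber(20000, False):.3f}")  # ~0.000
print(f"error rate with Eve:    {bb84_qber(20000, True):.3f}")   # ~0.250
```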
QKD systems have been commercially available for several years now.
It can be mathematically proven that the QKD protocols are unbreakable by both classical and quantum computers.
Nevertheless, critics of QKD (which includes, notably, the NSA) point to the following challenges:
- Side-channel attacks: While QKD is provably secure from a theoretical point of view, several attack vectors have been discovered for actual QKD products. There are side-channel attacks, not because the theory is incorrect, but because the actual product implementations are sometimes flawed. As one concrete example, actual products often use weak coherent pulse lasers, which are cheaper, but which sometimes send multiple photons instead of a single photon as assumed by the security proof. This gives rise to the so-called photon number splitting (PNS) attack, where the attacker can observe the secret qubits without being detected. The European Telecommunications Standards Institute (ETSI) has published a list of known attacks.
- Complexity: The complexity of QKD protocols further increases vulnerabilities. In addition to processing qubits, the protocols require classical post-processing algorithms to analyze the statistics of the noise and detect the presence of an attacker. Each of these steps is highly complex, introducing additional security risks.
- Deployment: QKD requires new equipment to be deployed. The existing telco fibers can often be reused, but new quantum-enabled endpoints and relay stations need to be deployed.
- Authentication: QKD requires two parties to authenticate each other. There are several approaches, each with its own set of challenges. Pre-shared keys, refreshed with QKD-produced keys, can be used, but this is fragile. Existing protocols or post-quantum cryptography (PQC) can be used, but this of course loses some of the advantage of QKD. Luckily, authentication risk is not retroactive.
- Special purpose: Today’s implementations of QKD have generally used networks purpose-built to run QKD. As a classical analogy, the plain old telephone service (POTS) network at the end of the 20th century was a special-purpose network that only provided voice service. It has now been replaced by voice-over-IP (VOIP) which is just one of many services running over the general-purpose Internet.
QKD promises to secure internet communication with the laws of physics. Early QKD hardware is already commercially available. While current technology faces several challenges, methods have been proposed to bring practical quantum-secure communication into reality. For instance, Entanglement as a Service (EaaS) networks overcome a number of these challenges by distributing entanglement directly. In addition, EaaS networks support a broad range of quantum network applications, such as clustered quantum computing and quantum sensing.
To stay up to date about the latest developments in each of these network technologies, please sign up for the Aliro newsletter in the footer of this page. Please reach out to firstname.lastname@example.org if you have any questions or comments about this post.
Quantum engineers from UNSW Sydney have created artificial atoms in silicon chips that offer improved stability for quantum computing.
In a paper published today in Nature Communications, UNSW quantum computing researchers describe how they created artificial atoms in a silicon ‘quantum dot’, a tiny space in a quantum circuit where electrons are used as qubits (or quantum bits), the basic units of quantum information.
Scientia Professor Andrew Dzurak explains that unlike a real atom, an artificial atom has no nucleus, but it still has shells of electrons whizzing around the centre of the device, rather than around the atom’s nucleus.
“The idea of creating artificial atoms using electrons is not new; in fact, it was first proposed theoretically in the 1930s and then experimentally demonstrated in the 1990s—although not in silicon. We first made a rudimentary version of it in silicon back in 2013,” says Professor Dzurak, who is an ARC Laureate Fellow and is also director of the Australian National Fabrication Facility at UNSW, where the quantum dot device was manufactured.
“But what really excites us about our latest research is that artificial atoms with a higher number of electrons turn out to be much more robust qubits than previously thought possible, meaning they can be reliably used for calculations in quantum computers. This is significant because qubits based on just one electron can be very unreliable.”
Professor Dzurak likens the different types of artificial atoms his team has created to a kind of periodic table for quantum bits, which he says is apt given that 2019—when this ground-breaking work was carried out—was the International Year of the Periodic Table.
“If you think back to your high school science class, you may remember a dusty chart hanging on the wall that listed all the known elements in the order of how many electrons they had, starting with Hydrogen with one electron, Helium with two, Lithium with three and so on.
“You may even remember that as each atom gets heavier, with more and more electrons, they organise into different levels of orbit, known as ‘shells’.
“It turns out that when we create artificial atoms in our quantum circuits, they also have well organised and predictable shells of electrons, just like natural atoms in the periodic table do.”
Connect the dots
Professor Dzurak and his team from UNSW’s School of Electrical Engineering—including Ph.D. student Ross Leon, who is also lead author of the research, and Dr. Andre Saraiva—configured a quantum device in silicon to test the stability of electrons in artificial atoms.
They applied a voltage to the silicon via a metal surface ‘gate’ electrode to attract spare electrons from the silicon to form the quantum dot, an infinitesimally small space of only around 10 nanometres in diameter.
“As we slowly increased the voltage, we would draw in new electrons, one after another, to form an artificial atom in our quantum dot,” says Dr. Saraiva, who led the theoretical analysis of the results.
“In a real atom, you have a positive charge in the middle, being the nucleus, and then the negatively charged electrons are held around it in three dimensional orbits. In our case, rather than the positive nucleus, the positive charge comes from the gate electrode which is separated from the silicon by an insulating barrier of silicon oxide, and then the electrons are suspended underneath it, each orbiting around the centre of the quantum dot. But rather than forming a sphere, they are arranged flat, in a disc.”
Mr Leon, who ran the experiments, says the researchers were interested in what happened when an extra electron began to populate a new outer shell. In the periodic table, the elements with just one electron in their outer shells include Hydrogen and the metals Lithium, Sodium and Potassium.
“When we create the equivalent of Hydrogen, Lithium and Sodium in the quantum dot, we are basically able to use that lone electron on the outer shell as a qubit,” Ross says.
“Up until now, imperfections in silicon devices at the atomic level have disrupted the way qubits behave, leading to unreliable operation and errors. But it seems that the extra electrons in the inner shells act like a ‘primer’ on the imperfect surface of the quantum dot, smoothing things out and giving stability to the electron in the outer shell.”
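For an ideal circular dot, that shell structure can be enumerated. Assuming two-dimensional harmonic confinement plus silicon's twofold valley degeneracy (both idealizations on my part, not numbers from the paper), the closed shells land at 4, 12, and 24 electrons, so a lone outer electron appears at occupancies such as 5 and 13, the robust configurations mentioned below.

```python
def closed_shell_counts(max_shell: int, valley: int = 2):
    """Electron numbers that complete shells of an idealized circular dot.

    Shell n of a 2D harmonic oscillator has orbital degeneracy n + 1,
    doubled for spin and doubled again for silicon's valley degeneracy."""
    totals, running = [], 0
    for n in range(max_shell):
        running += 2 * valley * (n + 1)
        totals.append(running)
    return totals

shells = closed_shell_counts(3)
print("closed shells at:", shells)                          # [4, 12, 24]
print("lone-electron qubits at:", [s + 1 for s in shells])  # [5, 13, 25]
```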
Watch the spin
Achieving stability and control of electrons is a crucial step towards silicon-based quantum computers becoming a reality. Where a classical computer uses ‘bits’ of information represented by either a 0 or a 1, the qubits in a quantum computer can store values of 0 and 1 simultaneously. This enables a quantum computer to carry out calculations in parallel, rather than one after another as a conventional computer would. The data processing power of a quantum computer then increases exponentially with the number of qubits it has available.
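The exponential claim is just dimension counting: describing n qubits in general takes 2^n complex amplitudes, a number that outgrows any classical memory almost immediately.

```python
# The state vector of n qubits holds 2**n complex amplitudes.
for n in (10, 50, 300):
    print(f"{n:3d} qubits -> 2**{n} ~ {2 ** n:.3e} amplitudes")
```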
It is the spin of an electron that we use to encode the value of the qubit, explains Professor Dzurak.
“Spin is a quantum mechanical property. An electron acts like a tiny magnet and depending on which way it spins its north pole can either point up or down, corresponding to a 1 or a 0.
“When the electrons in either a real atom, or our artificial atoms, form a complete shell, they align their poles in opposite directions so that the total spin of the system is zero, making them useless as a qubit. But when we add one more electron to start a new shell, this extra electron has a spin that we can now use as a qubit again.
“Our new work shows that we can control the spin of electrons in the outer shells of these artificial atoms to give us reliable and stable qubits.
“This is really important because it means we can now work with much less fragile qubits. One electron is a very fragile thing. However an artificial atom with 5 electrons, or 13 electrons, is much more robust.”
The silicon advantage
Professor Dzurak’s group was the first in the world to demonstrate quantum logic between two qubits in silicon devices in 2015, and has also published a design for a full-scale quantum computer chip architecture based on CMOS technology, which is the same technology used to manufacture all modern-day computer chips.
“By using silicon CMOS technology we can significantly reduce the development time of quantum computers with the millions of qubits that will be needed to solve problems of global significance, such as the design of new medicines, or new chemical catalysts to reduce energy consumption”, says Professor Dzurak.
In a continuation of this latest breakthrough, the group will explore how the rules of chemical bonding apply to these new artificial atoms, to create ‘artificial molecules’. These will be used to create improved multi-qubit logic gates needed for the realisation of a large-scale silicon quantum computer.
Nature Communications (2020). DOI: 10.1038/s41467-019-14053-w
New research has demonstrated that a triple stack of graphene sheets twisted at a very specific angle could demonstrate superconductivity that survives exposure to intense magnetic fields. The study was published in the journal Nature.
Superconductors - substances capable of conducting electricity without resistance - are poised to form the foundation of future technological and electronic advances, particularly in quantum computing.
While traditional conductors gradually lose resistance as they get colder - allowing progressively more electrons to flow - superconductors have a ‘critical temperature’ at which resistance is lost completely, allowing the free flow of electrons.
The fact that most materials capable of becoming superconductors only do so at very low temperatures has made close-to-room temperature superconductors the ‘holy grail’ of the materials science field.
This superconducting behavior is something that can be seen in graphene — single layers of carbon atoms in a hexagonal arrangement. When these atom-thin sheets of graphene are double stacked and a little twist is applied, they begin to act as a superconductor — though at a few kelvin, still far below room temperature.
High temperatures are not the only thing that ‘turn-off’ superconductivity in a material. Exposure to a high magnetic field can also knock a superconductor into a regular conductive state. This has posed a challenge to developers of magnetic resonance imaging (MRI) devices, machines that rely on both superconductivity and intense magnetic fields.
The Twist Between Superconductivity and Graphene Relationship
Physicists from the Massachusetts Institute of Technology have found that bilayer graphene is not the end of the story: adding a third layer and applying a very specific twist angle — roughly 1.57°, the trilayer ‘magic angle’ — seems to allow superconductivity to be retained even in strong magnetic fields¹.
The team, led by Pablo Jarillo-Herrero, a physics professor at MIT, discovered that when a trilayer of graphene is twisted in this way, it seems to exhibit superconductivity in magnetic fields with a flux density as high as 10 tesla. This is three times greater than the material could endure if it were a standard superconductor.
What the researchers believe they are seeing is a rare form of superconductivity called spin-triplet superconductivity.
“The value of this experiment is what it teaches us about fundamental superconductivity, about how materials can behave,” says Jarillo-Herrero. “So with those lessons learned, we can try to design principles for other materials which would be easier to manufacture, that could perhaps give you better superconductivity.”
What Makes a Conductor ‘Super’?
One of the most striking demonstrations of how superconductors work can be seen by placing an ordinary magnet above such a material while it is cooled with liquid nitrogen. The magnet ‘levitates’ in place above the superconductor during this experiment. Whereas a magnet moving past a normal conductor induces ordinary currents in the conductor via electromagnetic induction, a superconductor ‘pushes’ the magnetic field out entirely by setting up surface currents. Instead of allowing the magnetic field to pass through it — that passage being measured as magnetic flux — the superconductor acts as a faux-magnet of opposite polarity, repelling the ‘real’ magnet — a phenomenon called the Meissner effect.
The key to explaining superconductivity lies in understanding how electrons behave in materials at extremely low temperatures. Thermal energy randomly vibrates atoms in a material, and the higher the temperature, the faster the atoms vibrate.
At high temperatures, electrons — which all have the same negative charge — repel each other and act as free particles. Yet, there is still a tiny attraction between electrons in solids and liquids, and at low temperatures, electrons group together into what is known as Cooper pairs.
In Cooper pairs — named after American physicist Leon Cooper, who first described this pairing phenomenon in the mid-1950s — the electrons have opposite spins. Spin is a quantum mechanical quantity that describes how a particle will behave when exposed to a magnetic field. One electron possesses spin ‘up’ and the other has spin ‘down.’ This state is described as a spin-singlet.
Cooper pairs travel unimpeded through a superconductor until they are exposed to a strong magnetic field. The electrons are then pulled in opposite directions, ripping the Cooper pairing apart.
Magnetic fields, therefore, destroy superconductivity. This is at least the case for spin-singlet superconductors. For exotic superconductors such as spin-triplet superconductors, the situation can be quite different.
More ‘Super’ Superconductors
In some exotic superconductors, electrons pair up with the same spin rather than opposite spins — or so-called spin-triplet pairs.
Spin describes how a particle behaves in a magnetic field. Particles of opposite spin move in opposite directions. However, if these electrons have the same spin, the Cooper pairing is not destroyed. Superconductivity is then preserved, even in extremely strong magnetic fields.
What Jarillo-Herrero and his team — already known for their pioneering work with the electronic properties of twisted graphene — wanted to discover was whether magic-angle trilayer graphene may display signs of spin-triplet superconductivity.
The researchers previously observed signs of this phenomenon in magic-angle bilayer graphene, but their new study showed that the effect is much stronger when an extra layer is added, with superconductivity retained at higher temperatures.
Surprisingly, trilayer graphene retained superconductivity in strong magnetic fields that would have wiped it out in its bilayer counterpart. To test this, the researchers exposed the magic-angle trilayer graphene to magnetic fields of increasing strengths. They found that superconductivity disappeared at a specific strength, but the graphene regained superconductivity at high field strengths.
This behavior is not seen in conventional spin-singlet superconductors.
The reintroduced superconductivity lasted in the magic-angle trilayer graphene up to a magnetic flux of 10 Tesla, but this was the maximum flux the team’s magnet could achieve. This means that this resurrected superconductivity could actually persist in even stronger fields.
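There is a standard benchmark for how much field a spin-singlet superconductor should endure: the Pauli, or Clogston-Chandrasekhar, limit of roughly 1.86 tesla per kelvin of critical temperature. The critical temperature of 2.9 K below is an assumed, ballpark figure for magic-angle trilayer graphene, used only to show that 10 T sits well beyond the limit.

```python
def pauli_limit_tesla(tc_kelvin: float) -> float:
    """Clogston-Chandrasekhar limit for a weak-coupling spin-singlet
    superconductor: B_P [tesla] ~ 1.86 * Tc [kelvin]."""
    return 1.86 * tc_kelvin

tc = 2.9  # assumed critical temperature in kelvin (illustrative)
limit = pauli_limit_tesla(tc)
print(f"Pauli limit ~ {limit:.1f} T; a field of 10 T clearly exceeds it")
```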
The conclusion reached by the team: magic-angle trilayer graphene is not a run-of-the-mill superconductor.
“In spin-singlet superconductors, if you kill superconductivity, it never comes back — it’s gone for good,” says MIT postdoctoral researcher Yuan Cao. “Here, it reappeared again. So this definitely says this material is not spin-singlet.”
The question is: what exactly is the spin-state demonstrated by the material? This is something the team will now attempt to further investigate. Even with this question yet unanswered, we can still predict the kinds of applications that would benefit from this boosted resistance to magnetic fields.
Applications of Magic-Angle Trilayer Graphene Superconductors
The fact that this type of superconductor can resist high magnetic fields makes it incredibly useful across a range of applications; in particular, magnetic resonance imaging (MRI), which uses superconducting wires under intense magnetic fields to image biological tissues.
MRI devices currently in operation are limited to magnetic flux densities of no more than about 3 tesla, so if magic-angle trilayer graphene does display spin-triplet superconductivity, it could be used in such machines to boost their resistance to magnetic flux. The net result should be MRIs that can produce sharper and deeper images of human tissues.
Magic-angle trilayer graphene could be used in quantum computers to provide more resistant superconductors and much more powerful machines.
“Regular quantum computing is super fragile. You look at it and, poof, it disappears,” says Jarillo-Herrero. “About 20 years ago, theorists proposed a type of topological superconductivity that, if realized in any material, could enable a quantum computer where states responsible for computation are very robust.”
This results in a quantum computer with computing power that far exceeds anything currently available. However, the team does not yet know if the exotic superconductivity they have found in the magic-angle trilayer graphene is the right type to facilitate this computing boost.
“The key ingredient to realizing that would be spin-triplet superconductors, of a certain type. We have no idea if our type is of that type,” concludes Jarillo-Herrero. “But even if it’s not, this could make it easier to put trilayer graphene with other materials to engineer that kind of superconductivity.
“That could be a major breakthrough. But it’s still super early.”
References and Further Reading
¹ Jarillo-Herrero, P., Cao, Y., Park, J. M., et al., Pauli-limit violation and re-entrant superconductivity in moiré graphene. Nature. https://doi.org/10.1038/s41586-021-03685-y
While the scientific community holds its breath for a large-scale quantum computer that could carry out useful calculations, a team of IBM researchers has approached the problem with an entirely different vision: to achieve more and better results right now, even with the limited quantum resources that exist today.
By tweaking their method, the scientists successfully simulated some molecules with a higher degree of accuracy than before, with no need for more qubits. The researchers effectively managed to pack more information into the mathematical functions that were used to carry out the simulation, meaning that the outcome of the process was far more precise, and yet came at no extra computational cost.
"We demonstrate that the properties for paradigmatic molecules such as hydrogen fluoride (HF) can be calculated with a higher degree of accuracy on today's small quantum computers," said the researchers, at the same time priding themselves on helping quantum computers "punch above their weight".
Car manufacturer Daimler, a long-term quantum research partner of IBM's, has shown a strong interest in the results, which could go a long way in developing higher-performing, longer-lasting and less expensive batteries.
Since 2015, Daimler has been working on upgrading lithium-ion batteries to lithium-sulfur ones – a non-toxic and easily available material that would increase the capacity and speed-of-charging of electric vehicles.
Designing a battery based on new materials requires an exact understanding of which compounds should come together and how. The process involves accurately describing all the characteristics of all the molecules that make up the compound, as well as the particles that make up these molecules, to simulate how the compound will react in many different environments. In other words, it is an incredibly data-heavy job, with infinite molecular combinations to test before the right one is found.
The classical methods that exist today fail to render these simulations with the precision that is required for a breakthrough such as the one Daimler is working towards. "This is a big problem to develop next-generation batteries," Heike Riel, IBM Research quantum lead, told ZDNet. "Classical computers, and the models we've developed in physics and chemistry for many years, still cannot solve those problems."
But the task could be performed at speed by quantum computers. Qubits, and their ability to encode different information at the same time, enable quantum algorithms to run several calculations at once – and are expected, one day, to enable quantum computers to tackle problems that are seemingly impossible, in a matter of minutes.
To do that, physicists need quantum computers that support many qubits; but scaling qubits is no piece of cake. Most quantum computers, including IBM's, work with less than 100 qubits, which is nowhere near enough to simulate the complex molecules that are needed for breakthroughs, such as lithium-sulfur car batteries.
Some of the properties of these molecules are typically represented in computer simulations with a mathematical function called a Hamiltonian, which describes the system's energy in terms of the particles' spatial functions, also called orbitals. In other words, the larger the molecule, the more orbitals there are, and the more qubits and quantum operations will be needed.
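To get a feel for the scaling, here is a rough back-of-the-envelope sketch in Python. It assumes a one-qubit-per-spin-orbital encoding (typical of mappings such as Jordan-Wigner) and minimal-basis numbers for hydrogen fluoride; both are illustrative assumptions, not details taken from the IBM paper:

```python
from math import comb

def classical_configurations(spin_orbitals: int, electrons: int) -> int:
    """Electronic configurations an exact classical simulation must track."""
    return comb(spin_orbitals, electrons)

# Hydrogen fluoride in a minimal basis: ~12 spin orbitals, 10 electrons.
# A one-qubit-per-spin-orbital encoding needs ~12 qubits for the same job.
for n, k in [(12, 10), (20, 10), (50, 25), (100, 50)]:
    print(f"{n} spin orbitals / {k} electrons: "
          f"{classical_configurations(n, k):,} configurations, ~{n} qubits")
```

The configuration count explodes combinatorially while the qubit count grows only linearly with the number of orbitals, which is why squeezing more chemistry into a fixed number of qubits, as the transcorrelated Hamiltonian does, matters so much.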
"We currently can't represent enough orbitals in our simulations on quantum hardware to correlate the electrons found in complex molecules in the real world," said IBM's team.
Instead of waiting for a larger quantum computer that could take in weighty calculations, the researchers decided to see what they could do with the technology as it stands. To compensate for resource limitations, the team created a so-called "transcorrelated" Hamiltonian – one that was transformed to contain additional information about the behavior of electrons in a particular molecule.
This information, which concerns the propensity of negatively charged electrons to repel each other, cannot usually fit on existing quantum computers, because it requires too much extra computation. By incorporating the behavior of electrons directly into a Hamiltonian, the researchers, therefore, increased the accuracy of the simulation, yet didn't create the need for more qubits.
The method is a new step towards calculating materials' properties with accuracy on a quantum computer, despite the limited resources available to date. "The more orbitals you can simulate, the closer you can get to reproducing the results of an actual experiment," said the scientists. "Better modelling and simulations will ultimately result in the prediction of new materials with specific properties of interest."
IBM's findings might therefore accelerate the timeline for quantum applications, with new use cases emerging even while quantum computers work with few qubits. According to the researchers, companies like Daimler are already keen to find out more about the breakthrough.
This is unlikely to shift IBM's focus away from expanding the scale of its quantum computers. The company recently unveiled a roadmap to a million-qubit system, and said that it expects a fault-tolerant quantum computer to be an achievable goal within the next ten years. According to Riel, quantum simulation is likely to be one of the first applications of the technology to witness real-world impacts.
"The car batteries are a good example of this," she said. "Soon, the number of qubits will be enough to generate valuable insights with which you can develop new materials. We'll see quantum advantage soon in the area of quantum simulation and new materials."
IBM's roadmap announces that the company will reach 1,000 qubits in 2023, which could mark the start of early value creation in pharmaceuticals and chemicals, thanks to the simulation of small molecules.
You’re a what?
Quantum computer research scientist
Imagine typing a very complex query into your computer and having to wait more than a lifetime for results. Thanks to scientists like Davide Venturelli, supercomputers of the future could return those results in a fraction of a second.
Davide is a quantum computer research scientist for the Universities Space Research Association. Quantum theory explains how matter acts at the tiniest levels; in applying it to computing, researchers study ways in which that behavior can advance processing power. “We explore how to control these quantum behaviors, to make them happen on demand, in order to crunch numbers and process information,” he says. “We’re pushing the boundaries of what is known in computer science.”
What they do
Quantum computer research scientists help to solve problems. In their research, they make scientific assumptions based on quantum theory and then conduct experiments to test whether their solutions work.
These scientists may be involved in a variety of projects but often focus on a specific goal. Davide focuses on finding new ways of applying quantum theory to improve how computers solve optimization problems—that is, problems for finding the best of all possible solutions. Digital computers, which are most common today, process information using variables with 1 value (either 0 or 1) at a time. Quantum computers can use both values simultaneously, which results in faster processing. “We know that quantum computers are more powerful than digital computers,” he says, “but we don’t know by how much yet.”
Research. In studying information technology, quantum computer research scientists think about possibilities. For example, Davide asks questions in his research such as, “What is the fastest possible way we can make computers process information?”
Davide and other research scientists use their understanding of quantum theory to come up with solutions. Their research may lead to problem-solving computer processes that calculate and sort information much faster. For example, research scientists might develop a theoretical solution that can be run only on quantum computers designed to produce better weather forecasts.
Experiments. To test whether their theories work, quantum computer research scientists may conduct experiments or work with experimental physicists. For example, they may create a quantum environment with computer hardware, then test how particles in that environment react to different levels of laser intensity. Experiments that verify a theory may lead to improvements, such as more efficient computer design and faster, more secure communication for computer networks.
But relying on theory means that scientists work with incomplete information—so they’re sometimes surprised at the outcomes. “Experiments may result in the opposite of what you expect,” says Davide, “and you analyze the data to try to figure out why.”
Other job duties. Research scientists may write articles about their findings for academic journals or devise ways to apply their research to advance their employer's goals. Some research scientists have other responsibilities. For example, Davide also manages external research collaborations for NASA's Quantum Artificial Intelligence Laboratory.
How to become one
To become a quantum computer research scientist, you usually need a doctoral degree (Ph.D.). But you need some qualities and skills in addition to the formal credential.
Qualities and skills. As researchers, quantum computer research scientists should enjoy being part of a team and sharing their findings with others, which may include engineers, mathematicians, physicists, and Ph.D. students. This collaboration helps bring varied perspectives to solving a problem. “There’s a cross-utilization of ideas when you work with different groups,” Davide says. “My colleagues are very smart and open-minded people.”
Like many scientists, quantum computer research scientists must have strong analytical, critical thinking, and reasoning skills to solve complex problems. Attention to detail is critical as scientists precisely record their theories and experiments, which must be reproducible and able to withstand peer review.
Communication skills are also important. To share their research with collaborators or the public, quantum research scientists must be able to write papers and present their findings at conferences. They may also need to write proposals for grants to fund research projects.
Education. Quantum computer research scientists usually need a Ph.D. to learn methods of discovery and to develop the tools needed for researching. Coursework in undergraduate and graduate degree programs typically includes computer science, mathematics, and physics.
You may decide to pursue a master’s degree with classes in quantum computing before entering a Ph.D. program. Davide studied physics at the bachelor’s and master’s levels, but he was passionate about computers, too. Not surprisingly, quantum computing piqued his interest. “It’s a wonderful interaction between the two disciplines,” he says. Davide earned his Ph.D. in nanophysics and numerical simulations of condensed matter.
What to expect
The U.S. Bureau of Labor Statistics (BLS) does not collect data specifically on quantum computer research scientists. Instead, BLS may count these workers among physicists, of which 15,650 were employed in May 2015. The median annual wage for physicists in colleges, universities, and professional schools—where most quantum computer research scientists are likely to work—was $63,840. That’s more than the median annual wage of $36,200 for all workers.
Quantum computer research scientists work primarily indoors, in academic settings, and may travel frequently to attend seminars or conferences. Area of focus or project type may dictate specific details of their work. For example, testing particularly intricate theories may take days or months, working either independently or with other scientists.
Whether alone or with colleagues, Davide enjoys his work for the independence his job offers. “You have lots of intellectual freedom. Nobody really tells you what to do,” he says. “It’s up to your skills and vision.”
About the Author
Domingo Angeles is an economist in the Office of Occupational Statistics and Employment Projections, BLS. He can be reached at (202) 691-5475.
Domingo Angeles, "Quantum computer research scientist," Career Outlook, U.S. Bureau of Labor Statistics, July 2016.
In this new post on quantum computing, we are going to talk about quantum teleportation. Teleportation. OMG. Yes, I can see those stars sparkling in your geeky eyes …
So … first, let's clarify a few things, shall we?
Quantum teleportation is a communications protocol that transfers the quantum state of a system to another spatially separated system. For this, it takes advantage of quantum entanglement. Contrary to what the name suggests, it is not a matter of transferring matter (or energy).
Quantum teleportation is defined as a process by which a quantum bit can be transmitted from one location to another, without the quantum bit actually being transmitted through space
So, although the name is inspired by the teleportation commonly used in fiction, quantum teleportation is not a form of transportation, but a form of communication.
Please stay. I won’t tell Captain Kirk. And Mr Spock would be so proud of you he could smile.
The seminal paper first expounding the idea of quantum teleportation was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters in 1993.
Quantum teleportation requirements are:
- Two locations A and B (Alice and Bob),
- The ability to create an entangled EPR pair
- A conventional communication channel, used to carry classical bits,
- A quantum channel, used to send qubits from the entangled pair to A and B
The protocol goes as follows:
- The teleportation protocol begins with a quantum state $|\psi\rangle$ in Alice's possession. The purpose of the teleportation is to convey $|\psi\rangle$ from Alice to Bob.
Generally speaking, this qubit can be written as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$.
- Next, the protocol requires that Alice and Bob share a maximally entangled state. It can be any one of the four Bell states (it doesn’t matter which one).
Let's say that Alice and Bob mutually agreed on $|\Phi^+\rangle = \frac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle)$.
- Now, let's assume that Alice and Bob are sharing the state $|\Phi^+\rangle$.
- Through the quantum channel, Alice obtains one of the particles in the pair, with the other going to Bob. This can be implemented, for example, by preparing the particles together and shooting them to Alice and Bob from a common source.
- At this point:
- Alice has two particles: $|\psi\rangle$ (the one that she wants to teleport), and one of the entangled pair (let's call it A)
- Bob has one particle: the other part of the entangled pair (let’s call it B).
- At the global system level, the state is described by the three-particle quantum state: $|\psi\rangle \otimes |\Phi^+\rangle = \frac{1}{\sqrt{2}}\,\big(\alpha|0\rangle + \beta|1\rangle\big) \otimes \big(|00\rangle + |11\rangle\big)$
- Alice makes a local measurement on the two particles in her possession, projecting them onto the Bell basis. Alice's two particles are now entangled with each other, and the result of her (local) measurement is that the three-particle state collapses to one of four possible states. The entanglement originally shared between Alice's and Bob's particles is now broken.
Experimentally, this measurement may be achieved via a series of laser pulses directed at the two particles.
- Following the local measurement, Bob's particle now takes on one of four possible superpositions. With a little math (expressing the quantum states in terms of the four Bell states as a basis), it is easy to show that these possible states are unitary images of the qubit to be teleported.
- The result of Alice’s Bell measurement tells her which of the above four states the system is in. Alice then sends her result to Bob through a classical channel (using two classical bits to communicate which of the four results she obtained). This is the only potentially time-consuming step, due to speed-of-light considerations.
- After Bob receives the message from Alice, he knows which of the four states his particle is in. Using this information, he performs the matching unitary operation to recover the state $|\psi\rangle$ on his particle. Teleportation is thus achieved.
Please note that:
- After this operation, Bob's qubit will take on the state $|\psi\rangle$, and Alice's qubit becomes an (undefined) part of an entangled state. Teleportation does not result in the copying of qubits, and hence is consistent with the no-cloning theorem.
- There is no transfer of matter or energy involved. Alice’s particle has not been physically moved to Bob: only its state has been transferred.
- Every time a qubit is teleported, Alice needs to send Bob two bits of information through the classical communication channel. These two classical bits do not carry complete information about the qubit being teleported.
If Eve (an eavesdropper) intercepts these two bits, she may know exactly what Bob needs to do in order to recover the desired state. However, this information is useless if she cannot interact with the entangled particle in Bob’s possession.
It is possible to express the previous quantum teleportation protocol in terms of quantum circuits. The change of basis (from the standard product basis into the Bell basis) is a unitary transformation that can be written using quantum gates: a CNOT gate from the message qubit onto Alice's half of the entangled pair, followed by a Hadamard gate on the message qubit; measuring both qubits then yields the two classical bits that control the X and Z corrections on Bob's qubit.
This quantum circuit will be used in the next paragraph, as the basis of a little quantum algorithm experimentation with the Q# language and its Quantum Development Kit.
Experimenting with Q#
We introduced the basic concepts of Q# in our last post on Bell states. Basic quantum teleportation code is provided as a sample in the Quantum Development Kit. Let's have fun and follow it!
Let’s start with the non-quantum part of the program, used to invoke the quantum simulator, feed it with input data and read outputs from it:
It is rather simple:
- it invokes the quantum simulator,
- it tries 8 quantum teleportations,
- for each round, it randomly chooses the boolean value (either “true” or “false”) to be sent, then checks if this value has been properly teleported.
Now comes the quantum part of the program. As we have seen earlier on, two communication channels are required:
- A classical channel (for classical data)
- A quantum channel (for the entangled EPR pair)
The implementation follows the circuit for quantum teleportation introduced in the previous paragraph.
The code for the classical channel prepares a message qubit in the desired state, hands it to the teleportation operation, and measures Bob's qubit to read the result.
The code for the quantum channel allocates the entangled EPR pair, performs Alice's Bell-basis measurement (CNOT, then Hadamard), sends the two classical bits, and applies Bob's conditional X and Z corrections.
The outputs from the classical part of the program report, for each of the 8 rounds, whether the received value matched the one that was sent.
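The original post shows the Q# listings as screenshots, which are not reproduced here. As a stand-in, here is a self-contained Python/NumPy sketch of the same eight-round experiment. It follows the circuit described above (CNOT, Hadamard, measurement, conditional X/Z corrections), but the structure and names are mine rather than the QDK sample's:

```python
import numpy as np

ZERO = np.array([1, 0], dtype=complex)
ONE = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def teleport(alpha, beta, rng):
    """Teleport alpha|0> + beta|1> and return Bob's final one-qubit state."""
    # Qubit order: message, Alice's half (A), Bob's half (B).
    bell = (np.kron(ZERO, ZERO) + np.kron(ONE, ONE)) / np.sqrt(2)
    state = np.kron(alpha * ZERO + beta * ONE, bell)

    # Alice's Bell measurement: CNOT(message -> A), then H on the message.
    state = np.kron(CNOT, I2) @ state
    state = np.kron(H, np.kron(I2, I2)) @ state

    # Sample the two classical bits (m1, m2) from the Born rule.
    probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2).ravel()
    outcome = rng.choice(4, p=probs)
    m1, m2 = outcome >> 1, outcome & 1

    # Collapse to Bob's conditional state and renormalize.
    bob = state.reshape(2, 2, 2)[m1, m2, :]
    bob = bob / np.linalg.norm(bob)

    # Bob's corrections, steered by the two classical bits he received.
    if m2:
        bob = X @ bob
    if m1:
        bob = Z @ bob
    return bob

rng = np.random.default_rng(42)
for round_no in range(8):
    sent = bool(rng.integers(2))                 # random classical bit to send
    alpha, beta = ((0, 1) if sent else (1, 0))   # encode it as |1> or |0>
    received = abs(teleport(alpha, beta, rng)[1]) ** 2 > 0.5
    print(f"Round {round_no}: sent {sent}, received {bool(received)}")
```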
Great! We have successfully teleported 8 boolean values 🙂
Importance of quantum teleportation
Since 1993, many real-life quantum teleportation experiments have been carried out. The first verifications came as early as 1998, and the record distance for quantum teleportation has gradually increased since.
On 26 February 2015, scientists at the University of Science and Technology of China in Hefei, led by Chao-yang Lu and Jian-Wei Pan, carried out the first experiment teleporting multiple degrees of freedom of a quantum particle. Later on (2017), the team achieved the first quantum teleportation from Earth to a satellite, while their counterparts in Japan were the first to use a microsatellite for quantum communications.
Masahide Sasaki and colleagues at the National Institute of Information and Communications Technology in Japan demonstrated in late 2017 that they were able to receive and process the information at a ground station in Japan using a quantum key distribution (QKD) protocol. QKD uses principles of quantum mechanics to ensure that two parties can share an encryption key secure in the knowledge that it has not been intercepted by a third party.
Quantum teleportation is a very active subject of research, with concrete applications to telecommunications and encryption.
Note: to speed up the writing of this post, a few paragraphs and illustrations are based on Wikipedia's entries on quantum teleportation. Q# code is based on the MS Quantum Development Kit documentation and samples.
Eindhoven – August 30, 2021
While the word “quantum” has only started trending in the technology space during the last decade, many past technologies already relied on our understanding of the quantum world, from lasers to MRI imaging, electronic transistors, and nuclear power. The reason quantum has become so popular lately is that researchers have become increasingly better at manipulating individual quantum particles (light photons, electrons, atoms) in ways that weren’t possible before. These advances allow us to harness more explicitly the unique and weird properties of the quantum world. They could launch yet another quantum technology revolution in areas like sensing, computation, and communication.
What’s a Quantum Computer?
The power of quantum computers comes chiefly from the superposition principle. A classical bit can only be in a 0 or 1 state, while a quantum bit (qubit) can exist in several 0 and 1 state combinations. When one measures and observes the qubit, it will collapse into just one of these combinations. Each combination has a specific probability of occurring when the qubit collapses.
While two classical bits can only exist in one out of four combinations at a time, two quantum bits can exist in all these combinations simultaneously before being observed. Therefore, qubits can hold more information than classical bits, and the amount of information they can hold grows exponentially with each additional qubit. Twenty qubits can already hold a million values simultaneously (2^20), and 300 qubits can represent more states than there are particles in the universe (2^300).
However, to harness this potential processing power, we must understand that probabilities in quantum mechanics do not work like conventional probabilities. The probability we learned about in school allowed only for numbers between 0 and 1. On the other hand, probabilities in quantum mechanics behave as waves with amplitudes that can be positive or negative. And just like waves, quantum probabilities can interfere, reinforcing each other or cancelling each other out.
Quantum computers solve computational problems by harnessing such interference. The quantum algorithm choreographs a pattern of interference where the combinations leading to a wrong answer cancel each other out. In contrast, the combinations leading to the correct answer reinforce each other. This process gives the computer a massive speed boost. We only know how to create such interference patterns for particular computational problems, so for most problems, a quantum computer will only be as fast as a conventional computer. However, one problem where quantum computers are much faster than classical ones is finding the prime factors of very large numbers.
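As a tiny illustration of interference (a two-line classical simulation, not a quantum computer), applying the Hadamard gate to |0⟩ creates an equal superposition; applying it again makes the two paths into |1⟩ carry opposite-sign amplitudes that cancel, while the two paths into |0⟩ reinforce:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])                   # amplitudes of the |0> state

once = H @ zero    # equal superposition: amplitudes ~[0.707, 0.707]
twice = H @ once   # |1>'s two paths cancel; |0>'s two paths reinforce
print(once)        # [0.70710678 0.70710678]
print(twice)       # ~[1. 0.]: destructive plus constructive interference
```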
How Quantum Computers Threaten Conventional Cryptography
Today's digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods to encrypt data is called RSA (Rivest-Shamir-Adleman – the surnames of the algorithm's designers). RSA protocols encrypt messages with a public key built from the product of two very large prime numbers. Only someone who knows those two primes can decode the message.
RSA security relies on a mathematical principle: multiplying two large numbers is computationally easy, but the opposite process—figuring out what large numbers were multiplied—is extremely hard, if not practically impossible, for a conventional computer. However, in 1994 mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially more quickly than a conventional computer and thus break RSA encryption within hours or days.
While practical quantum computers are likely decades away from implementing Shor’s algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety.
In combination with private key systems like AES, RSA encrypts most of the traffic on the Internet. Breaking RSA means that emails, online purchases, medical records, company data, and military information, among many others, would all be more susceptible to attacks from malicious third parties. Quantum computers could also crack the digital signatures that ensure the integrity of updates to apps, browsers, operating systems, and other software, opening a path for malware.
This security threat has led to heavy investments in new quantum-resistant encryption. Besides, existing private key systems used in the enterprise telecom sector like AES-256 are already quantum resistant. However, even if these methods are secure now, there is no guarantee that they will remain secure in the future. Someone might discover a way to crack them, just as it happened with RSA.
Quantum Key Distribution and its Impact on the Telecom World
Given these risks, arguably the most secure way to protect data and communications is by fighting quantum with quantum: protect your data from quantum computer hacking by using security protocols that harness the power of quantum physics laws. That's what quantum key distribution (QKD) does: QKD uses qubits to generate a secret cryptographic key protected by the phenomenon of quantum state collapse. If an attacker tries to eavesdrop and learn information about the key, they will distort the qubits irreversibly. The sender and receiver will see this distortion as errors in their qubit measurements and know that their key has been compromised.
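The detectability of eavesdropping can be illustrated with a toy Monte Carlo model of the BB84 protocol (a deliberately simplified sketch: an ideal lossless channel, perfect detectors, and an Eve who intercepts and resends every photon):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000  # photons sent

alice_bits = rng.integers(2, size=n)
alice_bases = rng.integers(2, size=n)   # 0 = rectilinear, 1 = diagonal

# Eve measures each photon in a random basis and resends what she saw;
# a wrong-basis measurement randomizes the bit she passes along.
eve_bases = rng.integers(2, size=n)
eve_bits = np.where(eve_bases == alice_bases,
                    alice_bits, rng.integers(2, size=n))

# Bob measures the resent photons; his basis must match Eve's to be reliable.
bob_bases = rng.integers(2, size=n)
bob_bits = np.where(bob_bases == eve_bases,
                    eve_bits, rng.integers(2, size=n))

# Sifting: Alice and Bob publicly compare bases and keep only the matches.
keep = alice_bases == bob_bases
error_rate = np.mean(alice_bits[keep] != bob_bits[keep])
print(f"sifted key: {keep.sum()} bits, error rate: {error_rate:.1%}")
```

Without Eve, the sifted-key error rate is essentially zero; her intercept-and-resend attack pushes it toward 25%, so Alice and Bob spot the distortion and throw the key away.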
Quantum-safe encryption will take part in people’s day-to-day lives through upgrades to laptops, phones, browsers, and other consumer products. However, most of the burden for quantum-safe communication will be handled by businesses, governments, and cloud service providers that must design and install these systems. It’s a hugely complex change that’s on par with upgrading internet communications from IPv4 to IPv6.
Even if practical quantum computers are not yet available, it’s essential to begin investing in these changes, as explained by Toshiba Chief Digital Officer Taro Shimada: “Sectors such as finance, health and government are now realizing the need to invest in technology that will prepare and protect them for the quantum economy of the future. Our business plan goes far deeper and wider than selling quantum cryptographic hardware. We are developing a quantum platform and services that will not only deliver quantum keys and a quantum network but ultimately enable the birth of a quantum internet”. Toshiba expects the QKD market to grow to approximately $20 billion worldwide in FY 2035.
How Photonics Impacts QKD
Qubits can be photons, electrons, atoms, or any other system that can exist in a quantum state. However, using photons as qubits will likely dominate the quantum communications and QKD application space. We have decades of experience manipulating the properties of photons, such as polarization and phase, to encode qubits. Thanks to optical fiber, we also know how to send photons over long distances with relatively little loss. Besides, optical fiber is already a fundamental component of modern telecommunication networks, so future quantum networks can run on that existing fiber infrastructure. All these signs point towards a new era of quantum photonics.
Photonic QKD devices have been, in some shape or form, commercially available for over 15 years. Still, factors such as the high cost, large size, and the inability to operate over longer distances have slowed their widespread adoption. Many R&D efforts regarding quantum photonics aim to address the size, weight, and power (SWaP) limitations. One way to overcome these limitations and reduce the cost per device would be to integrate every QKD function—generating, manipulating, and detecting photonic qubits—into a single chip. The further development of the integrated quantum photonics (IQP) chip is considered by many as a critical step in building the platform that will unlock quantum applications in much the same way as integrated circuits transformed microelectronics.
In the coming articles, we will discuss more how to combine photonic integration with quantum technologies to address the challenges in quantum communications.
What is a computer generation?
A computer is a machine that manipulates data or information electronically: it can store, retrieve, and analyze information. A computer can be used to follow instructions, send email, play online games, and browse the internet, as well as to edit or create spreadsheets, reports, and even videos. Yet the development of this complex machine began around 1940 with the very first computer generation and has evolved ever since. Each computer generation marks a technological breakthrough that fundamentally changed the way computers work, resulting in ever smaller, cheaper, more efficient, and more reliable machines. The development of computer technology is often described in terms of these generations of computing devices.
The first computer generation:
The first generation of computers used vacuum tubes for circuitry and electromagnetic drums for memory. As a consequence they were very massive, taking up practically whole rooms, and cost a lot to maintain. Vacuum tubes were inefficient components that generated a great deal of heat, drew enormous amounts of power, and ultimately caused frequent failures. These first-generation machines relied on "machine language" (the most basic programming language, which computers use to communicate). Input was based on punched cards and paper tape, and output appeared on printouts. The generation's two significant machines were the UNIVAC and the ENIAC.
Figure: 1 Vacuum tube
The second generation of computers:
A transistor computer, also referred to as a second-generation computer, is a computer that uses discrete transistors rather than vacuum tubes. The invention of the transistor in 1947 drastically changed the construction of computers: the transistor replaced the bulky vacuum tube in television sets, telephones, and computers, and as a consequence computing equipment shrank in size. Transistors were first put to work in computers in the mid-1950s. Together with early developments in magnetic-core memory, transistors led to second-generation computers that were smaller, cheaper, more reliable, and more energy-efficient than their predecessors. The first large-scale machines to take advantage of transistor technology were early supercomputers: Stretch by IBM and LARC by Sperry-Rand. Built for atomic energy research laboratories, these computers could handle huge amounts of data, a capability much in demand by atomic scientists. The machines were costly, and most were too powerful for the computing needs of the business community, which limited their appeal. Only two LARCs were ever built: one at the Lawrence Radiation Laboratory in Livermore, California (after which the machine was named), and the other at a U.S. Navy research center.
Figure: 2. Transistor
The third generation of computers:
Computers of the third generation were machines whose rise came with the invention of the integrated circuit (IC). They were the first step towards computers as we know them today. Their key innovation was the use of integrated circuits, which made it possible to slim machines down until they were as light as big toasters. They acquired the title of minicomputers because they were very small in comparison to second-generation computers, which could fill entire floors and buildings. Well-known machines of this period include the DEC PDP series and the IBM System/360 series of computers. Computers quickly became more accessible, and developers found them increasingly interesting, contributing to further advances in computer programming and hardware. It was around this period that several high-level programming languages, such as C, Pascal, COBOL, and FORTRAN, came into wider use. Magnetic storage also became more common in this era.
Figure: 3. Integrated circuit
The fourth generation of computers:
The fourth-generation time frame ran from 1971 to 1980. Computers of this generation use very-large-scale integration (VLSI) circuits, which pack around 5,000 transistors and other circuit elements onto a single chip. Fourth-generation computers became more powerful, compact, reliable, and affordable, and numerous additional tools, such as time-sharing, real-time networking, and distributed operating systems, came into use, along with high-level languages including Java, C, C++, and PHP. The fourth generation is an extension of the third: where first-generation computers filled an entire room, these machines fit in the hand, thanks to microprocessor chips. Object-oriented programming was also adopted in this period, with languages such as Java and Visual Basic; such object-oriented applications are intended to solve particular problems and need no advanced training sessions. Intel was the first company able to build these microchips, and IBM produced the first fourth-generation home computer. These machines needed only a minimal amount of energy to operate. The fourth generation also included the first supercomputers, which could reliably perform many calculations at once; such supercomputers have been used in telecommunications as well. Processing capacity expanded to many gigabytes, and even terabytes, of data.
Figure: 4. Micro processor
The fifth generation of computers:
The Fifth Generation project was a major Japanese research program that aimed to produce a new form of computer by 1991. It was launched after much discussion about the need for considerably more accessible computers that would proliferate "like air" across society, to help cope with an aging population and to support personal development, among other things. The MITI officials who funded the plan must have had a strong marketing strategist select the project's name, because its very title generated a lot of excitement around the world. Fifth-generation computers are still in the development stage and are based on artificial intelligence. The goal of the fifth generation is to create a computer that is smart enough to learn and self-organize, and that can respond to natural-language input. Quantum computing and nanotechnology are expected to be used in this research. We may therefore expect machines of the fifth generation to approach the power of human intelligence.
Figure: 5. Artificial intelligence
Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory. Rosenblatt called his device a perceptron, and the New York Times reported that it was “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”
Those claims turned out to be somewhat overblown. But the device kick-started a field of research that still has huge potential today.
A perceptron is a single-layer neural network. The deep-learning networks that have generated so much interest in recent years are direct descendants. Although Rosenblatt’s device never achieved its overhyped potential, there is great hope that one of its descendants might.
Today, there is another information processing revolution in its infancy: quantum computing. And that raises an interesting question: is it possible to implement a perceptron on a quantum computer, and if so, how powerful can it be?
Today we get an answer of sorts thanks to the work of Francesco Tacchino and colleagues at the University of Pavia in Italy. These guys have built the world’s first perceptron implemented on a quantum computer and then put it through its paces on some simple image processing tasks.
In its simplest form, a perceptron takes a vector input—a set of numbers—and multiplies it by a weighting vector to produce a single-number output. If this number is above a certain threshold the output is 1, and if it is below the threshold the output is 0.
That has some useful applications. Imagine a pixel array that produces a set of light intensity levels—one for each pixel—when imaging a particular pattern. When this set of numbers is fed into a perceptron, it produces a 1 or 0 output. The goal is to adjust the weighting vector and threshold so that the output is 1 when it sees, say, a cat, and 0 in all other cases.
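In code, that classifier is only a few lines. A minimal sketch (the pixel values, weights, and threshold below are made up for illustration):

```python
import numpy as np

def perceptron(x, w, threshold):
    """Single-layer perceptron: weighted sum followed by a hard threshold."""
    return 1 if x @ w > threshold else 0

# A 2x2 image flattened into four pixel intensities (bright top row).
image = np.array([0.9, 0.8, 0.1, 0.0])
weights = np.array([1.0, 1.0, -1.0, -1.0])  # tuned to fire on "bright top"
print(perceptron(image, weights, 0.5))       # -> 1
```

Training amounts to nudging the weights and threshold whenever the output disagrees with the desired label.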
Tacchino and co have repeated Rosenblatt’s early work on a quantum computer. The technology that makes this possible is IBM’s Q-5 “Tenerife” superconducting quantum processor. This is a quantum computer capable of processing five qubits and programmable over the web by anyone who can write a quantum algorithm.
Tacchino and co have created an algorithm that takes a classical vector (like an image) as an input, combines it with a quantum weighting vector, and then produces a 0 or 1 output.
The big advantage of quantum computing is that it allows an exponential increase in the number of dimensions it can process. While a classical perceptron can process an input of N dimensions, a quantum perceptron built on N qubits can process inputs of 2^N dimensions.
Tacchino and co demonstrate this on IBM’s Q-5 processor. Because of the small number of qubits, the processor can handle N = 2. This is equivalent to a 2x2 black-and-white image. The researchers then ask: does this image contain horizontal or vertical lines, or a checkerboard pattern?
It turns out that the quantum perceptron can easily classify the patterns in these simple images. “We show that this quantum model of a perceptron can be used as an elementary nonlinear classifier of simple patterns,” say Tacchino and co.
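In the spirit of that scheme (an emulation of the idea, not the exact circuit run on the Tenerife chip), the quantum perceptron's decision can be modeled as the squared overlap between an amplitude-encoded input state and a weight state:

```python
import numpy as np

def quantum_perceptron(pixels, weights):
    """Firing probability P = |<w|i>|^2 for amplitude-encoded states."""
    i = pixels / np.linalg.norm(pixels)     # input state |i>
    w = weights / np.linalg.norm(weights)   # weight state |w>
    return abs(np.vdot(w, i)) ** 2

horizontal = np.array([1, 1, -1, -1])       # 2x2 image: bright top, dark bottom
checkerboard = np.array([1, -1, -1, 1])
print(quantum_perceptron(horizontal, horizontal))    # 1.0: pattern matches
print(quantum_perceptron(checkerboard, horizontal))  # 0.0: orthogonal pattern
```

The payoff is that those four amplitudes occupy only two qubits, so an N-qubit register could score 2^N-pixel patterns the same way.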
They go on to show how it could be used in more complex patterns, albeit in a way that is limited by the number of qubits the quantum processor can handle.
That’s interesting work with significant potential. Rosenblatt and others soon discovered that a single perceptron can only classify very simple images, like straight lines. However, other scientists found that combining perceptrons into layers has much more potential. Various other advances and tweaks have led to machines that can recognize objects and faces as accurately as humans can, and even thrash the best human players of chess and Go.
Tacchino and co’s quantum perceptron is at a similarly early stage of evolution. Future goals will be to encode the equivalent of gray-scale images and to combine quantum perceptrons into many-layered networks.
This group’s work has that potential. “Our procedure is fully general and could be implemented and run on any platform capable of performing universal quantum computation,” they say.
Of course, the limiting factor is the availability of more powerful quantum processors capable of handling larger numbers of qubits. But most quantum researchers agree that this kind of capability is close.
Indeed, since Tacchino and co did their work, IBM has already made a 16-qubit quantum processor available via the web. It’s only a matter of time before quantum perceptrons become much more powerful.
Ref: arxiv.org/abs/1811.02266 : An Artificial Neuron Implemented on an Actual Quantum Processor
What is known as a quantum computer has been the subject of movies and series on dozens of occasions. Although the original concept may sound like science fiction, the truth is that quantum computers are already a reality. As their name suggests, these machines take advantage of the properties of quantum mechanics to solve certain problems that classical computers are not capable of solving, problems that we will discuss below.
Quantum Computers: What They Are and What Differentiates Them From A Traditional PC
Before discussing the differences between a quantum computer and a conventional computer, it is useful to know the nature of the term “quantum”, which in this case refers to the type of information handled by this type of equipment.
As is well known, conventional computers work with the simplest unit of information we know: the bit. This unit has exactly two states, 0 and 1. In the case of quantum computers, the minimum unit of information is known as a quantum bit, or qubit.
Graphical representation of a qubit in the form of a Bloch sphere. The sphere represents the possible states of the qubit, which can be realized physically by, for example, the polarization of a photon.
Unlike a bit, which can only hold a single value at a time, a qubit can hold a simultaneous combination of 0 and 1, so it carries more information than the bit-based units, such as bytes, that classical machines handle. It should be noted that the natural state of a qubit is represented by subatomic particles, such as photons or electrons.
To work with this type of information, quantum computers require systems and materials capable of isolating and controlling these particles. In other words, the computer does not have a conventional structure; it uses a series of superconducting circuits cooled to near absolute zero to keep the particles in a state that can be controlled.
Quantum Mechanics and Qubits: How Quantum Computers Work With Information
We have already mentioned that qubits can contain different strings of 0’s and 1’s at the same time. This is because qubits can be represented in different states. For this, quantum computers require the use of a series of systems to achieve what is known as quantum superposition, which is nothing more or less than the possibility of representing several states at the same time, i.e. several strings of 0 and 1. This means that the information contained in this type of particle is much greater than what we can find in a byte.
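A quick sketch of that collapse rule, simulated with NumPy (ordinary classical code mimicking a single qubit, with an equal superposition chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = beta = 1 / np.sqrt(2)      # equal superposition of |0> and |1>
probs = [abs(alpha) ** 2, abs(beta) ** 2]

outcomes = rng.choice(2, size=1000, p=probs)
print(np.bincount(outcomes))       # roughly [500 500]: each readout collapses
                                   # the superposition to a definite 0 or 1
```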
This is what an alanine molecule used in the NMR implementation of quantum computing looks like. Often, how such molecules are introduced into quantum computers is related to magnetic resonance systems.
Current systems are made up of microwaves and precision lasers that allow the state of the qubits to be controlled. One of the great challenges of current engineering has to do with these systems and their design. Creating a system that is capable of controlling these states while keeping the qubits in their natural state will raise the possibility of working with enormous amounts of information to levels never before recorded. And precisely another of the great challenges of current engineering is related to the combination of different qubits in groups known as chains, which overlap through what is known as quantum entanglement.
This phenomenon describes the pairwise grouping of qubits. In the same way that bits combine to form a byte, the grouping of qubits follows the laws of quantum mechanics. The problem is that entangled states are extremely hard to keep under control, and that control is subject to failure. And this points to one of the major problems of quantum computers: the probability of error when performing calculations.
This is due to the behavior of the qubits themselves, which interact with and become entangled with the particles in the surrounding environment. As we indicated in previous paragraphs, the control of qubit states is one of the great challenges of current engineering, since today's systems must fight what is known as quantum decoherence.
Such is the difficulty of grouping qubits that the greatest achievement of current engineering has only grouped 128 qubits. This difference with respect to conventional computers relates to what is known as quantum supremacy, the ability to solve calculations that conventional computers cannot solve regardless of their computational capacity. In 2019, Google announced having reached quantum supremacy with its computers. A year later, it was China that announced having reached this milestone, through a research group at the University of Science and Technology of China in collaboration with Tsinghua University in Beijing.
The Race to Develop a Fully-fledged Quantum Computer
At present, very few companies have participated in the development of this type of equipment because of the investment and the difficulty of progress involved. The best known at present are Intel, Google, and IBM, which are in a race to develop the first viable quantum computer. For example, Google’s quantum computer, called Sycamore, has a capacity of 54 qubits and is capable of performing calculations that a conventional computer would take approximately 10,000 years to perform in just 3.5 minutes. That’s nothing.
As for Intel’s developments, the company has launched its chip, known as Horse Ridge, in 2020. This chip allows the integration of quantum processors of up to 128 qubits, the limit that has been achieved to date. On the other hand, companies such as D-Wave, which are involved in the development of this type of equipment, have proposed their computers to the scientific community in the fight against the cure for COVID-19. IBM has also created its commercial quantum computer, called IBM Q System One.
With a power of 20 qubits, the computer is housed in an airtight glass cube 2.7 meters wide by 2.7 meters high that helps maintain the correct temperature while absorbing vibrations in the environment. It is worth noting that such a feat was accomplished in 2019, no less.
So, What Is a Quantum Computer For?
The information processing capacity of quantum computers opens the door to a whole world of the future in different sectors. After all, the main limitation of the different developments in the industry today has to do with the processing capacity of conventional computer hardware.
The use of quantum computers in certain industries would help to develop advances in medicine, cybersecurity, autonomous driving systems, artificial intelligence, robotics, and many other sectors that depend on information processing. The enormous computing power of this type of equipment could accelerate the development of certain technologies, such as those related to graphene or the development of higher-density lithium-ion batteries. It would also be possible to simulate the behavior of certain particles in contact with others, giving us the possibility of emulating the birth of the Universe, a goal pursued since the Large Hadron Collider was built in Switzerland, which resulted in the discovery of the Higgs boson, the so-called God particle.
In any case, everything points to the fact that this type of equipment will not be massively available for approximately 15 years. Their arrival in the home is not expected for at least a century, since both particle control and equipment size are not within the reach of the consumer market at the time of publication (although companies such as SpinQ Technology have already developed a desktop device for the general public). Needless to say, such proposals are a far cry from the capabilities of today’s most powerful computers, although they bring the possibilities of quantum computers closer to the market.
A section of the light-based quantum computer created by researchers at the University of Science and Technology of China. (Image by Xinhua)
Scientists in Anhui develop a machine far exceeding classical supercomputers
Chinese scientists have created the world's first light-based quantum computer, called Jiuzhang, that can reliably demonstrate "quantum computational advantage", a milestone in which a quantum machine can solve a problem no classical supercomputer can tackle within a reasonable amount of time, according to a study published in the journal Science on Friday.
It is the second time that humanity has reached this milestone, after Google declared its 53-qubit quantum computer had achieved such a breakthrough last year.
However, Jiuzhang uses a new method, manipulating 76 photons to do calculations, instead of Google's approach, which relies on superconducting materials.
Experts hailed the Chinese machine as a "state-of-the-art experiment" and a "major achievement" in quantum computing, as it proves the feasibility of photonic quantum computation, thus providing a fundamentally different approach to designing such powerful machines.
Quantum computers excel at running simulations that are impossible for conventional computers, leading to breakthroughs in materials science, artificial intelligence and medicine.
Moreover, most components of the light-based quantum machine can operate at room temperature, aside from its sensory equipment, which must be kept at -269.1 C.
This makes it significantly easier to make and maintain than superconducting quantum computers, the bulk of which must be kept at ultra-cold temperatures to ensure the materials can conduct electricity without any resistance.
Jiuzhang takes its name from an ancient Chinese mathematical text. It can perform an extremely esoteric calculation, called Gaussian boson sampling, in 200 seconds. The same task would take the world's fastest classical supercomputer, Fugaku, around 600 million years.
Fabio Sciarrino, a quantum physicist at Sapienza University of Rome, told Science News, an outlet based in the United States, that his first impression of the Chinese quantum computer was, simply, "wow".
According to interviews by the University of Science and Technology of China in Hefei, Anhui province, whose researchers created Jiuzhang, Barry Sanders, director of the Institute for Quantum Science and Technology at the University of Calgary, Canada, called the feat "one of the most significant results in the field of quantum computing" since Google's claim to quantum advantage last year was later challenged by IBM.
Anton Zeilinger, noted quantum physicist and president of the Austrian Academy of Sciences, said that, following this experiment, he predicts there is a very good chance that quantum computers may be used very broadly someday, according to the university's interviews.
"I'm extremely optimistic in that estimate, but we have so many clever people working on these things, including my colleagues in China. So, I am sure we will see quite rapid development."
Quantum machines' astronomical computing power arises from their basic building blocks, called quantum bits, or qubits, according to the University of Science and Technology of China.
Unlike bits of classical computers that present data as either 0s or 1s, similar to the on and off of a light switch, qubits can harness the strange property of quantum mechanics known as superposition and exist as 0s, 1s or everything in between, like the increments on a control knob.
Quantum machines can take computational shortcuts when simulating extremely complex scenarios, whereas conventional computers have to brute force their way to a solution, taking significantly more time in the process.
Moreover, quantum machines' computing power can increase exponentially as more qubits are added.
Therefore, Jiuzhang, which uses 76 photons as qubits, is about 10 billion times faster than the 53-qubit computer developed by Google, according to the university.
"The feat cements China's position in the first echelon of nations in quantum computing," the university said in a news release.
Pan Jianwei, who is recognized as China's top quantum scientist and one of the key researchers behind Jiuzhang, said the calculations they carried out can not only showcase the machine's computing prowess but also demonstrate potential practical applications in machine learning, quantum chemistry and graph theory.
"Quantum computing has already become a fierce competition ground among the United States, Europe and other developed regions," Pan said, adding that China's quantum computational advantage took about seven to 10 years to achieve, since the team first decided to tackle the boson-sampling problem around 2013.
However, Pan stressed that the photonic quantum computer is a highly specialized and unorthodox machine, characterized as an elaborate, interconnected tabletop setup of lasers, mirrors and detectors, and is currently only programed to do boson sampling. "It is not a general-purpose quantum computer," he said.
Lu Chaoyang, another key researcher behind Jiuzhang, said that, even if a machine is only good at one job, such as analyzing materials, it can still have great social and economic value if it can overcome an extremely challenging problem.
In the near future, scientists may increase Jiuzhang's possible output states-a key indicator of computing power-by 10 orders of magnitude, from 10 to the 30th power to 10 to the 40th power, Lu said. (China Daily)
A Q-bit (qubit) is like a classical bit, but its state can be 0 and 1 at the same time. This opens a bunch of new possibilities for us: a quantum computer has a computational advantage in a lot of jobs that classical computers just cannot perform in a reasonable amount of time. The state in which a Q-bit is both 0 and 1 is called a superposition. One very important fact about Q-bits in an equal superposition is that when we measure them, they fall to either 0 or 1 with a 50-50 chance.
Before you read this article you should read part one, which you can find here; it makes sure that you know what we are going to talk about.
Read this if you want to know exactly how RSA works; it will also help you in part 3 of this series.
Advantages of Q-bits
Now that we have a refresher on what Q-bits are, let's take a look at how they can be helpful. Say we have 3 bits and the same number of Q-bits. The 3 classical bits can be in one of 2^3 = 8 possible combinations, but only one of them at any given moment; the 3 Q-bits, on the other hand, can exist in a superposition of all 2^3 = 8 combinations at once, because each Q-bit has that extra superposition state.
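To make the counting concrete, here is a tiny sketch: a classical 3-bit register holds exactly one of its 8 configurations at a time, while a 3-qubit register assigns an amplitude to all 8 at once:

```python
from itertools import product

configs = list(product([0, 1], repeat=3))
print(len(configs))   # 8 = 2**3; a classical register is in ONE of these
print(configs)

# A 3-qubit register carries an amplitude for every configuration at once:
#   |psi> = a0|000> + a1|001> + ... + a7|111>, with sum(|ai|^2) == 1
```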
This gives quantum computers a massive, exponential computational advantage over classical computers. It doesn't mean a quantum computer will be better or faster at every task a classical computer can do, but it does mean that for certain computations a quantum computer wins by default, because a classical supercomputer would take years to perform them, or could not perform them at all. There is also a lack of good quantum algorithms; this will be fixed as we get better at making quantum computers and making them available to the people who develop algorithms. Remember: as of right now, the main advantage we have over a classical supercomputer is running quantum algorithms. That is the only reason to choose a quantum computer, not raw speed; it is actually very slow at classical algorithms, and the only place where we see the speed is in quantum algorithms.
Shor's algorithm is the most famous quantum algorithm. It is not a very special algorithm in itself; you can essentially run its logic on your normal home PC for small numbers, but it runs exponentially faster on a quantum computer. I am going to try and explain this algorithm without using a lot of math and physics, which is really hard to do since it's pretty much all math and physics.
So here go four weeks' worth of my study notes covering a complicated algorithm in one big paragraph! Shor's algorithm's "basic" job is to find factors of a given number N (a really big number). We start by taking a guess "g". This "g" doesn't have to be a factor of N; it only needs to share a factor with N (the way 4 isn't a factor of 6, but they share the factor 2). A classical method, the Euclidean algorithm, can then extract that shared factor, since it computes the greatest common divisor of g and N. If you want more reading on this basic algo, read this.
So let's look at Shor's algorithm; it helps us turn our guess "g" into a better one. It rests on this fact: for any two whole numbers x and y that share no factor, raising x to a certain power p gives a multiple of y plus one: x^p = k * y + 1.
So now the main problem for us is to find the right power p. For a very large number N and an arbitrary starting guess "g", we have the equation:
g^p = k * N + 1. If we rearrange this in a clever way (subtract the 1 and factor the difference of squares), we arrive at this useful equation: (g^(p/2) + 1) * (g^(p/2) - 1) = k * N.
This looks like the factoring equation we are trying to solve (N = a * b). That is the math part of Shor's algorithm. The clever part is the physics. The advantage we have with a quantum computer is that we can input values in superposition, which simultaneously evaluates all the possible powers of our guess "g". However, even if we do that and hold the answers in superposition, measuring returns only one of them, with a low probability of it being the right one.
To solve this, we also calculate the frequency of the repeating pattern (think sine and cosine graphs) with a quantum Fourier transform. These frequency components are also in superposition, and their interference cancels out the wrong answers, so we arrive at the correct period on, effectively, the "first try."
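To make the recipe above concrete, here is a minimal classical sketch in Python (my own illustration, not code from this series): the brute-force `find_period` function stands in for the period-finding step that a real quantum computer accelerates with the quantum Fourier transform, while the rest is the classical bookkeeping described above.

```python
from math import gcd
from random import randrange

def find_period(g, N):
    """Brute-force the order p of g modulo N: the smallest p with
    g^p = 1 (mod N). This is the step a quantum computer speeds up."""
    value, p = g % N, 1
    while value != 1:
        value = (value * g) % N
        p += 1
    return p

def shor_classical(N):
    """Classical sketch of Shor's post-processing for a small odd N."""
    while True:
        g = randrange(2, N)
        if gcd(g, N) > 1:            # lucky guess already shares a factor
            return gcd(g, N), N // gcd(g, N)
        p = find_period(g, N)
        if p % 2 == 1:               # need an even p to split g^p - 1
            continue
        half = pow(g, p // 2, N)
        if half == N - 1:            # g^(p/2) = -1 mod N: trivial split
            continue
        factor = gcd(half - 1, N)
        if 1 < factor < N:
            return factor, N // factor

print(shor_classical(15))            # e.g. (3, 5) or (5, 3)
```

Running `shor_classical(15)` typically succeeds after a few random guesses; the catch is that `find_period` takes exponential time classically, which is exactly where the quantum speedup lives.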
This whole thing happens so fast that it's mind-blowing; a classical computer would take thousands of years to do the same calculation. This would easily break most of the encryption that we use right now, which poses a big threat to the cybersecurity of computers, networks, and applications. If you want to read more on Shor's algorithm, you should read this article.
This video can help you understand the algorithm itself in great detail here. We will talk about quantum encryption in the next article in this series, which solves this problem of breaking all encryption. For updates on these articles, follow Secjuice on Twitter, or me. I am sorry about not posting articles; it has been hard with a lot going on around me.
You should also check out IBM Q, as well as Qiskit, an open-source framework for writing and running code on a quantum computer.
Flashlight beams don’t clash together like lightsabers because individual units of light—photons—generally don’t interact with each other. Two beams don’t even flicker when they cross paths.
But by using matter as an intermediary, scientists have unlocked a rich world of photon interactions. In these early days of exploring the resulting possibilities, researchers are tackling topics like producing indistinguishable single photons and investigating how even just three photons form into basic molecules of light. The ability to harness these exotic behaviors of light is expected to lead to advances in areas such as quantum computing and precision measurement.
In a paper recently published in Physical Review Research, Adjunct Associate Professor Alexey Gorshkov, Joint Quantum Institute (JQI) postdoctoral researcher Przemyslaw Bienias, and their colleagues describe an experiment that investigates how to extract a train of single photons from a laser packed with many photons.
In the experiment, the researchers examined how photons in a laser beam can interact through atomic intermediaries so that most photons are dissipated—scattered out of the beam—and only a single photon is transmitted at a time. They also developed an improved model that makes better predictions for more intense levels of light than previous research focused on (greater intensity is expected to be required for practical applications). The new results reveal details about the work to be done to conquer the complexities of interacting photons.
“Until recently, it was basically too difficult to study anything other than a few of these interacting photons, because even when we have two or three, things get extremely complicated,” says Gorshkov, who is also a physicist at the National Institute of Standards and Technology and a Fellow of the Joint Center for Quantum Information and Computer Science. “The hope with this experiment was that dissipation would somehow simplify the problem, and it sort of did.”
Trains, Blockades and Water Slides
To create the interactions, the researchers needed atoms that are sensitive to the electromagnetic influence of individual photons. Counterintuitively, the right tool for the job is a cloud of electrically neutral atoms. But not just any neutral atoms; these specific atoms—known as Rydberg atoms—have an electron with so much energy that it stays far from the center of the atom.
The atoms become photon intermediaries when these electrons are pushed to their extreme, remaining just barely tethered to the atom. With the lone, negatively charged electron so far out, the central electrons and protons are left contributing a counterbalancing positive charge. And when stretched out, these opposite charges make the atom sensitive to the influence of passing photons and other atoms. In the experiment, the interactions between these sensitive atoms and photons are tailored to turn a laser beam that is packed with photons into a well-spaced train.
The cloud of Rydberg atoms is kind of like a lifeguard at a water park. Instead of children rushing down a slide dangerously close together, only one is allowed to pass at a time. The lifeguard ensures the kids go down the slide as a steady, evenly spaced train and not in a crowded rush.
Unlike a lifeguard, the Rydberg atoms can’t keep the photons waiting in line. Instead they let one through and turn away the rest for a while. The interactions in the cloud of atoms form a blockade around each transmitted photon that scatters other photons aside, ensuring its solitary journey.
To achieve the effect, the researchers used Rydberg atoms and a pair of lasers to orchestrate a quantum mechanical balancing act. They selected the frequency of the first laser so that its photons would be absorbed by the atoms and scattered in a new direction. But this is the laser that is whittled down into the photon train, and they needed a way to let individual photons through.
That’s where the second laser comes in. It creates another possible photon absorption that quantum mechanically interferes with the first and allows a single photon to pass unabsorbed. When that single photon gets through, it disturbs the state of the nearby atoms, upsetting the delicate balance achieved with the two lasers and blocking the passage of any photons crowding too closely behind.
Ideally, if this process is efficient and the stream of photons is steady enough, it should produce a stream of individual photons, each following just behind the blockade of the previous. But if the laser is not intense enough, it is like a slow day at the waterpark, when there is not always a kid eagerly awaiting their turn. In the new experiment, the researchers focused on what happens when they crowded many photons into the beam.
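The lifeguard picture is easy to caricature in code. The toy Python model below (my own sketch; the real physics involves quantum interference between the two laser-driven pathways, not a simple dead time) thins a random Poissonian photon stream by scattering any photon that arrives too soon after the last transmitted one:

```python
import random

random.seed(1)

def blockade_train(rate, duration, blockade):
    """Thin a Poissonian photon stream: after each transmitted photon,
    scatter everything arriving within `blockade` seconds of it."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate)       # Poissonian arrival times
        if t > duration:
            break
        arrivals.append(t)
    transmitted, last = [], None
    for t in arrivals:
        if last is None or t - last >= blockade:
            transmitted.append(t)           # this one rides the slide
            last = t
    return arrivals, transmitted

incident, train = blockade_train(rate=50.0, duration=1.0, blockade=0.05)
print(len(incident), "incident ->", len(train), "transmitted")
```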
Model (Photon) Trains
Gorshkov and Bienias’s colleagues performed the experiment, and the team compared their results to two previous models of the blockade effect. Their measurements of the transmitted light matched the models when the number of photons was low, but as the researchers pushed the intensity to higher levels, the results and the models’ predictions started looking very different. It looked like something was building up over time and interfering with the predicted, desired formation of well-defined photon trains.
The team determined that the models failed to account for an important detail: the knock-on effects of the scattered photons. Just because those photons weren’t transmitted doesn’t mean they could be ignored. The team suspected the models were being thrown off by some of the scattered light interacting with Rydberg atoms outside of the laser beam. These additional interactions would put the atoms into new states, which the scientists call pollutants, that would interfere with the efficient creation of a single photon train.
The researchers modified one of their models to capture the important effects of the pollutants without keeping track of every interaction in the larger cloud of atoms. While this simplified model is called a “toy model,” it is really a practical tool that will help researchers push the technique to greater heights in their larger effort to understand photon interactions. The model helped the researchers explain the behavior of the transmitted light that the older models failed to capture. It also provides a useful way to think about the physics that is preventing an ideal single photon train, and it might be useful in judging how effectively future experiments prevent the undesirable effects—perhaps by using clouds of atoms with different shapes.
“We are quite optimistic when it comes to removing the pollutants or trying to create less of them,” says Bienias. “It will be more experimentally challenging, but we believe it is possible.”
Original story by Bailey Bedford: https://jqi.umd.edu/news/scientists-see-train-photons-new-light
In addition to Bienias and Gorshkov, James Douglas, a Co-founder at MEETOPTICS; Asaf Paris-Mandoki, a physics researcher at Instituto de Física, Universidad Nacional Autónoma de México; JQI postdoctoral researcher Paraj Titum; Ivan Mirgorodskiy; Christoph Tresp, a research and development employee at TOPTICA Photonics; Emil Zeuthen, a physics professor at the Niels Bohr Institute; Michael J. Gullans, a former JQI postdoctoral researcher and current associate scholar at Princeton University; Marco Manzoni, a data scientist at Big Blue Analytics; Sebastian Hofferberth, a professor of physics at the University of Southern Denmark; and Darrick Chang, a professor at the Institut de Ciencies Fotoniques, were also co-authors of the paper.
First of two parts
One of the first steps toward becoming a scientist is discovering the difference between speed and velocity.
To nonscientists, it’s usually a meaningless distinction. Fast is fast, slow is slow. But speed, technically, refers only to rate of motion. Velocity encompasses both speed and direction. In science, you usually want to know more than just how fast something is going; you also want to know where it is going. Hence the need to know direction, and to analyze velocity, not just speed. Numbers like velocity that express both a magnitude and a direction are known as vectors.
Vectors are great for describing the motion of a particle. But now suppose you need to analyze something more complicated, where multiple magnitudes and directions are involved. Perhaps you’re an engineer calculating stresses and strains in an elastic material. Or a neuroscientist tracing the changing forces on water flow near nerve cells. Or a physicist attempting to describe gravity in the cosmos. For all that, you need tensors. And they might even help you unify gravitational theory with quantum physics.
Tensors accommodate multiple numerical values (a vector is actually a simple special case of a tensor). While the ideas behind tensors stretch back to Gauss, they were first fully described in the 1890s by the Italian mathematician Gregorio Ricci-Curbastro, with the help of his student Tullio Levi-Civita. (Tensors were given their name in 1898 by Woldemar Voigt, a German crystallographer, who was studying stresses and strains in nonrigid bodies.)
Ricci (as he is commonly known) was influenced by the German mathematician Bernhard Riemann in developing advanced calculus with applications to complicated geometrical problems. In particular, this approach proved valuable in studying coordinate systems. Tensors help make sense of the relationships in the system that stay the same when you change the coordinates. That turned out to be just the thing Einstein needed in his theory of gravity, general relativity. His friend Marcel Grossmann explained tensors to him and they became the essential feature of general relativity’s mathematics.
And now, in a recent development, some physicists think tensors of a sort could help solve the longstanding problem of unifying general relativity with quantum mechanics. It’s part of a popular new line of research using tensors to quantify quantum entanglement, which some physicists believe has something to do with gravity.
Entanglement is that spooky connection between separated particles that disturbed Einstein so much. Somehow a measurement of one of a pair of particles affects what you’ll find when you measure its distant partner, or so it seems. But this “entanglement” is a clear-cut consequence of quantum physics for particles that share a common origin or interaction. It leads to some weird phenomena, but it’s all very sensible mathematically, as described by the “quantum state.” Entangled particles belong to a single quantum state.
A quantum state determines the mathematical expression (called the wave function) that can be used to predict the outcome of measurements of a particle — whether the direction that it spins is pointing up or down, for instance. When describing multiple particles — such as those in materials exhibiting quantum properties such as superconductivity — quantum states can get very complicated. Coping with them is made easier by analyzing the network of entanglement among those many particles. And patterns of such network connections can be described using tensors.
“Tensor networks are representations of quantum many-body states of matter based on their local entanglement structure,” physicist Román Orús writes in a recent paper posted at arXiv.org. “In a way, we could say that one uses entanglement to build up the many-body wave function.”
Put another way, Orús says, the entire wave function can be thought of as built from smaller tensor subnetworks, kind of like Legos. Entanglement is the glue holding the Legos together.
“Tensor network methods represent quantum states in terms of networks of interconnected tensors, which in turn capture the relevant entanglement properties of a system,” Orús writes in another recent paper, to be published in Annals of Physics.
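As a minimal sketch of that idea (my own toy example in Python with NumPy, not code from the cited papers), here is a two-qubit entangled state written as a tensor network: one small tensor per site, glued by a shared "bond" index, with the contraction over that bond reassembling the full wave function. The entanglement lives entirely in the bond; an unentangled product state would need only a bond dimension of 1.

```python
import numpy as np

# Two-site matrix product state: one small tensor per site, joined by a
# bond index of dimension 2.
A = np.eye(2)                       # site 1 tensor: (physical, bond)
B = np.eye(2) / np.sqrt(2)          # site 2 tensor: (bond, physical)

psi = np.einsum("ib,bj->ij", A, B)  # contract the shared bond index
print(psi)  # amplitudes of the Bell state (|00> + |11>)/sqrt(2)
```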
While the basic idea of tensor networks goes back decades, they became more widely used to study certain quantum systems in the 1990s. In the last few years, ideas from quantum information theory have spawned an explosion of new methods using tensor networks to aid various calculations. Instead of struggling with complicated equations, physicists can analyze systems using tensor network diagrams, similar to the way Feynman diagrams are used in other aspects of quantum physics.
“This is a new language for condensed matter physics (and in fact, for all quantum physics) that makes everything much more visual and which brings new intuitions, ideas and results,” Orús writes.
Most recently, tensor networks have illuminated the notion that quantum entanglement is related to gravity. In Einstein’s general relativity, gravity is the effect of the geometry of spacetime. Analyses suggest that the geometry in which a quantum state exists is determined by the entanglement tensor network.
“By pushing this idea to the limit,” Orús notes, “a number of works have proposed that geometry and curvature (and hence gravity) could emerge naturally from the pattern of entanglement present in quantum states.”
If so, tensor networks could be the key to unlocking the mystery of quantum gravity. And in fact, another clue to quantum gravity, known as the holographic principle, seems naturally linked to a particular type of tensor network. That’s a connection worth exploring further.
Follow me on Twitter: @tom_siegfried | <urn:uuid:44275f8a-ec53-417f-8484-33586217aaae> | CC-MAIN-2022-05 | https://www.sciencenews.org/blog/context/tensor-networks-get-entangled-quantum-gravity | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306301.52/warc/CC-MAIN-20220128152530-20220128182530-00076.warc.gz | en | 0.942909 | 1,218 | 3.609375 | 4 |
As modern computers continue to reach the limits of their processing power, quantum computing is starting to offer hope for solving more specialized problems that require immensely robust computing. Quantum computers were once thought an impossible technology because they harness the intricate power of quantum mechanics and are housed in highly unconventional environments. But these machines now have the potential to address problems ranging from finding drugs that can target specific cancers to valuing portfolio risk, says Vern Brownell, founder and CEO of D-Wave Systems, the Canadian company that in 2010 introduced the world’s first commercially available quantum computer. In this interview with McKinsey’s Michael Chui, Brownell discusses what quantum computing is, how it works, and where it’s headed in the next five years. An edited transcript of their conversation follows.
We’re at the dawn of the quantum-computing age, and it’s really up to us to execute. It sounds grand. But I think this is such an important enabling technology and can help mankind solve problems that are very, very important.
What is quantum computing?
D-Wave Systems is the world’s first quantum-computing company. We have produced the world’s first commercial quantum computers. A quantum computer is a type of computer that directly leverages the laws of quantum mechanics to do a calculation.
And in order to do that, you have to build a fairly exotic type of computer. You have to control the environment very carefully. The whole point of building a quantum computer is, basically, for performance, to solve problems faster than you can with conventional (or what we call classical) computers, meaning the types of computers that we all enjoy today and that have done such a great job. There are problems that scale better, or they can perform better, using quantum computers rather than classic computers. And that’s really why everyone is trying to build a quantum computer: to take advantage of that capability that’s inherent in quantum mechanics.
How do quantum computers work?
You probably remember from your physics classes that a quantum mechanical object, if it’s disturbed, freezes into one state; it becomes classical. So every quantum computer has, as its building block, something called a qubit, a quantum bit. And a quantum bit is like the digital bit that’s in every computer; digital bits are sort of the building blocks of all computers.
But a qubit has this special characteristic where it can be in what’s called a superposition of zero and one at the same time. So if you step back from that, this object is actually in two different states at the same time. And it’s not like it’s half in this state and half in the other; it’s in those two states at the same time. It sounds spooky. Einstein called it spooky. But it is a fundamental law of quantum mechanics and it is the building block of a quantum computer.
So these qubits are all in this superposition, which is a very delicate state. And whenever a cosmic ray or some kind of interference hits that computation, it freezes it out to a classical state. So the trick is to keep the calculation going in this superposition for the duration of the computational cycle.
The environment in which the system operates is kept at a temperature near absolute zero. You probably remember that –273 degrees centigrade, called the thermodynamic limit, is the lowest temperature that’s physically possible in the universe. This machine runs at 0.01 kelvin, or 10 millikelvin, above that.
So unless there’s any other intelligent life in the universe, this is the coldest environment in the universe that this machine has to run in. For instance, interstellar space is about 4 degrees kelvin, which is much, much warmer than our operating temperature.
That’s not the only part of it. We have to create a magnetic vacuum and an air vacuum. So there’s this coffee-can-sized environment that has this incredibly low temperature and this magnetic vacuum that is probably among the purest environments in the universe. There are no naturally occurring environments like this.
You don’t buy a quantum computer for the economics. But that will change, as I said, as the power of the machine grows. There can certainly be just an economic benefit of using this for certain problem types versus using classical computers.
What problems do quantum computers solve?
There are different types of quantum computers. The type that we build is called a quantum annealer, so I’ll talk about the types of problems that quantum annealers solve. Much of what you’ll hear about quantum computing is related to gate-model quantum computing, which is another approach that’s very valid. The problem with it is that it’s very, very hard to implement. And it’s probably more than ten years away.
We believe that one of the most important applications of quantum computing is in the category of machine learning. So we’ve developed, together with our partners, algorithms that can leverage this quantum-computing capability to do machine learning better than you could with just classical resources alone, even though the state of the art in classical computing and machine learning is quite high. They’re doing some amazing things with scale-out architectures, GPUs, and special-purpose hardware. We believe that the advantages of quantum computing can take even that to the next level.
Another is in the whole optimization area, and it’s called sampling. So there are optimization problems all around us. We’re trying to find the best answer out of a complex set of alternatives. And that could be in portfolio analysis and financial services. It could be trying to find the right types of drugs to give a cancer patient—lots of meaty, very impactful types of applications that are in the sampling world that we believe are very relevant to this.
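For flavor, here is a hedged toy in Python: a three-variable QUBO (quadratic unconstrained binary optimization), the native input format of a quantum annealer, solved with plain classical simulated annealing rather than D-Wave's hardware or tooling.

```python
import math
import random

random.seed(0)

# Toy QUBO: minimize sum over (i, j) of Q[(i, j)] * x[i] * x[j], x binary.
# Negative diagonal terms reward turning bits on; positive couplers
# penalize switching on two neighboring bits at once.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def energy(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

x = [random.randint(0, 1) for _ in range(3)]
temp = 2.0
for _ in range(2000):
    trial = x[:]
    trial[random.randrange(3)] ^= 1              # flip one random bit
    dE = energy(trial) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / temp):
        x = trial                                # accept downhill or lucky
    temp = max(0.01, temp * 0.999)               # cool the "annealer"

print(x, energy(x))  # best answer here: [1, 0, 1] with energy -2.0
```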
Google and NASA, for instance, are customers of ours. And Google has created what they call the Quantum Artificial Intelligence Lab, where they’re exploring using our computer for AI applications or learning applications. And NASA has a whole set of problems that they’re investigating, ranging from doing things like looking for exoplanets to [solving] logistics problems and things like that. I’d say within five years, it’s going to be a technology that will be very much in use in all sorts of businesses.
Vortices of Light on the Cheap
Vector vortex laser beams have a unique polarization pattern that varies with position around a dark center, and this nonuniformity can, for example, lead to tighter focusing than is possible with the usual uniform polarization. Now researchers have produced vector vortices using just a low-cost laser along with a reflecting element, bypassing the usual specialized equipment. This relatively simple design could allow switching between different polarization patterns with a small change to the laser current.
Vector vortices are similar to optical vortices; both have beam cross sections with a donut-shaped intensity pattern. In an optical vortex, the light’s phase varies around the dark hole in the center of the beam, while the polarization direction is uniform. But in a vector vortex beam it’s the polarization that changes as you move around the center. Researchers have shown that a beam with varying polarization can be focused to a smaller spot size than a beam with uniform polarization. This tight focusing could benefit optical trapping and light-based etching techniques. Researchers are also exploring correlation properties of vector vortices, which resemble quantum entanglement.
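The distinction is easy to sketch numerically. In the illustrative NumPy toy below (my own construction, not from the study), both beams share the donut-shaped dark center, but the optical vortex winds its phase while the vector vortex winds its polarization direction:

```python
import numpy as np

n, charge = 256, 1
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phi = np.arctan2(y, x)           # angle around the beam axis
r = np.hypot(x, y)               # distance from the axis

# Optical vortex: donut amplitude, spiral phase e^(i*charge*phi),
# one fixed polarization direction everywhere.
optical_vortex = r * np.exp(1j * charge * phi)

# Vector vortex (radial type): the polarization unit vector itself
# rotates with phi; the stacked components are (Ex, Ey).
vector_vortex = r * np.stack([np.cos(phi), np.sin(phi)])

print(abs(optical_vortex[n // 2, n // 2]))  # ~0: dark at the center
```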
One of the most common methods for producing vector vortices is to use a liquid-crystal spatial light modulator that can impose a polarization direction at specific points in the cross section of a beam. Other elements, such as cone-shaped reflectors, can also be inserted into a beam’s path to generate nonuniform polarization configurations. But a simpler method has now been revealed by Thorsten Ackemann of the University of Strathclyde in Glasgow, UK, and his colleagues. The team discovered that a type of semiconductor laser—specifically, a vertical-cavity surface-emitting laser (VCSEL)—could produce a variety of different vector vortices when aimed at a frequency-specific mirror. “You don’t need to engineer these polarization states,” Ackemann says. “They arise spontaneously.”
The potential for generating vector vortices with a VCSEL was predicted twenty years ago. The reasoning then was based on the highly symmetric character of a VCSEL lasing cavity. Most lasers are not symmetric—for example, they may have rectangular cross sections—and this asymmetry largely determines the polarization direction of the emitted light. By contrast, a VCSEL emits from a cylindrical cavity, so the polarization can often switch between, say, horizontal and vertical directions. This polarization “competition” suggests that there might be intermediate states where the two polarization modes coexist to form a spatially nonuniform pattern. Ackemann suspects that no previous experiments had detected a nonuniform polarization (vector vortex) because VCSELs have just enough asymmetry that one polarization mode always wins, at least for a little while, before switching to the other.
In their system, Ackemann and his colleagues were able to compensate for the intrinsic asymmetry of their VCSEL by adding a volume Bragg grating (VBG)—essentially a mirror that only reflects one frequency. The VBG, which was placed in front of the VCSEL, created a new resonance cavity between the reflecting surface and the emitting surface of the laser. The feedback from this cavity helped to lock the laser frequency at the reflection frequency. As the team varied the current supplied to the VCSEL (which affects the beam’s intensity and frequency), they were surprised to find that the polarization became nonuniform. The reason for this change is not entirely clear, but Ackemann believes that minute tilts of the VBG can offset laser-based anisotropies and force multiple polarization modes to have the same frequency, so that none dominates.
The observed polarization configurations depended on the current supplied to the VCSEL. For most values, the polarization stayed uniform, but for certain current inputs, the team recorded vortices with radial, hyperbolic, or spiral patterns. This dependence on current, which is linked to temperature-induced changes in the resonant frequency of the VCSEL-VBG cavity, might be used in future devices to switch vortices on and off. Such control is possible with spatial light modulators, but these devices are expensive.
Qiwen Zhan of the University of Dayton in Ohio says that creating vector vortices with a VCSEL could be useful in many applications, with the main advantages being “the compactness of the device and the capability of adjusting the nonuniform polarization states through electrical tuning.” However, he agrees with the authors that applications will have to wait for a more comprehensive study of the VCSEL-VBG resonance cavity to understand the mechanisms that lead to vortex formation.
This research is published in Physical Review Letters.
Michael Schirber is a Corresponding Editor for Physics based in Lyon, France.
What is Quantum Computing?
Quantum Computing, very simplistically, is computing with the hardware, software, and devices that follow the principles of Quantum Physics.
Let’s discuss the basics of Quantum Computing in layman’s terms. In current semiconductor technology, a “bit” can hold the state of ‘0’ or ‘1’. But a quantum bit, or qubit, can take a state of ‘0’ and ‘1’ simultaneously. In current microprocessor-based computing, the states ‘0’ and ‘1’ can be thought of as voltages managed by electronic logic gates, for example, zero (0) volts or five (5) volts. In the case of quantum computing, the state is a probabilistic outcome: the potential outcome with the highest probability is taken as the result of the calculation. Ideally, the number of outcomes quantum computing can explore through Superposition is vast. A Quantum Computer needs a low-temperature and nearly empty environment to reduce the computational noise impacting the outcome of the calculation. An interesting fact about quantum computing is that with an increase in the number of qubits involved in any calculation, the computational power of the quantum computer increases exponentially, as n qubits can together represent 2^n states.
Other than Superposition, there are two other interesting concepts in Quantum Computing. Entanglement is the Quantum Physics concept describing the phenomenon in which two qubits remain correlated even when separated across the universe. Interference in Quantum Computing is related to Superposition and explains how the outcome can be biased towards a particular state.
Why do we need Quantum Physics?
Simply stated, to explain a few complexities of our universe that Classical Physics has fallen short of explaining to the fullest satisfaction. Phenomena like blackbody radiation, the photoelectric effect, and the hydrogen atom's behavior under heat cannot be satisfactorily explained with Classical Physics.
Why do we need Quantum Computing?
As proclaimed by Gordon Moore, the number of transistors on Intel’s processors at the time of their introduction has almost doubled every 18 to 24 months. But, this growth of computational power over time is slowing down because of the manageable limits of size, heat generated, and power required.
Quantum Computing appears to follow Neven's Law, which proclaims that quantum computers are improving at a doubly exponential rate.
The growth of computing power in Moore's Law was exponential, by powers of 2: 2^1, 2^2, 2^3, 2^4. Doubly exponential growth represents the growth of computing power by powers of powers of 2: 2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4).
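A few lines of Python make the gap between the two growth laws vivid (a bare illustration of the formulas above, not a model of real hardware):

```python
# Exponential (Moore) vs. doubly exponential (Neven) growth per period n.
for n in range(1, 6):
    moore = 2 ** n           # 2, 4, 8, 16, 32
    neven = 2 ** (2 ** n)    # 4, 16, 256, 65536, 4294967296
    print(n, moore, neven)
```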
How to transform Mathematical Concepts of Quantum Computing into a Physical Machine for day-to-day use?
We saw how the field of electronics made progress from vacuum tubes (valves) to semiconductor-based transistors before we got our modern-day laptops and desktops. Long before that, we learned to use tools like the Abacus, a simple mechanical computing aid. Now, the big question is around making the concepts of quantum physics and computing a reality for normal day-to-day use. What kind of inorganic or organic compound can host the quantum phenomena needed for regular human computational use?
It is still in a state of research and continuous development. A summary of the types of Quantum Computers follows:
- Quantum Annealing Computers – by D-Wave Systems
- Universal Quantum Computers – IBM, Google
- Topological Quantum Computers – Microsoft
- Ion Trap Quantum Computers – IonQ
A number of vendors are also offering Quantum Computer Simulators over the Cloud.
Potential Use cases of Quantum Computing
Fundamentally, the best application areas for Quantum Computing are those that involve massive volumes of data and heavy processing of those data but don't need 100 percent accuracy and precision. Additionally, considering the current state of technology, those use cases should support the possibility of Hybrid Computing, that is, the use of both Quantum Computers and Classical Computers, taking advantage of their respective computing strengths. In the first step, Quantum Computers narrow the options of possible solutions because of their probabilistic nature, and in the final step, Classical Computers deliver the final solution with the defined accuracy or precision.
Biochemistry and Pharmaceuticals
There are huge potential benefits of Quantum Computing in the field of Biochemistry, reducing the time and effort needed for the synthesis of molecules. Modeling of molecules for quicker drug discovery is a very important application area, directly impacting human life.
In Cancer Treatment, Quantum Computing will have a positive impact on Intensity Modulated Radiation Therapy (IMRT), with better optimization of dosage calculations.
In the field of Materials Production, Quantum Computers will drastically improve the manufacturing process of fertilizer, impacting global food production and agriculture.
Quantum Computing will positively change the way algorithm-driven High-Frequency Trading takes place in Financial Companies.
Smart City and government
Management of driverless cars through a big city with continuous optimization can only be handled with the massive computational power of Quantum Computers. Transportation and Logistics of the future will be positively impacted by Quantum Computing.
Weather forecasting is expected to improve with the use of Quantum Computers. Energy generation and distribution, and the related calculations for utilization prediction and grid optimization, will function with better accuracy.
Needless to say, Quantum Computing drives performance through better optimization, handling massive data more quickly and effectively.
Areas of AI like unsupervised machine learning and computer vision will become more effective for society with Quantum Computing.
Quantum Computing is expected to disrupt the way present-day cybersecurity, built on Cryptography, functions. Researchers in the field of Quantum Cryptography are busy formulating quantum-ready encryption algorithms. Some algorithms, like McEliece Cryptography, are thought to be able to mitigate the impacts of Quantum Computing.
The underlying premise of cryptography for Blockchain to function has to evolve to remain effective in the era of Quantum Computing. Concepts like Quantum Resistant Ledger are active research areas now.
Conclusion – How should we prepare for a Quantum future?
While global organizations are undertaking AI-first strategies, a few industry sectors like Finance, Logistics, and Biotechnology have already started evaluating the potential impacts of Quantum Computing. The need of the hour is to be aware of the disruptive impact on Cybersecurity and of the transformative impact on business-centric innovation with Quantum Computing.
Review of Short Phrases and Links
This Review contains major "Electron Spin"- related terms, short phrases and links grouped together in the form of Encyclopedia article.
- Electron spin is the electromagnetic field's angular momentum.
- Electron spin is basically a relativistic effect in which the electron's momentum distorts local space and time.
- Electron spin is roughly analogous to the intrinsic spin of the top.
- The electron spin is the key to the Pauli exclusion principle and to the understanding of the periodic system of chemical elements.
- The electron spin is more of an implosion or explosion in higher dimensional space.
- Paramagnetism results from the electron spin of unpaired electrons.
- The hybrid technology, "thermo-spintronics," would convert heat to electron spin.
- Electron spin resonance (ESR) is a related technique which detects transitions between electron spin levels instead of nuclear ones.
- The train of short optical pulses reduces the continuous density of electron spin precession modes to just three frequencies (blue).
- This is illustrated in Figure 3 for the case when the excess electron spin is initialized to.
- In this study, the interaction of TDH with cell membranes was investigated using electron spin resonance (ESR) techniques.
- Figure 1 depicts a storage qubit (an electron spin in a QD) interacting with a traveling qubit (a single photon) inside a quantum network node.
- Such a system is realized by a single electron spin bound in a semiconductor nanostructure and interacting with surrounding nuclear spins.
- Furthermore, this electron spin could explain the necessity for the number of neutrons added to the nucleus with protons.
- This discovery of the relativistic precession of the electron spin led to the understanding of the significance of the relativistic effect.
- In retrospect, the first direct experimental evidence of the electron spin was the Stern-Gerlach experiment of 1922.
- It does not incorporate the electron spin, and, more fundamentally, the time derivative enters this equation in second order.
- The interaction of the electron spin with the magnetic field is of the same order and should be included together with the E2 and M1 terms.
- The exact same result comes from the quantum mechanics of an electron spin in a magnetic field.
- Because the electron spin has only two allowed projections along any axis, we cannot add a third electron to the n = 1 state.
- The term "electron spin" is not to be taken literally in the classical sense as a description of the origin of the magnetic moment described above.
- This splitting is called fine structure and was one of the first experimental evidences for electron spin.
- Uhlenbeck and Goudsmit later identified this degree of freedom as electron spin.
- Goudsmit on the discovery of electron spin.
- George Uhlenbeck and Samuel Goudsmit one year later identified Pauli's new degree of freedom as electron spin.
- Note that the experiment was performed several years before Uhlenbeck and Goudsmit formulated their hypothesis of the existence of the electron spin.
- Electron spin plays an important role in magnetism, with applications for instance in computer memories.
- Therefore he obtained the precession of the point like magnet instead of the electron spin.
- It predicts electron spin and led Dirac to predict the existence of the positron.
- We call this motion the electron spin and treat it quantum mechanically as another kind of angular momentum.
- In 1921, Otto Stern and Walter Gerlach performed an experiment which showed the quantization of electron spin into two orientations.
- The book looks at applications to the electronic structure of atoms including perturbation and variation methods and a study of electron spin.
- Indeed, the explicit unit imaginary in the Dirac equation is automatically identified with the electron spin in the reformulation.
- Samuel Goudsmit (1902 – 1978) was a Dutch -American physicist famous for jointly proposing the concept of electron spin with George Eugene Uhlenbeck.
- Pauli met the train at Hamburg, Germany, to find out Bohr's opinion about the possibility of electron spin.
- He found Pauli and [Otto] Stern waiting for him, wanting to know what he thought of electron spin.
- It is this magnetic moment that is exploited in NMR. Electron spin resonance is a related technique which exploits the spin of electrons instead of nuclei.
- Electron spin resonance is a related technique which exploits the spin of electrons instead of nuclei.
- Each nucleus of spin I splits the electron spin levels into (2I + 1) sublevels.
- Taking electron spin into account, we need a total of four quantum numbers to label a state of an electron in the hydrogen atom: n, l, m, and s_z.
- Classically this could occur if the electron were a spinning ball of charge, and this property was called electron spin.
- Uhlenbeck, along with Samuel Goudsmit, proposed the concept of electron spin, which posits that electrons rotate on an axis.
- Not only is there electron spin that has to be taken into account, but there is also the electron repulsion terms between the electrons.
- These atoms or electrons are said to have unpaired spins which are detected in electron spin resonance.
- When the idea of electron spin was first introduced in 1925, even Wolfgang Pauli had trouble accepting Ralph Kronig's model.
- When Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part thereof.
- The following year, Paul Dirac discovered the fully relativistic theory of electron spin by showing the connection between spinors and the Lorentz group.
- Goudsmit, along with George Uhlenbeck, proposed the concept of electron spin, which posits that electrons rotate on an axis.
- Uhlenbeck and Goudsmit one year later identified this degree of freedom as electron spin.
Switching the Twist in X Rays with Magnets
• Physics 14, 34
Scientists create a pattern of nanomagnets—called an artificial spin ice—that can control the orbital angular momentum of a scattered x-ray beam.
A beam of x rays with a spiral wave front can be used to characterize spin and chiral textures, such as magnetic vortices, hedgehogs, and skyrmions, inside a material. However, generating these twisted x rays isn’t trivial. Over the past two years, scientists have achieved this goal by designing and fabricating extremely precise x-ray optics devices [1, 2]. These devices are passive, meaning that their effect on light is fixed, like that of a lens or a mirror. Now Justin Woods from the University of Kentucky and his colleagues have realized an active device that can control the properties of an x-ray beam on the fly [3]. The team used an engineered nanomagnet array—called an artificial spin ice—that twists x rays by different amounts. By changing the temperature or by using an external magnetic field, the team showed that they could control the amount of twisting and the direction of the outgoing beams. This flexibility could be advantageous for probing or controlling electronic and magnetic systems.
Scientists’ ability to discover and observe fundamental processes in nature has been historically connected to their capability to harness the properties of light. In the early 1960s, the development of the laser and nonlinear optics allowed scientists to exquisitely control the wavelength (energy), polarization (spin angular momentum), wave vector (linear momentum), amplitude, and phase of light. In the 1990s, scientists realized that light beams can also possess a property—the orbital angular momentum (OAM)—which involves the rotation of the beam’s phase around its central axis [4].
The OAM of light is associated with spiral wave fronts of an electromagnetic wave. Different modes exist, distinguished by the “light topological charge,” which corresponds to the number of spirals in the spatial evolution of the phase. OAM is of fundamental interest to the manipulation of light, impacting current and future applications that include optical manipulation of particles, super resolution imaging, optical metrology, optical communications, and quantum computing [5, 6]. Moreover, by extending the OAM of light to shorter wavelengths it should be possible to more sensitively probe chiral and topological structure in matter.
However, imprinting OAM onto ultraviolet and x-ray beams is technically challenging. In the extreme ultraviolet (EUV) region of the electromagnetic spectrum, exciting advances in high harmonic generation have made it possible to generate beams with high spatial and temporal coherence and to tailor their properties by sculpting the driving laser field. These advances have enabled subwavelength imaging as well as the generation of EUV OAM beams [8, 9]. In the x-ray regime, any optical device requires ultraprecise fabrication, to within a fraction of the x-ray wavelength, corresponding to less than an angstrom (10^-10 m).
In their work, Woods and colleagues have engineered and tested a new approach for generating x-ray beams carrying OAM [3]. They fabricated a device based on an artificial spin ice, which consisted of nickel-iron nanomagnets arranged in a two-dimensional lattice. The team designed their spin-ice array so that it contained a specific topological defect consisting of a double dislocation, which looks like a hole in the strings of a tennis racket. The fundamental working principle of their artificial spin ice can be understood as a “fork-dislocation hologram,” which is a slightly warped diffraction grating that has been used since the 1990s to generate visible OAM beams [10].
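A fork-dislocation hologram is simple to sketch numerically. The short NumPy toy below (my own illustration, unrelated to the spin-ice fabrication itself) interferes a tilted plane wave with a charge-l vortex; the resulting grating has l extra "tines" at its center, and such a pattern diffracts an ordinary beam into OAM-carrying orders:

```python
import numpy as np

n, l, cycles = 256, 2, 10       # grid size, topological charge, grating lines
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
phi = np.arctan2(y, x)

# Interference of a tilted plane wave with an e^(i*l*phi) vortex gives
# the classic "fork" transmission pattern (values between 0 and 1).
grating = 0.5 * (1.0 + np.cos(2 * np.pi * cycles * x + l * phi))

print(grating.shape, float(grating.min()), float(grating.max()))
```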
Interestingly, the defect in the artificial spin ice had two separate scattering effects on an incoming x-ray beam. The double dislocation in the arrangement of electric charges had a “structural topological charge” of 2, which meant it generated diffraction modes with even-order OAM (light topological charge of ±2, ±4, etc.). By contrast, the dislocation in the magnetic spins had a topological charge of 1, which meant it generated odd-order OAM modes (light topological charge of ±1, ±3, etc.) in x rays that were resonant with an absorption line of iron. However, this magnetic scattering was only possible when the artificial spin ice had an antiferromagnetic (antiparallel) ordering of its spins. Thus, the researchers could turn off this scattering by inducing an antiferromagnetic-to-paramagnetic phase transition. This transition was dependent on the temperature and on the external magnetic field. The team showed that they could switch the OAM of the outgoing x-ray beam from a mix of odd and even orders to only even orders, either by increasing the temperature from 270 K to 380 K or by applying a magnetic field.
The use of artificial spin ice with structural topological defects offers new capabilities for x-ray OAM beam generation. Adaptive optical components that can control the OAM properties of x-ray light can enhance our ability to probe and image chiral and topological nanostructures, chiral molecules used in medicines, and magnetic and other materials relevant to nanotechnologies. More opportunities to manufacture other types of active x-ray devices will come from emerging nanomaterials and nanodevices that are reconfigurable or controllable through phase transitions. In combination with devices that create structured EUV and soft x-ray beams through extreme nonlinear optics, these advances can greatly expand the capabilities of structured, short-wavelength light for capturing the time-resolved dynamics and functions of topological structures.
- J. C. T. Lee et al., “Laguerre–Gauss and Hermite–Gauss soft x-ray states generated using diffractive optics,” Nat. Photon. 13, 205 (2019).
- L. Loetgering et al., “Generation and characterization of focused helical x-ray beams,” Sci. Adv. 6, eaax8836 (2020).
- J. S. Woods et al., “Switchable x-ray orbital angular momentum from an artificial spin ice,” Phys. Rev. Lett. 126, 117201 (2021).
- L. Allen et al., “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45, 8185 (1992).
- Y. Shen et al., “Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities,” Light Sci. Appl. 8, 90 (2019).
- B. Wang et al., “Coherent Fourier scatterometry using orbital angular momentum beams for defect detection,” Opt. Express 29, 3342 (2021).
- D. F. Gardner et al., “Subwavelength coherent imaging of periodic samples using a 13.5 nm tabletop high-harmonic light source,” Nat. Photon. 11, 259 (2017).
- C. Hernández-García et al., “Attosecond extreme ultraviolet vortices from high-order harmonic generation.,” Phys. Rev. Lett. 111, 083602 (2013).
- L. Rego et al., “Generation of extreme-ultraviolet beams with time-varying orbital angular momentum,” Science 364, eaaw9486 (2019).
- V. Y. Bazhenov et al., “Laser beams with screw dislocations in their wavefronts,” JETP Lett. 52, 429 (1990).
We use physics every day. The laws of motion and momentum govern our movements, and the law of gravity keeps us from floating away — but what about quantum physics? We hear the words used in popular media — it’s mentioned repeatedly in shows like “The Big Bang Theory,” but what do they actually mean? Let’s take a closer look at the field of quantum physics, and how scientists are able to study it.
What Is Quantum Physics?
Quantum physics is similar to standard physics. Classic physics focuses on ordinary nature, things we can see and touch without the need for additional tools. Think of Newton’s apple, when he allegedly discovered the theory of gravity.
Quantum physics is based on a theory called quantization, the process of moving from familiar, understood physical phenomena, like Newton's apple, to things we can't see or touch. In essence, quantum physics is the science of the smallest particles in the universe and how they interact with the things around them. Quantum physicists study subatomic particles (photons, electrons, neutrons, quarks, etc.), but how can you study something you can't see?
Quantum physics, also known as quantum mechanics, made an appearance in the scientific communities in the early 1900s when Albert Einstein published his theory of relativity. However, this field can’t be attributed to any one scientist.
In 1900, a physicist named Max Planck found himself facing a dilemma. According to the laws of physics at the time, if a box was heated up in an environment where no light could escape, it would produce an infinite amount of ultraviolet radiation. At the time, scientists assumed light was a continuous wave. When heating the box didn’t work as they predicted, Planck started to think that light didn’t exist as a wave, but rather as small amounts of energy known as quanta.
He was right. Einstein later theorized that light existed as individual particles, which in 1926 were named photons.
Studying the Universe’s Smallest Particles
How can you study something that is too small for even the most powerful microscope to see? The technology actually dates back to the early 1800s, during the discovery and development of the periodic table. Our first glimpse into subatomic particles didn't come from physics, but rather from chemistry. The first subatomic particle we discovered was the electron, thanks to the discharge effects of electricity in some gases. Then came the proton, the atomic nucleus, and the neutron.
The 1930s brought us the first particle accelerators, and while they were not as high-tech or advanced as the ones we use today, they enabled scientists of the time to accelerate proton beams and measure the size of an atom’s nucleus. Today’s accelerators work on the same principles, producing a beam of charged particles scientists can use to study other subatomic components. They can detect them directly or discover their presence because of the reaction of the charged particles.
The Quantum Uncertainty Principle
Of course, nothing is ever easy in quantum physics. In 1927, Werner Heisenberg of Germany theorized that it is impossible to measure both the position and the velocity of an object at the same time. This theory later became known as the Quantum or Heisenberg Uncertainty Principle, and is one of the foundations of modern quantum mechanics.
It doesn’t work for items we can see. You can easily tell the velocity and position of an apple falling from a tree (it accelerates at 9.8 meters per second squared, per the law of gravity), but it’s not as easy to determine either of these things when you’re talking about a particle that’s impossible to view with the naked eye.
Remember Schrödinger’s thought experiment, in which a cat was in a box with poison? The cat is both alive and dead until it is observed to be one or the other. That applies to the Heisenberg Uncertainty Principle as well. Any attempt to measure the velocity or position of a subatomic particle will affect both measurements in such a way that no exact analysis is possible. The mere act of observation changes the outcome of the experiment.
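For readers who like to see the principle in numbers, here is a hedged NumPy sketch (units chosen so the reduced Planck constant is 1): it builds a Gaussian wave packet, measures its position spread directly and its momentum spread via a Fourier transform, and finds the product sitting at the Heisenberg bound of 1/2, which Gaussian packets saturate.

```python
import numpy as np

n, L, sigma = 4096, 200.0, 1.7
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

# Gaussian wave packet, normalized so the probabilities integrate to 1.
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)

prob_x = np.abs(psi) ** 2 * dx
spread_x = np.sqrt((prob_x * x**2).sum() - (prob_x * x).sum() ** 2)

# Momentum-space distribution via FFT (hbar = 1).
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
prob_p = np.abs(np.fft.fftshift(np.fft.fft(psi))) ** 2
prob_p /= prob_p.sum()
spread_p = np.sqrt((prob_p * p**2).sum() - (prob_p * p).sum() ** 2)

print(spread_x * spread_p)  # ~0.5, i.e. the bound hbar/2 with hbar = 1
```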
This is what makes quantum physics so challenging as a field. Anything we learn is colored by the act of learning it — but that doesn’t mean we haven’t made any significant discoveries.
Recent Discoveries in Quantum Physics
Quantum physics has taken off in recent years. 2018, in particular, was a phenomenal year for scientific advancements. Scientists trying to create quantum computers managed to pack 18 qubits of information into six photons. We’ve discovered that life on this planet may rely on some form of quantum entanglement, with particles linked together at a subatomic level.
We’ve found that there are actually two types of water molecules: one where the nuclear spins of the two hydrogen atoms point in the same direction (ortho water), and one where they point in opposite directions (para water). Military radar technology may even be getting an upgrade thanks to quantum mechanics. By using entangled photons, scientists hope to create a stealth-busting radar that will notify them if they are being tampered with or encountering problems, based on the readings generated by photons back at base.
This is just a fraction of the amazing discoveries we’ve made in the last year alone.
The Future of Quantum Physics
We’ve barely scratched the surface of the quantum universe, and as new discoveries trickle in, they’re likely to alter our understanding of everything, from science to life itself. It’s an exciting time to be alive, and we can’t wait to see what new advances are on the horizon.
For a scientist whose career was made by his work on black holes, it might seem a little confusing to read that Stephen Hawking now thinks that they don’t exist. But that’s what “Information Preservation and Weather Forecasting for Black Holes,” the study Hawking published last week on arXiv, says: “there are no black holes.”
While this might seem surprising–after all, there’s a huge amount of (indirect) evidence that black holes exist, including a massive one several million times the mass of our Sun at the centre of the Milky Way—it’s really not. It’s Hawking’s latest attempt to solve a paradox that he, and other astrophysicists, have been grappling with for a couple of years.
So what’s he talking about? Here’s the background: black holes are objects which are so massive, with such strong gravity, that even light can’t escape. The distance from the black hole, beyond which nothing gets out, is the event horizon. However, Hawking made his name in the 1970s when he published a paper showing that black holes don’t just suck stuff up, endlessly—they spew out a beam of so-called “Hawking radiation” as they absorb other matter. That means black holes actually lose mass over time, eventually whittling away to nothing.
Black holes are frustrating, though, because their extreme gravity exposes the major inadequacy in our current scientific understanding of the universe - we don’t know how to reconcile quantum mechanics and general relativity. With general relativity, we can make accurate predictions about objects with certainty, but on the tiny scale of quantum mechanics it’s only possible to talk about the behaviour of objects in terms of probability. When we do the maths on what happens to things that fall into black holes, using relativity gives results that break quantum mechanics; the same goes vice versa.
One of the key things about quantum mechanics is that it tells us information can’t be destroyed–that is, if you measure the radiation given off by a black hole, you should be able to build up a picture of what matter fell into the hole to create it. However, if general relativity holds, and nothing can escape from inside the event horizon, then that should apply to that quantum information–any radiation that’s coming out is, Hawking showed, random. It’s the black hole “information paradox.” Either give up quantum mechanics, or accept that information can die.
Hawking was in the “information can die” camp until 2004, when it became clear—thanks to string theory—that quantum mechanics held up (there’s an excellent in-depth explanation in Nature that explores this story more fully, if you’re interested). There was just one problem—nobody could work out *how* information was getting out of black holes, even if the maths said it must be happening.
And, just in case this wasn’t all confusing enough, it turns out that our best post-2004 theory about what’s been going on gives rise to an entirely new paradox—the “firewall.”
It’s to do with quantum entanglement, where two particles are created whose quantum states are linked. The way it works isn’t exactly clear yet—it could be something to do with string theory and wormholes—but it means that measuring the properties of one particle will give readings that mirror those found on its entangled partner. It might one day lead to teleportation technology, but scientists aren’t sure yet.
Joseph Polchinski from the Kavli Institute for Theoretical Physics in Santa Barbara, California published a paper in 2012 that worked out the information paradox could be solved if Hawking radiation was quantum entangled with the stuff falling in. But, due to the limitations of entanglement, if this is true, that would mean that at the event horizon a massive amount of energy was given off by particles entering and leaving.
Hence “firewall”—anything crossing the event horizon would be burnt to a crisp. And even though most scientists, including Polchinski, thought this couldn’t possibly be right—it completely contradicts a lot of the stuff underlying general relativity, for example—nobody’s yet managed to disprove it.
The choice for physicists, once again, was to: a) accept the firewall, and throw out general relativity, or b) accept that information dies in black holes, and quantum mechanics is wrong.
Still with me? Here’s where Hawking’s latest paper comes in.
(That title—“Information Preservation and Weather Forecasting for Black Holes”—might make some more sense too, hopefully.)
Hawking’s proposed solution, building on an idea first floated in 2005, is that the event horizon isn’t as defined as we’ve come to imagine it. He instead proposes something called an “apparent horizon,” which light and other stuff can escape from:
"The absence of event horizons mean that there are no black holes—in the sense of regimes from which light can't escape to infinnity. There are however apparent horizons which persist for a period of time."
Black holes should be treated more like massive galactic washing machines. Stuff falls in and starts getting tossed around, mixed up with other stuff in there, and only eventually is allowed to escape out again. This happens because the quantum effects around a black hole, like weather on Earth, churn so violently and unpredictably that it’s just impossible to either predict the position of an event horizon or expect uniform effects for stuff crossing it. In principle the information is preserved, but in practice it emerges so scrambled that reconstructing it is about as hopeless as forecasting the weather far in advance.
It’s a fudge of an idea, which tries to have its general relativity and quantum mechanics cakes, and eat them, too. Possible weaknesses, as Nature points out, are that it could imply that escaping from black holes is easier than it is in reality. It could also be that apparent horizons are just as much of a firewall as the traditional conception of an event horizon. Hawking’s peers have yet to have a go at assessing his idea, so we’ll have to wait to see whether it has merit—or whether it merely gives rise to yet more paradoxes.
This piece first appeared on newstatesman.com.
Physicists at MIT and elsewhere have observed evidence of Majorana fermions — particles that are theorized to also be their own antiparticle — on the surface of a common metal: gold. This is the first sighting of Majorana fermions on a platform that can potentially be scaled up. The results, published in the Proceedings of the National Academy of Sciences, are a major step toward isolating the particles as stable, error-proof qubits for quantum computing.
In particle physics, fermions are a class of elementary particles that includes electrons, protons, neutrons, and quarks, all of which make up the building blocks of matter. For the most part, these particles are considered Dirac fermions, after the English physicist Paul Dirac, who first predicted that all fermionic fundamental particles should have a counterpart, somewhere in the universe, in the form of an antiparticle — essentially, an identical twin of opposite charge.
In 1937, the Italian theoretical physicist Ettore Majorana extended Dirac’s theory, predicting that among fermions, there should be some particles, since named Majorana fermions, that are indistinguishable from their antiparticles. Mysteriously, the physicist disappeared during a ferry trip off the Italian coast just a year after making his prediction. Scientists have been looking for Majorana’s enigmatic particle ever since. It has been suggested, but not proven, that the neutrino may be a Majorana particle. On the other hand, theorists have predicted that Majorana fermions may also exist in solids under special conditions.
Now the MIT-led team has observed evidence of Majorana fermions in a material system they designed and fabricated, which consists of nanowires of gold grown atop a superconducting material, vanadium, and dotted with small, ferromagnetic “islands” of europium sulfide. When the researchers scanned the surface near the islands, they saw signature signal spikes near zero energy on the very top surface of gold that, according to theory, should only be generated by pairs of Majorana fermions.
“Majorana fermions are these exotic things, that have long been a dream to see, and we now see them in a very simple material — gold,” says Jagadeesh Moodera, a senior research scientist in MIT’s Department of Physics, and a member of MIT’s Plasma Science and Fusion Center. “We’ve shown they are there, and stable, and easily scalable.”
“The next push will be to take these objects and make them into qubits, which would be huge progress toward practical quantum computing,” adds co-author Patrick Lee, the William and Emma Rogers Professor of Physics at MIT.
Lee and Moodera’s coauthors include former MIT postdoc and first author Sujit Manna (currently on the faculty at the Indian Institute of Technology at Delhi), and former MIT postdoc Peng Wei of University of California at Riverside, along with Yingming Xie and Kam Tuen Law of the Hong Kong University of Science and Technology.
If they could be harnessed, Majorana fermions would be ideal as qubits, or individual computational units for quantum computers. The idea is that a qubit would be made of combinations of pairs of Majorana fermions, each of which would be separated from its partner. If noise errors affect one member of the pair, the other should remain unaffected, thereby preserving the integrity of the qubit and enabling it to correctly carry out a computation.
Scientists have looked for Majorana fermions in semiconductors, the materials used in conventional, transistor-based computing. In their experiments, researchers have combined semiconductors with superconductors — materials through which electrons can travel without resistance. This combination imparts superconductive properties to conventional semiconductors, which physicists believe should induce particles in the semiconductor to split, forming the pair of Majorana fermions.
“There are several material platforms where people believe they’ve seen Majorana particles,” Lee says. “The evidence is stronger and stronger, but it’s still not 100 percent proven.”
What’s more, the semiconductor-based setups to date have been difficult to scale up to produce the thousands or millions of qubits needed for a practical quantum computer, because they require growing very precise crystals of semiconducting material and it is very challenging to turn these into high-quality superconductors.
About a decade ago, Lee, working with his graduate student Andrew Potter, had an idea: Perhaps physicists might be able to observe Majorana fermions in metal, a material that readily becomes superconductive in proximity with a superconductor. Scientists routinely make metals, including gold, into superconductors. Lee’s idea was to see if gold’s surface state — its very top layer of atoms — could be made to be superconductive. If this could be achieved, then gold could serve as a clean, atomically precise system in which researchers could observe Majorana fermions.
Lee proposed, based on Moodera’s prior work with ferromagnetic insulators, that if such an insulator were placed atop a superconductive surface state of gold, then researchers should have a good chance of clearly seeing signatures of Majorana fermions.
“When we first proposed this, I couldn’t convince a lot of experimentalists to try it, because the technology was daunting,” says Lee, who eventually partnered with Moodera’s experimental group to secure crucial funding from the Templeton Foundation to realize the design. “Jagadeesh and Peng really had to reinvent the wheel. It was extremely courageous to jump into this, because it’s really a high-risk, but we think a high-payoff, thing.”
Over the last few years, the researchers have characterized gold’s surface state and proved that it could work as a platform for observing Majorana fermions, after which the group began fabricating the setup that Lee envisioned years ago.
They first grew a sheet of superconducting vanadium, on top of which they overlaid a layer of gold nanowires measuring about 4 nanometers thick. They tested the conductivity of gold’s very top layer, and found that it did, in fact, become superconductive in proximity with the vanadium. They then deposited over the gold nanowires “islands” of europium sulfide, a ferromagnetic material that provides the internal magnetic fields needed to create the Majorana fermions.
The team then applied a tiny voltage and used scanning tunneling microscopy, a specialized technique that enabled the researchers to scan the energy spectrum around each island on gold’s surface.
Moodera and his colleagues then looked for a very specific energy signature that only Majorana fermions should produce, if they exist. In any superconducting material, electrons travel through at certain energy ranges. There is, however, a desert, or “energy gap,” where there should be no electrons. If there is a spike inside this gap, it is very likely a signature of Majorana fermions.
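To make that signature concrete, here is a toy sketch (with purely illustrative numbers, not the team’s actual analysis code) of how one might flag a zero-energy spike sitting inside an otherwise empty superconducting gap in a conductance spectrum:

```python
import numpy as np

# Toy dI/dV spectrum: a superconducting gap (suppressed conductance)
# plus an illustrative zero-bias peak of the kind Majorana pairs produce.
bias_mv = np.linspace(-2.0, 2.0, 401)        # bias voltage sweep (meV)
gap_mv = 1.0                                 # assumed gap half-width
background = np.where(np.abs(bias_mv) > gap_mv, 1.0, 0.05)
zero_bias_peak = 0.8 * np.exp(-((bias_mv / 0.1) ** 2))
didv = background + zero_bias_peak

# Flag a spike at zero energy *inside* the gap, where a conventional
# superconductor should show almost no states.
in_gap = np.abs(bias_mv) < 0.5 * gap_mv
ratio = didv[in_gap].max() / background[in_gap].mean()
print(f"in-gap peak ratio: {ratio:.1f}x")    # >> 1 hints at a zero mode
```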
Looking through their data, the researchers observed spikes inside this energy gap on opposite ends of several islands along the direction of the magnetic field, that were clear signatures of pairs of Majorana fermions.
“We only see this spike on opposite sides of the island, as theory predicted,” Moodera says. “Anywhere else, you don’t see it.”
“In my talks, I like to say that we are finding Majorana, on an island in a sea of gold,” Lee adds.
Moodera says the team’s setup, requiring just three layers — gold sandwiched between a ferromagnet and a superconductor — is an “easily achievable, stable system” that should also be economically scalable compared to conventional, semiconductor-based approaches to generate qubits.
“Seeing a pair of Majorana fermions is an important step toward making a qubit,” Wei says. “The next step is to make a qubit from these particles, and we now have some ideas for how to go about doing this.”
This research was funded, in part, by the John Templeton Foundation, the U.S. Office of Naval Research, the National Science Foundation, and the U.S. Department of Energy.
Classical computing is built on the power of the bit, which is, in essence, a micro transistor on a chip that can be either on or off, representing a 1 or a 0 in binary code. The quantum computing equivalent is the qubit. Unlike bits, qubits can exist in more than one “state” at a time, enabling quantum computers to perform certain computational tasks exponentially faster than classical computers can.
To date, most efforts to build quantum computers have relied on qubits created in superconducting wires chilled to near absolute zero or on trapped ions held in place by lasers. But those approaches face certain challenges, most notably that the qubits are highly sensitive to environmental factors. As the number of qubits increases, those factors are more likely to compound and interrupt the entanglement of qubits required for a quantum computer to work.
Another approach, developed more recently, is to use a photon as an optical qubit to encode quantum information and to integrate the components necessary for that process into a photonic integrated circuit (PIC). Galan Moody, an assistant professor in the UC Santa Barbara College of Engineering’s Department of Electrical and Computer Engineering (ECE), has received a Defense University Research Instrumentation Program (DURIP) Award from the U.S. Department of Defense and the Air Force Office of Scientific Research to build a quantum photonic computing testbed. He will conduct his research in a lab set aside for such activity in recently completed Henley Hall, the new home of the College of Engineering’s Institute for Energy Efficiency (IEE).
The grant supports the development or acquisition of new instrumentation to be used in fundamental and applied research across all areas of science and engineering. “My field is quantum photonics, so we’re working to develop new types of quantum light sources and ways to manipulate and detect quantum states of light for use in such applications as quantum photonic computing and quantum communications,” Moody said.
“At a high level,” he explained, the concept of quantum photonic computing is “exactly the same as what Google is doing with superconducting qubits or what other companies are doing with trapped ions. There are a lot of different platforms for computing, and one of them is to use photonic integrated circuits to generate entangled photons, entanglement being the foundation for many different quantum applications.”
To place an entire quantum photonics system onto a chip measuring about one square centimeter would be a tremendous achievement. Fortunately, the well-developed photonics infrastructure — including AIM Photonics, which has a center at UCSB led by ECE professor and photonics pioneer John Bowers, also director of the IEE — lends itself to that pursuit and to scaling up whatever quantum photonics platform is most promising. Photonics for classical applications is a mature technology industry that, Moody said, “has basically mastered large-scale and wafer-scale fabrication of devices.”
It is reliable, so whatever Moody and his team design, they can fabricate themselves or even order from foundries, knowing they will get exactly what they want.
The Photonic Edge
The process of creating photonic qubits begins with generating high-quality single photons or pairs of entangled photons. A qubit can then be defined in several different ways, most often in the photon’s polarization (the orientation of the optical wave) or in the path that the photons travel. Moody and his team can create PICs that control these aspects of the photons, which become the carriers of quantum information and can be manipulated to perform logic operations.
The approach has several advantages over other methods of creating qubits. For instance, the aforementioned environmental effects that can cause qubits to lose their coherence do not affect coherence in photons, which, Moody says, “can maintain that entanglement for a very long time. The challenge is not coherence but, rather, getting the photons to become entangled in the first place.”
“That,” Moody notes, “is because photons don’t naturally interact; rather, they pass right through each other and go their separate ways. But they have to interact in some way to create an entangled state. We’re working on how to create PIC-based quantum light sources that produce high-quality photons as efficiently as possible and then how to get all the photons to interact in a way that allows us to build a scalable quantum processor or new devices for long-distance quantum communications.”
Quantum computers are super efficient, and the photonics approach to quantum technologies promises to be even more so. When Google “demonstrated quantum supremacy” in fall 2019 using the quantum computer built in its Goleta laboratory under the leadership of UCSB physics professor John Martinis, the company claimed that its machine, named Sycamore, could do a series of test calculations in 200 seconds that a supercomputer would need closer to 10,000 years to complete. Recently, a Chinese team using a laboratory-scale tabletop experiment claimed that, with a photon-based quantum processor, “You could do in two hundred seconds what would take a supercomputer 2.5 billion years to accomplish,” Moody said.
Another advantage is that photonics is naturally scalable to thousands and, eventually, millions of components, which can be done by leveraging the wafer-scale fabrication technologies developed for classical photonics. Today, the most advanced PICs comprise nearly five thousand components and could be expanded by a factor of two or four with existing fabrication technologies, a stage of development comparable to that of digital electronics in the 1960s and 1970s. “Even a few hundred components are enough to perform important quantum computing operations with light, at least on a small scale between a few qubits,” said Moody. With further development, quantum photonic chips can be scaled to tens or hundreds of qubits using the existing photonics infrastructure.
Moody’s team is developing a new materials platform, based on gallium arsenide and silicon dioxide, to generate single and entangled photons, and it promises to be much more efficient than comparable systems. In fact, they have a forthcoming paper showing that their new quantum light source is nearly a thousand times more efficient than any other on-chip light source.
In terms of the process, Moody says, “At the macro level, we work on making better light sources and integrating many of them onto a chip. Then, we combine these with on-chip programmable processors, analogous to electronic transistors used for classical logic operations, and with arrays of single-photon detectors to try to implement quantum logic operations with photons as efficiently as possible.”
For more accessible applications, like communications, no computing need occur. “It involves taking a great light source and manipulating a property of the photon states (such as polarization), then sending those off to some other chip that’s up in a satellite or in some other part of the world, which can measure the photons and send a signal back that you can collect,” Moody said.
One catch, for now, is that the single-photon detectors, which are used to signal whether the logic operations were performed, work with very high efficiency when they are on the chip; however, some of them work only if the chip is cooled to cryogenic temperatures.
“If we want to integrate everything on chip and put detectors on chip as well, then we’re going to need to cool the whole thing down,” Moody said. “We’re going to build a setup to be able to do that and test the various quantum photonic components designed and fabricated for this. The DURIP award enables exactly this: developing the instrumentation to be able to test large-scale quantum photonic chips from cryogenic temperatures all the way up to room temperature.”
There are also challenges associated with cooling the chip to cryogenic temperatures. Said Moody, “It’s getting this whole platform up and running, interfacing the instrumentation, and making all the custom parts we need to be able to look at large-scale photonic chips for quantum applications at cryogenic temperatures.”
A quantum computer is based on phenomena such as superposition and entanglement and uses such phenomena to perform operations on data. While binary digital electronic computers are based on transistors, quantum-mechanical phenomena form the basis of quantum computers.
Theoretically, large-scale quantum computers would be far more efficient and faster at solving certain problems than any classical computer. For example, Shor’s algorithm, a quantum algorithm for integer factorization, would factor large numbers on a quantum computer far faster than the best-known classical methods. Another example where quantum computers would reign supreme is the simulation of quantum many-body systems. Quantum algorithms, such as Simon’s algorithm, run exponentially faster than any possible probabilistic classical algorithm. It’s important to note that, in principle, a classical computer could simulate a quantum algorithm, because quantum computation does not violate the Church–Turing thesis. To do so, however, a classical computer would require an inordinate amount of resources. Quantum computers, by contrast, might efficiently solve problems that are not practically solvable by classical computers.
A quantum computer uses quantum states to represent many bit configurations simultaneously, which is what promises an exponential increase in speed and power for certain problems. Enormous, complex problems usually requiring a massive amount of resources and time could be solved in a reasonable amount of time. This is highly beneficial for IoT data that requires heavy computation and complex optimization. In drug discovery, for example, trillions of combinations of amino acids are examined to find a single elusive protein.
On the quantum level, you’re able to program the atoms to represent all possible input combinations simultaneously. Loosely speaking, when you run an algorithm, all possible input combinations are explored at once (strictly, a measurement still returns only one outcome, so quantum algorithms must arrange for the amplitudes to interfere in a way that makes correct answers likely). With a regular computer, you’d have to serially cycle through every possible input combination to arrive at your solution. For the most complex problems, solving them this way would take longer than the age of the universe.
For certain types of problems, quantum computers can provide an exponential speed boost; for others, the gain is smaller but still dramatic. Quantum database search via Grover’s algorithm is the best-known example of the latter, offering a quadratic speedup.
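To see what that quadratic speedup buys in practice, here is a back-of-the-envelope sketch (the item counts are arbitrary examples): a classical scan of N unsorted items needs about N/2 lookups on average, while Grover’s algorithm needs roughly (π/4)·√N oracle calls.

```python
import math

# Approximate query counts for unstructured search over N items.
for n in (10**6, 10**9, 10**12):
    classical = n // 2                               # average classical scan
    grover = math.ceil(math.pi / 4 * math.sqrt(n))   # Grover oracle calls
    print(f"N = {n:>14,}: classical ~{classical:,}, Grover ~{grover:,}")
```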
Besides factorization and discrete logarithms, quantum algorithms offer more than polynomial speedup over the best-known classical algorithms. Simulation of quantum physical processes in solid state physics as well as chemistry, approximation of Jones polynomials, and solving Pell’s equation are some well-known examples.
A composite system is always expressible as a sum or superposition of products of states of local constituents.
Binary Bits vs. Quantum Qubits
With quantum computers, information is not held in individual units but rather in the system as a whole. The system can exist in two states at the same time. This is courtesy of the superposition principle of quantum mechanics. This “qubit” can store a “0” and “1” simultaneously. If you build a system comprising two qubits, it can hold four values at once — 00, 01, 10, and 11.
Quantum computers differ from digital computers, which use the binary system and are based on transistors. Digital computing requires data encoded into bits, where a bit can be only in one exclusive state (0 or 1). Quantum computation, on the other hand, uses quantum bits (qubits), which need not be in an exclusive state; rather, the qubits can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, also known as the universal quantum computer. The major groundwork in the field of quantum computing was done by Paul Benioff and Yuri Manin in 1980, Richard Feynman in 1982, and David Deutsch in 1985. Even earlier, in 1968, a quantum computer with spins as qubits was formulated for use as a quantum space–time.
A classical computer has bits for memory, where each bit represents either a 0 or 1. A quantum computer, instead, has a sequence of qubits. A single qubit can represent 0, 1, or any quantum superposition of those two qubit states. Similarly, a pair of qubits can be in any quantum superposition of four states, and three qubits in any superposition of eight states. Generalizing, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A classical computer, in contrast, can exclusively be in just one of these 2^n states at a time.
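That exponential growth is easy to make concrete: simulating n qubits on a classical machine means storing 2^n complex amplitudes, one per basis state. A quick sketch (the qubit counts are just illustrations):

```python
# One complex128 amplitude (16 bytes) per basis state of an n-qubit register.
for n in (2, 10, 30, 50):
    states = 2**n
    gib = states * 16 / 2**30
    print(f"{n:>2} qubits -> {states:,} amplitudes (~{gib:,.3f} GiB)")
```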
A quantum computer operates on qubits. The qubits are prepared in a controlled initial state representing the specific problem at hand. Subsequently, a precise sequence of quantum logic gates is used to manipulate these qubits. The quantum algorithm is the sequence of gates to be applied to solve the problem. When a measurement is done, the qubit system collapses into a classical state, where each qubit is 0 or 1. Thus, the outcome can at most be n classical bits of information. An important aspect of quantum algorithms is their probabilistic nature: they return a correct solution only with a certain known probability.
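Here is a minimal state-vector sketch of that “prepare, apply gates, measure” cycle (a simplified illustration, not production quantum software): a Hadamard followed by a CNOT entangles two qubits, and sampling the final amplitudes collapses them to classical bits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit state vector, starting in |00>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])       # control = first qubit

# The "algorithm": H on the first qubit, then CNOT, entangling the pair.
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state                                # now (|00> + |11>)/sqrt(2)

# Measurement collapses to a classical bit string with probability |amp|^2.
probs = np.abs(state) ** 2
outcome = rng.choice(4, p=probs)
print(f"measured |{outcome:02b}> with probability {probs[outcome]:.2f}")
```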
A particle having spin states “up” and “down” (usually written |↓⟩ and |↑⟩, or |0⟩ and |1⟩) can be considered an example of an implementation of qubits in a quantum computer. But in fact, any system possessing an observable quantity A, which is conserved under time evolution such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.
Qubits are not just the particles themselves. In addition to the controlled particles, qubits also have the means of control, such as the devices that trap particles and switch them between different states.
Research on Quantum Computers
As of 2017, the development of actual quantum computers is rapidly gaining pace. Advances are being made in both practical and theoretical research. National governments and military agencies are taking a deep interest in, and funding, quantum computing research. Once developed, quantum computers could be employed for a variety of civilian, business, trade, environmental, and national-security purposes.
Learn more: https://amyxinternetofthings.com/
In quantum teleportation, the properties of quantum entanglement are used to send a spin state (qubit) between observers without physically moving the involved particle. The particles themselves are not really teleported, but the state of one particle is destroyed on one side and extracted on the other side, so the information that the state encodes is communicated. The process is not instantaneous, because information must be communicated classically between observers as part of the process. The usefulness of quantum teleportation lies in its ability to send quantum information over arbitrarily large distances without exposing quantum states to thermal decoherence from the environment or other adverse effects.
Although quantum teleportation can in principle be used to actually teleport macroscopic objects (in the sense that two objects in exactly the same quantum state are identical), the number of entangled states necessary to accomplish this is well outside anything physically achievable, since maintaining such a massive number of entangled states without decohering is a difficult problem. Quantum teleportation is, however, vital to the operation of quantum computers, in which manipulation of quantum information is of paramount importance. Quantum teleportation may eventually assist in the development of a "quantum internet" that would function by transporting information between local quantum computers using quantum teleportation.
Below is a sketch of an algorithm for teleporting quantum information. Suppose Alice has state C, which she wants to send to Bob. To achieve this, Alice and Bob should follow the sequence of steps:
1) Generate an entangled pair of electrons with spin states A and B, in a particular Bell state:

|Ψ_0⟩_AB = (1/√2)(|↑⟩_A|↑⟩_B + |↓⟩_A|↓⟩_B)
Separate the entangled electrons, sending A to Alice and B to Bob.
2) Alice measures the "Bell state" (described below) of A and C, entangling A and C.
3) Alice sends the result of her measurement to Bob via some classical method of communication.
4) Bob measures the spin of state B along an axis determined by Alice's measurement, recovering the original state C.
Since step 3 involves communicating via some classical method, the information in the entangled state must respect causality. Relativity is not violated because the information cannot be communicated faster than the classical communication in step 3 can be performed, which is sub-lightspeed.
The idea of quantum teleportation, which can be seen in the mathematics below, is that Alice's measurement disentangles A and B and entangles A and C. Depending on what particular entangled state Alice sees, Bob will know exactly how B was disentangled, and can manipulate B to take the state that C had originally. Thus the state C was "teleported" from Alice to Bob, who now has a state that looks identical to how C originally looked. It is important to note that state C is not preserved in the processes: the no-cloning and no-deletion theorems of quantum mechanics prevent quantum information from being perfectly replicated or destroyed. Bob receives a state that looks like C did originally, but Alice no longer has the original state C in the end, since it is now in an entangled state with A.
Which of the following is true of quantum teleportation?
1) Quantum information is transferred between states
2) The teleported particle is physically transferred between locations
3) A quantum state is cloned between observers
4) Quantum information is permanently removed from the system
As a review, recall the Pauli matrices:

σ_1 = σ_x = [0 1; 1 0],  σ_2 = σ_y = [0 −i; i 0],  σ_3 = σ_z = [1 0; 0 −1]

The spin operators along each axis are defined as ħ/2 times each of σ_x, σ_y, σ_z for the x, y, z axes respectively.

These Pauli matrices, together with the identity σ_0 = 1, are used to construct the Bell states, an orthonormal basis (up to overall phases) of entangled states for the tensor product space of spin-1/2 particles:

|Ψ_i⟩ = (1 ⊗ σ_i)|Ψ_0⟩ for i = 0, 1, 2, 3, where |Ψ_0⟩ = (1/√2)(|↑↑⟩ + |↓↓⟩)
Measurements that project tensor products of spin states onto the Bell basis are called Bell measurements.
Now, follow the algorithm sketched in the previous section. Suppose Alice has state C, which she wants to send Bob. State C can be written in the most general form:

|C⟩ = α|↑⟩ + β|↓⟩

with α and β normalized complex constants, |α|² + |β|² = 1.

1) Generate an entangled pair of electrons A and B in the Bell state:

|Ψ_0⟩_AB = (1/√2)(|↑⟩_A|↑⟩_B + |↓⟩_A|↓⟩_B)

The state of the full system of three particles is therefore |C⟩ ⊗ |Ψ_0⟩_AB. This is a product state between entangled pair AB and non-entangled C.
2) Alice measures the Bell state of AC, entangling A and C while disentangling B. The process of measuring the Bell state projects a non-entangled state into an entangled state, since all four Bell states are entangled.
Expanding Alice’s full original state, she starts with:

|C⟩ ⊗ |Ψ_0⟩_AB = (α|↑⟩_C + β|↓⟩_C) ⊗ (1/√2)(|↑⟩_A|↑⟩_B + |↓⟩_A|↓⟩_B)

Multiplying out the states and changing to the Bell basis of A and C, this state can be rewritten:

|C⟩ ⊗ |Ψ_0⟩_AB = (1/2) [ |Ψ_0⟩_AC ⊗ (σ_0|C⟩)_B + |Ψ_1⟩_AC ⊗ (σ_1|C⟩)_B + |Ψ_2⟩_AC ⊗ (σ_2|C⟩)_B + |Ψ_3⟩_AC ⊗ (σ_3|C⟩)_B ]

When Alice measures the Bell state of A and C, she will find one of |Ψ_0⟩, |Ψ_1⟩, |Ψ_2⟩, |Ψ_3⟩, each with probability 1/4. Whichever |Ψ_i⟩ she measures, the state of particle B will be σ_i(α|↑⟩ + β|↓⟩) after measurement.
3) To send Bob the state of particle C, therefore, Alice does not need to send Bob the possibly infinite amount of information contained in the coefficients α and β, which may be real numbers out to arbitrary precision. She needs only to send the integer i labeling the Bell state of A and C, which is a maximum of two bits of information. Alice can send this information to Bob in whatever classical way she likes.
4) Bob receives the integer i from Alice that labels the Bell state that she measured. After Alice's measurement, the overall state of the system is:

|Ψ_i⟩_AC ⊗ (σ_i|C⟩)_B
Bob therefore applies σ_i to the disentangled state on his end, by measuring the spin along axis i. Since σ_i² = 1 for all i, Bob is left with the overall state:

|Ψ_i⟩_AC ⊗ |C⟩_B
Bob has therefore changed the spin state of particle B to:

|C⟩ = α|↑⟩ + β|↓⟩
which is identical to the original state of particle C that Alice wanted to send. The information in state C has been "teleported" to Bob's state: the final spin state of B looks like C's original state. Note, however, that the particles involved never change between observers: Alice always has A and C, and Bob always has B.
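As a check on the algebra above, the whole protocol can be simulated in a few lines. The sketch below (a plain NumPy illustration following this article's conventions, with qubits ordered C, A, B) prepares a random state C, performs Alice's Bell measurement on the (C, A) pair, applies Bob's Pauli correction σ_i, and verifies that B ends up in C's original state up to a global phase:

```python
import numpy as np

rng = np.random.default_rng(7)

# Pauli matrices; sigma_0 is the identity.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, sx, sy, sz]

# Bell basis |Psi_i> = (sigma_i on C) |Psi_0> for the (C, A) pair.
psi0 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = [np.kron(s, s0) @ psi0 for s in paulis]

# Random unknown state |C> = alpha|up> + beta|down>.
c = rng.normal(size=2) + 1j * rng.normal(size=2)
c /= np.linalg.norm(c)

# Full three-particle state, qubit order (C, A, B): |C> (x) |Psi_0>_AB.
full = np.kron(c, psi0)

# Alice's Bell measurement on (C, A): project onto each |Psi_i>.
projected = [np.kron(np.outer(b, b.conj()), s0) @ full for b in bell]
probs = np.array([np.vdot(p, p).real for p in projected])   # each ~1/4

i = rng.choice(4, p=probs / probs.sum())
post = projected[i] / np.linalg.norm(projected[i])

# Extract B's collapsed state sigma_i|C>, then apply Bob's correction
# sigma_i (sigma_i squared is the identity).
b_state = bell[i].conj() @ post.reshape(4, 2)
b_state = paulis[i] @ b_state

# Up to a global phase, B now matches the original state C.
fidelity = abs(np.vdot(c, b_state)) ** 2
print(f"Bell outcome i = {i}, fidelity = {fidelity:.6f}")   # -> 1.000000
```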
- Pirandola, S., & Braunstein, S. Physics: Unite to build a quantum Internet. Retrieved from http://www.nature.com/news/physics-unite-to-build-a-quantum-internet-1.19716
Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and the prospect of quantum engineering as a field of study!
Parallel computing used to be a way of sharing tasks between processor cores.
When processor clock rates stopped increasing, the response of the microprocessor companies was to increase the number of cores on a chip to increase throughput.
But now, specialized processing elements have become more popular.
A GPU is a good example of this. A GPU is very different from an x86 or ARM processor and is tuned for a different type of processing.
GPUs are very good at matrix math and vector math. Originally, they were designed to process pixels. They use a lot of floating point math because the math behind how a pixel value is computed is very complex.
A GPU is very useful if you have a number of identical operations you have to calculate at the same time.
GPUs used to be external daughter cards, but in the last year or two the GPU manufacturers are starting to release low power parts suitable for embedded applications. They include several traditional cores and a GPU.
So, now you can build embedded systems that take advantage of machine learning algorithms that would have traditionally required too much processing power and too much thermal power.
This is an example of a heterogeneous processor, or hybrid processor. A heterogeneous processor contains cores of different types, and a software architect figures out which types of workloads are processed by which type of core.
Andrew Chen (professor) has predicted that this will increase in popularity because it’s become difficult to take advantage of shrinking the semiconductor feature size.
This year or next year, we will start to see heterogeneous processors with multiple types of cores.
Traditional processors are tuned for algorithms on integer and floating point operations where there isn’t an advantage to doing more than one thing at a time. The dependency chain is very linear.
A GPU is good at doing multiple computations at the same time so it can be useful when there aren’t tight dependency chains.
Neither processor is very good at doing real-time processing. If you have real-time constraints – the latency between an ADC and the “answer” returned by the system must be short – a lot of computation has to happen in very little time, so a new type of digital hardware is required. Right now, ASICs and FPGAs tend to fill that gap, as we’ve discussed in the All about ASICs podcast.
Quantum cores (like we discussed in the What is Quantum Computing podcast) are something that we could see on processor boards at some point. Dedicated quantum computers that can exceed the performance of traditional computers will likely be introduced within the next 50 years, and perhaps as soon as the next 10 or 15 years.
To be a consumer product, a quantum computer would have to be a solid-state device, but such devices are purely speculative at this point in time.
Quantum computing is reinventing how processing happens. And, quantum computers are going to tackle very different types of problems than conventional computers.
There is a catalog on the web of problems and algorithms that would be substantially better on a quantum computer than on a traditional computer.
People are creating algorithms for computers that don’t even exist yet.
The Economist estimated that the total spend on quantum computing research is over 1 billion dollars per year globally. A huge portion of that interest is driven by the promise of these algorithms and papers.
Quantum computers will not completely replace typical processors.
Lee’s opinion is that the quantum computing industry is still very speculative, but the upsides are so great that neither the incumbent large computing companies nor the industrialized countries want to be left behind if it does take off.
The promise of quantum computing is beyond just the commercial industry, it’s international and inter-industry. You can find long whitepapers from all sorts of different governments laying out a quantum computing research strategy. There’s also a lot of venture capitalists investing in quantum computing.
Is this research and development public, or is there a lot of proprietary information out there? It’s a mixture: many of the startups and companies have software components that they are open sourcing and claim to have “bits of physics” working (quantum bits, or qubits), but they are definitely keeping trade secrets.
19:50 Quantum communication means space lasers.
Engineering with quantum effects has promise as an industry. One can send photons with entangled states. The Chinese government has a satellite that can generate these photons and send them to base stations. If anyone reads them they can tell because the wave function collapsed too soon.
Quantum sensing promises to develop accelerometers and gyroscopes that are orders of magnitude more sensitive than what’s commercially available today.
Quantum engineering could become a new field. Electrical engineering was born 140 years ago, electronics roughly 70 years ago, and computer science grew out of math and electrical engineering. It’s possible that the birth of quantum engineering will be dated to some point in the last 5 years or the next 5.
Lee’s favorite quantum state is the Bell state. It’s the two-qubit state with equal probability of measuring 1 and 0 on each qubit, with the two results perfectly correlated, among other interesting properties. The Bell state encapsulates a lot of the quantum weirdness in one snippet of math.
Reliable quantum computing would make it possible to solve certain types of extremely complex technological problems millions of times faster than today’s most powerful supercomputers. Other types of problems that quantum computing could tackle would not even be feasible with today’s fastest machines. The key word is “reliable.” If the enormous potential of quantum computing is to be fully realized, scientists must learn to create “fault-tolerant” quantum computers. A small but important step toward this goal has been achieved by an international collaboration of researchers from China’s Tsinghua University and the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) working at the Advanced Light Source (ALS).
Using premier beams of ultraviolet light at the ALS, a DOE national user facility for synchrotron radiation, the collaboration has reported the first demonstration of high-temperature superconductivity in the surface of a topological insulator – a unique class of advanced materials that are electrically insulating on the inside but conducting on the surface. Inducing high-temperature superconductivity on the surface of a topological insulator opens the door to the creation of a pre-requisite for fault-tolerant quantum computing, a mysterious quasiparticle known as the “Majorana zero mode.”
“We have shown that by interfacing a topological insulator, bismuth selenide, with a high temperature superconductor, BSCCO (bismuth strontium calcium copper oxide), it is possible to induce superconductivity in the topological surface state,” says Alexei Fedorov, a staff scientist for ALS beamline 12.0.1, where the induced high temperature superconductivity of the topological insulator heterostructure was confirmed. “This is the first reported demonstration of induced high temperature superconductivity in a topological surface state.”
The results of this research are presented in the journal Nature Physics in a paper titled “Fully gapped topological surface states in Bi2Se3 induced by a d-wave high temperature superconductor.” The corresponding authors are Shuyun Zhou and Xi Chen of Tsinghua University in Beijing, China. The lead authors are Eryin Wang and Hao Ding, also with Tsinghua University. Wang is currently an ALS Doctoral fellow in residence.
For all of its boundless potential, quantum computing faces a serious flaw. The quantum data bit or “qubit” used to process and store information is fragile and easily perturbed by electrons and other elements in its surrounding environment. Utilizing topological insulators is considered one promising approach for solving this “decoherence” problem because qubits in a topological quantum computer would be made from Majorana zero modes, which are naturally immune from decoherence. Information processed and stored in such topological qubits would always be preserved. While the ALS collaboration has not yet identified a Majorana zero mode in their bismuth selenide/BSCCO heterostructures, they believe their material is fertile ground for doing so.
“Our studies reveal a large superconducting pairing gap on the topological surface states of thin films of the bismuth selenide topological insulator when grown on BSCCO,” Fedorov says. “This suggests that Majorana zero modes are likely to exist, bound to magnetic vortices in this material, but we will have to do other types of measurements to find it.”
The high quality bismuth selenide/BSCCO topological thin film heterostructure was made at Tsinghua University in the laboratory of Xi Chen and Qi-Kun Xue using molecular beam epitaxy.
“Our study was made possible by the high quality topological insulator film heterostructure that the Chen and Xue groups managed to grow,” says Zhou, who did much of her research at the ALS before returning to China. “Bismuth selenide and the BSCCO ceramic have very different crystal structures and symmetries, which made the growth of such a heterostructure particularly challenging.”
Says Chen, “By controlling the growth kinetics carefully using molecular beam epitaxy, we managed to grow a topological insulator film with controlled thickness on a freshly cleaved BSCCO surface. This provided a cleaner and better-controlled interface, and also opened up opportunities for surface sensitive measurements.”
The bismuth selenide/BSCCO material was brought to the ALS to study the electronic states on its surface using a technique known as ARPES, for angle-resolved photoemission spectroscopy. In ARPES, a beam of X-ray photons striking the material’s surface causes the photoemission of electrons. The kinetic energy of these photoelectrons and the angles at which they are ejected are then measured to obtain an electronic spectrum.
“Previous work on topological insulators revealed superconductivity at only a few Kelvin with a gap of about one milli-electron volt,” Fedorov says. “Such a small energy scale and ultra-low temperature makes it particularly challenging to realize Majorana zero modes experimentally, and to distinguish these modes from other states. Using ARPES, we show evidence of a superconducting gap persisting in the surfaces of our material up to the transition temperature of BSCCO. As the gap and transition temperature in our heterostructure reflect almost an order of magnitude increase over previous work, we believe ours is a better system to search for Majorana zero modes.”
This research was primarily supported by the National Natural Science Foundation of China.
Quantum simulators permit the study of quantum systems in a programmable fashion. In this instance, simulators are special-purpose devices designed to provide insight about specific physics problems. Quantum simulators may be contrasted with generally programmable "digital" quantum computers, which would be capable of solving a wider class of quantum problems.
A universal quantum simulator is a quantum computer proposed by Yuri Manin in 1980 and Richard Feynman in 1982. Feynman showed that a classical Turing machine could not efficiently simulate quantum effects, while his hypothetical universal quantum computer would be able to mimic the needed quantum effects.
A quantum system of many particles could be simulated by a quantum computer using a number of quantum bits similar to the number of particles in the original system. This has been extended to much larger classes of quantum systems.
Quantum simulators have been realized on a number of experimental platforms, including systems of ultracold quantum gases, polar molecules, trapped ions, photonic systems, quantum dots, and superconducting circuits.
Many important problems in physics, especially low-temperature physics and many-body physics, remain poorly understood because the underlying quantum mechanics is vastly complex. Conventional computers, including supercomputers, are inadequate for simulating quantum systems with as few as 30 particles. Better computational tools are needed to understand and rationally design materials whose properties are believed to depend on the collective quantum behavior of hundreds of particles. Quantum simulators provide an alternative route to understanding the properties of these systems. These simulators create clean realizations of specific systems of interest, which allows precise realizations of their properties. Precise control over and broad tunability of parameters of the system allows the influence of various parameters to be cleanly disentangled.
Quantum simulators can solve problems which are difficult to simulate on classical computers because they directly exploit quantum properties of real particles. In particular, they exploit a property of quantum mechanics called superposition, wherein a quantum particle is made to be in two distinct states at the same time, for example, aligned and anti-aligned with an external magnetic field. Crucially, simulators also take advantage of a second quantum property called entanglement, allowing the behavior of even physically well separated particles to be correlated.
Ion-trap-based systems form an ideal setting for simulating interactions in quantum spin models. A trapped-ion simulator, built by a team that included NIST researchers, can engineer and control interactions among hundreds of quantum bits (qubits). Previous endeavors were unable to go beyond 30 quantum bits. The capability of this simulator is 10 times more than previous devices. It has passed a series of important benchmarking tests that indicate a capability to solve problems in material science that are impossible to model on conventional computers.
The trapped-ion simulator consists of a tiny, single-plane crystal of hundreds of beryllium ions, less than 1 millimeter in diameter, hovering inside a device called a Penning trap. The outermost electron of each ion acts as a tiny quantum magnet and is used as a qubit, the quantum equivalent of a “1” or a “0” in a conventional computer. In the benchmarking experiment, physicists used laser beams to cool the ions to near absolute zero. Carefully timed microwave and laser pulses then caused the qubits to interact, mimicking the quantum behavior of materials otherwise very difficult to study in the laboratory. Although the two systems may outwardly appear dissimilar, their behavior is engineered to be mathematically identical. In this way, simulators allow researchers to vary parameters that couldn’t be changed in natural solids, such as atomic lattice spacing and geometry.
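For a feel of what such a simulator emulates, here is a small sketch (with illustrative couplings, not NIST's actual parameters) that builds the transverse-field Ising Hamiltonian for a short chain of spins. The matrix dimension doubles with every spin added, which is exactly why classical simulation stalls around 30 particles:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def embed(single, site, n):
    """Embed a single-spin operator at `site` in an n-spin chain."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """Transverse-field Ising chain: H = -J sum sz_k sz_{k+1} - h sum sx_k."""
    H = np.zeros((2**n, 2**n))
    for k in range(n - 1):
        H -= J * embed(sz, k, n) @ embed(sz, k + 1, n)
    for k in range(n):
        H -= h * embed(sx, k, n)
    return H

H = ising_hamiltonian(8)   # 8 spins -> a 256 x 256 matrix
print(f"dim = {H.shape[0]}, ground-state energy = {np.linalg.eigvalsh(H)[0]:.4f}")
```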
Friedenauer et al., adiabatically manipulated 2 spins, showing their separation into ferromagnetic and antiferromagnetic states. Kim et al., extended the trapped ion quantum simulator to 3 spins, with global antiferromagnetic Ising interactions featuring frustration and showing the link between frustration and entanglement and Islam et al., used adiabatic quantum simulation to demonstrate the sharpening of a phase transition between paramagnetic and ferromagnetic ordering as the number of spins increased from 2 to 9. Barreiro et al. created a digital quantum simulator of interacting spins with up to 5 trapped ions by coupling to an open reservoir and Lanyon et al. demonstrated digital quantum simulation with up to 6 ions. Islam, et al., demonstrated adiabatic quantum simulation of the transverse Ising model with variable (long) range interactions with up to 18 trapped ion spins, showing control of the level of spin frustration by adjusting the antiferromagnetic interaction range. Britton, et al. from NIST has experimentally benchmarked Ising interactions in a system of hundreds of qubits for studies of quantum magnetism. Pagano, et al., reported a new cryogenic ion trapping system designed for long time storage of large ion chains demonstrating coherent one and two-qubit operations for chains of up to 44 ions.
Many ultracold atom experiments are examples of quantum simulators. These include experiments studying bosons or fermions in optical lattices, the unitary Fermi gas, Rydberg atom arrays in optical tweezers. A common thread for these experiments is the capability of realizing generic Hamiltonians, such as the Hubbard or transverse-field Ising Hamiltonian. Major aims of these experiments include identifying low-temperature phases or tracking out-of-equilibrium dynamics for various models, problems which are theoretically and numerically intractable. Other experiments have realized condensed matter models in regimes which are difficult or impossible to realize with conventional materials, such as the Haldane model and the Harper-Hofstadter model.
Quantum simulators using superconducting qubits fall into two main categories. First, so-called quantum annealers determine ground states of certain Hamiltonians after an adiabatic ramp. This approach is sometimes called adiabatic quantum computing. Second, many systems emulate specific Hamiltonians and study their ground-state properties, quantum phase transitions, or time dynamics. Several important recent results include the realization of a Mott insulator in a driven-dissipative Bose-Hubbard system and studies of phase transitions in lattices of superconducting resonators coupled to qubits.
Think back a second. When was it that you got your first smartphone? What about the first time that you streamed a show online?
Those things were available to us around 12-15 years ago, depending on how tech-savvy you were at that time. Now, though, smartphones and fast computers are ubiquitous. Not only that, but they’re affordable.
The cutting-edge technology just keeps slicing deeper and deeper to the point that we’re used to advanced progress. We expect to be amazed, then we get bored of our amazement and look for the next thing.
That said, is computer processor speed just going to keep getting better?
We’re going to look at this question today, giving you some insights into the world of technology and where it’s headed. Let’s get started.
How Do Computer Processors Work?
To start this discussion, we have to know a few things about how computer processors work. A few basic insights into CPUs allow us to have a better grasp of what the future might hold.
A central processing unit (CPU) is considered the brain of the computer. It’s where all of the complex tasks take place, and it manages everything you do while you use a device. The CPU reaches into the random access memory in a matter of nanoseconds, and into hard drive storage in a matter of milliseconds, to get the information it needs.
It also interacts with your graphics processing unit to generate all of the beautiful images and 3D renderings you engage with on-screen.
The processor consists of hundreds of millions or even billions of transistors made of semiconductor materials. Simply put, a semiconductor allows or blocks electrical signals from flowing, depending on the situation.
The Importance of Transistors
As a semiconductor device, a transistor manages electrical signals by switching them on or off. When it’s switched on, it allows the current to continue or directs it the right way. When it’s switched off, the signal is stopped.
It’s like a little traffic cop that stops and starts traffic to keep things flowing smoothly. This little device is the absolute building block for all computers and pieces of modern technology.
It might not seem like that’s very complex or that it could power something as influential as the iPhone. That said, these devices are all just the result of small electrical signals getting directed to produce specific, mechanical actions.
When you press a single key on your keyboard, there’s a simple and elegant process that takes place. The button sends a signal to the CPU, which then sends a signal to the screen, and the letter pops up in an instant. That process is reflective of almost any process you do on the computer.
It’s simple, but the complexity compounds each time you press another button. In the case of the transistor, that little traffic cop gets multiplied by orders of magnitude and placed in a microchip.
The microchip is an essential worker for the CPU. A chip the size of your fingernail holds billions (yes, billions) of transistors.
Moore’s Law and The Future of Technology
At some point, the only devices available held just ten or twenty transistors. That was back in the 1960s, when integrated-circuit technology first took hold.
The more transistors you include in a device, though, the better it is. When they’re placed on a microchip, they’re said to be included in an “integrated circuit.” When you increase the ability of an integrated circuit to house transistors, you improve the quality of the device in question.
One of the founders of Intel, Gordon Moore, proposed an idea. He said that, so long as the price stays consistent, the integrated circuit will be able to house double the number of components every 18 to 24 months.

As a result, the performance of technology will be roughly twice as good as it was a year and a half to two years prior. His law held up remarkably well for decades.
Since then, it has had years when advancement fell behind his estimate and years when it surpassed his estimate. That said, the slope of Moore’s law and the slope of microprocessor ability are eerily close to one another.
If nothing else, we can look to Moore’s law to estimate roughly how good technology will be in the near and distant future, barring any big changes to the situation.
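As a back-of-the-envelope illustration of that doubling, the sketch below projects transistor counts forward from the first microprocessor, assuming a clean two-year doubling period. The starting count is the Intel 4004's roughly 2,300 transistors; the perfectly smooth doubling is a simplification, not historical data.

```python
# Back-of-the-envelope Moore's-law projection: doubling every two years.
transistors = 2_300   # approximate transistor count of the Intel 4004 (1971)
for year in range(1971, 2022, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```

Run to the present day, this toy projection lands in the tens of billions, which is the right order of magnitude for today's largest chips.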
In that case, it would keep doubling and improving indefinitely. But can we be sure that will happen?
How Can Things Improve?
The thing about Moore’s law is that it was created when one couldn’t foresee the technology we have now. Technology breeds paradigm shifts, and that’s what we can expect in the next decades if Moore’s law is correct until then.
We’ll hypothetically reach a point when we no longer need transistors and microchips at all. People are already producing transistors that are the size of a handful of atoms pushed together.
That’s approaching the size of the fundamental building blocks of the universe as far as we know. What lies beyond that advancement is difficult to say, but things are accelerated by the fact that computers are actually doing the thinking for us in some instances.
There are more neurons in the human brain than transistors in even the largest computer chips, but that doesn't mean that computers aren't better at certain kinds of logical processing and at recalling information than we are. Artificial intelligence evaluates options in real time, and it might be able to help produce better computers than we can.
Is Quantum Computing Just Science Fiction?
Quantum computers are already in existence, although they're not yet as powerful as classical computers built on microchips. Yet is the keyword, though.
The science hasn’t gotten narrowed down into perfection as of yet, but the idea is that artificial intelligence will keep chipping away at the stone until David emerges.
Quantum computing plays on the random nature of quantum states like entanglement, superposition, and more. Without getting too deep into the terminology, it might help to understand, basically, what those things are.
Quantum mechanics says that particles behave in ways that defy everyday intuition: a particle can act like a wave, and some of its properties aren't settled until they're measured. Entanglement is a condition in which two particles become so tightly correlated that neither one can be described independently of the other, no matter how far apart they are.

Superposition means that a quantum system can exist in a combination of states, both 0 and 1 at once, rather than settling for one or the other. Those things are heady enough as it is, but introduce computing into the mix and you've got a real brain-melter.

The result is that, for certain problems, quantum computers could work trillions of times faster than ours do. The implications of that are hard to imagine, especially for our consumer technology.
What To Expect From Computer Processor Speed
Whether or not Moore’s law is correct, we can be sure that things will improve. Provided that there’s no extreme climate disaster or global collapse, technology will improve.
Phones, computers, and other devices are essential to the lifestyles of billions of people on earth. There’s a lot of money waiting for the individuals or companies that think up new ways to improve our lives through technology.
There are also a lot of issues on planet earth that something like quantum computing could fix. Supply chain management, hunger, poverty, and numerous other essential problems might get solved by a more intelligent computer.
So, there are more than enough carrots dangling in front of humanity to push the technology cart forward. Whether that will keep happening in a way that doubles every couple of years, only time will tell.
That said, quantum computing advancements will be a paradigm shift for the entire idea of technology. The speed of our computers today was almost unimaginable 30 years ago. Things are incredibly fast and easy to use now.
You can get the scoop on modern computers and start enjoying them if you’re not already.
Where Will It End?
If things scale up at an exponential rate as they have, it’s impossible to imagine what the state of technology could be. Just like people 100 years ago would faint if they saw a smartphone, we might do the same if we saw what was possible 20 years from now.
The difference for us is that things change at an exponential rate. What would have taken 100 years might take only ten now. Ten years from now, it’ll only take one year to do what took us ten, and so on and so forth.
If things keep multiplying upon themselves like that, the only question is “where does it all end?” Will the singularity come and take us over? Will we merge with technology in some way?
Science fiction has to take the reins from that point on.
Want to Learn More About Computer Chips?
Hopefully, our look at computer processor speed was interesting to you. There’s a lot more to learn and keep track of as things move forward, though.
We’re here to keep you filled in. Explore our site for more ideas on technology, central processing unit insights, processor cores, and much more. | <urn:uuid:3ef50056-d5c5-4a2a-8841-a668d9b7bf51> | CC-MAIN-2022-05 | https://theblogspost.com/how-innovation-is-driving-your-computer-processor-speed/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00293.warc.gz | en | 0.943793 | 1,921 | 3.65625 | 4 |
Quantum computers go digital
Technology Research News
Some of the same properties that would make quantum computers phenomenally powerful are also properties that make it difficult to actually build them.
Problems that would take the fastest possible classical computer longer than the lifetime of the universe to solve would be hours-long exercises for large-scale quantum computers. Such machines would be able to rapidly search huge databases and would render today's encryption methods useless.
The key to quantum computers' potential is that quantum bits, the basic building blocks of quantum computing logic circuits, can represent a mix of 1 and 0 at the same time, allowing a string of qubits to represent every possible answer to a problem at once. This means a quantum computer could check every possible answer using a single set of operations. Classical computers, in contrast, check each answer one at a time.
But today's qubits are difficult to work with and prone to errors, and the faster they go the more errors they produce. One of the challenges of building a quantum computer is reducing errors. Researchers from the University of Wisconsin at Madison have eased the problem with a method that reduces error rates by two orders of magnitude.
Today's computers are digital, meaning they use signals that are either on or off to represent two states -- a 1 or a 0 -- and all computations are done using combinations of these binary numbers. One advantage of using just two states is the signals that represent those states don't have to be exact, they simply have to be clearly closer to 1 than 0, or closer to 0 than 1.
Qubits are analog devices, meaning they produce variable, continuous signals rather than discrete on and off states. For example, a particle can be in one of two orientations, spin up and spin down, but also some mix of the two. The 1s and 0s of digital information are mapped to the spin up and spin down states, but quantum computations have to be precise to ensure that the given particle is actually in one of those two states. "Classical bits have only two states... quantum bits can be in between," said Robert Joynt, a physics professor at the University of Wisconsin at Madison.
A qubit continually rotates between 0 and 1, which makes it prone to errors, said Joynt. "A rotation of a qubit can, for example, fall a little bit short with only a very minor error in the input signal," he said.
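To see why "falling a little bit short" matters, here is a toy NumPy simulation (all parameters invented for illustration) of a qubit repeatedly rotated with a 1% systematic under-rotation. The overlap with the intended state steadily decays as the small errors compound:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

step = np.pi / 2       # intended 90-degree rotation per pulse
error = 0.01           # every pulse falls 1% short

ideal = np.array([1.0, 0.0])
real = np.array([1.0, 0.0])
for pulse in range(1, 101):
    ideal = ry(step) @ ideal
    real = ry(step * (1 - error)) @ real
    if pulse % 25 == 0:
        fidelity = (ideal @ real) ** 2   # overlap with the intended state
        print(f"after {pulse:3d} pulses: fidelity {fidelity:.3f}")
# The tiny 1% shortfall compounds: fidelity falls to about 0.5 by pulse 100.
```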
The researchers' method makes quantum computing a pseudo-digital operation. "In our set-up, a definite rotation rate for the qubits is associated with a range of input signals. [This way] the input does not have to be exceedingly precise," said Joynt.
Easing the requirements for precision could go a long way toward making quantum computers viable. "The driving force [for the idea] was objections from experienced electrical engineers, particularly at IBM, who believed that quantum computing would not work... because the specs for the driving electronics would be much too [demanding]," said Joynt.
The researchers are applying the pseudo-digital qubits to their ongoing efforts to build a solid-state quantum computer. Their design calls for thousands of individually-controlled electrons in a silicon chip. The chip would allow for careful control of the interactions between neighboring electrons so that the states of the electrons could be used to carry out computations. Some of the fundamental logic operations in quantum computers are carried out through the interactions of pairs of qubits.
The researchers added the pseudo-digital qubits concept to their design by having pairs of electrons slide past each other rather than crash into each other, said Joynt. When the electrons are well separated the interaction is off, representing a 0, and when they are within close range the interaction is on, representing a 1.
When the researchers simulated the technique, they found that it reduced operational error rates by more than two orders of magnitude, according to Joynt. The researchers' pseudo-digital qubits could be implemented in other types of quantum computers, he added.
The pseudo-digital approach is a good one, said Bruce Kane, a visiting associate research scientist at the University of Maryland. "My guess is that future quantum computers will use the pseudo-digital approach," he said. It remains to be seen whether the devices the researchers are building will work well, however, he said.
Quantum computing naturally has many similarities to analog rather than digital computing, said Kane. Because digital computers operate using just two states -- 1 and 0 -- inputs can always be rounded. This type of rounding, however, is impossible in quantum computing, he said. "It [is usually] necessary to control parameters very precisely to keep the computation on track," he said.
The researchers' method is an attempt to find systems that "pretty much automatically have only two interaction strengths," said Kane. No system can have exactly this behavior, so the method doesn't eliminate the problem of errors creeping into a quantum computation, but it can reduce the severity of the errors, he said.
The researchers have shown how to minimize the adverse effects of turning interactions on and off in quantum computing, said Seth Lloyd, a professor of mechanical engineering at the Massachusetts Institute of Technology. "Although I doubt that this exact architecture will prove to be the one that is used to construct large-scale quantum computers, it is exactly this sort of imaginative quantum-mechanical engineering that is required to solve the problems of large-scale quantum computation," he said.
One of the challenges in implementing the scheme in a real quantum computer is fabricating the tiny qubits precisely, said Joynt. "The real issue is fabrication of quite complicated nanostructures," he said.
The researchers are working on qubits made from two basic pieces -- a semiconductor sandwich structure, "which is really a monster club sandwich," said Joynt; and a gate structure, which controls the state of a qubit so that it can represent a one or a zero.
The researchers have made progress on the semiconductor sandwich structure and are gearing up now to produce the gate structure, "which is quite complex," Joynt said.
The researchers are also working on a readout apparatus that will fit on the chip. Reading the quantum states of particles is tricky because quantum states are easily disturbed.
It will take a decade to develop simple demonstration models, and probably 20 years before the devices can be used in practical quantum computers, Joynt said.

Joynt's research colleagues were Mark Friesen and M. A. Eriksson. They published the research in the December 9, 2002 issue of Applied Physics Letters. The research was funded by the National Science Foundation (NSF) and the Army Research Office (ARO).
Timeline: 10-20 years
TRN Categories: Physics; Quantum Computing and Communications
Story Type: News
Related Elements: Technical paper, "Pseudo-Digital Quantum Bits," Applied Physics Letters, December 9, 2002.
Electric switch flips atoms
Technology Research News
Atoms and subatomic particles are like microscopic tops that can spin in one of two directions, up or down. Spintronics and quantum computing use these spin directions to represent the ones and zeros of digital information. Today's electronics, in contrast, use the presence or absence of electric charge to represent binary numbers.
A team of researchers from the Max Planck Institute and the Technical University of Munich in Germany has used an electronic switch to transfer the spin of a group of electrons to the nuclei of atoms in a semiconductor.
Information transfer between electrons and atoms is a key component of spintronics and quantum computing. Atoms in semiconductor crystals are better suited than electrons to preserving spin, and thereby storing information, because they are fixed in position and better insulated from the environment. Electrons, however, can flow in currents, which makes them better suited to transmitting information.
Computers based on spintronics would be faster, use less electrical power and store data more densely than electronic computers. Data would also remain in memory after the power was turned off, allowing spintronics computers to start instantly.
Quantum computers can use the interactions of individual particles to solve certain problems, like cracking secret codes and searching large databases, that are beyond the abilities of the fastest classical computers.
The researchers' experiment proved that it is possible to transfer spin between atoms and electrons, but a lot of work remains before the capability can be put to practical use, said Jurgen Smet, a scientist at the Max Planck Institute. The experiment "brings us one step closer, but we have a large number of giant leaps to go to make something useful and practical," said Smet. "We have succeeded... in a very crude manner for a large ensemble of nuclei, however under extreme conditions, like nearly absolute zero temperature and... a large, stationary magnetic field."
Ordinarily, the spins of electrons and atoms in a semiconductor are isolated from each other. The energy associated with electron spin is considerably greater than the energy associated with atomic spin, and this energy mismatch usually keeps the electrons from changing the atomic spin. But by using a gate, or electronic switch, to control the density of electrons in the semiconductor, the researchers found that at certain densities the interactions between electrons affect the spins of the semiconductor's atoms.
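The size of that energy mismatch is easy to estimate from standard physical constants; the 1-tesla field assumed below is purely illustrative:

```python
# Zeeman energy E = mu * B: electron spin vs. nuclear spin in the same field.
mu_B = 9.274e-24   # Bohr magneton (sets the electron's energy scale), J/T
mu_N = 5.051e-27   # nuclear magneton (sets the nucleus's energy scale), J/T
B = 1.0            # tesla, illustrative

print(f"electron spin energy: {mu_B * B:.2e} J")
print(f"nuclear  spin energy: {mu_N * B:.2e} J")
print(f"mismatch factor     : {mu_B / mu_N:.0f}")   # roughly 1800-fold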
Atomic spins can also be flipped using magnetic fields, which is how hard disk drives in today's computers work. But disk drives are larger, slower and require more energy than the integrated circuits on computer chips. "One would like all-electronic nuclear solid-state devices so that one can marry the benefits of the technology used in present-day electronics with those of quantum computation or spintronics," said Smet.
The researchers' experiment shows that electronic control of atomic spin in semiconductors is possible. However, their technique is unlikely to lead directly to practical technology, said Smet. "The physics we exploit to flip the nuclear spins actually also requires these low temperatures, so there is at least no straightforward rule on how to scale this up," he said.
Still, the research shows that spintronics could be a viable successor to today's electronics. "Atoms... are the smallest unit of which a semiconductor crystal is composed. If you were to extrapolate Moore's Law... you'll find that in the next decade or so we end up with a dimension on the order of the atom," said Smet. Moore's Law, which has held true for the past couple of decades, states that computer speeds double every 18 months as manufacturers shrink computer circuits. "Clearly a paradigm shift has to occur. That is one reason why long-term researchers fervently think about ways to explore the spin degree of freedom of the nucleus of atoms," he said.
Controlling atomic spin could also be used in quantum computing. To do so, however, the researchers' technique would need to be applied to individual atoms. "This kind of control is not something we will manage to achieve within the next two decades," said Smet.
The researchers' device serves as a miniature laboratory for probing the fundamental interactions between electrons and nuclei and exploring the basis for exchanging information between the two spin systems, said David Awschalom, a professor of physics at the University of California at Santa Barbara. "This is a beautiful experiment," he said. "Many people envision that future quantum computing will use nuclear spins for information storage, and thus it is important to explore these basic interactions."
Smet's research colleagues were Rainer Deutschmann, Frank Ertland and Gerhard Abstreiter of the Technical University of Munich, Werner Wegscheider of the Technical University of Munich and the University of Regensburg, and Klaus von Klitzing of the Max Planck Institute. They published their research in the January 17, 2002 issue of the journal Nature. The research was funded by the German Ministry of Science and Education (BMBF) and the German National Science Foundation (DFG).
Timeline: >20 years
TRN Categories: Materials Science and Engineering; Quantum Computing and Communications
Story Type: News
Related Elements: Technical paper, "Gate-voltage control of spin interactions between electrons and nuclei in a semiconductor," Nature, January 17, 2002
Many people credit Professor Richard Feynman, a Nobel Prize-winning physicist, for conceiving the notion of a quantum computer. Physicist Joseph John Fernandez notes, “In a lecture titled Simulating Physics with Computers, Professor Feynman talked about why physicists need computers, and what they require of these devices. … Feynman asked the following question: can a classical, universal computer simulate any physical system? And in particular, what about quantum systems?” Feynman’s question is a good one since weird things happen at the quantum level. Fernandez notes trying to model quantum systems with a classical system isn’t possible. He explains, “For classical computers, the memory requirements for these calculations are too much. The true simulation of physical systems becomes intractable. This is where the interest in quantum computers started to grow.” James Norman explains, “Quantum computers can be game changers because they can solve important problems no existing computer can. While conventional computing scales linearly, QC scales exponentially when adding new bits. Exponential scaling always wins, and it’s never close.” Fernandez concludes, “Professor Feynman was definitely onto something!” A quantum computer’s exponential scaling properties theoretically allow it to solve problems much faster than classical computers. Proving that quantum computers actually work faster is where an argument between Google and IBM began.
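One way to see why "exponential scaling always wins": a classical machine simulating n qubits must store 2^n complex amplitudes. The sketch below assumes 16 bytes per double-precision complex amplitude; the 53-qubit case matches the size of the device at the center of the dispute described below.

```python
# Classical memory needed to store a full n-qubit state vector.
BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n in (30, 40, 50, 53):
    gib = (2**n * BYTES_PER_AMPLITUDE) / 2**30
    print(f"{n} qubits -> 2^{n} amplitudes -> {gib:,.0f} GiB")
# 53 qubits already demands roughly 128 pebibytes just to hold the state,
# which is why clever classical simulation methods avoid storing it outright.
```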
Google’s claim to quantum supremacy
Sarah Kaplan (@sarahkaplan48) reports, “For the first time, a machine that runs on the mind-boggling physics of quantum mechanics has reportedly solved a problem that would stump the world’s top supercomputers — a breakthrough known as ‘quantum supremacy.’” Amy Thomson (@athomson6) adds, “Alphabet Inc.’s Google said it’s built a computer that’s reached ‘quantum supremacy,’ performing a computation in 200 seconds that would take the fastest supercomputers about 10,000 years. … Google’s tests … were conducted using a quantum chip it developed in-house.”
In a blog post, Google engineering director Hartmut Neven stated, “This achievement is the result of years of research and the dedication of many people. It’s also the beginning of a new journey: figuring out how to put this technology to work. We’re working with the research community and have open-sourced tools to enable others to work alongside us to identify new applications.” The announcement, published in the journal Nature, inspired Dr. Michael Wall (@MichaelDWall) to declare, “We have just entered the age of quantum supremacy.” He quotes study co-author Brooks Foxen, a graduate student researcher in physics at Google AI Quantum in Mountain View and the University of California, Santa Barbara, who states, “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”
Google CEO Sundar Pichai (@sundarpichai) writes, “While we’re excited for what’s ahead, we are also very humbled by the journey it took to get here. And we’re mindful of the wisdom left to us by the great Nobel Laureate Richard Feynman: ‘If you think you understand quantum mechanics, you don’t understand quantum mechanics.’ In many ways, the exercise of building a quantum computer is one long lesson in everything we don’t yet understand about the world around us. While the universe operates fundamentally at a quantum level, human beings don’t experience it that way. In fact many principles of quantum mechanics directly contradict our surface level observations about nature. Yet the properties of quantum mechanics hold enormous potential for computing. … For those of us working in science and technology, it’s the ‘hello world’ moment we’ve been waiting for — the most meaningful milestone to date in the quest to make quantum computing a reality.”
IBM and others dispute the claim
Foxen and other members of the team may feel comfortable declaring they have achieved quantum supremacy, but their claims are being disputed by IBM and some academics. James Sanders (@jas_np) writes, “Quantum computing researchers in academia and firms in competition with Google are dismissing claims of quantum supremacy, though note that this is still a significant milestone toward it.” He explains, “The industry objection to this claim is that the calculation in question is of no practical use outside of research laboratories — even inside labs, the utility of it does not extend meaningfully beyond the synthetic benchmark scenario Google pursued for this paper.” Pichai admits, “We have a long way to go between today’s lab experiments and tomorrow’s practical applications; it will be many years before we can implement a broader set of real-world applications.” He goes on to note, “We can think about today’s news in the context of building the first rocket that successfully left Earth’s gravity to touch the edge of space. At the time, some asked: Why go into space without getting anywhere useful? But it was a big first for science because it allowed humans to envision a totally different realm of travel … to the moon, to Mars, to galaxies beyond our own. It showed us what was possible and nudged the seemingly impossible into frame. That’s what this milestone represents for the world of quantum computing: a moment of possibility.”
IBM researchers Edwin Pednault, John Gunnels, Dmitri Maslov, and Jay Gambetta lay out their objections to Google’s claim for quantum supremacy. They write, “In the paper, it is argued that their device reached ‘quantum supremacy’ and that ‘a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.’ We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced. … Building quantum systems is a feat of science and engineering and benchmarking them is a formidable challenge. Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are ‘supreme’ over classical computers. … The term ‘quantum supremacy’ is being broadly misinterpreted and causing ever growing amounts of confusion, we urge the community to treat claims that, for the first time, a quantum computer did something that a classical computer cannot with a large dose of skepticism due to the complicated nature of benchmarking an appropriate metric.”
Whether or not Google has achieved quantum supremacy may be in doubt, but everyone seems to agree their achievement is notable and praiseworthy. The IBM researchers conclude, “The concept of quantum computing is inspiring a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. If you are already pushing the frontiers of quantum computing forward, let’s keep the momentum going.”
Joseph John Fernandez, “Richard Feynman and the birth of quantum computing,” Medium, 4 January 2018.
James Norman, “Quantum Computing Will Revolutionize Data Analysis. Maybe Soon,” Seeking Alpha, 14 March 2018.
Sarah Kaplan, “Google scientists say they’ve achieved ‘quantum supremacy’ breakthrough over classical computers,” The Washington Post, 23 October 2019.
Amy Thomson, “Google Says Quantum Computer Beat 10,000-Year Task in Minutes,” Data Center Knowledge, 23 October 2019.
Sundar Pichai, “What our quantum computing milestone means,” Google, 23 October 2019.
Michael Wall, “‘Supremacy’ Achieved: Quantum Computer Notches Epic Milestone,” Space.com, 23 October 2019.
James Sanders, “Google’s quantum computing supremacy claim relies on a synthetic benchmark, researchers assert,” TechRepublic, 23 October 2019.
Edwin Pednault, John Gunnels, Dmitri Maslov, and Jay Gambetta, “On ‘Quantum Supremacy’,” IBM Research Blog, 21 October 2019. | <urn:uuid:7b4ae951-3858-4f3b-b099-02dd7d72b9e7> | CC-MAIN-2022-05 | https://enterrasolutions.com/blog/quantum-computing-the-adults-are-arguing/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00014.warc.gz | en | 0.918452 | 1,892 | 3.515625 | 4 |
Two major steps toward putting quantum computers into real practice — sending a photon signal on demand from a qubit onto wires and transmitting the signal to a second, distant qubit — have been brought about by a team of scientists at Yale.
Over the past several years, the research team of Professors Robert Schoelkopf in applied physics and Steven Girvin in physics has explored the use of solid-state devices resembling microchips as the basic building blocks in the design of a quantum computer. Now, for the first time, they report that superconducting qubits, or artificial atoms, have been able to communicate information not only to their nearest neighbor, but also to a distant qubit on the chip.
This research now moves quantum computing from “having information” to “communicating information.” In the past information had only been transferred directly from qubit to qubit in a superconducting system. Schoelkopf and Girvin’s team has engineered a superconducting communication ‘bus’ to store and transfer information between distant quantum bits, or qubits, on a chip. This work, according to Schoelkopf, is the first step to making the fundamentals of quantum computing useful.
The first breakthrough reported is the ability to produce on demand — and control — single, discrete microwave photons as the carriers of encoded quantum information. While microwave energy is used in cell phones and ovens, their sources do not produce just one photon. This new system creates a certainty of producing individual photons.
“It is not very difficult to generate signals with one photon on average, but, it is quite difficult to generate exactly one photon each time. To encode quantum information on photons, you want there to be exactly one,” according to postdoctoral associates Andrew Houck and David Schuster who are lead co-authors on the first paper.
“We are reporting the first such source for producing discrete microwave photons, and the first source to generate and guide photons entirely within an electrical circuit,” said Schoelkopf.
In order to successfully perform these experiments, the researchers had to control electrical signals corresponding to one single photon. In comparison, a cell phone emits about 10^23 (100,000,000,000,000,000,000,000) photons per second. Further, the extremely low energy of microwave photons mandates the use of highly sensitive detectors and experiment temperatures just above absolute zero.
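Those two constraints, single-photon control and near-absolute-zero temperatures, are connected: the experiment only makes sense if one microwave photon carries more energy than the thermal background. A rough check, assuming a typical ~6 GHz circuit photon and a 20 mK operating temperature (illustrative values, not figures from the Yale papers):

```python
h = 6.626e-34    # Planck constant, J*s
k_B = 1.381e-23  # Boltzmann constant, J/K

f = 6e9          # microwave photon frequency, Hz (illustrative)
T = 0.020        # dilution-refrigerator temperature, K (illustrative)

E_photon = h * f
E_thermal = k_B * T
print(f"photon energy : {E_photon:.2e} J")
print(f"thermal energy: {E_thermal:.2e} J")
print(f"ratio         : {E_photon / E_thermal:.1f}")  # photon wins ~14x at 20 mK
```

At room temperature the ratio reverses by a factor of thousands, and the one-photon signal would drown in thermal noise.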
“In this work we demonstrate only the first half of quantum communication on a chip — quantum information efficiently transferred from a stationary quantum bit to a photon or ‘flying qubit,’” says Schoelkopf. “However, for on-chip quantum communication to become a reality, we need to be able to transfer information from the photon back to a qubit.”
This is exactly what the researchers go on to report in the second breakthrough. Postdoctoral associate Johannes Majer and graduate student Jerry Chow, lead co-authors of the second paper, added a second qubit and used the photon to transfer a quantum state from one qubit to another. This was possible because the microwave photon could be guided on wires — similarly to the way fiber optics can guide visible light — and carried directly to the target qubit. “A novel feature of this experiment is that the photon used is only virtual,” said Majer and Chow, “winking into existence for only the briefest instant before disappearing.”
To allow the crucial communication between the many elements of a conventional computer, engineers wire them all together to form a data “bus,” which is a key element of any computing scheme. Together the new Yale research constitutes the first demonstration of a “quantum bus” for a solid-state electronic system. This approach can in principle be extended to multiple qubits, and to connecting the parts of a future, more complex quantum computer.
However, Schoelkopf likened the current stage of development of quantum computing to conventional computing in the 1950’s, when individual transistors were first being built. Standard computer microprocessors are now made up of a billion transistors, but first it took decades for physicists and engineers to develop integrated circuits with transistors that could be mass produced.
Schoelkopf and Girvin are members of the newly formed Yale Institute for Nanoscience and Quantum Engineering (YINQE), a broad interdisciplinary activity among faculty and students from across the university.
Other Yale authors involved in the research are J.M. Gambetta, J.A. Schreier, J. Koch, B.R. Johnson, L. Frunzio, A. Wallraff, A. Blais and Michel Devoret. Funding for the research was from the National Security Agency under the Army Research Office, the National Science Foundation and Yale University.
Citation: Nature 449, 328-331 (20 September 2007) doi:10.1038/nature06126
& Nature 450, 443-447 (27 September 2007) doi:10.1038/nature06184 | <urn:uuid:53d3d99c-0491-47ab-91ed-1893a6eb368d> | CC-MAIN-2022-05 | https://www.science20.com/news_account/two_giant_steps_in_quantum_computing | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00456.warc.gz | en | 0.921474 | 1,066 | 4.09375 | 4 |
Earlier this year, the night-migratory European robin (Erithacus rubecola) made the headlines. Evidence has emerged that it may be using quantum mechanical effects to sense Earth’s magnetic field in order to migrate.
Few expected to find quantum mechanical manipulation in the eye of a bird. Zoologist Eric Warrant, who was not involved in the research, says that magnetic direction sensing is “the last sense we know, effectually, nothing about.” But this mysterious sense appears essential to migration, and hence, to the survival of many birds. So how, exactly, do they do it?
Humans perceive the world around them with five senses — vision, hearing, taste, smell and touch. Many other animals are also able to sense the Earth’s magnetic field. For some time, a collaboration of biologists, chemists and physicists centred at the Universities of Oldenburg (Germany) and Oxford (UK) have been gathering evidence suggesting that the magnetic sense of migratory birds such as European robins is based on a specific light-sensitive protein in the eye. In the current edition of the journal Nature, this team demonstrate that the protein cryptochrome 4, found in birds’ retinas, is sensitive to magnetic fields and could well be the long-sought magnetic sensor…
Hore says “if we can prove that cryptochrome 4 is the magnetic sensor we will have demonstrated a fundamentally quantum mechanism that makes animals sensitive to environmental stimuli a million times weaker than previously thought possible.”University of Oldenburg, “Mechanism of magnetic sensing in birds” at ScienceDaily (June 23, 2021) The paper requires a subscription.
That might explain the precision of migrating birds, returning to a precise spot year after year.
Warrant says that one barrier to research on quantum magnetosensing, first proposed thirty years ago, was that the hypothesis was first put forward by a physicist and most biologists weren’t in touch with the physics concepts. However, the team that zeroed in on the magnetosensing mechanism includes members from both disciplines.
The challenge, says study co-author Henrik Mouritsen, was to produce cryptochrome molecules in a beaker because it is impractical to study them inside the eye of a living bird. But they succeeded:
Henrik: Now it’s not a hypothesis that this molecule is magnetically sensitive. We can see that it’s magnetically sensitive.
V/O: The team were also curious how cryptochrome proteins compared between birds that migrate and birds that don’t.
Henrik: We then also made cryptochromes from an extreme non-migratory bird – basically the chicken. And it looks like the cryptochrome-4 from the migratory birds are significantly more magnetically sensitive than the same molecule from a chicken.How quantum mechanics help birds find their way (video, 04:10–4:34 min), Nature, June 23, 2021
The researchers also speculate that, because the processing is done in the bird’s visual field, birds may actually see Earth’s magnetic field — perhaps as a shadow imposed over an aerial view.
How do physicists think it actually works?
Quantum entanglement dictates that if two electrons are created at the same time, the pair will be “entangled” so that whatever happens to one particle affects the other. Otherwise, it would violate fundamental laws of physics.
The two particles remain entangled even when separated by vast distances.
So if one particle is spin-up, the other must be spin-down, but what’s mind-boggling is that neither will have a spin until they’re measured.
That means that not only will you not know what the spin of the electron is until you measure it, but that the actual act of measuring the spin will make it spin-up or spin-down.
As difficult as entanglement is to believe, as well as understand, it is a well established property of quantum mechanics. And some physicists are suggesting that birds and other animals might be using the effect to see and navigate Earth’s magnetic fields.
The process could work via light-triggered interactions on a chemical in birds’ eyes.American Association of Physicists, “Migration via quantum mechanics” at PhysicsCentral
How does the sensing system pick up these magnetic fields?
We already know that spin is significantly affected by magnetic fields. Arrange electrons in the right way around an atom, and collect enough of them together in one place, and the resulting mass of material can be made to move using nothing more than a weak magnetic field like the one that surrounds our planet.Mike McCrae, “Birds Have a Mysterious ‘Quantum Sense’. For The First Time, Scientists Saw It in Action” at ScienceAlert (January 8, 2021)
While this finding is a significant step forward in understanding ways birds might migrate vast distances without getting lost (a “spectacular piece of science,” as Warrant puts it), the researchers have not yet worked with living birds. Thus they cannot definitively say that the cryptochrome 4 protein molecule is the critical ingredient in magnetosensing — only that it is a very promising candidate.
Experimental physicist Rob Sheldon offers Mind Matters News some further thoughts on what, exactly, the researchers did and the larger significance of their find:
Magnetic effects are so small, the molecule needs to be in a very fragile “excited” state to sense the magnetic field. It is thought that the cryptochrome molecule gets excited by blue light, and in the excited state, magnetic fields preferentially cause it to de-excite in a certain direction.

They tested this in the lab by coupling the cryptochrome to a fluorescing or glowing molecule, shining a dim blue light on the cell, and watching it glow. Then when they passed a magnetic field over the cell, the glow was dimmed, proving that the cryptochrome molecule was doing something in response to the magnetic field.
Since this sensing is happening at the level of electron spins and excitation, it is an inherently QM [quantum mechanical] effect, hence the title of the article.
This isn’t spooky, and isn’t unusual. Lots of molecules have QM effects. Most of the odor receptors in your nose employ QM effects to identify odorants. Chlorophyll that makes leaves green absorbs light through a QM cascade of electrons. And of course, when the rods & cones in your retina sense photons, it is a QM effect.
What makes the magnetic QM effect so unusual, is that magnetic fields are perhaps 1000 times smaller than the other QM effects I mentioned. So the system has to detect a very weak signal-to-noise-ratio (SNR).
In the lab, we often use difference circuits that are modulated by a frequency and the result is integrated. The difference knocks out the common signal, so its called common mode rejection. The modulation averages over the noise, where real noise always has a zero sum. Then SNR can be boosted by factors of 1000 to 1,000,000, and somehow that is happening in a single cell. That’s the part that is spooky. Packing a $10,000 lock-in amplifier into a 2 micron cell.
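The lock-in trick Sheldon describes is easy to demonstrate in a few lines. In this toy sketch (all numbers invented for illustration), a sinusoidal signal one hundred times smaller than the noise is recovered by mixing with a reference at the modulation frequency and averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1_000_000)   # 10 seconds of samples
f_mod = 137.0                       # modulation frequency, Hz

signal = 0.01 * np.sin(2 * np.pi * f_mod * t)   # the weak signal of interest
noise = rng.normal(0.0, 1.0, t.size)            # noise 100x larger
measured = signal + noise

# Lock-in detection: multiply by a reference at f_mod, then average (low-pass).
reference = np.sin(2 * np.pi * f_mod * t)
recovered = 2 * np.mean(measured * reference)
print(f"recovered amplitude: {recovered:.4f} (true value 0.0100)")
```

The mixing step cancels everything not synchronized with the reference, and the averaging step shrinks the remaining noise with the square root of the number of samples, which is exactly the SNR boost Sheldon describes.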
Some birds are naturally very intelligent — the New Zealand crow, for example — but in this case, the birds with remarkable perception have access to magnetosensing, a sense we are only beginning to understand.
You may also wish to read:
We knew crows were smart but they turn out to be even smarter. We are only beginning to scratch the surface of the mysteries of animal intelligence. Questions abound: How did crows come to be smart when other birds did not? Most birds would survive better if they were smarter but that doesn’t make it happen.
Do birds really understand what they are saying? Remarkable claims are made for some birds. To understand what they are saying, birds would need to understand abstractions; it’s not clear that they can. | <urn:uuid:bb692dda-9589-4abb-87bb-db14202aa77e> | CC-MAIN-2022-05 | https://mindmatters.today/2021/10/physicist-migrating-birds-mysterious-quantum-sense-is-spooky/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00576.warc.gz | en | 0.945855 | 1,712 | 3.6875 | 4 |
People being people, most of us have gotten used to the idea that the methods we routinely use to protect our information are reliable and safe. This is why you educate your users to check if that little padlock appears in their browser search window before they check their bank balance. It's why we go to the trouble of implementing email encryption as well as secure file transfer systems.
But in the tech industry, change is always on the horizon, which means you need to get used to the idea that what you thought was invulnerable today might easily be threatened tomorrow. One of those changes is quantum computing, and it's a field that's developing quickly. For example, earlier this year, Google announced that it had built the largest quantum computing chip ever: a 72-qubit (a quantum bit) processor.
To put that into context, it's important to explain how a qubit differs from the bit you learned about back in computer science class. Those bits are basic units of information represented by either a 1 or a 0. Qubits, which are written |0> and |1> in quantum notation, can also take the values 1 or 0, but can then extend those values to essentially an infinite number of states in between: what changes as you move between 1 and 0 is the probability of reading out each value.
We're not going to go into detail about how this works, except to say that, by having more potential values between 1 and 0, you can perform some types of computation faster. In some cases, many thousands of times faster than what's possible with today's more advanced desktop CPU architectures, like the Intel i9.
Because of the way quantum computers work, they can be used for jobs that are difficult for these more traditional CPU chipsets. This would include tasks such as multidimensional modeling, simulations, and, yes, codebreaking. It's the codebreaking and encryption cracking that's worrying security experts, and is also freaking out some folks involved with cryptocurrencies as well as those involved with the many other developments being made possible by blockchain technology. Blockchains and cryptocurrencies are, after all, simply very large numbers used to create a unit of whatever currency you're considering. Bitcoin, for example, depends on public key cryptography. Public key cryptography is considered one of the most vulnerable to cracking by a quantum computer, which is part of what's making folks with large Bitcoin investments sweat.
What this means to you is that some types of encryption that you depend on are no longer considered secure. Exactly how that may apply to you is described in more detail in this "Report on Post-Quantum Cryptography" published by the US Department of Commerce's National Institute of Standards and Technology (NIST). What you'll find in this NIST paper is that public key encryption is vulnerable to cracking by using algorithms on a quantum computer. But other means of encryption, including Advanced Encryption Standard (AES), which uses symmetric keys, and Secure Hash Algorithm (SHA-2 and SHA-3), will remain secure with some modifications.
Table 1 - Impact of Quantum Computing on Common Cryptographic Algorithms - Credit: NIST
AES with 256-bit keys is actually relatively secure against quantum computing attacks, and AES-256 is commonly used for mundane tasks such as Wi-Fi encryption. However, another widely used technology, secure sockets layer (SSL), relies on public key encryption, which is the vulnerable kind.
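The asymmetry in the NIST table comes down to two quantum algorithms: Shor's, which breaks public-key schemes such as RSA outright, and Grover's, which only halves the effective bit-strength of a symmetric key. A rough sketch of the brute-force arithmetic (idealized operation counts; real attacks and hardware constants vary):

```python
# Brute-force work factors in bits: searching 2^bits keys classically,
# versus ~2^(bits/2) quantum queries using Grover's algorithm.
for name, bits in [("AES-128", 128), ("AES-192", 192), ("AES-256", 256)]:
    print(f"{name}: classical 2^{bits}, quantum (Grover) 2^{bits // 2}")
# AES-256 retains an effective 128-bit security margin even against Grover,
# which is why symmetric encryption survives with larger key sizes.
```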
Calming Your Quantum Computing Fears
For now, you don't need to worry, though as an IT professional, you should start to plan. Despite the rapid development of quantum computing, researchers don't appear to have reached the point where they can routinely decrypt routine business communications. While that may come someday, you're still fairly safe for now as long as you remember these key points:
SSL communications are still safe; and because they are ephemeral, your users don't need to worry that there'll be a stored copy of their banking session or credit card purchase to be retrieved and cracked at a later date. However, that may change in the future.
AES-256 will be safe, even against quantum attacks, for some time. Unless your data is valuable enough for a nation-state to spend millions of dollars to crack it, you don't need to worry. However, if your business handles national security data, then maybe you need to find a better way, and it'd be a good idea to start staying on top of developing cryptographic trends.
Age is important. Unless you need to protect your data for decades against future quantum attacks, some form of symmetric encryption (including AES) will do.
Be prepared for encryption using longer key lengths because those are much harder to crack. Some keys can be found by using brute force techniques but, if the time to crack them by using the fastest quantum computer exceeds the expected age of the universe, then you're probably safe. Longer key lengths will require more computer power to handle, but probably not enough to bog down your systems when they're needed.
Remember that the quality of encryption is only one part of the security puzzle. Poorly executed encryption, weak or faulty software surrounding the encryption, and poor security practices can still expose your critical data through other vulnerabilities. For example, it doesn't help to encrypt your communications if the bad guys can walk into your office and steal the data out of an unlocked file cabinet or, more often, the trash can.
While some forms of encryption now have a limited lifetime, the fact is, you still have time to determine what data you have that may have vulnerabilities because of encryption, and then evaluate whether or not the risk down the road will affect you immediately. For most day-to-day operations, it won't. But if you deal with sensitive data that has a long lifetime, then you need to start planning for the future now.
Researchers at the University of New South Wales devised a two-qubit system inside a silicon chip and ran computer code adapted to the quantum world. Their code passed the notoriously intransigent ‘Bell test’, making it the strongest evidence yet that quantum computers can be reliably instructed to perform operations.
Why should you care about quantum computers
Previously, ZME Science reported how the same Australian researchers devised a working two-qubit logic gate on a silicon chip. Now, the team reports they’ve also crunched some numbers using two quantum particles: an electron and the nucleus of a single phosphorus atom. To understand why this is quite the breakthrough, let’s do a short recap. Transistors perform logic operations by shuttling bits of data, each assigned a value of either “0” or “1”. That’s how a classical, digital computer works. Quantum computers, however, use qubits, or quantum bits, which can simultaneously exist in both states at once, both “0” and “1”. This is known as a superposition, and if scientists can leverage it then information could be processed in parallel. Two qubits can perform operations on four values, three on eight values and so on in powers of two. Today’s computers have billions of transistors. Now imagine a quantum logic gate that works with millions of qubits. The computing force would be unheard of.
The quantum code written by UNSW exploits a quantum phenomenon called entanglement, or “spooky action at a distance”, as the baffled Einstein used to call it. When two quantum particles are entangled, a measurement performed on one of the two instantly affects the other, no matter how far apart they are. You can have an electron here on Earth, and its entangled mate at the other end of the universe, and the two would still instantly react. In September, researchers at the National Institute of Standards and Technology (NIST) quantum teleported information from one photon to another 100 kilometers away. To communicate with Mars, you have to wait a couple of minutes before you can expect a reply, since information transfer is limited by the speed of light. It’s tempting to conclude that quantum teleportation could enable instant communication from any point in the universe, but teleportation still requires a classical signal to complete, so it cannot outrun light. It’s quite exciting stuff all the same.
This is spooky
A consequence of entanglement is superposition. Consider two atoms and their property known as “spin” – this is basically whether the magnetic field of the atom points up or down in an external magnetic field. If two atoms are coupled together in a quantum system (close to each other), as in the H2 molecule, the spins of both atoms can be entangled together in certain circumstances. Whether the spins point in opposite directions, up or down, doesn’t really matter, since both atoms are pointing up and down at the same time: rotating the pair by 180 degrees gives back an identical configuration. In quantum mechanics, we say these atoms exist in a superposition of states.
Ace that test
Superposition is a basic pre-requisite to writing code for quantum computers. The Australian researchers made an electron orbit the nucleus of a single phosphorus atom, so the two are on top of each other. But were the two particles actually entangled? This is where the famous Bell inequality test comes in, named for John Bell, the Northern Irish physicist who devised the theorem in 1964.
“The key aspect of the Bell test is that it is extremely unforgiving: any imperfection in the preparation, manipulation and read-out protocol will cause the particles to fail the test,” said Dr Juan Pablo Dehollain, a UNSW Research Associate who with Dr Stephanie Simmons was a lead author of the Nature Nanotechnology paper.
“Nevertheless, we have succeeded in passing the test, and we have done so with the highest ‘score’ ever recorded in an experiment,” he added.
“Passing the Bell test with such a high score is the strongest possible proof that we have the operation of a quantum computer entirely under control,” said Morello. “In particular, we can access the purely-quantum type of code that requires the use of the delicate quantum entanglement between two particles.”
In a classical computer, operating on two bits, you can write four possible code words: 00, 01, 10 and 11. In a quantum computer, in addition to the bits, you can also write their superpositions such as (01 + 10), or (00 + 11).
“These codes are perfectly legitimate in a quantum computer, but don’t exist in a classical one,” said UNSW Research Fellow Stephanie Simmons, the paper’s co-author. “This is, in some sense, the reason why quantum computers can be so much more powerful: with the same number of bits, they allow us to write a computer code that contains many more words, and we can use those extra words to run a different algorithm that reaches the result in a smaller number of steps.”
“What I find mesmerising about this experiment is that this seemingly innocuous ‘quantum computer code’ — (01 + 10) and (00 + 11) — has puzzled, confused and infuriated generations of physicists over the past 80 years.
“Now, we have shown beyond any doubt that we can write this code inside a device that resembles the silicon microchips you have on your laptop or your mobile phone. It’s a real triumph of electrical engineering,” he added.
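As an illustration of what a code word like (00 + 11) looks like numerically, here is a minimal NumPy sketch, not the UNSW hardware, that builds the state vector and samples correlated measurement outcomes from it:

```python
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11>
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # the (00 + 11) code word

probs = np.abs(bell) ** 2          # measurement probabilities: [0.5, 0, 0, 0.5]
rng = np.random.default_rng(1)
shots = rng.choice(["00", "01", "10", "11"], size=12, p=probs)
print(shots)   # only '00' and '11' ever occur: the two bits always agree
```

No classical two-bit register can be prepared so that it reads 00 half the time and 11 the other half while never containing a definite value beforehand; that is the extra vocabulary Simmons describes.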
Journal reference: Juan P. Dehollain, Stephanie Simmons, Juha T. Muhonen, Rachpon Kalra, Arne Laucht, Fay Hudson, Kohei M. Itoh, David N. Jamieson, Jeffrey C. McCallum, Andrew S. Dzurak, Andrea Morello. Bell’s inequality violation with spins in silicon. Nature Nanotechnology, 2015; DOI: 10.1038/NNANO.2015.262 | <urn:uuid:0c794890-0fcc-4720-9564-3ad1a628db73> | CC-MAIN-2022-05 | https://www.zmescience.com/science/physics/quantum-computer-code-works-004234/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00303.warc.gz | en | 0.916733 | 1,282 | 3.75 | 4 |
Magnets in isolation (Stephen Blundell, Physics)
Professor in Physics
The magnetic properties of solids are due not only to the atoms that comprise them, but also to the way those atoms interact with each other. A single atom of iron does not behave as a magnet, but a piece of iron does, even though you might think that a piece of iron is “nothing but” a collection of iron atoms. When iron atoms come together inside a crystal, they do something extraordinary: each atom interacts with its neighbours by a quantum-mechanical mechanism which results in all the atoms behaving like little magnets that point in the same direction. The net result is a magnetized piece of iron, something that can then magically pick up paperclips, but it’s all due to the interactions between the atoms, not a property of the iron atoms themselves.
Sometimes, however, we want to study the magnetic properties of individual atoms, to find out how they behave without all those interactions. To do that, we need to isolate them, and one of the best ways of doing that is to arrange them in a crystal and surround them with non-magnetic atoms that isolate them from each other. If you’ve ever grown crystals in a school experiment, you may well have encountered copper sulphate. A small piece of copper sulphate is suspended on a piece of string into a jam jar with copper sulphate solution in it. Over a period of days, a beautiful blue copper sulphate crystal starts to grow. Inside the crystal are magnetic copper ions, but they are surrounded by some big, bulky molecules (the sulphate ions and some waters of crystallization) and this keeps the copper ions from talking to each other. They’ve become isolated.
Copper sulphate crystals are only weakly magnetic. You would only notice their magnetic properties by placing them next to a very strong magnet. But, in contrast to iron, we can model those magnetic properties by considering the magnetism be due to individual, non-interacting copper ions. That makes the physical modelling much easier, and allows us to learn about how individual copper ions behave.
However, even though we think the copper ions are isolated and non-interacting, their interactions with each other haven’t been eliminated, just reduced to very low levels. If the crystals are cooled to around one degree above absolute zero then these weak interactions start to become relevant. This is fortunate because, otherwise, we would break one of the lesser-known, but still important, laws of thermodynamics: the third law.
The first law of thermodynamics is the famous one that says energy must be conserved. The second law says that entropy always increases (why your desk gets automatically messier over time but never tidies itself without determined intervention). The third law insists that entropy must go to zero at the absolute zero of temperature. However, for a set of isolated magnetic atoms, each atom would have the freedom to do its own thing, blissfully ignoring its neighbours, resulting in a multiplicity of different possible states, incompatible with zero entropy. Thus, at sufficiently low temperature, when thermal energy is scarce, those weak interactions start to become relevant, linking the atomic magnets and making them drop into one collective state, in accordance with the third law. Thus, however much you try, you’re never in perfect isolation.
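The third-law argument can be made quantitative with Boltzmann’s entropy formula (a standard textbook sketch, not a passage from Blundell’s books). For N non-interacting spin-1/2 atoms, each free to point up or down, the number of microstates is Ω = 2^N, so even at absolute zero the entropy would remain

    S = k_B ln Ω = k_B ln(2^N) = N k_B ln 2 ≠ 0

in conflict with the third law. The weak residual interactions rescue the law by selecting a single collective ground state, so that Ω → 1 and S → 0.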
A Very Short Introduction to Magnetism, S. J. Blundell, Oxford University Press (2012).
Concepts in Thermal Physics, S. J. Blundell and K. M. Blundell, 2nd edition, Oxford University Press (2010).
Check out the Oxford Quantum Materials YouTube channel, run by the Department of Physics in the University: https://www.youtube.com/channel/UCtZ4lUlasLqmulrMXLNMXhw
Mansfield Isolation Conversation
- 3rd and 4th Century Social Distancing in the Desert (Jenn Strawbridge, Theology)
- Avoiding an Empty Universe with Solitary Neutrinos (Steve Biller, Physics)
- Daniel Defoe's Journal of the Plague Year (Ros Ballaster, English)
- Doing Community in Isolation: Mosques, Mecca and One Direction
- Isolation and Revelation (Alison Salvesen, Oriental Studies)
- Magnets in isolation (Stephen Blundell, Physics)
- Oscar Wilde in prison (Michèle Mendelssohn, English)
- Physically, but not socially, isolated: Insights from a small Micronesian island
- Power and politics amidst COVID-19 seclusions—perspectives from geography (Amber Murrey, Geography)
- Samuel Taylor Coleridge’s ‘Fears in Solitude’ (Ruth Scobie, English)
- Social Distancing in Ancrene Wisse (Lucinda Rumsey, English)
- Social distancing and quantum computing – are we all qubits now? (Jason Smith, Materials Science)
- Thomas Nashe: ‘Plague’s Prisoner’ (Chris Salamone, English)
- Even buildings need isolation (Sinan Acikgoz, Engineering)
Image Credit: Smile Fight/Shutterstock.com
Researchers have claimed that nano-diamond batteries could last for 28,000 years. Such batteries would not only be beneficial to the world of electric cars and mobile phones, but their application would also be useful in aerospace and medical technology. This article discusses the development, commercialization, and application of novel nano-diamond batteries.
In 2016, at the annual lecture of the Cabot Institute, University of Bristol, researchers for the first time demonstrated a novel technology that could use nuclear waste to generate energy. They named their product “diamond batteries”. In 2020, a California-based startup company, NDB, developed a highly efficient nano-diamond battery that could last up to 28,000 years without charging. This battery is also based on the utilization of nuclear waste.
Commonly available electricity-generation technologies produce a current by moving a magnet through a coil of wire. The diamond battery, however, can generate a current simply by being placed close to a radioactive source. A team of researchers from the University of Bristol has developed a human-made diamond that generates a small electrical current when placed in a radioactive field.
The researchers at the Cabot Institute have used Nickel-63 as a radioactive source for demonstrating a prototype 'diamond battery'. The radioactive source is encapsulated inside a diamond to produce a nuclear-powered battery. However, the team envisioned using radioactive carbon-14 to obtain a battery with greater efficiency. Tom Scott, Professor in Materials at the University of Bristol, explained the advantages of the technology. He said that this technology would involve the long-term production of clean energy from nuclear waste and not require any maintenance as there are no moving parts or emissions.
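As a rough sense of scale (a back-of-envelope estimate, not a figure published by the Bristol team or NDB), the raw decay power available from one gram of pure carbon-14 can be computed from the textbook half-life of 5,730 years and a mean beta energy of about 49 keV:

    import math

    N_A = 6.022e23                # Avogadro's number, atoms/mol
    M = 14.0                      # molar mass of carbon-14, g/mol
    half_life_s = 5730 * 3.156e7  # 5,730 years in seconds
    E_beta_J = 49e3 * 1.602e-19   # mean beta energy (~49 keV) in joules

    atoms = N_A / M                                   # atoms in one gram
    activity_Bq = math.log(2) / half_life_s * atoms   # decays per second
    power_W = activity_Bq * E_beta_J                  # total decay power

    print(f"activity ~ {activity_Bq:.2e} Bq, power ~ {power_W * 1e3:.1f} mW/g")
    # -> roughly 1.6e11 Bq and ~1.3 mW per gram, before conversion losses

Milliwatts per gram explains both the appeal (decades of maintenance-free trickle power) and why such cells suit low-drain electronics rather than directly propelling a car.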
Development of Nano-Diamond Batteries by NDB
In 2020, NDB announced two proof-of-concept tests conducted at the Cavendish Laboratory at Cambridge University and Lawrence Livermore National Laboratory in California. As stated above, the nano-diamond battery from the NDB used nuclear waste to generate power. The radioactive core is protected with multiple layers of synthetic diamonds or polycrystalline diamond.
The polycrystalline diamond is an exceptionally thermally conductive material. This material also can contain the radiation within the device. The use of a polycrystalline diamond makes the nano-diamond battery immensely tough and tamperproof.
Technologies behind the development of nano-diamond batteries that ensure radiation, thermal, and mechanical safety are discussed below:
- Diamond Nuclear Voltaic (DNV) is a device that consists of a semiconductor. Individual units are connected to form a stack arrangement and fabricated to create a positive and negative contact surface analogous to a standard battery system. This design improves the system's overall efficiency, which includes the generation of a substantial amount of electricity and a multi-layer safety shield for the product.
- All radioactive isotopes produce significant amounts of heat. A single-crystalline diamond (SCD) in the DNV unit and the strategic placement of the radioactive source between the DNV units prevent self-absorption of heat by the radioisotope.
- NDB technology utilizes alpha, beta, and neutron radiation, using boron-10 doping to capture neutrons and convert them into alpha particles (see the reaction sketched just after this list). This design also enables the rapid conversion of radiation into usable electricity.
- The advanced flexible structural design enables it to take any shape based on its application. This feature makes NDB extremely market-friendly.
- The utilization of radioactive waste is a subject that few have researched. NDB reuses radioactive waste by reprocessing and recycling it. This technology ensures sustainability, gives rise to a clean energy source, and has the added advantage of ensuring environmental safety.
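The boron-10 mechanism mentioned above is ordinary neutron-capture physics; the reaction and its energy release (standard nuclear-data values, not NDB-specific figures) are

    n + ¹⁰B → ⁷Li + ⁴He (α) + roughly 2.3 to 2.8 MeV

and the charged alpha particle deposits its energy over just a few micrometres, which is far easier to convert into electricity, and to shield, than the neutron that produced it.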
Researchers believe that this technology would reduce the costs and challenges of storing nuclear waste in the most useful form. NDB envisioned the coexistence of innovation and restoration of a healthy environment. Implementing their innovative technology would improve the standards of living and pave the way towards the development of eco-friendly, green, and sustainable energy.
Applications of Nano-Diamond Batteries
Automotive: This battery could bring about a revolution in the world of electric cars. Researchers believe that this technology will benefit the electric car industry due to its immense longevity and efficiency, unlike any other existing batteries.
Medical Technology: These batteries could immensely contribute to medical devices, especially implantable devices, for example, pacemakers and hearing aids. The long battery life of nano-diamond batteries would be extremely beneficial for patients using such medical implants.
Aerospace: Recent advancements in space technology include the development of electric aircraft, which has created demand for batteries with longevity and safety. Space vehicles and satellites are currently supported by solar power, which is subject to the harsh space environment. NDB batteries could power electric aircraft, drones, and space stations for far longer periods.
Electronics: The use of NDB batteries to power standard electronic devices such as laptops and smartphones negates the need to charge such devices continually. NDB claims its product would benefit consumers by making devices independent of power outlets and by increasing the computational power available to them, even for personal quantum computing.
Defense: NDB can be used in surveillance systems and electronics.
The Future of Nano-Diamond Batteries
As our day-to-day life is heavily dependent on mobile, battery-powered devices, there is a rapid increase in the demand for efficient and cost-effective batteries. Conventional batteries raise several concerns, including global warming and waste accumulation. Nano-diamond batteries overcome these limitations in terms of longevity and breadth of application. Dr. John Shawe-Taylor, University College London, stated that this technology could be the solution to the world’s energy crisis with ‘close to zero environmental impact and energy transportation costs.’
The team at NDB announced that the first commercial prototype battery would be available later this year. They further expressed the high demand for their product by stating that many organizations, including aerospace companies and a leader in nuclear fuel cycle products, are lined up as customers.
References and Further Reading
NDB Technology [Online] Available at: https://ndb.technology/
The University of Bristol (2016) 'Diamond-age' of power generation as nuclear batteries developed. [Online] The University of Bristol. Available at: https://phys.org/news/2016-11-diamond-age-power-nuclear-batteries.html
Chatterjee, Abhishek. (2020) A battery made from nuclear waste that can last 28,000 years. [Online] The Hindu Times. Available at: https://www.thehindu.com/sci-tech/technology/a-battery-made-from-nuclear-waste-that-can-last-28000-years/article32484905.ece
In the future, we will have artificially intelligent computers that can think and reason at the speed of thought. There will be computers that store entire databases about the future, and those computers will “teleport” this information to your personal computer screen. In other words, they will upload the future, and the past, into your personal computer.
If we were to attempt to build a system capable of instantaneously uploading our thoughts to a remote server, and then using that server to run our entire computer system and our entire life software, all without having to understand programming languages, then we would probably need to call it “quantum AI.” However, many researchers feel that this is simply a strawman tactic meant to raise funding and prevent real progress from being made.
In principle, once we are able to build a system that can quickly and easily duplicate every bit of data contained in one second of real time, we would essentially have created a quantum computer. The obvious question is: how does a quantum computer work? The answer lies in the latest technology and techniques. The transistors would be quantum chips, and while they can be shrunk down to sizes where they look like regular chips, their insides are different.
Inside the chip would be millions of transistors, which together would form a network of thousands of lasers, all interacting with each other and producing millions of bits of information that could be read by another device. Once the information has been read, it could be reconstructed by measuring the positions of the individual bits. If the state of a transistor is changed, the information produced is multiplied, and the result is usually sent to a computer, where it can be analyzed. In theory, once this type of computing is realized, it will be much easier to go from raw information to digital information, allowing us to solve problems much faster than ever before.
What is Quantum AI?
Many leading intellectuals, including Max Tegmark, Stephen Wolfram, and Lee Wrinkle, believe we will meet our Space Age goal of sending people to Mars within this century. In their view, quantum AI will enable us to design intelligent software that can perform every task we need or desire it to do.
So, what is quantum AI and how will we utilize its power in the future?
To understand what quantum AI is, one must first know what it is not. Unlike classical computers, which operate by storing classical information in memory, quantum machines work by generating quantum data. This data is fed through channels on the hardware’s physical processors, where it is processed, and results are sent back to the programmers in the form of output. However, in order to understand what quantum AI is, one must also appreciate what deep learning is.
Deep learning uses quantum algorithms to achieve superior results than what can be achieved with classical algorithms. Because quantum computing operates by generating virtual outcomes, the programmers are able to make use of simulated execution environments in order to guide the program’s growth. The environment used for these virtual environments can be completely different from the environments utilized by classical computers. For instance, one might utilize a world with virtual pets where each pet plays a role in the programmers’ development.
While this might seem highly illogical, developers have found that this technique can lead to extremely effective deep learning algorithms.
How does quantum artificial intelligence (QAI) differ from classical computing?
In the case of classical computers, the developers need to deal with problems that cannot be solved by using classical algorithms. However, with the use of quantum computing, developers are able to generate solutions for problems that could not previously be solved. In addition, AI systems can be made as efficient as possible in order to ensure that they meet the requirements of their customers.
As mentioned earlier, developers utilize quantum computing to address two major application areas. First, they can make AI systems as efficient as possible so that they can be used for solving practical problems. In these application areas, the developers use quantum computing to tackle problems such as optimization. Optimization is a common problem whose solutions are difficult to find because they are typically complex or involve highly specialized systems.
Another application area for which developers utilize quantum computing is machine learning. Machine learning is an area of science that utilizes large sets of numbers to approximate the results of scientific calculations. One example of this application is the Google Ion machine learning project which uses quantum data to optimize the search engine results.
As stated above, researchers can make use of quantum algorithms and techniques to solve a wide variety of problems. However, these methods are not meant to be used to implement highly specialized solutions for currently known problems. Instead, the developers make use of classical AI methods in their applications. Classical AI methods work by making use of rules that were proven to be effective in the past, which allow the human mind to emulate them. For instance, if one had to solve a mathematical problem using calculus, he could use proof from the history of mathematics in order to make his solution work.
However, the developers of quantum AI are trying to use more advanced techniques that can be designed using more powerful software. This would mean that even though these methods do not work well on classical computers, they can be used for designing artificially intelligent machine learning algorithms. This software will then be able to solve problems in the future using the principles of quantum computing. Ultimately, this means that we may soon reach the point when human minds can be artificially combined with computer software in order to create entirely intelligent machines that can solve any type of problem imaginable. In the future, you may witness the first true artificial superintelligent machine.
Is Quantum Storage possible?
The technology known as Quantum Storage is becoming more popular by the day. It is built on “qubits”: quantum bits of information that, unlike a plain digital bit, can take on any one of billions of different states, so there is no need to worry about being limited to 0 or 1. These are stored in what is called a qubit chip.
Information centres, otherwise known as servers, are what you will need in order to store your information for you. The servers themselves don’t actually hold the information; rather, it is all stored on hard drives, which are extremely delicate and can crash at any time if the information stored on them is not handled properly. It is in the memory of the server that the Quantum Information Centre stores the data for you. Once the machine is working at its optimal capacity, information is retrieved from the servers and given back to you in the form of digital files, which you can access from your desktop.
Quantum storage is therefore theoretically possible because once your computer has retrieved the information from one of the information centres, it immediately starts saving it to another one of your Quantum Information Centers, making it possible for you to access your files from anywhere in the world, even on another laptop. This is just one of the things that Quantum Storage offers you. In fact, there are many other benefits, including eliminating the long delays that come with hard disk drives when transferring large amounts of data.
Quantum Storage is a sort of software that runs on a computer and allows users of a given network to send each other information. For example, if you had ten files saved on a USB drive and you wanted to transfer the information to your home computer, it would be possible just by plugging the USB drive into a USB port on the computer and saving the file to the relevant folder. The software would run on the computer and immediately begin saving files to the relevant folder, thus allowing the files to be uploaded into the relevant Quantum Information Centre (QIC). The files would then be available for download by any user who is logged into the network.
Australia’s first Women in STEM Ambassador, Professor Lisa Harvey-Smith, discusses the future of astrophysics, the importance of science communication and what it takes to boost STEM diversity.
Use this article as personal professional reading or with students to challenge them about their unconscious bias.
Use the downloadable STEM pack below to challenge this in your classroom.
Word Count: 1600
Not many people have the skill to draw an entire room full of everyday Australians – scientists, science-enthusiasts and the science-illiterate alike – to a two-hour talk on complex astrophysics.
Professor Harvey-Smith’s ability to capture and communicate the universe with such vibrancy is among the many reasons why she was last year named Australia’s first Women in STEM Ambassador.
In her two-year appointment, Professor Harvey-Smith will focus on accelerating the cultural and systematic changes already underway in Australia to keep women in the STEM workforce.
She said women in STEM were often driven out of science by a lack of work flexibility or a toxic workplace culture. So to boost their numbers, she is tackling gender stereotypes that form at a young age.
Smashing stereotypes herself, Professor Harvey-Smith’s research focuses on an intergalactic event that will change our night sky forever.
Every hour, the Milky Way moves 400,000 kilometres closer to a neighbouring galaxy, Andromeda. In 3.8 billion years, these two galaxies will collide, causing brilliant bursts of star formation, the fusion of supermassive black holes and the ignition of fiery gas streams that will tear through space at almost the speed of light.
But before we worry about that, she explains that in half a billion years, humans will need to think about leaving Earth to get away from our hot, expanding sun.
Do you think humans are more likely to move to another planet or to live on a floating space station once the Earth becomes uninhabitable?
The human body doesn’t do too well in space. Although we’ve grown quite good at living in space stations over the past 20-30 years, after a couple of years humans come back weak, with diminished eyesight.
Our bodies have adapted over hundreds of thousands of years to live in gravity, so we’ll have to develop spinning space stations to create artificial gravity – otherwise our bodies will waste away.
Alternatively, when the Earth is too hot we could move to the outer solar system on an icy moon like Saturn’s Enceladus. I think either way, we have some big problems to solve here on Earth today.
We’re destroying our own planet and it’s such an imminent problem. There’s no way we can possibly live on Mars, or another planet or moon, if we can’t control climate on our Earth now.
You’ve played a key role in developing the CSIRO Square Kilometre Array (SKA), which will not only expand our understanding of the universe but also drive technological developments worldwide. What are some examples of these technologies and why is this important?
It’s very exciting working in astronomy because we not only discover things in the universe; the money spent on space research is also spent on things that help us on Earth – that’s something people need to remember when you hear of billions of dollars being spent on astronomy research.
The technologies we take for granted come from the most unexpected types of research that often seem unrelated.
Medical technology and imaging were also developed through fundamental leaps in astronomy and other sciences. Some of the medical technologies used to look at changes in moles to check for melanoma growth stem from astronomy research, for instance.
And the SKA helped develop faster, more reliable wi-fi because of a project at CSIRO to look for exploding black holes.
For the SKA, we’re developing cameras with multiple pixels for radio imaging. In radio astronomy, cameras previously just used one pixel to take images of space, which sounds weird, but it’s true.
We hope that these developments can be translated to medical technologies, for example medical imaging of the body to detect and treat cancers. That could be a very important spin-off.
How and when did you know you wanted to become a science communicator and educator?
It’s really about the way I got into science myself – through some amazing science communicators I grew up with in the UK. My key influencers were television and books.
The BBC had a program called Tomorrow’s World. It imagined the world of the future, but it wasn’t all silver foil, monorails and hovercrafts! It explored how the world can change with technology – that was so inspiring.
I always had this passion to teach but I didn’t want to be a teacher like my mum because, frankly, it’s a very difficult profession. I have the greatest respect for teachers in schools, but I knew it wasn’t for me.
I wanted to use my creativity rather than teach in a confined setting – that’s what I love about science communication, the creative aspect. The challenge of breaking out of my science niche and cutting through the jargon and explain these cool concepts. I find it challenging and engaging and I love watching people’s faces as the penny drops.
How do you think new technologies like machine learning, automation and quantum computing will affect the field of astrophysics over the next 50 years?
Machine learning and the automation of every part of astronomy research is definitely coming – we need it.
We have so much data coming from our new telescopes – going from taking images of just one pixel to multiple pixels and from one telescope to the 130,000 telescopes planned in Western Australia – we will need to use one giant supercomputer brain to study the sky.
Astronomers used to just go through data and images to study space but we can’t do that anymore. There isn’t enough human capacity in the world to do that, so we have to teach computers to be the new scientists.
That’s a very difficult thing to do because humans are surprisingly intelligent compared to computers – computers can only follow specific rules, whereas we have a bit more agency. These new technologies will be massively important and game-changing.
Research will not only be faster, but we will be able to find things we didn’t expect.
When we take a picture of the sky for a whole night using a camera on a telescope, we analyse every millisecond of that picture. Every millisecond the sky changes – there are things flashing and exploding, disappearing and appearing. Those are the flashes of light from a distant universe created by things we’ve not discovered yet.
And those are what bring in the game-changing discoveries. That’s a fundamental shift in the way we do science, because now the computer is alerting us to the things that we don’t expect to see.
Sounds like a great time to be in astrophysics!
What did you wish more people knew about astrophysics?
Astrophysics can be done by anyone.
We have so many citizen science projects where anyone can take part. You can go online and look up Galaxy Zoo, or Citizen Science and you can take part in classifying galaxies, look at how the sky is changing and discover supernovas and star explosions.
And the findings will be used in real research, so I wish people would get involved. It’s really exciting and a great opportunity to be a real scientist in your own home.
Congratulations on being appointed Australia’s first Women in STEM Ambassador! There’s currently a lot of funding pouring into initiatives aimed at increasing girls and women’s participation in STEM – why do you think Australia needs a Women in STEM Ambassador?
My role really is important because it works on a national scale to raise awareness of issues that create roadblocks to girls studying STEM in school at advanced levels and progressing into STEM jobs and careers.
Once women are in science, they’re driven out by bad workplace culture and a lack of work flexibility, particularly around the time when they may have caring responsibilities. There are many different issues, but really I’m trying to accelerate the cultural change that’s already underway in this country.
In particular, I want to tackle some of the stereotypes that form from a young age and, this year, I’m focusing on early learning facilities. We want young people to understand that STEM is for girls and boys, and it can lead to amazing, exciting, fun, world-changing careers.
I really want to go to primary schools and drive this message home, and work with education departments across the country to help young people make the most of their education.
What have you learnt in this role so far and what do you hope to achieve for the remainder of your time as Ambassador?
I’ve learnt that the education system in Australia is very complex. Targeting young children is really a good way to make change before they start forming stereotypes and before they start making decisions about their future study.
Talking to 14-year-olds is actually too late – they’ve already formed a lot of those opinions. Although girls actually outperform boys in many maths and science tests, they have a lower opinion of their ability to do those subjects.
It’s really about breaking stereotypes, building confidence and boosting the understanding of young women about what STEM really means.
Ordinary light could drive quantum computers
by Eric Smalley, Technology Research News
One reason quantum computers are not likely to show up in your neighborhood electronics store any time soon is the laboratory equipment needed to build today's prototypes is hard to come by and difficult to use.
With some improvements to a couple of key devices, though, that could change. Thanks to a scheme concocted by researchers at the Los Alamos National Laboratory, researchers should be able to build quantum computers using common linear optics equipment.
Practical quantum computers could be developed sooner with the means for building prototypes within reach of a greater number of researchers. Quantum computers are expected to solve certain problems like cracking codes and searching large databases much faster than any other conceivable computer.
To achieve quantum computing, researchers manipulate the quantum states of photons or atoms to perform logic operations. Photon manipulation traditionally requires nonlinear optics methods, which use powerful lasers to coax photons from special materials.
The effect the lasers have on the atoms of these materials increases faster than the increase in intensity of the light. Ordinarily, the effect is proportional. This nonlinearity produces strange phenomena, like entangled pairs of photons, that are useful for quantum computing.
"We show that nonlinear optical elements can be simulated using linear optics and photo-detectors, a very surprising result," said Emanuel Knill, a mathematician at Los Alamos National Laboratory. "It opens up an entirely new path toward realizing quantum computers."
Quantum computers based on the Los Alamos linear optics scheme would create quantum bits, or qubits, by using two opposite conditions of individual photons to represent the 0 and 1 values used in binary computing.
There are two sets of opposite conditions. The first is the two possible paths a photon can take when it encounters a beam splitter. The second is either of two pairs of polarizations. Photons are polarized, or oriented, in one of four directions: vertical, horizontal, and two diagonals. Each polarization is paired with its opposite: vertical with horizontal and diagonal with diagonal.
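In symbols, and as one standard convention (not necessarily the exact encoding of the Los Alamos scheme), the polarization qubit identifies the logical basis with horizontal and vertical polarization, with the diagonal pair being their equal superpositions:

    |0⟩ ≡ |H⟩, |1⟩ ≡ |V⟩, |±⟩ = (|H⟩ ± |V⟩)/√2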
Multiple bits can be used to represent larger numbers. Four bits can represent 2⁴, or 16, numbers and 24 bits can represent 2²⁴, or more than 16 million, numbers. Ordinary computers process these numbers one at a time. So, for example, in order to find one number out of 16 million an ordinary computer will have to look through an average of eight million numbers.
What makes a qubit different from an ordinary bit is that it can be in a third state, the quantum mechanical condition of superposition, which is essentially a mix of both 0 and 1. This means it's possible to perform a series of quantum mechanical operations on a series of qubits all at once. For some applications, the number of quantum mechanical operations is exponentially smaller than the number of steps required for a classical computer.
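The search example above is the textbook case for Grover's algorithm, where the gain is quadratic rather than exponential (the exponential advantages belong to problems such as factoring). A quick illustrative calculation, not from the article itself, shows what that buys for the 24-bit case:

    import math

    N = 2 ** 24  # 24 bits: ~16.8 million possible values

    classical_avg = N // 2  # average lookups in an unstructured search
    grover_iters = math.ceil(math.pi / 4 * math.sqrt(N))  # ~(pi/4) * sqrt(N)

    print(f"{classical_avg:,} classical checks vs ~{grover_iters:,} Grover iterations")
    # -> 8,388,608 classical checks vs ~3,217 Grover iterations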
The quantum mechanical operations are sequenced to make up logic gates, which perform the basic mathematics of computing. Most quantum logic gate schemes require particles in more complicated quantum arrangements like entanglement. According to Knill, however, it is possible to create logic gates by manipulating the photons that are in the superpositions created by the linear optics.
Quantum computers based on photons rather than atoms will be easier to network because there will be no need to transfer quantum information between atoms and photons. "The only realistic proposals for long distance quantum communication are based on photons," Knill said.
Before the scheme can be implemented, however, researchers will need to improve both the light source and the photon detector. Two recently developed single-photon emitters hold out the promise that the necessary equipment could be available to researchers within a few years, said Knill.
"I think it's a neat idea," said John Preskill, professor of theoretical physics and director of the Institute for Quantum Information at the California Institute of Technology. "Any theoretical ideas that help make realizations of quantum logic technically less demanding might turn out to be important ideas."
Preskill led a research team that proposed a different scheme for quantum computing using linear optics, though that scheme requires its initial state to be prepared using nonlinear optics.
"There have been a lot of previous discussions of using information encoded in photons to [make] universal quantum gates, but always involving some kind of nonlinear coupling between photons, and those are hard to manage," said Preskill. "The stuff that Knill et al are talking about in principle is much easier. It uses tools that are available in lots of laboratories," he said.
Despite the potential for linear optics to speed things up, it would be a significant achievement if in 25 years a quantum computer can solve problems that are beyond the reach of classical computers, said Knill.
"Quantum computation by any means is a long way off," he said. "Our proposal adds to the tool box of possible experimental realizations, which may help speed things up. The fact is, the necessary experiments are extremely demanding."
Knill's research colleagues were Raymond Laflamme of Los Alamos National Laboratory and Gerard J. Milburn of the University of Queensland in Australia. They published the research in the January 4, 2001 issue of Nature. The research was funded by the Department of Energy and the National Security Agency.
Preskill's research colleagues were Daniel Gottesman of the University of California at Berkeley and Alexei Kitaev of Microsoft Research. Their work is scheduled the published in the journal Physical Review A. The research was funded by the Department of Energy and the Defense Advanced Research Projects Agency.
Timeline: 25 years
TRN Categories: Quantum Computing
Story Type: News
Related Elements: Technical paper, "A scheme for efficient quantum computation with linear optics," Nature, January 4, 2001; Technical paper, "Encoding a qudit in an oscillator," http://arXiv.org/abs/quant-ph/0008040
January 31, 2001
Over the years, supercomputers have played a pivotal role in pushing the frontiers of science. Earlier this year, Meta launched one of the fastest AI supercomputers, the AI Research SuperCluster (RSC), to build sophisticated AI models that can learn from trillions of examples; navigate hundreds of different languages; seamlessly analyse text, images, and video together; build AR tools etc.
However, the quest for something even faster than supercomputers led to the development of quantum computers. Last year, the University of Science and Technology of China (USTC) introduced the world’s fastest programmable superconducting quantum computer; Zuchongzhi 2.1 is a million times faster than a conventional computer.
At last year’s I/O conference, Google unveiled a Quantum AI campus in Santa Barbara, California, complete with a quantum data centre, quantum hardware research labs, and quantum processor chip fab facilities. The tech giant plans to build a useful, error-corrected quantum computer within a decade.
Quantum computers of the future will solve complex problems faster and more efficiently than supercomputers. But does it mean supercomputers will become obsolete? Let’s find out.
The first supercomputers came into existence in the 1960s. However, modern supercomputers were developed much later, in the 1990s. In 1997, Intel developed the first 1-teraFLOPS supercomputer, ‘ASCI Red’. Today, the Fugaku supercomputer, located at the RIKEN Centre for Computational Science in Japan, has three times the processing power of the world’s second-fastest computer, IBM’s Summit. Fugaku has clocked a maximum performance of 442,010 teraFLOPS.
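Those two machines bracket the classical story neatly. A quick illustrative calculation (assuming Fugaku's 2020 TOP500 debut as the endpoint) turns the gap into an effective doubling time:

    import math

    asci_red_tflops = 1.0      # Intel ASCI Red, 1997
    fugaku_tflops = 442_010.0  # Fugaku figure quoted above (2020)

    ratio = fugaku_tflops / asci_red_tflops
    doublings = math.log2(ratio)               # ~18.8 doublings
    doubling_time = (2020 - 1997) / doublings  # in years

    print(f"~{ratio:,.0f}x in 23 years, doubling every {doubling_time:.1f} years")
    # -> ~442,010x in 23 years, doubling every ~1.2 years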
Quantum computers, as a concept, were first proposed in the 80s by Richard Feynman and Yuri Manin. In 1998, Isaac Chuang of the Los Alamos National Laboratory, Neil Gershenfeld of MIT, and Mark Kubinec of the University of California built the first quantum computer (2-qubit). In 2017, IBM announced the world’s first quantum computer for commercial use.
“Quantum computing has seen a major boost in the last 10-15 years. Companies worldwide are investing in various quantum technologies and making their quantum hardware.
“Today, we are in the NISQ (noisy intermediate-scale quantum) era, working on 100-qubit quantum systems. They may not deliver perfect results (read noisy and erroneous), but you can still work with them. However, we are still very far from achieving the maturity level to have a fully fault-tolerant quantum computer,” said Srinjoy Ganguly, senior data scientist.
Be it IBM’s Sierra or the Sunway TaihuLight, the supercomputers we see today operate at a high compute to I/O ratio. Compared to a conventional computer, a supercomputer runs on multiple processors. The Sunway TaihuLight, one of the top 5 fastest supercomputers globally, has around 40,960 processing modules, each with 260 processor cores.
While a conventional computer works on binary, quantum computers rely on units of information called qubits (encoded in subatomic particles such as electrons or photons) with far greater processing power. The qubits only work in a controlled quantum state – at temperatures near absolute zero or in ultra-high-vacuum chambers.
Quantum computing is predicated on two phenomena:
Superposition is the ability of qubits to be in different states simultaneously, allowing them to work on a million computations at the same time. However, qubits are sensitive to their environment, so they can’t maintain their state for long periods. As a result, quantum computers can’t be used to store information long-term.
Einstein described quantum entanglement as spooky action at a distance. It is the ability of two or more quantum systems to become entangled irrespective of how far apart they are. Thanks to the correlation between the entangled qubits, gauging the state of one qubit gives information about the other qubit. This particular property accelerates the processing speed of quantum computers.
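A small simulation makes the correlation concrete. The sketch below (illustrative NumPy, not a real quantum device) prepares the entangled state (|00⟩ + |11⟩)/√2 and samples joint measurement outcomes with the Born rule:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Amplitudes of (|00> + |11>)/sqrt(2) over the basis 00, 01, 10, 11
    state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    probs = np.abs(state) ** 2  # Born rule: probability = |amplitude|^2

    outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)
    print({o: int((outcomes == o).sum()) for o in ["00", "01", "10", "11"]})
    # -> only "00" and "11" occur, ~5,000 each

The two qubits are never found to disagree: reading one immediately tells you the other, which is the correlation described above.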
Supercomputers are bound by the normal laws of physics. More the processors, the better the speed. Quantum computers are far more efficient than supercomputers as the former harnesses the power of quantum mechanics to carry out calculations. In 2020, China claimed to have developed a quantum computer that performs computations 100 trillion times faster than any supercomputer.
Development and infrastructure cost
Building a supercomputer would cost somewhere between USD 100 million and USD 300 million. For example, the Chinese Sunway TaihuLight cost around USD 273 million. Additionally, the annual maintenance charges fall between USD 4 million and USD 7 million.
Quantum computers are prohibitively expensive. The hardware part alone will cost tens of billions of dollars. The cost per qubit has to come down drastically to make quantum computers commercially viable. At present, a single qubit costs around USD 10,000. Also, qubits operate in a quantum state either in a sub-zero temperature or a vacuum environment, which is very expensive to maintain.
Though a non-quantum algorithm can be run on quantum computers, a quantum algorithm, such as Shor’s algorithm for factoring and Grover’s algorithm for searching an unstructured database, doesn’t work on a supercomputer.
Applications of Supercomputers
Both quantum computing and supercomputing are deployed in cases where large databases are involved. Let’s look at a few use cases:
Weather forecasting: The weather reports we receive on our smart devices come from a supercomputer. Besides predicting the possibility of rain in your city, supercomputers also predict the path of hurricanes and help save thousands of lives.
Last year, the UK’s Met Office signed a 1.2 billion pound deal with Microsoft to develop the world’s most powerful supercomputer to help with preparedness in the face of extreme weather events.
Scientific research: Supercomputers provide insights into complex fields of study. Laboratory experiments are expensive and time-consuming. Hence it is logical to use supercomputers to simulate these laboratory experiments. For example, multiple supercomputers were leveraged across the world to fight the COVID virus and develop vaccines.
Application of quantum computers
“At present, we cannot perform operations on a qubit that lasts more than a few microseconds. Because of this, the quantum data gets lost, making it difficult to be used for AI or other general tasks,” said Ganguly.
Researchers are working on QRAM, a computing unit that will allow storing quantum states for several hours. Quantum computers have applications in fields such as:
Drug design & development: Quantum computers are used to test drug combinations and their interactions. Traditionally, drugs are developed via the trial and error method, which is expensive and risky at the same time.
Computational chemistry: Unlike supercomputers, quantum computers exploit the simultaneous existence of both 0 and 1, offering immense computational power to map molecules effectively.
Cryptography: Quantum computers could facilitate secure communications with the help of quantum key distribution. However, there is also a downside.
Recently, US President Joe Biden signed a memorandum asking government agencies to implement quantum-resistant cryptography on their most important systems. The RSA encryption, the most widely used form of encryption, is based on 2048-bit numbers. A quantum computer could break this encryption. As of yet, we don’t have a quantum computer with such capability.
Scientists at Princeton University used a scanning tunneling microscope to image the atomic structure of an iron wire, one atom wide, on a lead surface. The enlarged portion of the image shows the quantum probability of finding an elusive particle called the Majorana fermion in the wire. Importantly, the picture shows the particle at the end of the wire, which is exactly where theoretical calculations had predicted it for many years.
If you thought that the search for the Higgs boson - the elusive particle that gives matter mass - was epic, then consider the physicists who have been trying to find a way to detect another subatomic particle, hidden since the 1930s, when its existence was first proposed.
But now, thanks to the use of two fantastically large microscopes, this very strange and potentially revolutionary particle has been discovered.
Imagine the Majorana fermion, a particle that is also its own antiparticle, a candidate for dark matter, and a possible mediator of quantum computing.
The Majorana fermion is named after the Italian physicist Ettore Majorana, who formulated a theory describing this unique particle. In 1937, Majorana predicted that a stable particle could exist in nature that is both matter and antimatter. Our everyday experience includes both matter (which is found in abundance in our Universe) and antimatter (which is extremely rare). If matter and antimatter meet, they annihilate, disappearing in a flash of energy. One of the biggest mysteries of modern physics is how the Universe came to contain more matter than antimatter. Logic dictates that matter and antimatter are two sides of the same coin and should have been created at the same rate; in that case, the Universe would have annihilated itself before it could become established. However, some process after the Big Bang produced more matter than antimatter, and it is fortunate that matter won, filling the Universe that we know and love today.
However, the Majorana fermion is different: it is its own antiparticle. While the electron is matter and the positron is the electron’s antimatter partner, the Majorana fermion is both matter and antimatter at once. It is this matter/antimatter duality that has made this little beast so difficult to track down over the past 80 years. But physicists have now done it, and accomplishing the task took tremendous ingenuity and an enormously large microscope.
Theory shows that the Majorana fermion should appear at the edge of certain materials. So a team at Princeton University created an iron wire one atom thick on a lead surface and zoomed in on the end of the wire using a mega-microscope in the ultra-low-vibration laboratory at Jadwin Hall in Princeton.
“This is the easiest way to see the Majorana fermion, which is expected to be created on the edge of some materials,” says lead physicist Ali Yazdani of Princeton University, New Jersey, in a press release. "If you want to find this particle inside a material, you must use a microscope that allows you to see where it really is." Yazdani’s research was published in the journal Science on Thursday (October 2). The search for the Majorana fermion is significantly different from the searches for other subatomic particles that receive more attention in the popular press. Hunting for the Higgs boson (and similar particles) requires the most powerful accelerators on the planet to generate the enormously energetic collisions necessary to simulate conditions soon after the Big Bang. This is the only way to isolate the rapidly decaying Higgs boson, and then study the products of its decay.
In contrast, the Majorana fermion can only be detected in a substance by its effect on the atoms and the forces surrounding it - so no powerful accelerators are required, but the use of powerful scanning tunneling microscopes is necessary. Very fine tuning of the target material is also required in order for the Majorana fermion to be isolated and displayed.
This strict control requires extreme cooling of the thin iron wires to ensure superconductivity. Superconductivity is achieved when the thermal fluctuations of a material are reduced to such an extent that electrons can pass through it with zero resistance. By cooling the target to minus 272 degrees Celsius (one degree above absolute zero, or 1 kelvin), ideal conditions can be achieved for the formation of the Majorana fermion.
“This shows that this (Majorana) signal exists only on the edge,” said Yazdani. “This is a key signature. If you do not have it, then this signal may exist for other reasons.” Previous experiments had hinted at possible signals from the Majorana fermion in similar setups, but this is the first time that the particle’s signal has appeared, with all sources of interference removed, exactly in the place where it was predicted to be. “This can only be achieved through an experimental setup — simple and without the use of exotic materials that could interfere,” Yazdani said.
“What is interesting is that it is very simple: it is lead and iron,” he said.
It has now been found that there are some interesting opportunities for several areas of modern physics, engineering and astrophysics.
For example, the Majorana fermion interacts only weakly with ordinary matter, as does the ghostly neutrino. Physicists are not sure whether the neutrino has a separate antiparticle or whether, like the Majorana fermion, it is its own antiparticle. Neutrinos abound in the universe, and astronomers often note that neutrinos could make up a large part of the dark matter thought to fill the Cosmos. Neutrinos may themselves be Majorana particles, and Majorana fermions are also candidates for dark matter.
There is also a potentially revolutionary industrial application if physicists can encode matter with Majorana fermions. Currently, electrons are used in quantum computing, potentially creating computers that can solve previously intractable problems in an instant. But electrons are notoriously difficult to control, and they often disrupt calculations after interacting with the materials around them. The Majorana fermion, by contrast, interacts extremely weakly with material and is surprisingly stable thanks to its matter/antimatter duality. For these reasons, scientists may be able to harness this particle, engineering it into materials and encodings, and perhaps discovering entirely new methods of quantum computing.
Thus, although its discovery lacks the drama of relativistic particles being smashed together in the vacuum chambers of the LHC detectors, the subtler detection of the Majorana fermion could open a new approach to dark matter and revolutionize computing.
And, perhaps, the 80-year wait for its discovery was worth it, after all.
Quantum physics analyzes quantum systems (QS) that have the ability to exist in a superposition of different states simultaneously. Lately, scientists have developed computers based on quantum mechanical principles instead of classical physics.
Researchers and physicists from the Max Planck Institute generated a photon pair from the energy of a single electron, using a novel light source. The first photon can be used for quantum information transmission, while the other is observed, as it displays the specific state of its twin at any time. This is what we call entanglement.
Quantum Communication at a Higher Level
Entanglement is one of the most important phenomena of quantum mechanics. Imagine that two particles are put into a state where there is a strong correlation between them (i.e., entanglement).
Measuring one particle affects the state of the other. This correlation exists, even if there is a great distance between them. Now, the most important part is the ability for scientists to learn information about the state of one by measuring the state of the second. When another particle comes into play, interacting with the second of the entangled two, the change is reflected on the former as well. This creates a mirror-effect of the twin, essentially causing teleportation.
Quantum particles change their states the moment they are measured. So, it is very difficult to have a clear view of the information that is transmitted by a photon. The solution is the pair of photons, which gives us the ability to use the first photon as a messenger of its twin.
Information Inside the Quantum World
In the near future, scientists expect that quantum computers will be the key to secure information technology. One example of the significance of quantum security: when a quantum form of cryptography was examined, the results indicated that it is unbreakable, even for quantum systems.
The discovery by the scientists at the Max Planck Institute for Solid State Research was a unique source that creates the pairs of photons. This source is based on a device known as a scanning tunneling microscope (STM).
Previous research has used this microscope to study the surfaces of conducting or semiconducting materials. The device is based on an effect known as quantum tunneling. Concepts in quantum mechanics state that electrons have both wave and particle-like properties. Tunneling is an effect of this wavelike nature, a quantum mechanical effect.
A tunneling current occurs when electrons move through a barrier (which is called tunneling). Under the rules of classical physics, the electrons should not be able to move through.
The image shows us that when an electron (the wave) hits a barrier, the wave doesn’t abruptly end, but tapers off very quickly – exponentially. This is a quantum mechanical effect that occurs when electrons move through a barrier due to their wave-like properties. Tunneling depends on the thickness of the barrier; the wave does not get past a thick barrier. (Source: Public Domain)
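For a rectangular barrier of height V₀ and width d, the standard textbook result (general quantum mechanics, not specific to this experiment) is that the transmission probability falls off exponentially with thickness,

    T ∝ e^(−2κd), with κ = √(2m(V₀ − E)) / ħ

and it is this exponential sensitivity to the gap d that lets an STM resolve height changes far smaller than an atom.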
The microscope applies a voltage to a metallic tip, causing electrons to tunnel over a short distance to a sample. If an electron loses energy during this process, light is produced. Photon pairs are also formed at a rate 10,000 times higher than theories predict.
“According to theory, the probability of a photon pair forming is so low that we should never see it. But our experiments show that photon pairs are being generated at a much higher rate. That was a huge surprise for us”, underlines researcher Christopher Leon.
Pairs of Photons – Fast and Lossless Data Transmission
Physicists use detectors to measure the time intervals between the arriving photons. Until now, researchers were not sure whether the photons are produced simultaneously or in rapid succession, because of the limited resolution of the detectors. Now it has been estimated that the photons arrive within 50 trillionths of a second (50 picoseconds) of each other in a tunneling junction.
This discovery opens up innovative developments, in photonic and quantum communication, for tunneling junctions. The main difference between previous methods and the tunneling junction is that while the other techniques employed intense laser light, the latter is electronic on an atomic scale.
Many scientists agree that in the next generation of computer chips, electronic components will be replaced by optical components with the use of a light source.
“The fact that photon pairs are generated indicates that a complicated process must be taking place. This process is thrilling because it opens up a new perspective on how light is produced,” explains theoretical physicist Olle Gunnarsson.
Presently, one quantum computer has 2,000 qubits and is estimated to be capable of solving certain calculations up to 10,000 times faster than a standard computer. Every innovative theorem or discovery in quantum computing is a big step in the evolution of science, as this era of computing is hailed as the new wave in the use and processing of big data, bringing great technological, social and scientific changes.
“We must be clear that when it comes to atoms, language can be used only as in poetry.” - Niels Bohr
Top Image: Quantum particles behave both as particles and waves. (Source: Pixabay)
For physicists trying to harness the power of electricity, no tool was more important than the vacuum tube. This lightbulb-like device controlled the flow of electricity and could amplify signals. In the early 20th century, vacuum tubes were used in radios, televisions and long-distance telephone networks.
But vacuum tubes had significant drawbacks: They generated heat; they were bulky; and they had a propensity to burn out. Physicists at Bell Labs, a spin-off of AT&T, were interested in finding a replacement.
Applying their knowledge of quantum mechanics—specifically how electrons flowed between materials with electrical conductivity—they found a way to mimic the function of vacuum tubes without those shortcomings.
They had invented the transistor. At the time, the invention did not grace the front page of any major news publication. Even the scientists themselves couldn’t have appreciated just how important their device would be.
First came the transistor radio, popularized in large part by the new Japanese company Sony. Spreading portable access to radio broadcasts changed music and connected disparate corners of the world.
Transistors then paved the way for NASA’s Apollo Project, which first took humans to the moon. And perhaps most importantly, transistors were made smaller and smaller, shrinking room-sized computers and magnifying their power to eventually create laptops and smartphones.
These quantum-inspired devices are central to every single modern electronic application that uses some computing power, such as cars, cellphones and digital cameras. You would not be reading this sentence without transistors, which are an important part of what is now called the first quantum revolution.
Quantum physicists Jonathan Dowling and Gerard Milburn coined the term “quantum revolution” in a 2002 paper. In it, they argue that we have now entered a new era, a second quantum revolution. “It just dawned on me that actually there was a whole new technological frontier opening up,” says Milburn, professor emeritus at the University of Queensland.
This second quantum revolution is defined by developments in technologies like quantum computing and quantum sensing, brought on by a deeper understanding of the quantum world and precision control down to the level of individual particles.
A quantum understanding
At the dawn of the 20th century, a new theory of matter and energy was emerging. Unsatisfied with classical explanations about the strange behavior of particles, physicists developed a new system of mechanics to describe what seemed to be a quantized, uncertain, probabilistic world.
One of the main questions quantum mechanics addressed was the nature of light. Eighteenth-century physicists believed light was a particle. Nineteenth-century physicists proved it had to be a wave. Twentieth-century physicists resolved the problem by redefining particles using the principles of quantum mechanics. They proposed that particles of light, now called photons, had some probability of existing in a given location—a probability that could be represented as a wave and even experience interference like one.
This newfound picture of the world helped make sense of results such as those of the double-slit experiment, which showed that particles like electrons and photons could behave as if they were waves.
But could a quantum worldview prove useful outside the lab?
At first, “quantum was usually seen as just a source of mystery and confusion and all sorts of strange paradoxes,” Milburn says.
But after World War II, people began figuring out how to use those paradoxes to get things done. Building on new quantum ideas about the behavior of electrons in metals and other materials, Bell Labs researchers William Shockley, John Bardeen and Walter Brattain created the first transistors. They realized that sandwiching semiconductors together could create a device that would allow electrical current to flow in one direction, but not another. Other technologies, such as atomic clocks and the nuclear magnetic resonance used for MRI scans, were also products of the first quantum revolution.
Another important and, well, visible quantum invention was the laser.
In the 1950s, optical physicists knew that hitting certain kinds of atoms with a few photons at the right energy could lead them to emit more photons with the same energy and direction as the initial photons. This effect would cause a cascade of photons, creating a stable, straight beam of light unlike anything seen in nature. Today, lasers are ubiquitous, used in applications from laser pointers to barcode scanners to life-saving medical techniques.
All of these devices were made possible by studies of the quantum world. Both the laser and transistor rely on an understanding of quantized atomic energy levels. Milburn and Dowling suggest that the technologies of the first quantum revolution are unified by “the idea that matter particles sometimes behaved like waves, and that light waves sometimes acted like particles.”
For the first time, scientists were using their understanding of quantum mechanics to create new tools that could be used in the classical world.
The second quantum revolution
Many of these developments were described to the public without resorting to the word “quantum,” as this Bell Labs video about the laser attests.
One reason for the disconnect was that the first quantum revolution didn’t make full use of quantum mechanics. “The systems were too noisy. In a sense, the full richness of quantum mechanics wasn't really accessible,” says Ivan Deutsch, a quantum physicist at the University of New Mexico. “You can get by with a fairly classical picture.”
The stage for the second quantum revolution was set in the 1960s, when the Northern Irish physicist John Stewart Bell shook the foundations of quantum mechanics. Bell proposed that entangled particles were correlated in strange quantum ways that could not be explained with so-called “hidden variables.” Tests performed in the ’70s and ’80s confirmed that measuring one entangled particle really did seem to determine the state of the other, faster than any signal could travel between the two.
The other critical ingredient for the second quantum revolution was information theory, a blend of math and computer science developed by pioneers like Claude Shannon and Alan Turing. In 1994, combining new insight into the foundations of quantum mechanics with information theory led the mathematician Peter Shor to introduce a fast-factoring algorithm for a quantum computer, a computer whose bits exist in superposition and can be entangled.
Shor’s algorithm was designed to quickly divide large numbers into their prime factors. Using the algorithm, a quantum computer could solve the problem much more efficiently than a classical one. It was the clearest early demonstration of the worth of quantum computing.
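For intuition, here is a toy classical sketch of the reduction at the heart of Shor’s algorithm. Everything below runs on an ordinary computer; the quantum speedup comes entirely from the period-finding step, which is done here by brute force and is exactly what blows up for large numbers.

```python
from math import gcd
from random import randrange

def find_period(a, n):
    # Smallest r with a**r % n == 1 -- the step a quantum computer accelerates.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    # Shor's reduction: turn period finding into factoring.
    while True:
        a = randrange(2, n)
        if gcd(a, n) > 1:
            return gcd(a, n)        # lucky guess already shares a factor
        r = find_period(a, n)
        if r % 2 == 0:
            f = gcd(pow(a, r // 2, n) - 1, n)
            if 1 < f < n:
                return f            # a nontrivial factor of n

print(shor_factor(3233))  # prints 61 or 53
```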
“It really made the whole idea of quantum information a new concept that those of us who had been working in related areas instantly appreciated,” Deutsch says. “Shor’s algorithm suggested the possibilities new quantum tech could have over existing classical tech, galvanizing research across the board.”
Shor’s algorithm is of particular interest in encryption because the difficulty of identifying the prime factors of large numbers is precisely what keeps data private online. To unlock encrypted information, a computer must know the prime factors of a large number associated with it. Use a large enough number, and the puzzle of guessing its prime factors can take a classical computer thousands of years. With Shor’s algorithm, the guessing game can take just moments.
Today’s quantum computers are not yet advanced enough to implement Shor’s algorithm. But as Deutsch points out, skeptics once doubted a quantum computer was even possible.
“Because there was a kind of trade-off,” he says. “The kind of exponential increase in computational power that might come from quantum superpositions would be counteracted exactly by exponential sensitivity to noise.”
While inventions like the transistor required knowledge of quantum mechanics, the device itself wasn’t in a delicate quantum state, so it could be described semi-classically. Quantum computers, on the other hand, require delicate quantum connections.
What changed was Shor’s introduction of error-correcting codes. By combining concepts from classical information theory with quantum mechanics, Shor showed that, in theory, even the delicate state of a quantum computer could be preserved.
Beyond quantum computing, the second quantum revolution also relies on and encompasses new ways of using technology to manipulate matter at the quantum level.
Using lasers, researchers have learned to sap the energy of atoms and cool them. Like a soccer player dribbling a ball up field with a series of taps, lasers can cool atoms to billionths of a degree above absolute zero—far colder than conventional cooling techniques. In 1995, scientists used laser cooling to observe a long-predicted state of matter: the Bose-Einstein condensate.
Other quantum optical techniques have been developed to make ultra-precise measurements.
Classical interferometers, like the type used in the famous Michelson-Morley experiment that measured the speed of light in different directions to search for signs of a hypothetical aether, looked at the interference pattern of light. New matter-wave interferometers exploit the principle that everything—not just light—has a wavefunction. Measuring changes in the phase of atoms, which have far shorter wavelengths than light, could give unprecedented control to experiments that attempt to measure the smallest effects, like those of gravity.
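The advantage of atoms over light comes straight from the de Broglie relation λ = h/(mv). Here is a quick back-of-the-envelope comparison, using an illustrative post-laser-cooling velocity rather than numbers from any particular experiment:

```python
h = 6.626e-34      # Planck constant, J*s
m_rb = 1.443e-25   # mass of a rubidium-87 atom, kg
v = 1.0            # velocity of a laser-cooled atom, m/s (illustrative)

atom_wavelength = h / (m_rb * v)
print(f"atom: {atom_wavelength:.1e} m vs. visible light: ~5e-7 m")
# ~4.6e-9 m -- roughly a hundred times shorter than an optical wavelength,
# so phase shifts accumulate correspondingly faster in the interferometer.
```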
With laboratories and companies around the world focused on advancements in quantum science and applications, the second quantum revolution has only begun. As Bardeen put it in his Nobel lecture, we may be at another “particularly opportune time ... to add another small step in the control of nature for the benefit of [hu]mankind.” | <urn:uuid:d9e3eaa4-546c-48ce-bc7f-8e503f7a25b1> | CC-MAIN-2022-21 | https://www.symmetrymagazine.org/article/the-second-quantum-revolution | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00477.warc.gz | en | 0.95665 | 2,012 | 4.0625 | 4 |
The roots of encryption go deep into human history. Encryption has been used for centuries to encode messages, usually to keep government secrets, but also to protect business or trade secrets such as the formula to make silk or pottery. Early encryption was fairly simplistic, largely relying on paper and pencil techniques like steganography, transposition and substitution. In the last century, encryption methods have advanced at a rapid clip, first by leveraging automation and the use of machinery and then by employing advanced mathematics and powerful computers.
While encryption today involves powerful computers, it wasn't always so complicated or ubiquitous.
Early Encryption Methods
It is said that in 700 B.C., the Spartan military used scytales to send secret messages during battle. The sender and the recipient each possessed a wooden rod of the same diameter and length. The sender would tightly wind a piece of parchment or leather around the stick and write a message. The unwound document would be sent to the recipient, who would wind it around his stick to decode the message. In its unwound state, the message was gibberish.
Julius Caesar created one of the simplest and most recognized encryption techniques: the Caesar cipher. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. He used this method in his private correspondence at a time when many of his enemies could not read and others may have assumed the message was written in a foreign language. It is therefore assumed to have been reasonably secure in the first century B.C., but today a single-alphabet substitution cipher is easily broken and offers essentially zero security.
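The whole scheme fits in a few lines of Python, which is itself a hint at how weak it is; an attacker only has to try 25 shifts.

```python
def caesar(text, shift):
    # Shift each letter by `shift` alphabet positions; leave the rest alone.
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

cipher = caesar("ATTACK AT DAWN", -3)   # left shift of 3, as described above
print(cipher)                # XQQXZH XQ AXTK
print(caesar(cipher, 3))     # shifting back recovers the plaintext
```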
In the 15th century, Italy’s Leon Battista Alberti was the quintessential Renaissance man. Mostly known for being an artist, he also is credited as an author, architect, priest, poet, linguist, philosopher and cryptographer. In 1467, Alberti invented the first polyalphabetic substitution cipher. The Alberti Cipher consisted of two metal discs on the same axle, one inside the other, and involved mixed alphabets and variable rotations. It changed the course of encryption: unlike previous ciphers, the Alberti Cipher was impossible to break without knowledge of the method. This was because the frequency distribution of the letters was masked, and frequency analysis – the only known technique for attacking ciphers at that time – was no help.
During his tenure as George Washington’s Secretary of State, Thomas Jefferson invented the Jefferson disk, or wheel cipher. The system used a set of wheels or disks, and the letters of the alphabet were inscribed on each wheel in random order. Turning them would scramble and unscramble words. Each disk was marked with a unique number, and the hole in the center of the disk allowed the disks to be stacked on an axle in any order desired. To encrypt a message, both sender and receiver had to arrange the disks in the same predefined order. By using 36 disks, Jefferson’s disk was considered unbreakable at the time.
Encryption and War
Jefferson’s disk was independently reinvented in the late 19th century by Commandant Etienne Bazeries, and named the Bazeries cylinder. It was used as a U.S. Army field cipher after World War I. But perhaps the most famous wartime encryption machine is Enigma. Invented by Arthur Scherbius, Enigma was Germany's main cryptographic technology during World War II. The Enigma machine consisted of a basic keyboard, a display that would reveal the cipher text letter and a scrambling mechanism. Each plain text letter entered via the keyboard was transcribed to its corresponding cipher text letter. Enigma was eventually broken due in large part to the work of Marian Rejewski, a Polish statistician, mathematician and code breaker. Before Germany invaded Poland, Rejewski transferred all his research to the British and the French. The team at Bletchley Park, including Alan Turing, used Rejewski's work to build bombes, electromechanical machines that were designed specifically to break Enigma. This work is credited with being a crucial step to ending World War II.
Encryption in Today’s Computing World
Advances in computing led to even greater advances in encryption. In 1977, the National Bureau of Standards adopted the Data Encryption Standard (DES), using what was then state-of-the-art 56-bit encryption – even supercomputers of the day could not crack it. In general, the longer the key is, the more difficult it is to crack the code. This holds true because deciphering an encrypted message by brute force would require the attacker to try every possible key. DES was the standard for encryption for more than 20 years, until 1998, when the Electronic Frontier Foundation broke a DES key. It took 56 hours in 1998, and only 22 hours to accomplish the same feat in 1999.
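Key length matters because every extra bit doubles the search space. Here is a rough sketch of average brute-force times, assuming a hypothetical attacker testing a billion keys per second:

```python
SECONDS_PER_YEAR = 3600 * 24 * 365
guesses_per_second = 1e9           # assumed attacker speed, keys per second

for bits in (56, 128, 256):
    keyspace = 2 ** bits
    avg_years = (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR
    print(f"{bits}-bit key: ~{avg_years:.2e} years on average")
# 56-bit keys fall in about a year at this rate (and far faster with
# dedicated hardware, as the EFF showed); 128-bit keys take ~5e21 years.
```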
As we can see, as technology advances, so does the quality of encryption. Once the internet began to see increased commercial transaction use, DES was finally replaced by the Advanced Encryption Standard, or AES, which was selected through a competition open to the public and approved by NIST. This method is still in use today.
But perhaps one of the most notable advances in the study of cryptography since World War II is the introduction of the asymmetric key ciphers (also known as public key encryption). Whitfield Diffie and Martin Hellman were pioneers in the field of asymmetric cryptographic techniques. These are algorithms that use a pair of mathematically related keys, each of which decrypts the encryption performed using the other. By designating one key of the pair as private, and the other as public (often widely available), no secure channel is needed for key exchange. You can reuse the same key pair indefinitely – as long as the private key stays secret. Most importantly, in an asymmetric key system, the encryption and decryption keys are not identical, which means that, for the first time in history, two people could secure communications without any prior interaction – ideal for internet transactions.
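Diffie and Hellman's key-exchange idea can be demonstrated with toy numbers (real deployments use moduli of 2,048 bits or more). Each side publishes one value, keeps one secret, and both arrive at the same shared key without ever transmitting it:

```python
p, g = 23, 5                       # public parameters: prime modulus, generator
alice_secret, bob_secret = 6, 15   # private keys, never sent over the wire

A = pow(g, alice_secret, p)        # Alice publishes g^a mod p
B = pow(g, bob_secret, p)          # Bob publishes g^b mod p

shared_by_alice = pow(B, alice_secret, p)   # (g^b)^a mod p
shared_by_bob = pow(A, bob_secret, p)       # (g^a)^b mod p
assert shared_by_alice == shared_by_bob     # both hold g^(ab) mod p
print(shared_by_alice)                      # 2 with these toy numbers
```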
Ronald L. Rivest, Adi Shamir and Leonard M. Adleman were inspired by Diffie and Hellman to create a practical public key system. The result was RSA, which was based on the difficulty of factoring large numbers, and remains a common cryptographic technique on the internet today.
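A toy RSA round trip with textbook-sized primes shows the mechanics, and also why factoring matters: anyone who can factor n = 3233 back into 61 × 53 can reconstruct the private key.

```python
p, q = 61, 53
n = p * q                     # 3233: the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, recovered)       # 2790 65
```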
Now that we have widespread use of encryption, what challenges do we face? To break encryption, the most basic method of attack is brute force. This is why keys are getting longer and longer – to create more possible solutions and increase the resources required to perform such large computations. There are more than a few informed experts who believe that quantum computing may bring forth the ability to break codes in the foreseeable future. Some of the industry’s brightest minds are working on quantum-resistant encryption so that we can continue to exchange sensitive information privately.
There are also concerns about cost and downtime when deploying encryption schemes. For enterprise-class encryption, you used to need to account for and plan around downtime while tens of thousands of files or a large database was being encrypted. But now you have the option of enterprise encryption without downtime, with Vormetric Live Data Transformation. In fact, a database of any size or any number of files can be used while undergoing encryption. We call it zero-downtime encryption, and it’s an industry game-changer.
And now as we have more and more services moving to the cloud, encrypting and securing data is even more critical. More sensitive data is residing in the cloud, and ensuring that data is secure can be a challenging task. However, there are new strategies for cloud data protection such as transparent and application-level encryption. Additional methods of encryption can involve tokenization and dynamic data masking. I would be remiss if I didn’t add key management to the mix, as well. Compliance mandates, data-residency requirements, government regulations and best practices require that enterprises protect and maintain encryption keys in accordance with specific frameworks and laws. Allowing organizations to “bring your own key,” also known as BYOK, enables maximum control and trust between the data owner and cloud provider, and is considered a best practice for internal and external compliance controls.
Later this month, Thales will release the results of our annual Global Encryption Study. While I won’t give away the findings, I can share that keeping pace with cloud adoption and escalating threats is a major pain point for organizations and business leaders. It is our focus and vision to make protecting your data as transparent and operationally “invisible” as possible. It is a tough mission, but a worthy one. I hope you’ll download that report when it becomes available, as I think you’ll find the results eye-opening. | <urn:uuid:7743a394-09aa-4956-b464-1987f3ea9535> | CC-MAIN-2022-21 | https://cpl.thalesgroup.com/2017/04/04/evolution-encryption | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662550298.31/warc/CC-MAIN-20220522220714-20220523010714-00279.warc.gz | en | 0.959299 | 1,848 | 3.875 | 4 |
St Johns Field is a massive helium reservoir and immense carbon storage basin located on 152,000 acres in Apache County, Arizona. Extensive third-party geological studies performed on the property indicate reserves of up to 33 billion cubic feet of helium in shallow, easily accessible reservoirs. Capable of producing one billion cubic feet of helium per year, it will be among the most prolific helium production sites in the world.
While most helium is extracted from natural gas deposits, the helium produced at St Johns is highly unusual in that the source gas does not contain any hydrocarbons. The gas deposit is composed almost entirely of carbon dioxide, and as the helium is extracted in the production process, all of the excess CO2 will be reinjected into isolated geological formations and safely sequestered deep underground for millennia. As a result, the helium produced at St Johns is exceptionally clean and environmentally friendly, with a net zero carbon footprint.
Helium is the only element on the planet that is a completely non-renewable resource. It is both scarce and finite, with no commercially viable industrial process to replicate it. Helium is formed by the natural radioactive decay of uranium, and can be trapped underground if a halite or anhydrite cap exists above it. If helium is not trapped in this way, it escapes to the atmosphere and rises into space.
Helium has the lowest boiling point of any element, about 4 kelvin (−269 °C), and has unique superfluid properties. It has many applications as a high-tech coolant, and is a critical component of nearly all modern technology systems.
For example, liquid helium is used to cool the magnets in MRI systems, helping to optimize their function. It is also used to control the temperature of silicon in the semiconductor manufacturing process. Because helium is inert and non-flammable, it is used in space and satellite systems as a purge gas in hydrogen systems, and as a pressurizing agent for ground and flight fluid systems. Both NASA and SpaceX are major consumers of helium.
Data centers use helium to encapsulate hard drives, which reduces friction and energy consumption - Google, Amazon, and Netflix are all major consumers. Quantum computing systems also use liquid helium in dilution refrigerators, providing temperatures as low as 2 mK.
In addition to its immense helium reserves, the geological characteristics of St Johns make it an ideal storage basin for carbon dioxide. With the ability to inject 22 million metric tons of CO2 per year and a total storage capacity of over 1 billion metric tons, St Johns is set to become one of the largest carbon capture sites in the world. Strategically located in the fast-growing American Southwest near several coal-fired power plants, Proton Green is well positioned to become a critical carbon sequestration hub in the region. The exceptionally well-suited geological storage structure, with its remote location, pipeline infrastructure, right of way, and Class VI storage permits (once granted), will present significant barriers to entry for competitors.
Hydrogen is steadily emerging as one of the most effective fossil fuel replacements and could become a lucrative opportunity for Proton Green as the global movement toward decarbonization and a net zero economy continues. Our processing plants are capable of producing large volumes of industrial-grade hydrogen while simultaneously sequestering the excess CO2 in underground storage basins, thereby qualifying as blue hydrogen. The hydrogen we produce can then be sold into the California markets and will be eligible for Low Carbon Fuel Standard (LCFS) credits as we help drive the transition toward a sustainable fuel and energy source.
Proton Green will partner with government agencies, NGOs, research institutions, and startup companies to create a cutting-edge incubator and innovation center for emerging carbon-neutral technologies and processes like blue hydrogen, CO2-enhanced geothermal energy, biomass energy, and carbon fiber materials. The research center will be located in a designated Opportunity Zone in the extreme southwest corner of the property, and Proton Green will provide CO2 to support research and development activities. We are currently pursuing an opportunity to develop a bioenergy plant that will convert forest-wood waste into biofuel.
A seasoned independent oil and gas producer since 1982, Mr. Looper has extensive experience drilling and operating wells in Colorado, Kentucky, Louisiana, New Mexico, Oklahoma, Texas and Wyoming. He also has project management experience in Botswana, Canada, South Africa and Zimbabwe. Since 1993, Mr. Looper has been focused on the development of large resource plays in West Texas at Riata Energy, Inc. and most recently in the Barnett Shale trend, where his capital providers achieved >100% rates of return. Mr. Looper is an alumnus of West Texas State University, T. Boone Pickens School of Business, and participated in the Harvard Business School Executive Management Program, 2003-2007.
Mr. Coates is a highly experienced oil and gas professional with a career emphasis on large-scale, unconventional resource development. He is currently involved in helium development, carbon capture, oil and gas, and geothermal projects. His educational background in geology, geochemistry and engineering led to an initial career with Advanced Resources International, a domestic and international technical consulting firm at the forefront of unconventional resource development and carbon capture technology. He subsequently joined MCN Corp (now DTE Energy) in a senior management role to successfully develop a multi-TCF natural gas reserve base in the US. He also co-founded an E&P company, Patrick Energy, with the funding of a family office, which has led to a series of privately funded ($200MM capital) E&P companies built and sold over the past twenty years.
Ms. Fazio is an accomplished finance executive with broad functional expertise building and transforming finance functions. She has led diverse teams across multiple countries in accounting, finance, treasury, tax, risk management and investor relations. Her experience varies across industries with prior roles at Airswift, Frank’s International, Axon, and ThermoFisher Scientific. Ms. Fazio graduated from Bentley University in Waltham, MA. | <urn:uuid:b55dce43-4fe4-4e1d-82f4-b1489c0550cd> | CC-MAIN-2022-21 | https://www.protongreen.com/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530066.45/warc/CC-MAIN-20220519204127-20220519234127-00678.warc.gz | en | 0.940117 | 1,232 | 3.546875 | 4 |
Quantum mechanics is the set of principles used to explain the behaviour of particles at the atomic and subatomic scale. It all began around the late 1800s and early 1900s, when scientists realized from a series of experimental observations that the behaviour of atoms didn’t agree with the rules of classical mechanics, in which everyday objects exist in a specific place at a specific time. This changed the traditional concept of an atom as a nucleus surrounded by electrons to one of orbitals that represent the probability of an electron being in a given range at any given time. Electrons can jump from one orbital to another as they gain or lose energy, but they cannot be found between orbitals. From this idea, and over many decades, the rules of quantum mechanics were unveiled, allowing scientists to build devices that followed those rules. This led to the first quantum revolution with the invention of the transistor, the laser, and the atomic clock, which gave us computers, optical fibre communications and the global positioning system, respectively.
The reason quantum mechanics is again getting so much attention is that we are in the early stages of a second quantum revolution, with scientists now able to control individual atoms, electrons and photons. This is allowing the scientific community to build extremely fast quantum computers, interception-proof quantum communication and hyper-sensitive quantum measurement methods. All of this is being harnessed by strong technology companies across the world that are now in a frenetic race to redefine the limits of our technology and, with it, the very fabric of our everyday lives.
Classical computers have billions of transistors that turn on or off to represent a value that is a 0 or a 1. Hence, in classical computing we talk about binary digits, or bits. In contrast, quantum computers process data using quantum bits, or qubits, which, unlike classical bits, can exist in a superposition of states at the same point in time thanks to the laws of quantum mechanics. This allows each qubit to be 1, or 0, or both states simultaneously.
The magic of quantum computers happens when these qubits are entangled. Entanglement is a type of correlation that ties qubits together, so that the state of one qubit depends on the state of another. By leveraging both superposition and entanglement, quantum computers can speed up computation and do things that classical computers can’t do.
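Both ingredients can be written down concretely as state vectors. Here is a minimal numpy sketch (amplitudes only; it does not simulate real hardware):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)   # superposition: 0 and 1 at once

# Bell state: an entangled pair that cannot be split into two
# independent single-qubit states.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

print(np.abs(plus) ** 2)   # [0.5 0.5]: equal chance of reading 0 or 1
print(np.abs(bell) ** 2)   # [0.5 0. 0. 0.5]: only 00 and 11 ever occur
```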
Entangled qubits can be created in many different ways, for example with superconducting electronic circuits, by trapping ionized atoms or by squeezing particles of light (photons). Each technology is currently trying to preserve the quantum effects for as long as possible while scaling up the number of qubits from the current hundreds to the targeted million that will forever redefine the boundaries of computing technology.
Post-quantum cryptography (also known as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms that are thought to be secure against attack by quantum computers in the future. These algorithms are called post-quantum because the security of most standard algorithms today relies on mathematical problems difficult enough to defend against modern computers, but unable to resist attack by a quantum computer once quantum computers reach a certain computational power, measured in number of qubits.
Quantum cryptography, on the other hand, also known as Quantum Key Distribution (QKD), describes the use of quantum effects to enable unconditionally secure key distribution between two legitimate users, guaranteed by the fundamental laws of quantum physics.
Although some people tend to think that these two technologies are mutually exclusive, they are in fact meant to be allies in securing future communications.
Quantum Key Distribution is a method for two parties, referred to in cryptography as Alice and Bob, to securely establish a shared key for encoding messages, over optical fibre or through free space. To create the key, Alice first encodes random bits into quantum signals (extremely weak pulses of light) and transmits them through the channel. Bob measures the state of the arriving photons and obtains data that is partially correlated with the data encoded by Alice. These data can be used to distil a secret key by means of error correction and privacy amplification.
When a hacker tries to look at the information encoded in the quantum photons sent by Alice, he or she will irreversibly change their properties, because quantum states cannot be cloned or copied. This means that Bob receives quantum signals that are not correlated with Alice’s as they should be, letting the two of them know that someone has tried to intercept the message. Alice and Bob discard the compromised key and generate a new one by the same process, until it is guaranteed to be free from attacks.
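The sifting step at the heart of protocols like BB84 can be sketched in a few lines. This toy version ignores noise and does not model the eavesdropper; it only shows why measuring in a randomly chosen basis leaves Alice and Bob with matching bits about half the time:

```python
import random

n = 20
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # encoding basis
bob_bases = [random.choice("+x") for _ in range(n)]     # measurement basis

# Matching basis: Bob reads Alice's bit. Wrong basis: a random outcome.
bob_bits = [bit if ab == bb else random.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Publicly compare bases (never the bits) and keep only the matches.
sifted_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
              if ab == bb]
print(sifted_key)   # ~n/2 bits; errors here would betray an eavesdropper
```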
In Discrete Variable QKD (DV-QKD), the emitter (Alice) prepares and sends to a receiver (Bob) quantum signals which consist of single photons with encoded random data. The encoding is done following a specific QKD protocol, using a discrete-valued degree of freedom of the photons such as polarization, time-bin or linear momentum. In the receiver, Bob measures the state of the arriving photons using single-photon detectors to distil a secret key.
In Continuous Variable QKD (CV-QKD), the quantum signals typically consist of coherent states of light with information encoded in the quadratures of the electromagnetic field. Instead of single-photon detectors, CV-QKD uses coherent homodyne or heterodyne detection (known in telecommunications as phase-diversity homodyne detection) to retrieve the quadrature values of the signal and thus distil a secret key.
Standardization and certification of QKD technology are vital to enable market penetration, ensure equipment interoperability and build a strong supply chain. Accordingly, the standards are quite comprehensive: they define frameworks that cover all aspects of the technology as well as implementation into a complete system, performance, best operational practices and security specifications, to name a few.
All the key standards organizations across the world (national, European and worldwide) began writing their specifications for QKD systems years ago, which is an indicator of both increased maturity and a strong interest in the application and commercialization of QKD technology.
For further information on QKD standardization, you can read the comprehensive analysis run by OpenQKD here.
Rather than competing, both classical mathematical cryptography and post-quantum cryptography are complementary to quantum cryptography. This view is emerging in conversations with encryption and telecom providers, and is supported by the fact that the European Commission plans to deploy EuroQCI. The idea is to continuously monitor the evolution of all these technologies and put together a roadmap for leveraging security protocols based on both physical and mathematical complexity.
That is what we are working on. There are numerous projects in Europe, and private companies investing and researching, aimed at making the technology affordable by involving production experts and the know-how of both end-user companies and network infrastructure owners.
European-made technology is preferred for these kinds of systems to ensure European sovereignty. For that reason, the European Union itself, through many funding programs, as well as industry consortiums such as the European Quantum Industry Consortium (QuIC), is stimulating potential makers and suppliers to develop and produce all key components in Europe over the next years.
Scientists may now have found a solution to Stephen Hawking’s famous black hole paradox, which has puzzled experts since the 1970s.
A team of researchers from the University of Sussex, University of Bologna and Michigan State University have published two studies proposing that black holes feature something called “quantum hair,” which allows them to break out of this decades-old conundrum that highlighted possible inconsistencies between Einstein’s general theory of relativity and quantum mechanics.
This new work attempts to better integrate these two systems by utilizing new mathematical formulae developed by researchers during the last decade. If the notion of “quantum hair” does prove true, it would be a significant finding for theoretical physics, while eliminating the need to radically rethink how we see the universe — at least, for now.
“It was generally assumed within the scientific community that resolving this paradox would require a huge paradigm shift in physics, forcing the potential reformulation of either quantum mechanics or general relativity,” said University of Sussex professor of theoretical physics Xavier Calmet in a statement. “What we found — and I think is particularly exciting — is that this isn’t necessary.”
‘Hairy’ Black Holes
According to the laws of quantum mechanics, information that exists in our universe cannot be destroyed, and this conservation of “quantum information” is fundamental to the universe. However, black holes present a challenge to these laws, as black holes are regions of spacetime where gravity is so strong that nothing — not even light — can escape from them. So where does the information that has been sucked into these (supposedly) inescapable black holes go? That question is essentially the crux of Hawking’s black hole information paradox.
The researchers’ first paper, titled “Quantum Hair from Gravity” and recently published in the journal Physical Review Letters, addresses part of this question by showing that there is actually more to black holes than previously thought in classical physics.
Rather than being merely simple objects with a certain mass, speed and rotation, as defined under classical physics’ so-called “no-hair theorem,” the team’s new findings suggest that black holes are actually more complex and ‘hairier’ than general relativity might imagine.
That’s because as matter is sucked into a collapsing black hole, an almost imperceptible imprint — a “quantum hair” — is left in its gravitational field. It is this quantum imprint that is the mechanism for preserving information at the quantum level.
The team used their calculations to compare two theoretical stars that form from different initial chemical compositions, which then collapse into two black holes of the same mass and radii. Working under the notions of classical physics, it would be considered impossible to go back in time to differentiate between the two stars, given the similar final states of the two black holes.
However, the team’s new calculations show that while these two black holes may appear the same on the macroscopic level, they would have slight differences in their gravitational fields on the microscopic, quantum level. Information pointing to what the black holes were initially made of is stored in gravitons, hypothetical elementary particles that mediate the gravitational force in quantum gravity.
According to the team, it’s quantum gravity that enabled them to discover these discrepancies in the gravitational field — creating a kind of “memory” in the gravitational field of the initial state of the black hole.
“It turns out that black holes are in fact good children, holding onto the memory of the stars that gave birth to them,” said Calmet.
Entangled ‘Quantum Hairs’
The researchers’ second follow-up paper, published separately in Physics Letters B, demonstrates how Hawking’s black hole information paradox is resolved through this mechanism of “quantum hair.” The team’s findings show that classical physics’ previous ideas about black holes’ inescapable event horizon are more complicated when examined more closely under the lens of quantum mechanics.
There are intricate entanglements on the quantum level between the matter that is inside the black hole, and the state of the gravitons outside the black hole. It is this subtle quantum entanglement that makes it possible to “encode” quantum information in the thermal radiation (also known as Hawking radiation) that is emitted from the event horizons of such black holes. Thus, quantum information is shown to be preserved even as a black hole collapses, because Hawking radiation from a black hole is entangled with the quantum state of spacetime itself.
For now, however, it is not possible for the team to empirically test their theory using current astronomical technology, as such minuscule gravitational fluctuations would evade the tools that are available now. Nevertheless, the team’s findings present a more consistent way to make calculations for black holes, without having to reinvent both classical and quantum physics.
Image: Aman Pal via Unsplash. | <urn:uuid:195660ab-37d9-4460-a811-fc8c5d37688c> | CC-MAIN-2022-21 | https://thenewstack.io/quantum-hair-may-resolve-stephen-hawkings-black-hole-paradox/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00280.warc.gz | en | 0.936661 | 1,055 | 3.6875 | 4 |
If you want to understand gravity, it makes sense to study black holes. Nowhere else can you find so much gravity so conveniently compacted into such a relatively small space.
In a way, in fact, black holes are nothing but gravity. As Einstein showed, gravity is just the warping of spacetime, and black holes are big spacetime sinks. All the matter falling in gets homogenized into nothingness, leaving behind nothing but warped spacetime geometry.
As black holes swallow more matter, they get bigger, of course. But curiously, it’s the black hole’s surface area, not its volume, that expands in proportion to how much stuff the black hole consumes. In some way, the black hole’s event horizon — the spherical boundary demarcating the points of no return for objects falling in — keeps a record of how much a black hole has eaten. More technically, a black hole’s surface area depends on its entropy, as John Archibald Wheeler’s student Jacob Bekenstein showed in the 1970s.
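That area scaling is easy to check with the Bekenstein-Hawking formula, S = k·A·c³/(4Għ). Here is a quick numerical sketch, using standard constants and a non-rotating, solar-mass black hole for concreteness:

```python
import math

# Bekenstein-Hawking: a black hole's entropy scales with horizon AREA.
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_sun = 1.989e30   # kg

def horizon_area(m):
    r_s = 2 * G * m / c**2        # Schwarzschild radius
    return 4 * math.pi * r_s**2

def entropy(m):
    return k_B * horizon_area(m) * c**3 / (4 * G * hbar)

# Doubling the mass quadruples the area (and the entropy).
print(horizon_area(2 * M_sun) / horizon_area(M_sun))   # 4.0
print(f"{entropy(M_sun):.2e} J/K")                     # ~1.5e54 J/K
```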
In the 1990s, other physicists (notably Gerard ’t Hooft and Leonard Susskind) developed this insight further, proposing the “holographic principle”: Information contained in a three-dimensional volume can be completely described by the two-dimensional boundary surrounding it. Just as an ordinary holographic image represents a 3-D scene on a 2-D flat surface, nature itself can store information about the interior of a region of space on the surface enclosing it.
If you think about it, it’s not entirely crazy. There are familiar ways that the information in a 3-D space can be contained on its boundaries. Just imagine a room full of 3-D objects with mirrors on the walls. You can reconstruct everything in the 3-D room from the images on the 2-D mirrors.
In 1995, physicist Juan Maldacena developed the holographic idea further. In essence, he showed that quantum math describing physics in three spatial dimensions without gravity can be equivalent to math describing a four-dimensional space with gravity. (Such an equivalence of two different mathematical descriptions is called a duality.)
Maldacena’s insight suggested that holography might be the key to merging gravity with quantum mechanics. Physicists have sought a way to incorporate gravity into a quantum field theory for decades. If Maldacena is right, then apparently all you need is an extra dimension of space (which is provided naturally in superstring theory). Given an added dimension, spacetime with gravity emerges from the physics described by quantum field theory on its boundary.
Lately this idea has resurfaced in a new context. Some physicists have proposed that gravity has something to do with quantum entanglement — the spooky connection between distant particles that befuddled Einstein. And it seems that the holographic duality identified by Maldacena has something to do with the gravity-entanglement connection.
“The emergence of spacetime in the gravity picture is intimately related to the quantum entanglement … in the corresponding conventional quantum system,” Mark Van Raamsdonk of the University of British Columbia argued in a 2010 paper. “It is fascinating that the intrinsically quantum phenomenon of entanglement appears to be crucial for the emergence of classical spacetime geometry.”
More recent work relates the gravity-entanglement link to mathematical tools called tensors. Describing entanglement in complicated systems of many particles is made easier by using networks of tensors to quantify how multiple particles are entangled. Using tensor networks, physicists have developed algorithms that enable simpler analysis of quantum matter such as superconductors. That work has been going on for years. Newer work with tensor networks has provided insights into how the holographic principle relates entanglement to gravity.
In particular, a formulation of tensor networks called MERA (for multi-scale entanglement renormalization ansatz) seems especially promising with respect to understanding gravity. MERA tensor networks describe patterns of entanglement in certain complicated quantum systems, generating a geometry reminiscent of the extra-dimensional space that Maldacena discussed in his duality. In other words, it’s a real-life realization of the quantum field theory-gravity duality.
“When seen from this perspective,” writes Orús, “one would say that geometry (and gravity) seems to emerge from local patterns of entanglement in quantum many-body states.” Thus, he points out, the tensor network approach supports the conclusion suggested in previous work by Van Raamsdonk and others: “Gravitational spacetime emerges from quantum entanglement.”
This link between tensor networks, entanglement and gravity may prove useful in studying the physics of black holes or in investigating the quantum nature of spacetime at very small distances, Orús proposes.
Mathematical details of how tensor networks connect entanglement to the geometry of spacetime are beyond the scope of basic blogging. If you want the whole story of Hilbert space, entanglement renormalization and unitary and isometric tensors, start with Harvard physicist Brian Swingle’s 2012 paper in Physical Review D. (A preprint is available, and a paper with further developments is available here.) Orús has posted a more recent (and more accessible) survey of the field.
How AI and Quantum Could Help Fight Climate Change
Earth Day is the day to celebrate our blue planet. The day to remember that it’s the only home we’ve got.
During his very first days in office, President Joe Biden signed a flurry of climate change-related executive orders. He rejoined the Paris Climate Accord, pledged to double offshore wind-produced energy by 2030 and freeze new oil and gas leases on public lands. All in all, he’s committed himself to an ambitious goal.
Ambitious, yes, but realistic — especially with the help of cutting-edge science and technology. To make it happen though, academia and industry should join forces and change our established, traditional approach to the discovery of new materials. We should accelerate the rate of design of new advanced materials, crucial to create sustainable solutions to climate change.
The good news is, we already have the ingredients to make it happen. They are artificial intelligence and, soon, quantum computing.
Adieu to serendipity
Traditionally, we’ve been discovering new materials either by accident (think graphene) or through a lengthy and expensive trial-and-error process. As part of IBM’s Future of Climate initiative, IBM researchers have now successfully used AI to design new molecules for climate change-related applications much more quickly than they could have with traditional discovery methods.
“We’ve designed molecules that could lead to more efficient polymer membranes to filter off carbon dioxide better than currently used membranes in carbon capture technologies,” says Mathias Steiner of IBM Research Brazil, the lead scientist on the project.
That’s incredibly timely, too. The International Energy Agency (IEA) is forecasting a huge surge in CO2 emissions from energy later this year, when the pandemic finally starts easing off. While total energy emissions in 2020 will be a bit lower than the year before, CO2 emissions will increase by the second largest annual amount on record.
Typically, researchers rely on their knowledge and whatever they can find in published literature to design a molecule, hoping it will have the desired properties. Based on the initial design, they then follow many cycles of synthesis and testing of potential molecules until they create a satisfactory one.
The process often takes months, sometimes years, even with the help of computers to run advanced simulations. The most complex molecule we can simulate today is about the size of pentacene, with 22 electrons and 22 orbitals. Anything more complex, and computers stumble.
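A rough way to see why computers stumble: a brute-force quantum state vector needs 2^n complex amplitudes for n two-level degrees of freedom, so memory alone becomes absurd near pentacene scale. (The mapping of orbitals to qubit-like degrees of freedom below is a simplification for illustration.)

```python
for n in (10, 30, 44, 100):   # 44 ~ pentacene's 22 orbitals x 2 spin states
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9   # 16 bytes per complex128 amplitude
    print(f"n = {n:3d}: {gigabytes:.3g} GB")
# n = 44 already needs ~280,000 GB; n = 100 exceeds any conceivable memory.
```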
But the possibilities for molecular configurations are incredibly vast — there are more possible combinations for a new molecule than there are atoms in the universe. That propels the number of potential new materials to infinity. Equally vast is the ever-surging amount of data. In 2018 alone, about 450,000 new papers were published in the field of material science — impossible for any human to go through in a reasonable amount of time.
Enter artificial intelligence. Just five years ago, AI was mostly good at predicting characteristics of an existing material. Now, researchers are using it more and more to rapidly design brand-new materials with desired properties. “The application of AI to accelerated materials discovery is incredibly exciting and it will allow researchers to be far more efficient in their research,” says Stacey Gifford, a climate scientist at IBM Research. “As new technologies, like quantum computing, expand, the pace of discovery will only increase.”
From digital design to the lab
To design a new polymer for CO2 filtering membranes, Steiner and his team first had to outline the desired properties: permeability, chemical selectivity for specific gases, and durability. Next, an AI sifted through the past knowledge on polymer manufacturing — all the previous research tucked away in patents and publications. Then the researchers used predictive, so-called generative models to create a possible new molecule based on the existing data — a molecule that would make the polymer membrane more efficient in separating CO2.
The next step was to simulate this new molecule and the interactions it should have with its neighbors on a high-performance computing cluster, to confirm that it performed as expected. In the future, a quantum computer could improve on these molecular simulations, but we are not there yet today.
Once everything is tip top with the design, the final step in molecular design is AI-driven lab tests to validate the predictions experimentally and create the actual molecules. This could be done using a tool like RoboRXN. Developed at IBM Research in Zurich, this ‘chemistry lab’ combines AI, a cloud computing platform, and robots to help researchers design and synthesize new molecules anywhere and at any time.
Steiner’s team hasn’t yet turned their digitally validated molecules into real ones, but other IBM researchers have done this last step for a different project. While not related to climate change, that study could help us make greener gadgets. IBM scientists used the same AI-boosted ‘accelerated discovery’ approach to create new molecules called photoacid generators (PAGs), important components of computer chips. The PAGs used today have recently come under enhanced scrutiny from global environmental regulators so the world is in need of more sustainable ones.
Sustainable hybrid cloud and AI
But material design isn’t the only way to help the climate. Another group of IBM researchers is working on making the company’s hybrid cloud more sustainable.
IBM is well-known for its hybrid cloud technology and OpenShift as the unified control plane on- and off-premises. A sustainable hybrid cloud enables companies to transparently assess the carbon footprint of their workloads, and reduce it if necessary. “To quantify and optimize the carbon footprint of cloud workloads, we are developing a carbon quantification and optimization method that attempts to make maximum use of renewable energy,” says the lead researcher on the project, Tamar Eilam. “IBM Research is also working on improving the overall efficiency of AI training by developing more efficient hardware.”
Another team, led by Shantanu Godbole at IBM research India, is using AI to help companies cut their carbon emissions associated with processes such as logistics, transportation, manufacturing, agriculture, and so on.
Meanwhile, IBM researchers led by Kommy Weldemariam are creating an AI to assess potential impacts of climate change on supply chains and infrastructure, from railroad lines to roads, bridges and tunnels. Dubbed the Climate Impact Modelling platform, the technology aims to improve regional climate modelling by bringing the model resolution down to about one kilometer. “At the moment, most climate models have a fairly low resolution, making it tricky to create accurate predictions,” says Weldemariam.
The researchers use physics-based and AI models to predict, assess and quantify the risks from extreme events — such as floods, wildfires or drought. The models can be integrated into enterprise processes, from the supply chain to the asset and infrastructure management, making it easier for companies to deal with a natural disaster.
While the ongoing research is promising, we are nowhere near the finish line. There is still a lot more to do to develop effective solutions to help our planet. And while Biden’s environmental goals are certainly ambitious, it’s almost certain that new technology will help us meet them. | <urn:uuid:2e9031a0-71da-4421-b084-fefffbc736ca> | CC-MAIN-2022-21 | https://ibm-research.medium.com/earth-day-how-ai-and-quantum-could-help-fight-climate-change-4156fe6ee16d?source=user_profile---------5------------------------------- | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00080.warc.gz | en | 0.933023 | 1,508 | 3.578125 | 4 |
The Key Device Needed for a Quantum Internet
Advances in quantum information science have brought on the possibility of a quantum internet—networks that carry information via photons in superpositions of states, called qubits, rather than the 0’s and 1’s that today’s networks shuttle from place to place.
In the last decade or so, researchers around the world have taken big steps toward building quantum networks. While many groups have started testing small networks tens of miles in size, major obstacles, including the need to develop a key piece of hardware, lie in the way of larger quantum networks. “There’s still lots of research to be demonstrated,” says Gabriella Carini of Brookhaven National Laboratory, New York, an organizer of a “Quantum Internet Blueprint Workshop” that took place in February. “But if you don’t have a vision, all the pieces won’t talk together.” The workshop was a step toward “establishing a nationwide quantum internet” in the US, an effort that has gained momentum with the National Quantum Initiative Act in 2018 and the recent budget request by the Trump administration to fund plans for a quantum internet.
The appeal of quantum networks lies in both immediately practical applications and potential advances for basic science research. One of the clearest applications is the ability to send secure messages without the threat of eavesdroppers. Because information is encoded with superpositions of states, any interception of a message would make qubits’ wave functions collapse, signaling that the message was intercepted.
Qubits can also encode more information than classical bits, so quantum networks could potentially carry higher densities of information more efficiently. “It’s a fundamentally new way to connect information,” says David Awschalom, a researcher at the University of Chicago and Argonne National Laboratory who is working on a quantum network effort in the Chicago area. Quantum networks could advance developments in remote-sensing technology and telescopes as well as applications that scientists don’t yet realize. A quantum internet could be “another revolution at the same level as the classical internet,” Carini says.
However, the same properties that make quantum networks useful present significant challenges. Ground-based networks, whether classical or quantum, often use optical fibers to direct information from place to place in the form of photons. As photons travel through a network, some will be lost over time as a result of impurities in the fibers, weakening the signal. In classical networks, devices called “repeaters” intermittently detect the signal, amplify it, and send it off again. But for information carried by photons in superpositions of states, or qubits, “it’s not possible to read the signal without perturbing it,” Awschalom says.
The key to long-distance quantum communication, researchers say, is to figure out how to build a “quantum repeater” equivalent to the existing classical one. Without a quantum repeater, a qubit can typically travel through only a few miles of fiber, or at best about 100 miles — far too little range for widespread networks.
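To get a feel for why distance is such a hard limit, consider that standard telecom fiber absorbs roughly 0.2 decibels of signal per kilometer (a textbook figure for 1550 nm light, not one quoted in this article). A quick Python sketch shows how fast photon survival collapses:

```python
# Back-of-the-envelope photon survival in optical fiber.
# Assumes the textbook attenuation of ~0.2 dB/km for telecom fiber at 1550 nm.

def survival_probability(distance_km, loss_db_per_km=0.2):
    """Fraction of photons that survive a fiber run of the given length."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

for d in (10, 100, 500, 1000):  # kilometers
    print(f"{d:>5} km: {survival_probability(d):.3e}")

# At 10 km, about 63% of photons arrive; at 100 km, about 1%.
# At 500 km, roughly 1e-10 survive; effectively nothing, hence repeaters.
```

The loss is exponential in distance, which is why simply turning up the laser power cannot substitute for a repeater.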
“This quantum repeater is as important for the field of quantum communication as a quantum computer itself is for the field of quantum computing,” says Eden Figueroa of Stony Brook University, New York, and Brookhaven National Laboratory.
But quantum repeaters are far more complicated than classical repeaters, and no one has made a functional one yet. “I think it will still be a while before it can be a practical technology,” says Jian-Wei Pan of the University of Science and Technology of China. In China, Pan and other researchers have made progress toward quantum networks even without quantum repeaters. One example is a 1200-mile fiber network constructed to connect Beijing and Shanghai. But since it doesn’t have quantum repeaters, this network isn’t fully quantum encrypted from end to end; there are several nodes along the route where the information is decrypted and then encrypted again. These nodes are a “temporary solution,” Pan says, while researchers work to develop quantum repeaters.
Another temporary way around not having quantum repeaters is to employ satellites. Earlier this month, Pan and other researchers in China announced that they were able to use a satellite to transfer a quantum “key” between two ground stations about 700 miles apart.
In the US, researchers are building and testing small quantum networks in locations near Chicago, Boston, and New York. One of these is a fiber network connecting Stony Brook University and Brookhaven National Laboratory that Figueroa’s team has been working on for a couple of years. But even before the inter-campus network, Figueroa and colleagues were working on developing hardware for quantum communication. Researchers in the field have known that quantum repeaters would be necessary for long-distance quantum networks for quite a while, Figueroa says, but the technology to really tackle the problem wasn’t available until recently. “Now we’re getting there, where we can build the first prototypes,” he says. “The goal is, in a few years, to try to really demonstrate a quantum repeater in the field.”
There are still many questions to address before researchers can assess whether a nationwide quantum internet is possible, Figueroa says—like whether they will be able to design quantum repeaters that work well enough outside of the lab and whether it will be feasible to produce a large enough volume of the necessary hardware.
“Those are questions that, right now, don't have an answer,” he says. “And they need to be answered before we can make the claim that this quantum internet can be built.”
–Erika K. Carlson
Erika K. Carlson is a Corresponding Editor for Physics based in New York City. | <urn:uuid:d131271c-e9a1-4da3-b25d-34809e8aabc4> | CC-MAIN-2022-21 | https://physics.aps.org/articles/v13/104 | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00280.warc.gz | en | 0.937566 | 1,243 | 3.703125 | 4 |
Within every cellphone lies a tiny mechanical heart, beating several billion times a second. These micromechanical resonators play an essential role in cellphone communication. Buffeted by the cacophony of radio frequencies in the airwaves, these resonators pick out just the right frequencies for transmitting and receiving signals between mobile devices.
With the growing importance of these resonators, scientists need a reliable and efficient way to make sure the devices are working properly. That is best achieved by carefully studying the acoustic waves that the resonators generate.
Now, researchers at the National Institute of Standards and Technology (NIST) and their colleagues have developed an instrument to image these acoustic waves over a wide range of frequencies and produce “movies” of them in unprecedented detail.
The scientists measured acoustic vibrations as fast as 12 gigahertz (GHz, or billions of cycles per second) and may be able to extend those measurements to 25 GHz, providing the frequency coverage needed for 5G communications as well as for potentially powerful future applications in quantum information.
The challenge of measuring these acoustic vibrations is likely to grow as 5G networks come to dominate wireless communications, producing ever tinier acoustic waves.
The new NIST instrument captures these waves in action using a device known as an optical interferometer. The illumination source for this interferometer, ordinarily a continuous beam of laser light, is in this case a laser that pulses 50 million times a second, far slower than the vibrations being measured.
The laser interferometer compares two pulses of laser light that travel along different paths. One pulse travels through a microscope that focuses the laser light on a vibrating micromechanical resonator and is then reflected back. The other pulse acts as a reference, traveling along a path that is continually adjusted so that its length is within a micrometer (one millionth of a meter) of the distance traveled by the first pulse.
When the two pulses meet, the light waves from each pulse overlap, creating an interference pattern: a set of dark and bright fringes where the waves cancel or reinforce one another. As successive laser pulses enter the interferometer, the interference pattern changes as the microresonator vibrates up and down. From the changing pattern of the fringes, researchers can measure the height (amplitude) and phase of the vibrations at the location of the laser spot on the micromechanical resonator.
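As a rough illustration of the physics (not the NIST team's actual analysis code), the brightness recorded at the detector follows the standard two-beam interference formula, so tiny surface displacements show up as intensity changes. The wavelength below is an assumed, illustrative value; the article does not state the laser's wavelength:

```python
import numpy as np

# Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi).
# A surface moving by dz changes the round-trip path by 2*dz,
# shifting the optical phase by delta_phi = 4*pi*dz / wavelength.

wavelength = 1.55e-6            # meters; an assumed infrared wavelength
dz = np.linspace(0, 2e-9, 5)    # resonator displacements, 0 to 2 nm
delta_phi = 4 * np.pi * dz / wavelength

I1 = I2 = 1.0                   # equal beam intensities (arbitrary units)
# Bias the interferometer at quadrature (pi/2), where sensitivity is highest.
intensity = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(np.pi / 2 + delta_phi)

for z, i in zip(dz, intensity):
    print(f"dz = {z * 1e9:.1f} nm -> detected intensity {i:.4f}")
```

Even a nanometer-scale displacement produces a measurable fractional change in fringe brightness, which is the quantity the instrument tracks pulse by pulse.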
NIST researcher Jason Gorman and his colleagues deliberately chose a laser that pulses between 20 and 250 times more slowly than the frequency at which the micromechanical resonator vibrates. That strategy enabled the laser pulses illuminating the resonator to, in effect, slow down the acoustic vibrations, much as a strobe light seems to slow down dancers in a nightclub.
The slowdown, which converts acoustic vibrations that oscillate at GHz frequencies into megahertz signals (MHz, millions of cycles per second), is essential because the light detectors used by the NIST team operate much more precisely, with less noise, at these lower frequencies.
“Moving to lower frequencies removes interference from communication signals usually found at microwave frequencies and enables us to use photodetectors with lower electrical noise,” said Gorman.
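The strobe trick is, in signal-processing terms, deliberate undersampling: sampling a fast oscillation at a much slower pulse rate aliases it down to a low beat frequency. The numbers below are invented for illustration (a vibration slightly detuned from 12 GHz and the article's 50-million-pulses-per-second laser), not the team's measured values:

```python
import numpy as np

# Undersampling ("strobe") illustration with made-up numbers.
f_vib = 12.005e9      # resonator vibration frequency, slightly off 12 GHz
f_rep = 50.0e6        # laser pulse rate: 50 million pulses per second

# Each pulse samples the vibration, which aliases down to |f_vib - n*f_rep|
# for the nearest integer harmonic n.
n = round(f_vib / f_rep)
f_alias = abs(f_vib - n * f_rep)
print(f"nearest harmonic n = {n}, alias frequency = {f_alias / 1e6:.1f} MHz")

# Verify numerically: sample the fast sine at the slow pulse times.
t = np.arange(2000) / f_rep
samples = np.sin(2 * np.pi * f_vib * t)
spectrum = np.abs(np.fft.rfft(samples))
peak = spectrum[1:].argmax() + 1
print(f"spectral peak at {peak * f_rep / len(t) / 1e6:.1f} MHz")  # 5.0 MHz
```

A 12.005 GHz vibration sampled 240 times more slowly appears as a clean 5 MHz tone, which quiet photodetectors can digitize with far less noise.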
Each pulse lasts only 120 femtoseconds (quadrillionths of a second), providing very precise moment-to-moment information about the vibrations. The laser scans across the micromechanical resonator so that the amplitude and phase of the vibrations can be sampled across the entire surface of the vibrating device, yielding high-resolution images over a broad range of microwave frequencies.
By combining these measurements, averaged over many samples, the researchers can make three-dimensional movies of a microresonator's vibrational modes. Two kinds of microresonators were used in the study: one had dimensions of 12 micrometers (millionths of a meter) by 65 micrometers; the other measured 75 micrometers on a side, about the width of a human hair.
Not only can the images and movies reveal whether a micromechanical resonator is operating as expected, they can also highlight trouble spots, such as places where acoustic energy is leaking out of the resonator. The leaks make resonators less efficient and cause loss of information in quantum acoustic devices. By pinpointing problem areas, the technique gives researchers the information they need to improve resonator design.
In the Feb. 4, 2022, issue of Nature Communications, the researchers reported that they could image acoustic vibrations with an amplitude (height) as small as 55 femtometers (quadrillionths of a meter), about one five-hundredth the diameter of a hydrogen atom.
Over the past decade, physicists have suggested that micromechanical resonators in this frequency range might also serve to store fragile quantum information and to transfer it from one part of a quantum computer to another.
Building an imaging system that can routinely evaluate micromechanical resonators for these applications will require further research. But the present study is already a milestone in assessing the potential of micromechanical resonators to perform reliably at the high frequencies that will be essential for successful communication and for quantum computing in the near future, Gorman said.
Alternate format: Using encryption to keep your sensitive data secure (ITSAP.40.016) (PDF, 391 KB)
Encryption technologies are used to secure many applications and websites that you use daily. For example, online banking or shopping, email applications, and secure instant messaging use encryption. Encryption technologies secure information while it is in transit (e.g. connecting to a website) and while it is at rest (e.g. stored in encrypted databases). Many up-to-date operating systems, mobile devices, and cloud services offer built-in encryption, but what is encryption? How is it used? And what should you and your organization consider when using it?
What is encryption?
Encryption encodes (or scrambles) information. Encryption protects the confidentiality of information by preventing unauthorized individuals from accessing it.
For example, Alice wants to send Bob a message, and she wants to ensure only he can read it. To keep the information confidential and private, she encrypts the message using a secret key. Once encrypted, this message can only be read by someone who has the secret key to decode it. In this case, Bob has the secret key.
Eve is intentionally trying to intercept the message and read it. However, the message is encrypted, and even if Eve gets a copy of it, she can’t read it without acquiring the secret key.
If an individual accidentally receives a message that includes encrypted information, they will be unable to read the encrypted contents without the key to decrypt the message.
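In software, Alice and Bob's exchange takes only a few lines with a modern symmetric cipher. The sketch below uses the open-source Python cryptography package purely to illustrate the concept; it is not an endorsement of any particular product:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the shared secret Alice and Bob both hold
cipher = Fernet(key)

token = cipher.encrypt(b"Meet at noon.")   # Alice encrypts her message
print(token)                      # what Eve sees: unreadable bytes

plain = cipher.decrypt(token)     # Bob, holding the key, decrypts it
print(plain)                      # b'Meet at noon.'
```

Anyone who intercepts `token` without the key sees only ciphertext, which is exactly the property described above.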
How is encryption used?
Encryption is an important part of cyber security. It is used in a variety of ways to keep data confidential and private, such as in HTTPS websites, secure messaging applications, email services, and virtual private networks. Encryption is used to protect information while it is actively moving from one location to another (i.e. in transit) from sender to receiver. For example, when you connect to your bank’s website using a laptop or a smartphone, the data that is transmitted between your device and the bank’s website is encrypted. Encryption is also used to protect information while it is at rest. For example, when information is stored in an encrypted database, it is stored in an unreadable format. Even if someone gains access to that database, there’s an additional layer of security for the stored information. Encryption is also used to protect personal information that you share with organizations. For example, when you share your personal information (e.g. birthdate, banking or credit card information) with an online retailer, you should make sure they are protecting your information with encryption by using secure browsing.
Many cloud service providers offer encryption to protect your data while you are using cloud based services. These services offer the ability to keep data encrypted when uploading or downloading files, as well as storing the encrypted data to keep it protected while at rest.
When properly implemented, encryption is a mechanism that you and your organization can use to keep data private. Encryption is seamlessly integrated into many applications to provide a secure user experience.
How can I use encryption?
Your organization likely already uses encryption for many applications, such as secure browsing and encrypted messaging applications.
If you access a website with a padlock icon and HTTPS in front of the web address, the communication with the website (i.e. the data exchanged between your device and the website's servers) is encrypted.
To protect your organization’s information and systems, we recommend that you use HTTPS wherever possible. To ensure that users are accessing only HTTPS-supported websites, your organization should implement the web security policy tool HTTP Strict Transport Security (HSTS). HSTS offers additional security by forcing users’ browsers to load HTTPS supported websites and ignore unsecured websites (e.g. HTTP).
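If you administer a website, you can verify that it advertises HSTS by inspecting its response headers. A minimal check using Python's requests library (the URL is a placeholder to replace with your own site):

```python
import requests

resp = requests.get("https://example.com")  # substitute a site you administer
hsts = resp.headers.get("Strict-Transport-Security")

if hsts:
    print("HSTS enabled:", hsts)  # e.g. "max-age=31536000; includeSubDomains"
else:
    print("No HSTS header; browsers won't be forced onto HTTPS.")
```

A long `max-age` value tells returning browsers to refuse plain HTTP connections to the site for that many seconds.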
Encrypted messaging applications
Most instant messaging applications offer a level of encryption to protect the confidentiality of your information. In some cases, messages are encrypted between your device and the cloud storage used by the messaging service provider. In other cases, the messages are encrypted from your device to the recipient’s device (i.e. end-to-end encryption). When using end-to-end encryption services, not even the messaging service provider can read your encrypted messages.
In deciding which tools to use, you need to consider both the functionality of the service and the security and privacy requirements of your information and activities. For further information, refer to “Protect how you connect.”
Encryption is just one of many security controls necessary to protect the confidentiality of data.
What else should I consider?
Encryption is integrated into many products that are commonly used by individuals and organizations to run daily operations. When choosing a product that uses encryption, we recommend that you choose a product that is certified through the Common Criteria (CC) and the Cryptographic Module Validation Program (CMVP). The CC and the CMVP list cryptographic modules that conform to Federal Information Processing Standards. Although the CC and the CMVP are used to vet products for federal government use, we recommend that everyone uses these certified products.
The CCCS recommends
When choosing a suitable encryption product for your organization, consider the following:
- Evaluate the sensitivity of your information (e.g. personal and proprietary data) to determine where it may be at risk and implement encryption accordingly.
- Choose a vendor that uses standardized encryption algorithms (e.g. CC and CMVP supported modules).
- Review your IT lifecycle management plan and budget to include software and hardware updates for your encryption products.
- Update and patch your systems frequently.
- Prepare and plan for the quantum threat to cyber security. For more information, please see Addressing the quantum computing threat to cryptography (ITSE.00.017).
Encryption for highly sensitive data
Systems that contain highly sensitive information (e.g. financial, medical, and government institutions) require additional security considerations. Contact us for further guidance on cryptographic solutions for high-sensitivity systems and information: email@example.com. | <urn:uuid:75a05c1c-6c60-4e53-adff-0cbaa4bb68d6> | CC-MAIN-2022-21 | https://cyber.gc.ca/en/guidance/using-encryption-keep-your-sensitive-data-secure-itsap40016 | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00682.warc.gz | en | 0.912074 | 1,274 | 3.609375 | 4 |
This is part 3 of a Guide in 6 parts about Artificial Intelligence. The guide covers some of its basic concepts, history and present applications, possible developments in the future, and also its challenges as opportunities.
By Maria Fonseca and Paula Newton
Artificial Intelligence: What is It?
Artificial intelligence is coming up in conversations more and more these days. But do you know what it really is? Many ideas about artificial intelligence originated in science fiction films, with artificially intelligent robots going wrong and taking over the world. Yet we are now at the point where machines are becoming able to both talk and think. There are also several terms in use that can be confusing, such as machine learning and deep learning, and it is not obvious how these differ from AI. So let's find out what AI really is, and what the terms mean.
Artificial Intelligence, Machine Learning and Deep Learning
Artificial intelligence is considered to be anything that gives machines intelligence which allows them to reason in the way that humans can. Machine learning is an element of artificial intelligence which is when machines are programmed to learn. This is brought about through the development of algorithms that work to find patterns, trends and insights from data that is input into them to help with decision making. Deep learning is in turn an element of machine learning. This is a particularly innovative and advanced area of artificial intelligence which seeks to try and get machines to both learn and think like people.
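As a minimal, purely illustrative example of "finding patterns in data", here is a toy supervised-learning script using the widely used scikit-learn library; the data are invented:

```python
from sklearn.linear_model import LinearRegression

# Toy data: hours of sunshine vs. ice-cream sales (entirely made up).
X = [[2], [4], [6], [8]]          # feature: hours of sunshine
y = [20, 41, 59, 82]              # target: sales

model = LinearRegression().fit(X, y)   # "learning" = fitting the pattern
print(model.predict([[10]]))           # apply the learned pattern: ~101.5
```

The algorithm was never told the rule "more sunshine, more sales"; it inferred the trend from examples, which is the essence of machine learning.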
Typically, AI is considered to include capabilities such as self-driving cars, military simulations, and the ability to compete at the top levels in strategic games, such as chess. Generally, AI is categorised into three types:
- analytical
- human inspired
- humanised artificial intelligence
The first type, analytical AI is considered to be paralleled to cognitive intelligence, where learning from what has already happened can be used to make future decisions and solve problems. Human inspired AI includes this as well as human emotions that are involved in decision making. Meanwhile, humanised AI includes cognitive, emotional and social intelligence, and this requires self-awareness.
There are various areas that artificial intelligence can be programmed to work on. These include reasoning and problem solving, knowledge representation, planning, learning, natural language processing, perception, motion and manipulation, and social intelligence, as well as general intelligence. Results have varied across all of these areas depending on precisely what programmers have been trying to do.
What are the Challenges of AI?
Concerns have arisen about artificial intelligence and the threat it brings. There is the issue raised at the outset about AI threatening people if it is allowed to continue without being reined in. As well as the fear that has been driven into us by sci-fi about artificial intelligence taking over the world, there are also economic worries. Specifically, artificial intelligence is considered by some to be a threat to work for people, as it already has the capability to perform some mundane and menial jobs that people do. There are concerns that mass unemployment could result from this. Others believe that businesses will be optimised through artificial intelligence, and actually there will still be a need for people, to validate the work that AI does. However, others talk of the idea of a universal basic income to be paid to people to allay the fears of not enough work.
What are the Benefits of AI?
Parts of AI, such as machine learning, are beneficial in a number of ways. One of the most important is that programming machines to learn in this way significantly cuts back on the need to code them manually for every possibility and how the machine should react in each case. There have been significant advances in this area; examples include efforts to prevent disease by analysing genome sets with machine learning, having machines diagnose depression by interpreting patterns of speech, and pinpointing people who might be at risk of suicide.
Deep learning is even more complex and capable than this. Deep learning must be built within complex frameworks that attempt to copy the way the human brain works. This is problematic, since the way the human brain works is still not entirely understood today. The potential of deep learning is phenomenal, but delivering it requires tremendous computing power, perhaps of the kind only quantum computing can offer. The idea is to programme the machine to have an adaptable mind, so that it can reason within the programme.
There is a lot more to artificial intelligence than this, but this brief outline of its basic concepts hopefully provides a high-level overview that explains some of the basic elements of artificial intelligence and what it may be able to do.
Paula Newton is a business writer, editor and management consultant with extensive experience writing and consulting for both start-ups and long established companies. She has ten years management and leadership experience gained at BSkyB in London and Viva Travel Guides in Quito, Ecuador, giving her a depth of insight into innovation in international business. With an MBA from the University of Hull and many years of experience running her own business consultancy, Paula’s background allows her to connect with a diverse range of clients, including cutting edge technology and web-based start-ups but also multinationals in need of assistance. Paula has played a defining role in shaping organizational strategy for a wide range of different organizations, including for-profit, NGOs and charities. Paula has also served on the Board of Directors for the South American Explorers Club in Quito, Ecuador. | <urn:uuid:16004353-286d-495f-bb35-5c8951e24d3b> | CC-MAIN-2022-21 | https://www.intelligenthq.com/guide-artificial-intelligence-can-change-world-part-3/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00682.warc.gz | en | 0.969781 | 1,113 | 3.515625 | 4 |
Quantum teleportation is a well-founded discipline already, taking full advantage of the ‘spooky’ (Einstein’s own word for it) properties of quantum entanglement. For those not already aware of its uses, it’s essential to get Star Trek out of your head from the get-go, this is the transfer of information across vast distances, not people or things.
So far, the principle hasn’t been made to work on anything larger than a molecule.
The base unit that is teleported is known as a qubit, which devotees of quantum computing and teleportation alike will already be well aware of. This is the fundamental particle at the heart of quantum computing, an object that can be read as a one or zero (on or off/true or false) or a superposition of both.
Using the concept of quantum entanglement, whereby a particle is split, and any change made to, or state observed in, one half of this particle will instantaneously be imposed on the other, information can be transferred over vast distances at incredible speeds.
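For readers comfortable with a little math, the standard textbook way to write these states is as vectors of amplitudes whose squared magnitudes give measurement probabilities. A small NumPy sketch of that generic formalism, not tied to any particular experiment:

```python
import numpy as np

# A single qubit in an equal superposition of |0> and |1>.
qubit = np.array([1, 1]) / np.sqrt(2)
print("P(0), P(1):", np.abs(qubit) ** 2)        # [0.5, 0.5]

# A Bell state, (|00> + |11>)/sqrt(2): one kind of entangled pair.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # basis order: 00, 01, 10, 11
print("P(00..11):", np.abs(bell) ** 2)          # [0.5, 0, 0, 0.5]
# Outcomes 01 and 10 never occur, so reading one particle as 0
# guarantees the other is 0 as well, and likewise for 1.
```

That perfect correlation between the two halves of the pair is the resource that teleportation protocols consume.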
The first proof of the tricksy concept of teleportation came over twenty years ago, in 1998. Since then the distance over which the information has been transferred has steadily increased from a matter of meters to over 100 kilometers, on the Earth's surface at least, with Chinese scientists sending entangled objects into orbit to demonstrate quantum teleportation over a distance of 1,400 km.
And, in theory, there should be no upper limit on the distance apart in which the sender and receiver elements of the qubit can be placed while still remaining entangled.
So that's where quantum teleportation stands so far. Now it gets even more complicated and spooky. Teleporting a qutrit is the next step. Like the qubit, a qutrit can exist in a superposition of any of its states, but where the qubit has two basis states (zero and one), the qutrit adds a third.
This allows considerably more information to be sent at once. And if a particle with three different states sounds like a tricky thing to create – it is.
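Extending the same formalism, a qutrit is just a three-component state vector, and the extra basis state is exactly where the added capacity comes from: each qutrit carries log2(3), about 1.58, bits of information. A brief sketch, again purely illustrative:

```python
import numpy as np

# A qutrit in an equal superposition of its three basis states |0>, |1>, |2>.
qutrit = np.array([1, 1, 1]) / np.sqrt(3)
print("probabilities:", np.abs(qutrit) ** 2)   # [1/3, 1/3, 1/3]

# Information capacity per particle, in bits:
print("qubit :", np.log2(2))                   # 1.0
print("qutrit:", np.log2(3))                   # ~1.585
```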
In a paper available on arxiv.org (and soon to be in the peer-reviewed journal Physical Review Letters), a research team has demonstrated the ability to create and teleport this new quantum particle.
The researchers took a photon (the fundamental particle of light) and used an arrangement of beam splitters, barium borate crystals, and lasers to split the photon's path into three separate but closely spaced paths, and thus create a three-part entangled object – their qutrit.
Their teleportation wasn't flawless, however. Across twelve measured entanglements they achieved a 75% success rate; but for a first try at creating and teleporting a new quantum object, perfection wasn't what the researchers were looking for.
Equally, the setup period to generate the qutrit was long and slow, but they remain undaunted. Because, for now, it is enough to prove that qutrit teleportation is possible, not just theoretically but practically.
“Combining previous methods of teleportation of two-particle composite states and multiple degrees of freedom, our work provides a complete toolbox for teleporting a quantum particle intact,” the researchers write, demonstrating that this is just a first step on the road to more practical applications in the future.
The team’s only immediate fear is that they may have been beaten to the punch. A report in Scientific American shows that a rival group, who have yet to have their research peer-reviewed, have also managed to teleport qutrits, although their efforts have only been recorded across 10 quantum states rather than 12.
Quantum teleportation is still an impressive and mysterious area of practical physics. It was initially named in 1993 by Charles Bennett, whose co-authors Asher Peres and William Wootters preferred the less science fiction term ‘telepheresis,’ and used the principle of quantum entanglement, an area of physics that still messes with the minds of many an undergraduate, to create practical applications.
This area of quantum behavior is what had Einstein freaking out so much that he described entanglement as ‘spooky action at a distance’ and feared that it may mean there was something fundamental lacking in our understanding of the quantum realm.
In fact, were it not that the received information can only be read out at the speed of light or slower, the change of state imposed by one half of an entangled pair on the other would seem to break the theory of special relativity. But already we are seeing those bizarre anomalies preparing to be harnessed for everyday practical applications.
Quantum teleportation presents the possibility of an incredible leap forward in encrypted communications, with the potential to even create unhackable networks, where any attempt to break into the code being transmitted would be an attempt to violate the very laws of physics (and, in case you needed telling, those laws definitely can’t be broken, no matter how great your hacking skills).
This is because, in order to preserve the laws of physics, the state change that communicates the information from sender to receiver is destroyed once sent, preserving the fundamental principle in quantum physics known as ‘no-cloning.’
And this is what makes quantum teleportation so useful in encrypting data, since no copies can be made, and since any effort to ‘eavesdrop’ on communication will bring about a quantum state change, instantly revealing the act of eavesdropping. These same fundamental laws are also behind the handling errors which still need to be resolved in quantum computing.
The researchers behind the arxiv paper are certainly optimistic about the breakthrough that their experiment represents.
“We expect that our results will pave the way for quantum technology applications in high dimensions,” they write, “since teleportation plays a central role in quantum repeaters and quantum networks.”
Envisioning a future quantum internet
The quantum internet, which connects particles linked together by the principle of quantum entanglement, is like the early days of the classical internet – no one can yet imagine what uses it could have, according to Professor Ronald Hanson, from Delft University of Technology, the Netherlands, whose team was the first to prove that the phenomenon behind it was real.
You are famous for proving that quantum entanglement is real, when, in 2015, you linked two particles that were 1.3 kilometres apart. But the main objective of your work has always been to connect entangled particles into a 'quantum internet.' What could such a network enable us to do?
One of the things that we could do is to generate a key to encode messages using the quantum internet. The security of that key would now be based on this property of entanglement, and this is basically the properties of the laws of physics.
You will get a means of communication whose security is guaranteed by physical laws instead of (by) assumptions that no one is able to hack your code.
That's probably the first real application, but there are many, many more applications that people are thinking about where this idea of entanglement, this invisible link at a distance, could actually be helpful. For example, people have calculated that you can increase the baseline of telescopes by using quantum entanglement. So, two telescopes quite far apart could have better precision than each of them individually would have. You could envision using this quantum internet to create entanglement between atomic clocks in different locations around the world, and this would increase the accuracy of timekeeping locally.
So the quantum internet is primarily a tool for encryption?
There is no quantum internet as of yet. And if you think back to the time when people were developing the classical internet, I don't think anybody was really thinking about the applications that we are using it for right now.
The first real users of the internet were like, "Ok there is a big computer somewhere in one place, and I'm in the other place, and I actually want to use that computer because they are very expensive, so how can I make use of that computer remotely? Well, I need an internet to connect to it."
And now we are using the internet in a totally different way. We are all part of this huge global information highway. And I think some of the same things could happen with the quantum internet. It's very hard right now to imagine what we could do (with it), and I think it is even harder than with the classical internet, because this concept of quantum entanglement is so counterintuitive that it is not easy to use your intuition to find applications for it.
How do you envisage the quantum internet? How would we use it?
I envision that, in the end, when you are using the web most of the time, you are using the classical internet, and when you need some extra feature that requires quantum entanglement, then you are using the parallel quantum infrastructure that is also on the internet to get the functionality that you want to have. So it is not going to be a replacement to the classical internet, but it will be something that is added on top of it.
Back in 2014, you announced that you connected particles three metres apart and 'teleported' information between them. In what sense was this information teleported?
Quantum teleportation is the idea that quantum states—and they contain information of course—disappear on one side and then reappear at the other side. What is interesting is that, since the information does not travel on a physical carrier, it's not encoded in a pulse of light—it does not travel between sender and receiver, so it cannot be intercepted. The information disappears on one side and reappears on the other side.
Quantum teleportation is the most fundamental operation that can be done on the quantum internet. So to get entanglement distributed over long distances, you are actually teleporting the entanglement from one node to the other.
In a classical network, you send your data package, and there is an address contained in that, and the router will read off that information and send it on to the next node. We don't want to do that with these quantum signals. We want to send these quantum signals by teleportation so they don't have to go through the (optical) fibre; they disappear on one side and reappear on the other.
Your work is based on this crazy concept of entanglement. What is your personal opinion of how entanglement works?
What I have learned is to let go of all my intuition when I talk about quantum entanglement. Any analogy you try to make with something in the world that we see around us will fail because it is a quantum concept and we don't really see quantum concepts in our daily lives. So I have given up on trying to have an intuitive explanation of what entanglement is. | <urn:uuid:7af52b8c-c184-45c1-8217-deb5b611ee62> | CC-MAIN-2022-21 | https://phys.org/news/2017-05-envisioning-future-quantum-internet.html | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00284.warc.gz | en | 0.959812 | 1,073 | 3.5 | 4 |
Newswise — A study of weakly electric fishes from a remote area of the Brazilian Amazon Basin has offered a unique window into how an incredibly rare fish has adapted to life in caves over tens of thousands of years. It has also revealed, for the first time, that electric fish can interact with each other over longer distances than previously thought possible, in a way similar to AM radio.
In findings published in the journal Frontiers, researchers have shown how a cave-adapted glass knifefish species of roughly 300 living members (Eigenmannia vicentespelea) has evolved from surface-dwelling relatives (Eigenmannia trilineata) that still live just outside their cave door -- by sacrificing their eyes and pigmentation, but gaining slightly more powerful electric organs that enhance the way they sense prey and communicate in absolute darkness.
The study, which analyzed the fishes' electric-based communication and behavior, has detailed the discovery that weakly electric fishes tap into a special channel for long-distance messaging via changes in the amplitude of electrical signals sent to one another. Researchers have adapted Einstein's famous quote on the theory of quantum entanglement -- "spooky interaction at a distance" -- to describe how the weakly electric fishes perceive these social messages, altering each other's behavior at distances up to several meters apart.
Of the nearly 80 species of cavefish known today to have evolved from surface-dwelling fish, all have developed sensory enhancements of some kind for enduring cave life, commonly adapting over millions of years while losing sensory organs they no longer need in the process.
However, biologists have questioned how weakly electric fishes, which use their electrical senses for navigating the dark and murky conditions of the Amazon River, might also adapt -- either evolving heightened electric senses to see and communicate in absolute darkness, or by powering down their electric fields to save on energetic cost when most caves have few food resources.
"One of the big questions about fish that successfully adapt to living in caves is how they adapt to life without light," said Eric Fortune, lead author of the study and biologist at New Jersey Institute of Technology (NJIT). "My colleagues were split between two groups ... one group that predicted that the electric fields of the cavefish would be weaker due to limited food supplies, and another that bet that the electric fields would be stronger, allowing the fish to use their electric signals to see and talk more clearly in the complete darkness of the cave.
"It seems that using their electric sense to detect prey and communicate with each other is quite valuable to these animals; they have greater electric field strengths. Interestingly, our analysis of their electric fields and movement shows that they can communicate at distances of meters, which is quite a long way for fish that are around 10cm in length."
"Nearly all research of cavefish species until now has been limited to behavioral experiments in labs, and that is why this study is special," said Daphne Soares, NJIT associate professor of biology and co-author on the study. "This is the first time we've been able to continuously monitor the behavior of any cavefish in their natural setting over days. We've gained great insight into their nervous system and specialized adaptations for cave life, but it's just as exciting to learn how sociable and chatty they are with each other ... it's like middle school."
Spooky Interactions & Shocking Adaptations
For the investigation, NJIT and Johns Hopkins researchers teamed with biologist Maria Elina Bichuette from the Federal University of São Carlos, who began studying the two groups of fish nearly two decades ago in the remote São Vicente II Cave system of Central Brazil's Upper Tocantins river basin.
Over several days, the team applied a customized electric fish-tracking technique involving placing electrode grids throughout the fishes' water habitats to record and measure the electric fields generated by each fish, allowing the team to analyze the fishes' movements and electricity-based social interactions.
The researchers were able to track more than 1,000 electrical-based social interactions over 20-minute-long recordings taken from both surface and cavefish populations, discovering hundreds of specialized long-distance exchanges.
"When I began studying these fishes, we could watch behavior associated with these fishes' unique and specialized morphology, but in this project, it was fascinating to apply these new technical approaches to reveal just how complex and refined their communication could be," said Bichuette.
"Basically, our evidence shows that the fishes are talking to each other at distance through electricity using a secret hidden channel, amplitude modulations that emerge through the summation of their electric signals. It is not unlike how an AM radio works, which relies on amplitude modulations of a radio signal." said Fortune.
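The effect Fortune describes is easy to reproduce numerically: adding two tones at slightly different frequencies yields a carrier whose amplitude envelope oscillates at the difference frequency. The sketch below uses invented frequencies, not measured fish data:

```python
import numpy as np
from scipy.signal import hilbert

# Two "fish" discharging at slightly different frequencies (made-up values).
f1, f2 = 400.0, 408.0          # Hz
fs = 20_000                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)

summed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The sum is an amplitude-modulated carrier; extract its slow envelope.
envelope = np.abs(hilbert(summed))

spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(f"envelope oscillates at {freqs[spectrum.argmax()]:.1f} Hz")  # ~8 Hz
```

The 8 Hz "beat" exists in neither fish's discharge alone; it emerges only from the summation, which is the hidden channel the researchers describe.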
The recordings also showed that strengths of electric discharges in the cavefish were about 1.5 times greater than those of surface fish despite coming at a cost of up to a quarter of their overall energy budget. The team conducted CT scans of both species, showing that the cavefish also possess relatively larger electric organs than their stream-mates, which could explain the source of the cavefishes' extra electrical power.
Another consequence of trading their eyes and surface life for heightened electrosensory perception is that the cavefish were more social and territorial at all hours. Unlike their freely-foraging surface relatives that sleep during the day and forage at night, the cavefish lacked a day-night cycle.
For now, the discovery of the fishes' AM radio-style distant interactions is noted by Fortune as the first of its kind reported among electric cavefish, though he says similar phenomena are now being reported in some other species as well, recently by researchers in Germany who have observed a form of long-distance electrical communication among a group of fish known as Apteronotus. Fortune says the finding could have implications for the field of neurobiology, where the weakly electric fish is a unique and powerful model for exploring the nature of the brain-body connection in other animals, including humans.
"Electric fish are great systems for understanding the neural basis of behavior, so we have been studying their brains for decades," said Fortune. "These new data are forcing a reexamination of the neural circuits used for the control of behavior of these fishes." | <urn:uuid:86ac8331-02d7-496d-baf0-b752a0a58cfd> | CC-MAIN-2022-21 | https://www.newswise.com/articles/spooky-interactions-shocking-adaptations-discovered-in-electric-fish-of-brazil-s-amazon | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00685.warc.gz | en | 0.965859 | 1,291 | 3.609375 | 4 |
Optical atomic clocks will likely redefine the international standard for measuring a second in time. They are far more accurate and stable than the current standard, which is based on microwave atomic clocks.
Now, researchers in the United States have figured out how to convert high-performance signals from optical clocks into a microwave signal that can more easily find practical use in modern electronic systems.
Synchronizing modern electronic systems such as the Internet and GPS navigation is currently done using microwave atomic clocks that measure time based on the frequency of natural vibrations of cesium atoms. Those vibrations occur at microwave frequencies that can easily be used in electronic systems.
But newer optical atomic clocks, based on atoms such as ytterbium and strontium, vibrate much faster at higher frequencies and generate optical signals. Such signals must be converted to microwave signals before electronic systems can readily make use of them.
“How do we preserve that timing from this optical to electronic interface?” says Franklyn Quinlan, a lead researcher in the optical frequency measurements group at the U.S. National Institute of Standards and Technology (NIST). “That has been the big piece that really made this new research work.”
By comparing two optical-to-electronic signal generators based on the output of two optical clocks, Quinlan and his colleagues created a 10-gigahertz microwave signal that synchronizes with the ticking of an optical clock. Their highly precise method has an error of just one part in a quintillion (a one followed by 18 zeros). The new development and its implications for scientific research and engineering are described in the 22 May issue of the journal Science.
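To put one part in a quintillion in perspective, a little arithmetic (ours, not the paper's) translates that fractional error into clock drift:

```python
# What "one part in a quintillion" means for a clock.
fractional_error = 1e-18
seconds_per_year = 365.25 * 24 * 3600          # about 3.16e7 seconds

print(f"drift per year: {fractional_error * seconds_per_year:.1e} s")
# -> ~3.2e-11 s, i.e. tens of picoseconds per year

years_per_second = 1 / (fractional_error * seconds_per_year)
print(f"years to drift one second: {years_per_second:.1e}")
# -> ~3.2e10 years, more than twice the age of the universe
```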
The improvement comes as many researchers expect the international standard that defines a second in time—the Système International (SI)—to switch over to optical clocks. Today’s cesium-based atomic clocks require a month-long averaging process to achieve the same frequency stability that an optical clock can achieve in seconds.
“Because optical clocks have achieved unprecedented levels of accuracy and stability, linking the frequencies provided by these optical standards with distantly located devices would allow direct calibration of microwave clocks to the future optical SI second,” wrote Anne Curtis, a senior research scientist at the National Physical Laboratory in the United Kingdom, in an accompanying article. Curtis was not involved in the research.
Optical clocks can already be linked together physically through fiber-optic networks, but this approach still limits their usage in many electronic systems. The new achievement by the U.S. research team—with members from NIST, the University of Colorado-Boulder, and the University of Virginia in Charlottesville—could remove such limitations by combining the performance of optical clocks with microwave signals that can travel in areas without a fiber-optic network.
For its demonstration, the team built its own version of an optical frequency comb, a pulsed-laser device that uses very brief light pulses to create a repetition rate that, when converted to frequency numbers, resembles “a comb of evenly spaced frequencies or tones spanning the optical regime,” Curtis explains. Modern optical frequency combs were first developed 20 years ago and have played a starring role in both fundamental research experiments and various technological systems since that time.
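The usefulness of a comb comes from the fact that every tone sits at a predictable frequency, f_n = f_ceo + n × f_rep, where f_rep is the repetition rate and f_ceo is the carrier-envelope offset. A sketch with illustrative numbers (not the team's actual comb parameters) shows how an unknown optical frequency maps to a measurable microwave beat note:

```python
# The comb equation: every tone sits at f_n = f_ceo + n * f_rep.
# Illustrative parameters only, chosen for round numbers.
f_rep = 1.0e9        # pulse repetition rate: 1 GHz tone spacing
f_ceo = 35.0e6       # carrier-envelope offset frequency

def comb_tone(n):
    return f_ceo + n * f_rep

# An unknown optical frequency is measured via its beat with the nearest tone.
f_optical = 194_000_123_456_789.0            # Hz, ~1545 nm light (made up)
n_nearest = round((f_optical - f_ceo) / f_rep)
beat = f_optical - comb_tone(n_nearest)
print(f"nearest tone n = {n_nearest}, beat note = {beat / 1e6:.3f} MHz")
```

A roughly 194 THz optical frequency is thus pinned down by counting a tone index and measuring an 88 MHz beat, both of which ordinary electronics can handle.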
By measuring the optical beats between a single comb tone and an unknown optical frequency, researchers knew they should be able to directly link faster optical frequencies to slower microwave frequencies. Doing that required a photodetector developed by researchers at the University of Virginia to carry out the optical-to-microwave conversion and generate an electrical signal. The team also wrote its own software for off-the-shelf digital sampling hardware to help digitize and extract the phase information from the optical clocks.
“The piece that has lagged a bit is the high-fidelity conversion of optical pulses to microwave signals with the optical-to-electrical convertor,” Quinlan says. “So if you have pulses where you know the timing to within a femtosecond (one quadrillionth of a second), how do you convert those photons to electrons while maintaining that level of timing stability? That has taken a lot of effort and work to understand how to do that really well.”
The researchers didn’t quite reach their original benchmark for minimizing the potential instability and errors in the microwave signals synchronized with the optical clocks. But even with the current performance, Quinlan and his colleagues realized: “Okay, great, that'll support current and next-generation optical clocks.”
Curtis describes the improved capability to synchronize microwave signals with optical clock signals as a “paradigm shift” that will impact “fundamental physics, communication, navigation, and microwave engineering.” One of the most immediate applications could involve higher-accuracy Doppler radar systems used in navigation and tracking. A more stable microwave signal can help radar detect even smaller frequency shifts that could, for example, better distinguish slow-moving objects from the background noise of stationary objects.
Future space telescopes based on very-long-baseline interferometry (VLBI) could also benefit from the highly stable microwave signals synchronized with optical clocks. Today’s ground-based VLBI telescopes use receiver devices spread across the globe to detect microwave and millimeter-wave signals and combine them into high-resolution images of cosmic objects such as black holes. A similar VLBI telescope located in space could boost the imaging resolution while avoiding the Earth’s atmospheric distortions that interfere with astronomers’ observations. In that scenario, having optical-clock-level stability to synchronize all the signals received by the VLBI telescope could improve observation time from seconds to hours.
“Essentially you’re collecting signals from multiple receivers and you need to time-stamp those signals to combine them in a meaningful way,” Quinlan says. “Right now the atmosphere distorts the signal enough so that [it] is a limitation rather than the time-stamping from a stable clock, but if you get away from atmospheric distortions, you could do much better and then you can utilize a much more stable clock.”
There is still more work to be done before more electronic systems can take advantage of such optical-to-microwave conversion. For one thing, the sheer size of optical clocks means that nobody should expect a mobile device to have a tiny optical clock inside anytime soon. In the team’s latest research, their optical atomic clock setup occupied a lab table about 32 square feet in size (almost 3 square meters).
“Some of my coauthors on this effort led by Andrew Ludlow at NIST, as well as other folks around the world, are working to make this much more compact and mobile so that we can kind of have optical-clock-level performance on mobile platforms,” Quinlan says.
Another approach that could bypass the need for miniature optical clocks involves figuring out whether microwave transmissions could maintain the stability of the optical clock performance when transmitted across large distances. If this works, stable microwave transmissions could wirelessly synchronize performance across multiple mobile devices.
At the moment, optical clocks can be linked only through either fiber-optic cables or lasers beamed through the air. The latter often becomes ineffective in bad weather. But the team plans to explore the beaming possibility further with microwaves, especially after its initial success and with support from both NIST and the Defense Advanced Research Projects Agency.
“What would be great is if we had a microwave link that basically maintains the stability of the optical signal but can then be transmitted on a microwave carrier that doesn't suffer from rainy days and from dusty conditions,” Quinlan says. “But it's still yet to be determined whether or not such a link could actually maintain the stability of the optical clock on a microwave carrier.”
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program. | <urn:uuid:e44ab295-be36-476e-afd9-b1102eeea05c> | CC-MAIN-2022-21 | https://spectrum.ieee.org/optical-atomic-clock-advantage-expands-electronics | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531779.10/warc/CC-MAIN-20220520093441-20220520123441-00485.warc.gz | en | 0.941917 | 1,719 | 3.8125 | 4 |
Researchers at the National Institute of Standards and Technology (NIST) have constructed and tested a system that allows commercial electronic components—such as microprocessors on circuit boards—to operate in close proximity with ultra-cold devices employed in quantum information processing. That design allows four times as much data to be output for the same number of connected wires.
In the rising excitement about quantum computing, it can be easy to overlook the physical fact that the data produced by manipulation of quantum bits (qubits) at cryogenic temperatures a few thousandths of a degree above absolute zero still has to be initiated, read out, and stored using conventional electronics, which presently work only at room temperature, several meters away from the qubits. This separation has obstructed development of quantum computing devices that outperform their classical counterparts.
That extra distance between the quantum computing elements and the external electronics requires extra time for signals to travel, which also causes signals to degrade. In addition, each (comparatively very hot) wire needed to connect the electronics to the cryogenic components adds heat, making it hard to maintain the ultracold temperature required for the quantum devices to work.
“If you consider our modern computers, what practically limits their speed is the time it takes information to move around between the CPU and graphics and memory—the physical distances, even though it is moving at the speed of light,” said the project scientist, Joshua Pomeroy. “Those finite distances kill the performance speed. Everything has to be as close as possible so the information gets there fast. So, you need electronics that live with the qubits.”
One obvious way to do that is to place the electronics package inside the cryo environment next to the quantum components. But until now, very few conventional circuit components have been shown to operate properly there. “Moreover, nobody really knows how much energy modern electronics consumes at these temperatures, which is another aspect in addition to just ‘getting something to work,'” Pomeroy said.
To expand circuit functionality at cryogenic temperatures, Pomeroy selected promising standard, commercial electronic chips and constructed a circuit designed to address another problem: the long time required to cool quantum devices, combined with the restricted number of measurement wires, limits how many devices can be measured. Since only one device at a time is measured, the new cryo-circuit routes each measurement line to a selected quantum device, “like a railroad switch yard where the track to a distant destination can be connected to many different local destinations, called multiplexing,” Pomeroy said. “In our case, we have 24 measurement lines, each of which can be connected to four different destinations. That requires a lot of switches.”
And all of those switches need to be set correctly. “We need to be able to control the switches (where the train goes) so that we choose which device on the cryo-stage is connected to each of the 24 wires that come out to room temperature,” Pomeroy said. For that task, he employed a standard device from room-temperature electronics: a “shift register” that uses only three control lines but can generate an arbitrarily complex set of control instructions.
“This device uses digital pulses (0 or 1) on the first measurement line that are timed by ‘clock’ pulses from a second wire to build a digital number—for example, 0010—that selects the destination,” Pomeroy said. “In this example, the ‘1’ in the third position would route the measurement to the third device for measurement.” Once the address is set, a pulse on a third control line applies the selected address to the switches, and the measurements can begin.
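The logic Pomeroy describes can be captured in a few lines of simulation. The class below is a hypothetical illustration of how a three-wire data/clock/latch scheme builds up an address, not NIST's actual firmware:

```python
# Toy model of the three-wire control scheme: data, clock, latch.

class ShiftRegister:
    def __init__(self, width=4):
        self.bits = [0] * width      # internal shift stages
        self.latched = [0] * width   # outputs that drive the switches

    def clock_in(self, data_bit):
        """One clock pulse: shift every stage along and take in the new bit."""
        self.bits = self.bits[1:] + [data_bit]

    def latch(self):
        """A pulse on the third line applies the address to the switches."""
        self.latched = list(self.bits)

sr = ShiftRegister()
for bit in [0, 0, 1, 0]:             # serial address "0010" from the example
    sr.clock_in(bit)
sr.latch()
print(sr.latched)                    # [0, 0, 1, 0]: route to the third device
```

The payoff of this scheme is that an arbitrarily wide address needs only three control wires into the cryostat, no matter how many switches it drives.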
The system quadruples the amount of measurement data that can be output without adding more wires.
“This work represents a milestone of technical effort that is important for enabling advanced measurement and technology at cryogenic temperatures,” said David Gundlach, chief of NIST’s Nanoscale Device Characterization Division.
“As one additional note,” Pomeroy said, “this entire effort took place during the pandemic shutdown at NIST Gaithersburg. My first (virtual) meeting with our electronics shop staff was in May 2020, and the planning and design continued through winter 2020. Parts and the custom devices were ordered in December and January, with final assembly and bench testing in the spring of 2021. The circuit boards were deployed for mounting and samples for testing in early summer. Since then, they have enabled more than 20 vastly different devices to be measured.”
Source: National Institute of Standards and Technology, “Novel design greatly improves output from commercial circuit boards next to superconducting qubits,” February 24, 2022; retrieved February 28, 2022.
Most of us use our computers just to play games, browse, watch videos, and work or study. But these machines have far more uses than we think and are being used every day to solve enormous problems.
Historically, such problems were solved using supercomputers (CPUs, lots of CPUs), but since humans won't settle for anything less, we have invented a more efficient device: the quantum computer. Don't be put off by the few physics terms involved; they are explained below!
Need for Quantum Computers
For years, we have relied on classical computers (including supercomputers) to solve our problems, and they have been helpful, but not always. The problem with these machines is that they are sequential, performing calculations one by one. Some problems therefore take far too long to solve, and are solved inefficiently.
This is the primary motivation and use case for quantum computers. A problem described by IBM is a great example.
Before we dive into the topic!
Learning how quantum computers work requires a basic understanding of a few topics. Not to worry, we have tried to explain them in a simple way.
Superposition is a quantum property whereby a particle can exist in any of its possible states until it is measured. A famous example is the electron, which, when unobserved, may have any spin and takes on a definite state only when we look. A simpler illustration is the coin toss.
In the classical case, you flip a coin and are certain the outcome will be either heads or tails. In superposition, however, the coin would be heads and tails, and every state between them, all at once.
Qubits are the basic unit of data in quantum computing. In a classical computer, the basic unit of data is the bit, which is either 0 or 1. In quantum computing, a qubit is a superposition of 0 and 1 (0 and 1 at the same time). This lets a quantum machine explore many more candidate answers within a short span of time.
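To make that contrast concrete, here is a minimal numerical sketch of a single qubit using plain NumPy (no quantum library; the equal-superposition amplitudes are just an illustrative choice):

import numpy as np

# A classical bit is 0 or 1. A qubit is a superposition a|0> + b|1>,
# where |a|^2 + |b|^2 = 1 and the squared amplitudes give measurement probabilities.
qubit = np.array([1, 1]) / np.sqrt(2)      # equal superposition of 0 and 1

probs = np.abs(qubit) ** 2
print(probs)                                # [0.5 0.5]

# Measurement collapses the superposition to a single classical outcome:
print(np.random.choice([0, 1], p=probs))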
Entanglement is the simplest of these to state. You may have heard it put this way: “If two quantum-entangled objects are kept at opposite ends of the universe, and one particle's state is disturbed, it affects the other particle at the other end of the universe.” This property is exploited in qubits so that their states become correlated and they can solve more complicated problems together (teamwork).
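And a matching sketch of that correlation, using the two-qubit Bell state (|00> + |11>)/sqrt(2); again this is plain NumPy arithmetic illustrating the statistics, not a model of any particular hardware:

import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for |00>, |01>, |10>, |11>
probs = np.abs(bell) ** 2                     # [0.5, 0.0, 0.0, 0.5]

# Joint measurements only ever give 00 or 11, never 01 or 10, so reading
# one qubit instantly tells you what the other will read.
print(np.random.choice(["00", "01", "10", "11"], size=10, p=probs))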
Now, What is Quantum Computing?
Quantum computing is the collective use of the quantum properties discussed above (superposition, entanglement, and so on) to solve problems or perform calculations. Theoretically, quantum computing can be achieved in several ways. The method we currently use is the quantum circuit, which is based on qubits.
Given the complexity involved, you might expect these computers to be the size of ENIAC, but that's not the case. They are mostly the size of a refrigerator and are maintained in supercold conditions.
The machine works with the help of superconductors (supercooled conductors that offer zero electrical resistance) and manipulates electrons in them: input signals are converted into quantum states, the calculation is performed, and the result is converted back into an understandable form.
When scientists found that some problems aren't feasible on classical computers, quantum computers were the solution they arrived at.
Quantum computers find uses in various domains. With the world now focusing more on AI and machine learning, these computers speed up the work and remove barriers we previously faced, such as the high computational cost of training models.
They also help in computational chemistry, providing deeper insight for pharmaceutical research. They could be used in the drug industry, where the current method of development, trial and error, is risky and expensive.
Other fields include quantum cryptography, financial modelling, and more.
Are we there yet?
Though we have achieved a great deal in this field in the past decade, we have not reached the peak of its abilities. We have seen many of its use cases, but quantum computers are not the solution for every problem.
For some scenarios, supercomputers remain more useful than quantum computers, so we may want to combine the two to create the best machine.
Companies such as Google, IBM, and Honeywell are constantly engaged in research in this domain. Google AI, along with NASA, has created a 54-qubit quantum processor. IBM has created the first circuit-based commercial quantum computer, called IBM Q System One, and plans to build a 1,000-qubit quantum computer by 2023.
These machines also have downsides. Because they rely on supercold conditions and quantum-level effects, they are highly sensitive: heat, electromagnetic fields, and collisions with air molecules can cause the qubits to lose their quantum properties and crash the system. The more particles involved, the more vulnerable the device becomes.
Therefore, these machines must be kept away from environmental interference, and additional qubits are required to correct the errors that do occur.
Finally, I would say that present-day quantum computers may not be the perfect machines we are looking for, but continued research will lead us to more ideal machines and help our causes.
In a nutshell, multivariable calculus extends the familiar concepts of limits, derivatives, and integrals to functions with more than one independent variable.
Multivariable calculus is much more than just a repeat of single-variable calculus, however. It's a rich subject with its own unique puzzles and surprises.
It introduces new tools that solve important problems in machine learning, neural networks, engineering, quantum computing, and astrophysics, to name just a few.
This course engages you with expertly designed problems, animations, and interactive three-dimensional visualizations all prepared to help you hone your multivariable calculus skills.
This quiz, in particular, sets the stage for our first chapter, which provides a compact introduction to the essential ideas of multivariable calculus.
Vectors play an essential role in multivariable calculus. For now, we can think of vectors as arrows in space. A vector is defined by its direction and its length (or magnitude).
In the course's 3D interactive, you control the vector's length as well as the two angles $\theta$ and $\phi$ setting the vector's direction. Can you adjust these so that the tip of the arrow sits exactly on a given point in space?
Hint: It helps to adjust your viewing perspective on the vector and the point, and to zoom in and out; a perspective where the point is centered in the viewing screen makes the angles easier to read off.
Later in the course, we'll see that vectors are also collections of numbers, making them ideal building blocks for multivariable functions.
For example, the point in the last problem sits in space and locating it requires three numbers called coordinates. The vector whose tip sits at the point can also be described with these same three numbers!
Looking at the last problem from a different perspective, we can use the two angles specifying the direction of the vector and its length to locate a point in space.
This is the essential idea behind spherical coordinates, a topic covered in detail in Coordinates in 3D.
Calculus truly is the mathematics of limits. Without limits, we couldn't define derivatives or integrals, the two pillars of our subject. This is true no matter how many independent variables we have.
A single-variable limit can often be evaluated with the help of continuity. Mathematically, continuity at a point $a$ means $\lim_{x \to a} f(x) = f(a).$ Intuitively, it means that the graph of the function has no holes or jumps or breaks.
Later in the course, we'll learn precisely what it means to take a multivariable limit. We'll find continuity to be a huge help in this setting, too.
The graph below represents a function of two variables we'll soon encounter. Use only intuition to determine if this function is continuous everywhere or discontinuous at some points.
The integral was originally designed to solve planar area problems. Similarly, multiple integrals are very useful in solving volume problems in higher dimensions.
We can start thinking about volumes of simple objects in higher dimensions even though we don't know how to integrate in higher dimensions yet or even how to properly visualize them with our 3D minds. We can do this by analogy.
Spheres in $n$ dimensions are characterized by a radius. A sphere consists of all points at a fixed distance from a given center. The circle is the lowest-dimensional sphere familiar to you. If it has radius $r,$ its area is $\pi r^2.$ Also, the sphere in 3D has volume $\frac{4}{3}\pi r^3$ if it has radius $r.$
Arguing by analogy, complete the statement:
A sphere of radius $r$ in $n$ dimensions has volume proportional to ______.
It may seem silly to consider volumes that are more than three-dimensional, but they play important roles in probability and physics where there could be thousands, millions, or even billions of variables.
Mathematically, if $f(x) \ge 0$ on $[a, b],$ then the area between the graph and the lines $x = a$ and $x = b$ is $\int_a^b f(x)\,dx.$ When we generalize to multiple variables, we'll have an integral sign for each new variable, or dimension. For example, the $n$-dimensional sphere of radius $r$ has volume $\int \cdots \int_{x_1^2 + \cdots + x_n^2 \le r^2} dx_1 \cdots dx_n,$ with one integral sign per dimension. If this expression doesn't make sense yet, don't worry: it will soon! The upcoming 3D volumes quiz will set us on the right path by introducing two-variable integrals through the Riemann sum.
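As a concrete check of the pattern, here is the standard two-variable computation recovering the circle's area:

$$\iint_{x^2+y^2 \le r^2} dx\,dy \;=\; \int_{-r}^{r}\left(\int_{-\sqrt{r^2-x^2}}^{\sqrt{r^2-x^2}} dy\right) dx \;=\; \int_{-r}^{r} 2\sqrt{r^2-x^2}\,dx \;=\; \pi r^2.$$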
One of the greatest applications of calculus (specifically derivatives) is finding the maximum and minimum values of a function.
The upcoming Finding Extreme Values quiz walks us through how this extends to a function with many variables. Before we get there, let's get a sense of what optimizing a function of two variables is like.
Let's say $x$ and $y$ are any two real numbers that obey the inequality $x^2 + y^2 \le 1.$ Geometrically, this means that $(x, y)$ sits inside (or on) the unit circle centered at the origin.
Let's also define a rule $f(x, y)$ which outputs a single number for each pair of input values. Select all of the options that apply to this function if we only consider input satisfying $x^2 + y^2 \le 1.$
The example was chosen since it could be optimized without the help of multivariable calculus.
We'll encounter many new problems in our course where algebra and single-variable calculus simply won't be enough. Our next quiz dives deeper into multivariable optimization. There, we'll uncover a powerful new tool and our first truly multivariable concept: the partial derivative. | <urn:uuid:c5a04f2d-2995-4c3f-a827-e4214ee4e842> | CC-MAIN-2022-21 | https://brilliant.org/practice/multivariable-calculus-in-a-nutshell/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00289.warc.gz | en | 0.926178 | 1,085 | 4.53125 | 5 |
Princeton researchers have developed a new method that may allow the quick and reliable transfer of quantum information throughout a computing device.
The method, formulated by a team led by Princeton physicist Jason Petta, could eventually allow engineers to design quantum computers consisting of millions of quantum bits, or qubits. So far, quantum researchers have only been able to manipulate small numbers of qubits, which are unfortunately insufficient for use with a practical machine.
“The whole game at this point in quantum computing is trying to build a larger system,” explained Andrew Houck, an assistant professor of electrical engineering who is part of the research team.
To conduct the transfer, Petta’s team used a stream of microwave photons to analyze a pair of electrons trapped in a tiny cage called a quantum dot. The “spin state” of the electrons – information about how they are spinning – serves as the qubit, a basic unit of information. The microwave stream allows the scientists to read that information.
“We create a cavity with mirrors on both ends – but they don’t reflect visible light, they reflect microwave radiation,” said Petta. “Then we send microwaves in one end, and we look at the microwaves as they come out the other end. The microwaves are affected by the spin states of the electrons in the cavity, and we can read that change.”
In an ordinary sense, the distances involved are very small; the entire apparatus operates over a little more than a centimeter. But on the subatomic scale, they are vast. Researchers say this is somewhat akin to coordinating the motion of a top spinning on the moon with another on the surface of the earth.
“[Really], it’s the most amazing thing,” said Jake Taylor, a physicist at the National Institute of Standards and Technology and the Joint Quantum Institute at the University of Maryland. “You have a single electron almost completely changing the properties of an inch-long electrical system.”
For years, teams of scientists have pursued the idea of using quantum mechanics to build a new machine that would revolutionize computing. The goal is not to build a faster or more powerful computer, but to construct one that approaches problems in a completely different fashion.
Standard computers store information as classical “bits”, which can take on a value of either 0 or 1. These bits allow programmers to create the complex instructions that are the basis for modern computing power. Since Alan Turing took the first steps toward creating a computer at Princeton in 1936, engineers have created vastly more powerful and complex machines, but this basic binary system has remained unchanged.
The power of a quantum computer originates from the strange rules of quantum mechanics, which describe the universe of subatomic particles. Quantum mechanics says that an electron can spin in one direction, representing a 1, or in another direction, a 0. However, it can also be in something called “superposition” – representing all states between 1 and 0. If scientists and engineers can manage to build a working machine that takes advantage of this, it would open up entirely new fields of computing.
“The point of a quantum computer is not that they can do what a normal computer can do but faster; that’s not what they are,” said Houck. “The quantum computer would allow us to approach problems differently. It would allow us to solve problems that cannot be solved with a normal computer.”
Mathematicians are still working on possible uses for a quantum system, but the machines could allow them to accomplish tasks such as factoring currently unfactorable numbers, breaking codes or predicting the behavior of molecules.
One challenge facing scientists is that the spins of electrons, or any other quantum particles, are incredibly delicate. Any outside influences, whether a wisp of magnetism or glimpse of light, destabilizes the electrons’ spins and introduces errors.
Over the years, scientists have developed techniques to observe spin states without disturbing them. However, analyzing small numbers of spins is still not enough, as millions will be required to make a real quantum processor.
To tackle the problem, Petta’s team combined techniques from two distinct branches of science: from materials science, they used a structure called a quantum dot to hold and analyze electrons’ spins; and from optics, they adopted a microwave channel to transfer the spin information from the dot.
To make the quantum dots, the team isolated a pair of electrons on a small section of material called a “semiconductor nanowire.” Basically, that means a wire that is so thin that it can hold electrons like soda bubbles in a straw. They then created small “cages” along the wire. The cages are set up so that electrons will settle into a particular cage depending on their energy level.
This is how the team reads the spin state: electrons of similar spin will repel, while those of different spins will attract. So the team manipulates the electrons to a certain energy level and then reads their position. If they are in the same cage, they are spinning differently; if they are in different cages, the spins are the same.
The second step is to place this quantum dot inside the microwave channel. This allows the team to transfer the information about the pair’s spin state – the qubit.
Petta says the next step is to increase the reliability of the setup for a single electron pair. After that, the team plans to add more quantum dots to create more qubits. Team members are cautiously optimistic as there appear to be no insurmountable problems at this point in time. However, as with any system, increasing complexity could lead to unforeseen difficulties.
“The methods we are using here are scalable, and we would like to use them in a larger system… But to make use of the scaling, it needs to work a little better. The first step is to make better mirrors for the microwave cavity,” Petta added. | <urn:uuid:95795202-63a2-4664-bc92-edcb26b63887> | CC-MAIN-2022-21 | https://tgdaily.com/technology/hardware/66977-a-new-route-to-large-scale-quantum-computing/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00089.warc.gz | en | 0.939599 | 1,237 | 4.21875 | 4 |
If you understand how these systems operate, then you understand why they could change everything.
If someone asked you to picture a quantum computer, what would you see in your mind?
Maybe you see a normal computer -- just bigger, with some mysterious physics magic going on inside? Forget laptops or desktops. Forget computer server farms. A quantum computer is fundamentally different in both the way it looks and, more importantly, in the way it processes information.
There are currently several ways to build a quantum computer. But let’s start by describing one of the leading designs to help explain how it works.
Imagine a lightbulb filament, hanging upside down, but it’s the most complicated light you’ve ever seen. Instead of one slender twist of wire, it has organized silvery swarms of them, neatly braided around a core. They are arranged in layers that narrow as you move down. Golden plates separate the structure into sections.
The outer part of this vessel is called the chandelier. It’s a supercharged refrigerator that uses a special liquified helium mix to cool the computer’s quantum chip down to near absolute zero. That’s the coldest temperature theoretically possible.
At such low temperatures, the tiny superconducting circuits in the chip take on their quantum properties. And it’s those properties, as we’ll soon see, that could be harnessed to perform computational tasks that would be practically impossible on a classical computer.
Traditional computer processors work in binary—the billions of transistors that handle information on your laptop or smartphone are either on (1) or they’re off (0). Using a series of circuits, called “gates,” computers perform logical operations based on the state of those switches.
Classical computers are designed to follow specific inflexible rules. This makes them extremely reliable, but it also makes them ill-suited for solving certain kinds of problems—in particular, problems where you’re trying to find a needle in a haystack.
This is where quantum computers shine.
If you think of a computer solving a problem as a mouse running through a maze, a classical computer finds its way through by trying every path until it reaches the end.
What if, instead of solving the maze through trial and error, you could consider all possible routes simultaneously?
Quantum computers do this by substituting the binary “bits” of classical computing with something called “qubits.” Qubits operate according to the mysterious laws of quantum mechanics: the theory that physics works differently at the atomic and subatomic scale.
The classic way to demonstrate quantum mechanics is by shining a light through a barrier with two slits. Some light goes through the top slit, some the bottom, and the light waves knock into each other to create an interference pattern.
But now dim the light until you’re firing individual photons one by one—elementary particles that comprise light. Logically, each photon has to travel through a single slit, and they’ve got nothing to interfere with. But somehow, you still end up with an interference pattern.
Here’s what happens according to quantum mechanics: Until you detect them on the screen, each photon exists in a state called “superposition.” It’s as though it’s traveling all possible paths at once. That is, until the superposition state “collapses” under observation to reveal a single point on the screen.
Qubits use this ability to do very efficient calculations.
For the maze example, the superposition state would contain all the possible routes. And then you’d have to collapse the state of superposition to reveal the likeliest path to the cheese.
Just like you add more transistors to extend the capabilities of your classical computer, you add more qubits to create a more powerful quantum computer.
Thanks to a quantum mechanical property called “entanglement,” scientists can push multiple qubits into the same state, even if the qubits aren’t in contact with each other. And while individual qubits exist in a superposition of two states, this increases exponentially as you entangle more qubits with each other. So a two-qubit system stores 4 possible values, a 20-qubit system more than a million.
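A quick back-of-the-envelope check of that scaling, in ordinary Python (nothing quantum about the arithmetic itself):

for n in [2, 20, 50]:
    print(n, "qubits ->", 2 ** n, "amplitudes")
# 2 qubits -> 4
# 20 qubits -> 1048576  (the "more than a million" above)
# 50 qubits -> 1125899906842624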
So what does that mean for computing power? It helps to think about applying quantum computing to a real world problem: the one of prime numbers.
A prime number is a natural number greater than 1 that can only be divided evenly by itself or 1.
While it’s easy to multiply small numbers into giant ones, it’s much harder to go the reverse direction; you can’t just look at a number and tell its factors. This is the basis for one of the most popular forms of data encryption, called RSA.
You can only decrypt RSA security by factoring the product of two prime numbers. Each prime factor is typically hundreds of digits long, and they serve as unique keys to a problem that’s effectively unsolvable without knowing the answers in advance.
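A toy illustration of that asymmetry in Python, with deliberately tiny primes so the "hard" direction finishes at all; real RSA moduli are hundreds of digits long, far beyond this brute-force search:

def smallest_factor(n):
    # Brute-force trial division: cost grows with the size of the smallest prime factor.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

p, q = 104723, 104729        # two primes: multiplying them is instant...
n = p * q
print(n)                      # 10967535067
print(smallest_factor(n))     # 104723 -- ...recovering them is the slow direction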
In 1994, M.I.T. mathematician Peter Shor, then at AT&T Bell Laboratories, devised a novel algorithm for factoring numbers into primes, whatever their size. One day, a quantum computer could use its computational power, and Shor’s algorithm, to hack everything from your bank records to your personal files.
In 2001, IBM made a quantum computer with seven qubits to demonstrate Shor’s algorithm. For qubits, they used atomic nuclei, which have two different spin states that can be controlled through radio frequency pulses.
This wasn’t a great way to make a quantum computer, because it’s very hard to scale up. But it did manage to run Shor’s algorithm and factor 15 into 3 and 5. Hardly an impressive calculation, but still a major achievement in simply proving the algorithm works in practice.
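Here is the classical skeleton of what that demonstration computed, sketched in Python. The period-finding loop is the step a quantum computer accelerates; everything else is ordinary number theory (the base a = 7 is just one valid choice):

from math import gcd

N, a = 15, 7                      # a must share no factors with N

# Quantum part (done by brute force here): find the period r with a^r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1
print(r)                           # 4

# Classical post-processing turns the period into factors.
if r % 2 == 0:
    print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))   # 3 5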
Even now, experts are still trying to get quantum computers to work well enough to best classical supercomputers.
That remains extremely challenging, mostly because quantum states are fragile. It’s hard to completely stop qubits from interacting with their outside environment, even with precise lasers in supercooled or vacuum chambers.
Any noise in the system leads to a state called “decoherence,” where superposition breaks down and the computer loses information.
A small amount of error is natural in quantum computing, because we’re dealing in probabilities rather than the strict rules of binary. But decoherence often introduces so much noise that it obscures the result.
When one qubit goes into a state of decoherence, the entanglement that enables the entire system breaks down.
So how do you fix this? The answer is called error correction--and it can happen in a few ways.
Error Correction #1: A fully error-corrected quantum computer could handle common errors like “bit flips,” where a qubit suddenly changes to the wrong state.
To do this you would need to build a quantum computer with a few so-called “logical” qubits that actually do the math, and a bunch of standard qubits that correct for errors.
It would take a lot of error-correcting qubits—maybe 100 or so per logical qubit--to make the system work. But the end result would be an extremely reliable and generally useful quantum computer.
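The flavor of that redundancy is easy to show classically. The Python sketch below majority-votes three noisy copies of one logical bit; it is only an analogy, since a real quantum code must detect errors without directly reading the qubits:

import random

def encode(bit, copies=3):
    return [bit] * copies                      # one logical bit, several physical copies

def transmit(codeword, flip_prob=0.1):
    return [b ^ (random.random() < flip_prob) for b in codeword]   # random bit flips

def decode(codeword):
    return int(sum(codeword) > len(codeword) / 2)                  # majority vote

received = transmit(encode(1))
print(received, "->", decode(received))        # survives any single flipped copy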
Error Correction #2: Other experts are trying to find clever ways to see through the noise generated by different errors. They are trying to build what they call “Noisy intermediate-scale quantum computers” using another set of algorithms.
That may work in some cases, but probably not across the board.
Error Correction #3: Another tactic is to find a new qubit source that isn’t as susceptible to noise, such as “topological particles” that are better at retaining information. But some of these exotic particles (or quasi-particles) are purely hypothetical, so this technology could be years or decades off.
Because of these difficulties, quantum computing has advanced slowly, though there have been some significant achievements.
In 2019, Google used a 54-qubit quantum computer named “Sycamore” to do an incredibly complex (if useless) simulation in under 4 minutes—running a quantum random number generator a million times to sample the likelihood of different results.
Sycamore works very differently from the quantum computer that IBM built to demonstrate Shor’s algorithm. Sycamore takes superconducting circuits and cools them to such low temperatures that the electrical current starts to behave like a quantum mechanical system. At present, this is one of the leading methods for building a quantum computer, alongside trapping ions in electric fields, where different energy levels similarly represent different qubit states.
Sycamore was a major breakthrough, though many engineers disagree exactly how major. Google said it was the first demonstration of so-called quantum advantage: achieving a task that would have been impossible for a classical computer.
It said the world’s best supercomputer would have needed 10,000 years to do the same task. IBM has disputed that claim.
At least for now, serious quantum computers are a ways off. But with billions of dollars of investment from governments and the world’s biggest companies, the race for quantum computing capabilities is well underway. The real question is: how will quantum computing change what a “computer” actually means to us. How will it change how our electronically connected world works? And when? | <urn:uuid:d172a542-b660-462d-9b2a-f407e8abfa62> | CC-MAIN-2022-21 | https://www.scientificamerican.com/video/how-does-a-quantum-computer-work/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00289.warc.gz | en | 0.932561 | 2,010 | 4.0625 | 4 |
From brain to heart to stomach, the bodies of humans and animals generate weak magnetic fields that a supersensitive detector could use to pinpoint illnesses, trace drugs – and maybe even read minds. Sensors no bigger than a thumbnail could map gas deposits underground, analyze chemicals, and pinpoint explosives that hide from other probes.
Now scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California at Berkeley, working with colleagues from Harvard University, have improved the performance of one of the most potent possible sensors of magnetic fields on the nanoscale – a diamond defect no bigger than a pair of atoms, called a nitrogen vacancy (NV) center.
The research team’s discoveries may eventually enable clocks smaller than computer chips yet accurate to within a few quadrillionths of a second, or rotational sensors quicker and more tolerant of extreme temperatures than the gyroscopes in smart phones. Before long, an inexpensive chip of diamond may be able to house a quantum computer.
A sensor made of diamond
Nitrogen vacancy centers are some of the most common defects in diamonds. When a nitrogen atom substitutes for a carbon atom in the diamond crystal and pairs with an adjacent vacancy (where a carbon atom is missing altogether), electrons that would have bonded to the now-missing carbon atom are left unpaired in the center.
The electron spin states are well defined and very sensitive to magnetic fields, electric fields, and light, so they can easily be set, adjusted, and read out by lasers.
“The spin states of NV centers are stable across a wide range of temperatures from very hot to very cold,” says Dmitry Budker of Berkeley Lab’s Nuclear Science Division, who is also a physics professor at UC Berkeley. Even tiny flecks of diamond costing pennies per gram could be used as sensors because, says Budker, “we can control the number of NV centers in the diamond just by irradiating and baking it,” that is, annealing it.
The challenge is to keep the information inherent in the spin states of NV centers, once it has been encoded there, from leaking away before measurements can be performed; in NV centers, this requires extending what’s called the “coherence” time of the electron spins, the time the spins remain synchronized with each other.
Recently Budker worked with Ronald Walsworth of Harvard in a team that included Harvard’s Nir Bar-Gill and UC Berkeley postdoc Andrey Jarmola. They extended the coherence time of an ensemble of NV electron spins by more than two orders of magnitude over previous measurements.
“To me, the most exciting aspect of this result is the possibility of studying changes in the way NV centers interact with one another,” says Bar-Gill, the first author of the paper, who will move to Hebrew University in Jerusalem this fall. “This is possible because the coherence times are much longer than the time needed for interactions between NV centers.”
Bar-Gill adds, “We can now imagine engineering diamond samples to realize quantum computing architectures.” The interacting NV centers take the role of bits in quantum computers, called qubits. Whereas a binary digit is either a 1 or a 0, a qubit represents a 1 and a 0 superposed, a state of Schrödinger’s-cat-like simultaneity that persists as long as the states are coherent, until a measurement is made that collapses all the entangled qubits at once.
“We used a couple of tricks to get rid of sources of decoherence,” says Budker. “One was to use diamond samples specially prepared to be pure carbon-12.” Natural diamond includes a small amount of the isotope carbon-13, whose nuclear spin hastens the decoherence of the NV center electron spins. Carbon-12 nuclei are spin zero.
“The other trick was to lower the temperature to the temperature of liquid nitrogen,” Budker says. Decoherence was reduced by cooling the samples to 77 degrees Kelvin, below room temperature but still readily accessible.
Working together in Budker’s lab, members of the team mounted the diamond samples inside a cryostat. A laser beam passing through the diamond, plus a magnetic field, tuned the electron spins of the NV centers and caused them to fluoresce. Their fluorescent brightness was a measure of spin-state coherence.
“Controlling the spin is essential,” Budker says, “so we borrowed an idea from nuclear magnetic resonance” – the basis for such familiar procedures as magnetic resonance imaging (MRI) in hospitals.
While different from nuclear spin, electron spin coherence can be extended with similar techniques. Thus, as the spin states of the NV centers in the diamond sample were about to decohere, the experimenters jolted the diamond with a series of up to 10,000 short microwave pulses. The pulses flipped the electron spins as they began to fall out of synchronization with one another, producing “echoes” in which the reversed spins caught up with themselves. Coherence was reestablished.
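The rephasing trick can be captured in a few lines of NumPy. In this toy model each spin precesses at its own random rate; a single flip at time t negates the accumulated phases, so after an equal second interval the ensemble re-synchronizes (the numbers are arbitrary, not NV parameters):

import numpy as np

detunings = np.random.default_rng(0).normal(0.0, 1.0, 2000)  # each spin's own rate

def coherence(phases):
    return abs(np.mean(np.exp(1j * phases)))                  # 1 = in sync, 0 = scrambled

t = 3.0
phases = detunings * t
print(coherence(phases))          # ~0.02: the spins have fanned out (decoherence)

phases = -phases                   # the flip pulse reverses the accumulated phase...
phases += detunings * t           # ...and an equal evolution time undoes the spread
print(coherence(phases))          # 1.0: an echo -- coherence is restored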
Eventually the researchers achieved spin coherence times lasting over half a second. “Our results really shine for magnetic field sensing and for quantum information,” says Bar-Gill.
Long spin-coherence times add to the advantages diamond already possesses, putting diamond NVs at the forefront of potential candidates for practical quantum computers – a favorite pursuit of the Harvard researchers. What Budker’s group finds an even hotter prospect is the potential for long coherence times in sensing oscillating magnetic fields, with applications ranging from biophysics to defense.
ABSTRACT – Solid-state spin systems such as nitrogen-vacancy colour centres in diamond are promising for applications of quantum information, sensing and metrology. However, a key challenge for such solid-state systems is to realize a spin coherence time that is much longer than the time for quantum spin manipulation protocols. Here we demonstrate an improvement of more than two orders of magnitude in the spin coherence time (T2) of nitrogen-vacancy centres compared with previous measurements: T2≈0.6 s at 77 K. We employed dynamical decoupling pulse sequences to suppress nitrogen-vacancy spin decoherence, and found that T2 is limited to approximately half of the longitudinal spin relaxation time over a wide range of temperatures, which we attribute to phonon-induced decoherence. Our results apply to ensembles of nitrogen-vacancy spins, and thus could advance quantum sensing, enable squeezing and many-body entanglement, and open a path to simulating driven, interaction-dominated quantum many-body Hamiltonians.
Imagine being able to disappear from one place and then reappear in the exact same condition at another location. You could visit your favorite bakery in Paris for breakfast, spend the afternoon on a beach in Thailand, and — why relegate yourself to Earth? — beam yourself up to the moon before going home to your dinner. The idea of teleportation is fascinating and prevalent in science fiction, with Star Trek’s “Beam me up, Scotty” catchphrase immediately coming to mind.
While seemingly unattainable, it is not actually impossible according to the laws of physics … it just depends on scale.
The bizarre properties of subatomic particles
In 1993, a group of six international scientists showed that teleportation is possible in principle on the subatomic level; experiments since then have demonstrated teleportation in systems such as single photons, coherent light fields, nuclear spins, and trapped ions. While perhaps disappointing for the avid traveler, quantum teleportation cannot be applied to matter itself, but it could be revolutionary for transporting information and for building quantum computers; perhaps, according to some experts, even leading to a quantum internet in which the limitations of current networks are overcome with improved privacy, security, and computational capabilities.
Prior to this, scientists believed that perfect teleportation was not possible even on the subatomic scale as it violated the uncertainty principle in quantum mechanics. In simple terms, this principle states that the more accurately an object is measured or observed, the more it is disturbed by the process of measuring it. This means that it would be impossible to make a perfect replica because we cannot accurately measure the original without changing its quantum state.
But in 1993, the team of scientists found a way around this in the form of a paradoxical feature of quantum mechanics called the Einstein-Podolsky-Rosen (EPR) effect. First put forth in a paper in 1935 by Einstein and his post-doctoral researchers, the thought experiment behind EPR describes a phenomenon known as “quantum entanglement”, in which the quantum states of two or more entangled objects, such as a pair of photons, remain correlated even when the objects are separated by great distances. Changing the state of one of these objects simultaneously changes the state of the other even when there is no physical connection. This phenomenon was famously dubbed “spooky action at a distance” by Einstein.
The role of quantum entanglement
Scientists are only beginning to understand the mysteries of entanglement and how it makes quantum teleportation possible. In brief, one could “scan” an object to be teleported and supplement the missing information about that object (that arises as a result of the uncertainty principle) using information from a pair of entangled partners. The process would involve three particles in which one particle “teleports” its state to two distant entangled particles. Scientists call this teleportation in the sense that a particle with a particular set of properties disappears at one location and one with the exact same properties appears somewhere else.
Entanglement provides the means of sending qubits of information — the quantum version of a binary bit; a basic unit of information — without any physical contact. Individual atoms or particles could replace transistors, vastly expanding our computing powers and capabilities. As opposed to binary computers, which operate in basic units of information represented by 1s and 0s, qubits can exist as a “1” or “0” simultaneously through the principle of superposition. This simple attribute allows a quantum computer to store greater amounts of data and quickly reason through complex problems or computing tasks by simultaneously exploring multiple pathways and choosing the most efficient one.
This, coupled with entanglement and a principle called the no-cloning theorem, which forbids the identical copying of unknown quantum states, will forever and completely change the way in which we store, transfer, and encrypt data. Though we are still quite a way away from realizing a true quantum age, researchers are making some impressive strides and getting us ever closer, one experiment at a time.
Entanglement in the “real” world
Perhaps the most memorable occurred in 2017, when a team of Chinese researchers teleported information to the orbiting Micius satellite and back, demonstrating “the first quantum teleportation of independent single-photon qubits from a ground observatory to a low Earth orbit satellite … with a distance up to 1400 km”.
Prior to this, teleportation experiments had only been demonstrated between locations that were limited to a distance on the order of 100 kilometers. However, to realize a global-scale quantum internet, the team proposed exploiting “space-based links” to connect two remote points on the Earth, which would reduce what they called “channel loss” (essentially signal loss) because most of the photons’ propagation path is in empty space. The team also demonstrated advancements in their data transmission abilities at greater distances and with better encryption.
More recently, researchers extended entanglement to electrons by making qubits from individual electrons. This in particular has been challenging compared to using photons — which naturally propagate over large distances — because electrons are confined in space. These types of studies pave the way to explore quantum teleportation in all spin states of matter.
All of this is part of what researchers call “the second quantum revolution”, which follows the initial discovery of the quantum world and its seemingly bizarre principles in the 20th century by key players such as Heisenberg, Schrödinger, and Einstein.
It’s exciting to see this field developing from infancy. Similar to the way in which we scoff at the first room-sized computers unveiled in the 1950s, we may one day look at our current binary computers in the same way. Although we are still far off from this new world built on the mind-bending principles of quantum mechanics, we are on the verge of unlocking incredible new capabilities that will provide endless possibilities. | <urn:uuid:0b72e2c7-43a7-4feb-bce8-01a243180ec6> | CC-MAIN-2022-21 | https://www.advancedsciencenews.com/teleportation-is-possible-it-just-depends-on-scale/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00292.warc.gz | en | 0.937631 | 1,213 | 3.59375 | 4 |
Rice University physicists have created the world’s first laser-cooled neutral plasma, completing a 20-year quest that sets the stage for simulators that re-create exotic states of matter found inside Jupiter and white dwarf stars.
The findings are detailed this week in the journal Science and involve new techniques for laser cooling clouds of rapidly expanding plasma to temperatures about 50 times colder than deep space.
“We don’t know the practical payoff yet, but every time physicists have laser cooled a new kind of thing, it has opened a whole world of possibilities,” said lead scientist Tom Killian, professor of physics and astronomy at Rice. “Nobody predicted that laser cooling atoms and ions would lead to the world’s most accurate clocks or breakthroughs in quantum computing. We do this because it’s a frontier.”
Killian and graduate students Tom Langin and Grant Gorman used 10 lasers of varying wavelengths to create and cool the neutral plasma. They started by vaporizing strontium metal and using one set of intersecting laser beams to trap and cool a puff of strontium atoms about the size of a child’s fingertip. Next, they ionized the ultracold gas with a 10-nanosecond blast from a pulsed laser. By stripping one electron from each atom, the pulse converted the gas to a plasma of ions and electrons.
Energy from the ionizing blast causes the newly formed plasma to expand rapidly and dissipate in less than one thousandth of a second. This week’s key finding is that the expanding ions can be cooled with another set of lasers after the plasma is created. Killian, Langin and Gorman describe their techniques in the new paper, clearing the way for their lab and others to make even colder plasmas that behave in strange, unexplained ways.
Plasma is an electrically conductive mix of electrons and ions. It is one of four fundamental states of matter; but unlike solids, liquids and gases, which are familiar in daily life, plasmas tend to occur in very hot places like the surface of the sun or a lightning bolt. By studying ultracold plasmas, Killian’s team hopes to answer fundamental questions about how matter behaves under extreme conditions of high density and low temperature.
To make its plasmas, the group starts with laser cooling, a method for trapping and slowing particles with intersecting laser beams. The less energy an atom or ion has, the colder it is, and the slower it moves about randomly. Laser cooling was developed in the 1990s to slow atoms until they are almost motionless, or just a few millionths of a degree above absolute zero.
“If an atom or ion is moving, and I have a laser beam opposing its motion, as it scatters photons from the beam it gets momentum kicks that slow it,” Killian said. “The trick is to make sure that light is always scattered from a laser that opposes the particle’s motion. If you do that, the particle slows and slows and slows.”
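To put a rough number on those kicks (a standard estimate using strontium's blue cooling line near $\lambda \approx 461\,\text{nm}$, not a figure from the Rice paper): each scattered photon changes the atom's velocity by

$$\Delta v = \frac{h}{\lambda m} \approx \frac{6.6\times10^{-34}\ \text{J·s}}{(4.6\times10^{-7}\ \text{m})(1.46\times10^{-25}\ \text{kg})} \approx 1\ \text{cm/s},$$

so slowing a room-temperature atom, which moves at hundreds of meters per second, takes tens of thousands of scattering events.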
During a postdoctoral fellowship at the National Institute of Standards and Technology in Bethesda, Md., in 1999, Killian pioneered the ionization method for creating neutral plasma from a laser-cooled gas. When he joined Rice’s faculty the following year, he started a quest for a way to make the plasmas even colder. One motivation was to achieve “strong coupling,” a phenomenon that happens naturally in plasmas only in exotic places like white dwarf stars and the center of Jupiter.
“We can’t study strongly coupled plasmas in places where they naturally occur,” Killian said. “Laser cooling neutral plasmas allows us to make strongly coupled plasmas in a lab, so that we can study their properties.”
“In strongly coupled plasmas, there is more energy in the electrical interactions between particles than in the kinetic energy of their random motion,” Killian said. “We mostly focus on the ions, which feel each other, and rearrange themselves in response to their neighbors’ positions. That’s what strong coupling means.”
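The textbook way to quantify that competition (a standard definition, not specific to this experiment) is the Coulomb coupling parameter

$$\Gamma = \frac{e^2/(4\pi\varepsilon_0\,a)}{k_B T},$$

the ratio of the Coulomb energy between neighboring ions, where $a$ is the typical interparticle spacing, to the thermal energy $k_B T$; values of $\Gamma$ greater than about 1 mark the strongly coupled regime Killian describes.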
Because the ions have positive electric charges, they repel one another through the same force that makes your hair stand up straight if it gets charged with static electricity.
“Strongly coupled ions can’t be near one another, so they try to find equilibrium, an arrangement where the repulsion from all of their neighbors is balanced,” he said. “This can lead to strange phenomena like liquid or even solid plasmas, which are far outside our normal experience.”
In normal, weakly coupled plasmas, these repulsive forces only have a small influence on ion motion because they’re far outweighed by the effects of kinetic energy, or heat.
“Repulsive forces are normally like a whisper at a rock concert,” Killian said. “They’re drowned out by all the kinetic noise in the system.”
In the center of Jupiter or a white dwarf star, however, intense gravity squeezes ions together so closely that repulsive forces, which grow much stronger at shorter distances, win out. Even though the temperature is quite high, ions become strongly coupled.
Killian’s team creates plasmas that are orders of magnitude lower in density than those inside planets or dead stars, but by lowering the temperature they raise the ratio of electric-to-kinetic energies. At temperatures as low as one-tenth of a Kelvin above absolute zero, Killian’s team has seen repulsive forces take over.
“Laser cooling is well developed in gases of neutral atoms, for example, but the challenges are very different in plasmas,” he said.
“We are just at the beginning of exploring the implications of strong coupling in ultracold plasmas,” Killian said. “For example, it changes the way that heat and ions diffuse through the plasma. We can study those processes now. I hope this will improve our models of exotic, strongly coupled astrophysical plasmas, but I am sure we will also make discoveries that we haven’t dreamt of yet. This is the way science works.”
The research was supported by the Air Force Office of Scientific Research and the Department of Energy’s Office of Science. | <urn:uuid:0a9cdc66-f1b9-45ed-882b-314c1728784c> | CC-MAIN-2022-21 | https://www.tunisiesoir.com/tech/tech-physicists-are-first-to-laser-cool-neutral-plasma-report-11661-2019/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662509990.19/warc/CC-MAIN-20220516041337-20220516071337-00292.warc.gz | en | 0.927539 | 1,360 | 3.609375 | 4 |
The values of two inherent properties of one photon – its spin and its orbital angular momentum – have been transferred via quantum teleportation onto another photon for the first time by physicists in China. Previous experiments have managed to teleport a single property, but scaling that up to two properties proved to be a difficult task, which has only now been achieved. The team’s work is a crucial step forward in improving our understanding of the fundamentals of quantum mechanics and the result could also play an important role in the development of quantum communications and quantum computers.
Alice and Bob
Quantum teleportation first appeared in the early 1990s after four researchers, including Charles Bennett of IBM in New York, developed a basic quantum teleportation protocol. To successfully teleport a quantum state, you must make a precise initial measurement of a system, transmit the measurement information to a receiving destination and then reconstruct a perfect copy of the original state. The “no-cloning” theorem of quantum mechanics dictates that it is impossible to make a perfect copy of a quantum particle. But researchers found a way around this via teleportation, which allows a flawless copy of a property of a particle to be made. This occurs thanks to what is ultimately a complete transfer (rather than an actual copy) of the property onto another particle such that the first particle loses all of the properties that are teleported.
The protocol has an observer, Alice, send information about an unknown quantum state (or property) to another observer, Bob, via the exchange of classical information. Both Alice and Bob are first given one half of an additional pair of entangled particles that act as the “quantum channel” via which the teleportation will ultimately take place. Alice would then interact the unknown quantum state with her half of the entangled particle, measure the combined quantum state and send the result through a classical channel to Bob. The act of the measurement itself alters the state of Bob’s half of the entangled pair and this, combined with the result of Alice’s measurement, allows Bob to reconstruct the unknown quantum state. The first experimentation teleportation of the spin (or polarization) of a photon took place in 1997. Since then, the states of atomic spins, coherent light fields, nuclear spins and trapped ions have all been teleported.
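The single-property protocol described above is small enough to check numerically. Below is a plain-NumPy sketch of the textbook spin-only scheme (the amplitudes 0.6 and 0.8 are arbitrary illustrative values; this is not a model of the hyper-entangled two-property experiment discussed later):

import numpy as np

a, b = 0.6, 0.8                                # Alice's unknown state a|0> + b|1>
psi = np.array([a, b], dtype=complex)

pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # channel (|00>+|11>)/sqrt(2)
state = np.kron(psi, pair)      # 3 qubits; index = 4*q0 + 2*q1 + q2 (q2 is Bob's)

s = 1 / np.sqrt(2)              # the four Bell states Alice can find for qubits q0, q1
bell = {"Phi+": [s, 0, 0, s], "Phi-": [s, 0, 0, -s],
        "Psi+": [0, s, s, 0], "Psi-": [0, s, -s, 0]}

def bob_after(outcome):
    # Project Alice's two qubits onto her measured Bell state; what's left is Bob's qubit.
    basis = bell[outcome]
    vec = np.zeros(2, dtype=complex)
    for q0 in (0, 1):
        for q1 in (0, 1):
            for q2 in (0, 1):
                vec[q2] += np.conj(basis[2 * q0 + q1]) * state[4 * q0 + 2 * q1 + q2]
    return vec / np.linalg.norm(vec)

# Alice's two classical bits tell Bob which correction gate to apply.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
fix = {"Phi+": I, "Phi-": Z, "Psi+": X, "Psi-": X @ Z}

for outcome in bell:
    print(outcome, np.round((fix[outcome] @ bob_after(outcome)).real, 3))
    # every outcome recovers (0.6, 0.8), up to an unobservable overall sign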
But any quantum particle has more than one given state or property – they possess various “degrees of freedom”, many of which are related. Even the simple photon has various properties such as frequency, momentum, spin and orbital angular momentum (OAM), which are inherently linked.
More than one
Teleporting more than one state simultaneously is essential to fully describe a quantum particle and achieving this would be a tentative step towards teleporting something larger than a quantum particle, which could be very useful in the exchange of quantum information. Now, Chaoyang Lu and Jian-Wei Pan, along with colleagues at the University of Science and Technology of China in Hefei, have taken the first step in simultaneously teleporting multiple properties of a single photon.
In the experiment, the team teleports the composite quantum states of a single photon encoded in both its spin and OAM. To transfer the two properties requires not only an extra entangled set of particles (the quantum channel), but a “hyper-entangled” set – where the two particles are simultaneously entangled in both their spin and their OAM. The researchers shine a strong ultraviolet pulsed laser on three nonlinear crystals to generate three entangled pairs of photons – one pair is hyper-entangled and is used as the “quantum channel”, a second entangled pair is used to carry out an intermediate “non-destructive” measurement, while the third pair is used to prepare the two-property state of a single photon that will eventually be teleported.
The image above represents Pan’s double-teleportation protocol – A is the single photon whose spin and OAM will eventually be teleported to C (one half of the hyper-entangled quantum channel). This occurs via the other particle in the channel B. As B and C are hyper-entangled, we know that their spin and OAM are strongly correlated, but we do not actually know what their values are – i.e. whether they are horizontally, vertically or orthogonally polarized. So to actually transfer A’s polarization and OAM onto C, the researchers make a “comparative measurements” (referred to as CM-P and CM-OAM in the image) with B. In other words, instead of revealing B’s properties, they detect how A’s polarization and OAM differ from B. If the difference is zero, we can tell that A and B have the same polarization or OAM, and since B and C are correlated, that C now has the same properties that A had before the comparison measurement.
On the other hand, if the comparative measurement showed that A’s polarization as compared with B differed by 90° (i.e. A and B are orthogonally polarized), then we would rotate C’s field by 90° with respect to that of A to make a perfect transfer once more. Simply put, making two comparative measurements, followed by a well-defined rotation of the still-unknown polarization or OAM, would allow us to teleport A’s properties to C.
One of the most challenging steps for the researchers was to link together the two comparative measurements. Referring to the “joint measurements” box in the image above, we begin with the comparative measurement of A and B’s polarization (CM-P). From here, either one of three scenarios can take place – one photon travels along path 1 to the middle box (labelled “non-destructive photon-number measurement”); no photons enter the middle box along path 1; or two single photons enter the middle box along path 1.
The middle box itself contains the second set of entangled photons mentioned previously (not shown in figure) and one of these two entangled photons is jointly measured with the incoming photons from path 1. But the researcher’s condition is that if either no photons or two photons enter the middle box via path 1, then the measurement would fail. Indeed, what the middle box ultimately shows is that exactly one photon existed in path 1, and so exactly one photon existed in path 2, given that two photons (A and B) entered CM-P. To show that indeed one photon existed in path two required the third and final set of entangled photons in the CP-OAM box (not shown), where the OAM’s of A and B undergo a comparative measurement.
The measurements ultimately result in the transfer or teleportation of A’s properties onto C – although it may require rotating C’s (as yet unknown) polarization and OAM depending on the outcomes of the comparative measurements, but the researchers did not actually implement the rotations in their current experiment. The team’s work has been published in the journal Nature this week. Pan tells physicsworld.com that the team verified that “the teleportation works for both spin-orbit product state and hybrid entangled state, achieving an overall fidelity that well exceeds the classical limit”. He says that these “methods can, in principle, be generalized to more [properties], for instance, involving the photon’s momentum, time and frequency”.
Physicist Wolfgang Tittel from the University of Calgary, who was not involved in the current work (but wrote an accompanying “News and Views” article in Nature) explains that the team verified that the teleportation had indeed occurred by measuring the properties of C after the teleportation. “Of course, the no-cloning theorem does not allow them to do this perfectly. But it is possible to repeat the teleportation of the properties of photon A, prepared every time in the same way, many times. Making measurements on photon C (one per repetition) allows reconstructing its properties.” He points out that although the rotations were not ultimately implemented by the researchers, they found that “the properties of C differed from those of A almost exactly by the amount predicted by the outcomes of the comparative measurements. They repeated this large number of measurements for different preparations of A, always finding the properties of C close to those expected. This suffices to claim quantum teleportation”.
While it is technically possible to extend Pan’s method to teleport more than two properties simultaneously, this is increasingly difficult because the probability of a successful comparative measurement decreases with each added property. “I think with the scheme demonstrated by [the researchers], the limit is three properties. But this does not mean that other approaches, either other schemes based on photons, or approaches using other particles (e.g. trapped ions), can’t do better,” says Tittel.
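The scaling argument is easy to see with made-up numbers: if each added property requires one more probabilistic comparative measurement that heralds success with probability p, the overall success rate falls off exponentially. (The value of p below is purely illustrative; the real per-step probabilities depend on the optics.)

```python
# Illustrative only: success rate vs. number of teleported properties,
# assuming one heralded step per property, each succeeding with prob p.
p = 0.5  # assumed per-measurement success probability (not from the paper)
for k in range(1, 5):
    print(f"{k} properties -> overall success ~ {p**k:.3f}")
```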
Pan says that to teleport three properties, their scheme “needs the experimental ability to control 10 photons. So far, our record is eight photon entanglement. We are currently working on two parallel lines to get more photon entanglement.” Indeed, he says that the team’s next goal is to experimentally create “the largest hyper-entangled state so far: a six-photon 18-qubit Schrödinger cat state, entangled in three degrees-of-freedom, polarization, orbital angular momentum, and spatial mode. To do this would provide us with an advanced platform for quantum communication and computation protocols”. | <urn:uuid:ff3a72aa-357a-4720-bbdd-985fb4b1c50d> | CC-MAIN-2022-21 | https://www.quantumactivist.com/quantum-teleportation/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00692.warc.gz | en | 0.933521 | 1,969 | 3.921875 | 4 |
Every high school science course focuses on the fundamental states of matter in the form of gases, liquids, and solids—states that are straightforward to study and manipulate. But there is a fourth state of matter that most people are much less familiar with because it does not exist freely on Earth.
This is plasma—a gas in which electrons have been stripped from atoms. The sun is such a mixture of ions and electrons, and much of interstellar space is filled with plasma. But on Earth, plasmas tend to occur fleetingly—in lightning, for example.
However, in the past 100 years, scientists and engineers have begun exploiting this form of matter to create light (neon lights are plasmas) and to interact with materials in a way that modifies the properties of their surfaces.
Because plasmas are generally hard to make and control, they are often confined to industrial machinery or specialized labs. But an easier way to make and control plasmas could change all that.
Enter Kausik Das of the University of Maryland Eastern Shore, and several colleagues who have found a way to create plasmas in an ordinary kitchen microwave. Their technique opens the way for a new generation to experiment with this exotic form of matter and perhaps to develop new applications.
First, some background. One way to make plasmas is to break apart molecules using powerful electric fields. This creates ions that the electric fields then accelerate, causing them to smash into other molecules. These collisions knock electrons off the atoms, creating more ions.
In the right circumstances, this process triggers a cascade that causes the entire gas to become ionized.
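A toy multiplication model shows why this is an all-or-nothing affair. Treat each “generation” of free electrons as ionizing, on average, k new molecules: with k above 1 the electron population explodes into breakdown; below 1 it fizzles out. (The numbers are illustrative; in reality k depends on pressure, field strength and gas composition.)

```python
# Toy Townsend-style avalanche: each generation of free electrons
# ionizes on average k new molecules. The constant k is illustrative.
def electrons_after(generations, k, seed=1.0):
    n = seed
    for _ in range(generations):
        n *= k  # every electron begets k new ones on average
    return n

for k in (0.8, 1.2):  # below vs. above the breakdown threshold
    print(k, [round(electrons_after(g, k), 2) for g in range(8)])
```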
Das and his colleagues have worked out how to do this in a standard kitchen microwave oven (they don’t identify the brand). They also use a cheap glass flask that can hold a vacuum, along with a seal.
Kitchen microwaves produce electromagnetic radiation with a wavelength of around 12 centimeters. These waves particularly influence polar molecules that have a positive charge at one end and a negative charge at the other.
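That 12-centimeter wavelength corresponds to the standard magnetron frequency, as a one-line check confirms:

```python
c = 3.0e8          # speed of light, m/s
wavelength = 0.12  # meters, typical kitchen microwave
# ~2.5 GHz; household ovens are standardized at 2.45 GHz
print(f"{c / wavelength / 1e9:.2f} GHz")
```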
Water is a good example of a polar molecule. As the alternating field changes, water molecules attempt to align themselves with the field. This rotation causes them to bump into other molecules, thereby raising their temperature.
But if the density of molecules is low, they do not bump into other molecules and so cannot dissipate this extra energy. In that case, the alternating field causes the water molecules to rotate ever faster and eventually rip apart.
That’s the process that triggers the formation of a plasma. Das and company exploit it by sucking air out of their flask to create a low pressure. The low-pressure gas consists mostly of nitrogen and oxygen, but a few water molecules are also inevitably present.
Das's team then places the flask in the microwave and switches it on. The microwaves rip apart the water molecules inside the flask, and the field accelerates the resulting fragments. If the pressure is low enough, they gain enough kinetic energy to knock electrons off nitrogen molecules, and the cascade begins. This creates a plasma that glows with a soft blue light.
But only for a few seconds. Soon the process begins to tear apart oxygen molecules, which creates a purple light. So the plasma changes color.
Das and company observe exactly this color evolution in their experiments, although they had to experiment carefully with the pressure in the flask. Too much gas prevents the water molecules from gaining enough kinetic energy to trigger the cascade. Too little gas means that collisions are less likely, so a plasma is more difficult to form. Das and his colleagues say their goal is to operate at the sweet spot between these regimes.
To get a better idea of what is going on, the team has analyzed the spectrum of light produced by the plasma to reveal the telltale signature of oxygen and nitrogen. And voilà—they have a plasma generated in a kitchen microwave.
That turns out to be useful for a variety of things that are otherwise impossible outside specialized labs. For example, Das and company show how to use the plasma to change the properties of polydimethylsiloxane, or PDMS, a common silicon-based polymer.
This is usually hydrophilic—it attracts water. But bathing the material in the plasma for just a few seconds makes it hydrophobic. This property can be quantified by measuring the contact angle that a drop of water makes with the surface. Before treatment, PDMS has a contact angle of 64 degrees. After treatment, the angle increases to 134 degrees.
This is probably because the various ions in the plasma become embedded in the surface of the material during exposure. Those ions repel water.
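Those before-and-after numbers map onto the standard wettability convention: below 90 degrees a water drop spreads (hydrophilic), above 90 degrees it beads up (hydrophobic). A trivial check:

```python
def wettability(contact_angle_deg):
    # Common convention: below 90 degrees water spreads (hydrophilic),
    # above 90 degrees it beads up (hydrophobic).
    return "hydrophilic" if contact_angle_deg < 90 else "hydrophobic"

for angle in (64, 134):  # PDMS before and after plasma treatment
    print(angle, "->", wettability(angle))
```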
The team goes on to show how to modify surfaces so they can become more adhesive and even change their electronic properties.
That’s interesting work that can be done not just in any lab but in any kitchen. It will certainly be a useful teaching method, but it may also allow home-based makers to experiment with plasma cleaning and etching.
As Das and his colleagues conclude: “These simple techniques of plasma generation and subsequent surface treatment and modification may lead to new opportunities to conduct research not only in advanced labs, but also in undergraduate and even high school research labs.”
Ref: arxiv.org/abs/1807.06784 : Plasma Generation by Household Microwave Oven for Surface Modification and Other Emerging Applications
What is quantum computing and why do we need it
Quantum computing is a type of computing that relies on quantum-mechanical phenomena such as superposition and entanglement. It harnesses the behavior of atoms and subatomic particles to attack problems that we can’t realistically solve with traditional computers. Researchers have demonstrated small-scale versions of such tasks: factoring numbers, searching for new materials, analyzing complex networks and simulating physical phenomena. However, today’s quantum processors must typically be kept at extremely low temperatures and make up only a tiny fraction of the world’s computing hardware, so it is difficult to predict exactly how the technology will reshape everyday life. By learning more about quantum computing and how it works, we can be ready to make practical use of this remarkable technology as it matures.
The first thing to know is that computers, or other forms of computation, can be broken into two parts: hardware and software. Hardware consists of the physical components, while software comprises the rules and instructions that drive those components. More specifically, hardware divides into processors (which perform operations) and memory (which holds data). Software can be broken down further into applications. This article covers only the hardware side of the equation, since that is where quantum computing differs most from the machines we use every day.
Each computer chip today contains billions of transistors that perform logical operations. A modern processor executes billions of these operations per second, yet physical limits, chiefly heat dissipation, cap how much faster transistors can be pushed. More importantly, for tasks like factoring very large numbers, the number of operations required grows explosively with the size of the problem, so even the fastest classical chips cannot finish in any useful amount of time. That, not the speed of any single transistor, is what makes problems such as code-breaking intractable for classical machines.
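To get a rough feel for why factoring resists brute force: naive trial division needs on the order of the square root of n operations, so every two extra decimal digits multiply the work roughly tenfold. A quick illustration (it only counts operations and makes no attempt to factor anything):

```python
import math

# Trial division needs ~sqrt(n) operations, so the work grows
# exponentially with the number of digits in the key.
for digits in (8, 16, 32, 64):
    n = 10 ** digits
    print(f"{digits} digits -> ~{math.isqrt(n):.0e} trial divisions")
```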
How quantum computers work
All matter, including atoms and molecules, can be placed in a superposition state. According to the rules of quantum mechanics, this means that an atom or molecule can exist in a combination of the possible values of a particular observable property, in effect occupying multiple states at the same time. Since classical computers represent information as a binary system (either 1 or 0), the superposition states of atoms and molecules can be used to build a new type of computer. This is the idea behind quantum computers: they run on quantum mechanics and can solve certain problems much faster than classical computers.
In a classical computer, for example, each bit of information is represented by an on or off state (1 or 0). In one proposed quantum architecture, the electrons orbiting an atom’s nucleus encode one set of information while the state of the nucleus encodes another, and each piece of information can exist in multiple states at once. That superposition is what lets a quantum computer perform certain computations that outpace any classical machine.
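The bookkeeping behind that claim is simple to sketch: a qubit is a pair of complex amplitudes whose squared magnitudes give the measurement odds, and describing n qubits takes 2^n amplitudes, which is where the exponential advantage (and the difficulty of simulating quantum machines classically) comes from.

```python
import numpy as np

# A qubit is a pair of complex amplitudes; squared magnitudes give
# the odds of reading out 0 or 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition
print(np.abs(qubit) ** 2)                             # [0.5 0.5]

# Describing n qubits takes 2**n amplitudes, which quickly overwhelms
# any classical memory as n grows.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes")
```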
The benefits of quantum computing
a. More powerful: A register of n qubits in superposition spans 2^n basis states, so even a modest number of qubits can represent far more information than a classical register of the same size.
b. More efficient: For specific problems, such as factoring, searching and simulating quantum systems, quantum algorithms need dramatically fewer steps than the best known classical methods, making quantum computers far more efficient at those tasks.
c. More secure: An unknown quantum state cannot be copied (the no-cloning theorem), and measuring it disturbs it, so any eavesdropping on quantum-encoded information leaves a detectable trace. This is the basis of quantum key distribution, which could underpin large-scale encryption and key-management systems.
d. Tamper-evident: For the same reason, information stored in quantum superpositions cannot be read or altered without the tampering being revealed across the system.
e. Potentially robust: Although raw qubits are fragile, quantum error-correction schemes can in principle detect and undo errors faster than they accumulate, allowing reliable computers to be built from imperfect parts.
Challenges in developing quantum computers
a. Unstable: Quantum computers must be kept at very low temperatures in order to function properly, which limits their practical use. In addition, quantum computers cannot be easily integrated into current computer systems because they operate on completely different principles.
b. Expensive: Large amounts of resources are required to build and maintain quantum computers. For example, some materials used in their development can cost thousands of dollars per gram, restricting their use to wealthy institutions and countries.
c. Difficult to interface with classical computers: Once quantum computing matures, scientists must come up with ways to connect existing computers with these new devices in order for them to work optimally together.
d. Limited to solving only specific problems: Quantum computers are very specialized and are currently limited to certain types of problems. For example, quantum computing is not ideal for tasks that require quickly processing large amounts of ordinary data.
Approaches being explored for building quantum computers include:
a. Coulomb-blockade technology
b. Superconducting circuits
c. Ion traps
d. Chemical reactions
e. Spin glass
f. Biological quantum computing
Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory. Rosenblatt called his device a perceptron, and the New York Times reported that it was “the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”
Those claims turned out to be somewhat overblown. But the device kick-started a field of research that still has huge potential today.
A perceptron is a single-layer neural network. The deep-learning networks that have generated so much interest in recent years are direct descendants. Although Rosenblatt’s device never achieved its overhyped potential, there is great hope that one of its descendants might.
Today, there is another information processing revolution in its infancy: quantum computing. And that raises an interesting question: is it possible to implement a perceptron on a quantum computer, and if so, how powerful can it be?
Today we get an answer of sorts thanks to the work of Francesco Tacchino and colleagues at the University of Pavia in Italy. These guys have built the world’s first perceptron implemented on a quantum computer and then put it through its paces on some simple image processing tasks.
In its simplest form, a perceptron takes a vector input—a set of numbers—and multiplies it by a weighting vector to produce a single-number output. If this number is above a certain threshold the output is 1, and if it is below the threshold the output is 0.
That has some useful applications. Imagine a pixel array that produces a set of light intensity levels—one for each pixel—when imaging a particular pattern. When this set of numbers is fed into a perceptron, it produces a 1 or 0 output. The goal is to adjust the weighting vector and threshold so that the output is 1 when it sees, say, a cat, and 0 in all other cases.
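In code, that whole machine is a dot product and a threshold. A minimal classical version (the pixel values, weights and threshold below are placeholders; real ones come from training):

```python
import numpy as np

def perceptron(x, w, threshold=0.0):
    # Weighted sum of the inputs, then a hard 0/1 decision
    return int(np.dot(x, w) >= threshold)

pixels = np.array([0.9, 0.1, 0.8, 0.2])    # made-up light intensities
weights = np.array([1.0, -1.0, 1.0, -1.0])  # made-up weighting vector
print(perceptron(pixels, weights))          # -> 1
```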
Tacchino and co have repeated Rosenblatt’s early work on a quantum computer. The technology that makes this possible is IBM’s Q-5 “Tenerife” superconducting quantum processor. This is a quantum computer capable of processing five qubits and programmable over the web by anyone who can write a quantum algorithm.
Tacchino and co have created an algorithm that takes a classical vector (like an image) as an input, combines it with a quantum weighting vector, and then produces a 0 or 1 output.
The big advantage of quantum computing is that it allows an exponential increase in the number of dimensions it can process. While a classical perceptron can process an input of N dimensions, a quantum perceptron built from N qubits can process 2^N dimensions.
Tacchino and co demonstrate this on IBM’s Q-5 processor. Because of the small number of qubits, the processor can handle N = 2. This is equivalent to a 2x2 black-and-white image. The researchers then ask: does this image contain horizontal or vertical lines, or a checkerboard pattern?
It turns out that the quantum perceptron can easily classify the patterns in these simple images. “We show that this quantum model of a perceptron can be used as an elementary nonlinear classifier of simple patterns,” say Tacchino and co.
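You can mimic the 2x2 task classically. The quantum circuit effectively computes the squared overlap between the input pattern and the weight vector, so a faithful toy version scores patterns by their normalized inner product. (The patterns and weights below are illustrative, not the paper’s exact encodings.)

```python
import numpy as np

# 2x2 patterns flattened to length-4 vectors (+1 = white, -1 = black)
patterns = {
    "horizontal": np.array([+1, +1, -1, -1]),
    "vertical":   np.array([+1, -1, +1, -1]),
    "checker":    np.array([+1, -1, -1, +1]),
}

# Weight vector tuned to "fire" on the checkerboard
w = np.array([+1, -1, -1, +1])

for name, x in patterns.items():
    # The quantum version effectively measures |<w|x>|^2, so we mimic
    # that with the squared normalized inner product.
    score = (x @ w) ** 2 / (np.linalg.norm(x) * np.linalg.norm(w)) ** 2
    print(f"{name:10s} score={score:.2f} -> {int(score > 0.5)}")
```

The checkerboard scores 1 and both line patterns score 0, which is exactly the kind of elementary nonlinear classification the team demonstrated on the five-qubit chip.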
They go on to show how it could be used in more complex patterns, albeit in a way that is limited by the number of qubits the quantum processor can handle.
That’s interesting work with significant potential. Rosenblatt and others soon discovered that a single perceptron can only classify very simple images, like straight lines. However, other scientists found that combining perceptrons into layers has much more potential. Various other advances and tweaks have led to machines that can recognize objects and faces as accurately as humans can, and even thrash the best human players of chess and Go.
Tacchino and co’s quantum perceptron is at a similarly early stage of evolution. Future goals will be to encode the equivalent of gray-scale images and to combine quantum perceptrons into many-layered networks.
This group’s work has that potential. “Our procedure is fully general and could be implemented and run on any platform capable of performing universal quantum computation,” they say.
Of course, the limiting factor is the availability of more powerful quantum processors capable of handling larger numbers of qubits. But most quantum researchers agree that this kind of capability is close.
Indeed, since Tacchino and co did their work, IBM has already made a 16-qubit quantum processor available via the web. It’s only a matter of time before quantum perceptrons become much more powerful.
Ref: arxiv.org/abs/1811.02266 : An Artificial Neuron Implemented on an Actual Quantum Processor