Could cloud quantum computing be possible? Google wants to make it happen, although some doubt it will happen anytime soon.

Potential benefits of quantum computing

Quantum computing could be a large step above the computers we typically use today. Quantum computers are made up of quantum bits, or qubits, which process information not only as ones or zeros but also as any state in between. This mechanism lets them attack problems far more quickly than was previously possible. Unfortunately, technology hasn’t given us a fully functional, widely available quantum computer yet. Its future potential is real, though: according to Jerry Chow, a member of IBM’s experimental quantum computing department, “at 50 qubits, universal quantum computing would reach that inflection point and be able to solve problems existing computers can’t handle.” That future might be closer than we think. IBM plans to construct and distribute this 50-qubit system within the next few years, while Google projects that it will complete a 49-qubit system by the end of this year.

One real-life use of quantum computing is pharmaceutical science. Right now, understanding how molecular structures bond together is a struggle: it takes complex computer simulations to model the atomic and subatomic motion involved in creating new drugs. Solving this could result in cheaper and better drugs. Scott Crowder from IBM explained that “you don’t even ask those questions on a classical computer because you know you’re going to get it wrong.” Once quantum computing hits its prime, though, medicine could potentially be made much more quickly and at much lower prices.

Another problem quantum computing could solve is one you wouldn’t expect: fertilizer production, according to Jarrod McClean, a computing sciences fellow at Lawrence Berkeley National Laboratory. Just making “mass-produced fertilizer accounts for one percent to two percent of the world’s energy use per year.” A much more energy-efficient option exists in principle. The problem, according to McClean, is that “it’s been too challenging for classical systems to date” to help researchers create this energy-efficient option in the lab, but he has high hopes that quantum computers will be able to accomplish it in the near future.

The application doesn’t have to be revolutionary to be helpful, either. These computers could help organize delivery routes, especially during particularly busy times like Christmas, by coordinating thousands of self-driving cars (assuming they will be commonly used in the future). Quantum computers could also improve translation software and serve other small but productive uses. The potential of quantum computing is almost endless, from finance to energy. It is beginning to become available to certain people right now and is expected to become more mainstream soon (although there are debates about exactly when). But can Google bring it to the cloud?

What is Google doing?

While Google has been working on quantum computing for years, it has only recently started looking at how to turn it into a business. In fact, Google has already started offering “science labs and artificial intelligence researchers early access to its quantum machines over the Internet in recent months.” Its motivation for giving this early access, according to Bloomberg, is that the researchers will build more tools to go with this technology, helping to make its cloud quantum computing service as fast and powerful as possible.
Google is also involved with ProjectQ, “an open-source effort to get developers to write code for quantum computers.” According to a quantum computing researcher at Stanford University, Google is not trying to hide that “they’re building quantum hardware and they would, at some point in the future, make it a cloud service.” Additionally, according to scientist Jonathan DuBois at Lawrence Livermore National Laboratory, Google “pledged that government and academic researchers would get free access.”

While there’s still quite a bit of debate about when quantum computers will actually be usable, Google’s efforts could skyrocket it to the top of the ongoing cloud wars. If what many companies are predicting comes true, processing tasks could become millions of times faster. Offering cloud quantum computing is a smart business decision, considering that these computers are very large and hard to maintain, so very few companies could host one themselves. As of right now, Google rents computing by the minute, so if its quantum computers can cut compute time by such a large percentage, it would have a huge price advantage over the competition. Google’s cloud compute prices are currently higher than Amazon’s and Microsoft’s for most instances.

We may be getting ahead of ourselves, though. Seth Lloyd, a professor at the Massachusetts Institute of Technology, argued that useful applications won’t arrive until a system has at least 100 qubits, although other researchers and organizations seem to disagree. When Google announced its quantum computing efforts back in 2014, it claimed that it would prove its “supremacy” by performing equal to or better than supercomputers by the end of 2017. Of course, Google isn’t the only one going after quantum computers. IBM already offers access to its specialized quantum computing platform and plans to create a 50-qubit quantum system within the next five years. This past May, it added a 17-qubit prototype quantum processor to its service as well, although it’s still in its experimental phase.

The future of cloud quantum computing

Chad Rigetti, founder of Rigetti Computing, which has netted over $69 million from investors for quantum computing software and equipment, believes that quantum computing will become as popular as AI is now, although he isn’t sure exactly when. “This industry is very much in its infancy,” Rigetti said. “No one has built a quantum computer that works.” Hopefully, the future of cloud quantum computing will arrive sooner rather than later. Scientists believe that its applications are almost endless, from “improving the work of solar panels, creating medicines, and even fertilizers.” With numerous applications, faster speeds, and potentially lower prices, cloud quantum computing could revolutionize technology.
We live in a time when the phrase “artificial intelligence” (AI for short) is trendy and appears in the marketing descriptions of many products and services. But what precisely is AI?

Broadly speaking, AI originated as an idea to create artificial “thinking” along the lines of the human brain. As of today, however, we can only make assumptions about how the human brain works, based primarily on medical research and observation. From a medical point of view, we know that the brain looks like a complex network of connections in which neurons are the main element, and that our thoughts, memory, and creativity are a flow of electrical impulses. This knowledge has given hope of constructing an analogous brain in an electronic version, either hardware or software, where neurons are replaced by electronics or software. However, since we are not 100% sure exactly how the brain works, all current AI models are mathematical approximations and simplifications, each serving only certain specific uses. Nevertheless, we know from observation that it is possible to create solutions that mimic the mind quite well – they can recognize handwriting, images (objects), music, and emotions, and even create art based on previously acquired experience. The results of the latter, admittedly, are sometimes controversial.

How else does AI resemble the human brain? Well… it has to learn! AI solutions differ from classical algorithms in one fundamental way: the initial product is a philosophical “tabula rasa”, or “blank slate”, which must first be taught. In complex living organisms, knowledge emerges with development: the ability to speak, to move independently, to name objects – and in humans and some animal species, learning is organized in kindergartens, schools, and universities, and continues during work and independent development. It is analogous in most artificial intelligence solutions: the AI model must first receive specific knowledge, most often in the form of examples, to be able to function effectively later as an “adult” algorithm. Some solutions learn once, while others improve their knowledge while functioning (online learning or reinforcement learning). This vividly resembles human society: some people finish their education and work for the rest of their lives in one company doing one task, while others have to train throughout their lives as their work environment changes dynamically.

Is AI already “smarter” than humans?

As an interesting aside, we can compare the “computing power” of the brain with the computing power of computers. This will of course be a simplification, because the two are quite different in nature. First, how many neurons does the average human brain have? It was initially estimated at around 100 billion neurons. However, according to recent research (https://www.verywellmind.com/how-many-neurons-are-in-the-brain-2794889), the number of neurons in the “average” human brain is “slightly” less – by about 14 billion – or 86 billion neuronal cells. For comparison, the brain of a fruit fly contains about 100,000 neurons, a mouse’s 75 million, a cat’s 250 million, and a chimpanzee’s 7 billion. An interesting case is the elephant’s brain (much larger than a human’s in physical size), which has… 257 billion neurons – definitely more than the brain of a human.
From medical research, we know that each neuron has about 1,000 connections (so-called synapses) with neighboring neurons, so in the case of humans the total number of connections is around 86 trillion (86 billion neurons × 1,000 connections). In simplified terms, we can assume that each synapse performs one “operation”, analogous to one instruction in a processor. At what speed does the brain work? All told… not very fast. We can estimate it from BCI (brain-computer interface) devices, which appeared not so long ago as a result of the development of medical devices for electroencephalography (EEG), such as the headsets produced by Emotiv, thanks to which we can control a computer using brain waves. Of course, they do not integrate directly with the cerebral cortex but measure activity by analyzing electrical signals. On this basis, we can say that the brain works at a variable speed (analogous to a processor’s turbo mode), between 0.5 Hz for the so-called delta state (complete rest) and about 100 Hz for the gamma state (stress, full tension). Thus, we can estimate the maximum computational power of the brain at 8.6 quadrillion operations per second (8.6×10^15), or 8.6 petaflops! Despite the brain’s relatively slow clock rate, this is a colossal number, thanks to the parallelization of operations. From Wikipedia (https://en.wikipedia.org/wiki/Supercomputer), we learn that supercomputers did not break this limit until the first decade of the 21st century. The situation will change with the advent of quantum computers, which inherently work in parallel, just like the human brain. However, as of today, quantum computing technology is still in its infancy.

In conclusion, AI has not yet overtaken the human brain, but it probably will someday. However, we are only talking about learning speed here, leaving aside the whole issue of creativity, “coming up with” ideas, emotions, etc.

AI and mobile devices

Artificial intelligence applications require substantial computational power, especially at the so-called learning stage, and pose a significant challenge when integrating them with AR and VR solutions. Unfortunately, AR and VR devices mostly have very limited resources, as they are effectively ARM-based mobile platforms comparable in performance to smartphones. As a result, most artificial intelligence models are so computationally (mathematically) complex that they cannot be trained directly on mobile devices. OK – you can, but it would take an unacceptably long time. So in most cases, to train models we use powerful PCs (clusters) and GPU accelerators, mainly Nvidia CUDA. This knowledge is then “exported” into a simplified model “implanted” into AR and VR software or mobile hardware. In our next blog post, you’ll learn how we integrated AI into VR and AR, how we dealt with the limited performance of mobile devices, and what we use AI for in AR and VR.
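As a quick sanity check of the arithmetic above, here is a back-of-envelope calculation in Python using the article’s own figures. These are assumed round numbers for illustration, not measurements:

```python
# Back-of-envelope estimate of the brain's "compute", using the article's
# figures. All values are assumptions, not measurements.
neurons = 86e9               # ~86 billion neurons in an average human brain
synapses_per_neuron = 1_000  # ~1,000 connections per neuron
peak_rate_hz = 100           # gamma-state rate, used as a "clock speed"

synapses = neurons * synapses_per_neuron   # ~8.6e13 connections
ops_per_sec = synapses * peak_rate_hz      # one "operation" per synapse per cycle

print(f"{ops_per_sec:.1e} ops/s = {ops_per_sec / 1e15:.1f} petaflops")
# -> 8.6e+15 ops/s = 8.6 petaflops
```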
Many protocols like SSH, OpenPGP, S/MIME, and SSL/TLS rely on RSA encryption, where access to data is secured with two keys. The encryption key is public and differs from the decryption key, which is kept secret. The cryptosystem’s reliability exploits the fact that factoring the product of two large primes takes years even for today’s fastest supercomputers, so protocols based on RSA have proven paramount to everything from processing payments to storing classified intelligence. RSA, however, might become obsolete soon as quantum computing systems become more stable and efficient. Using only five atoms, a team of international researchers showed how to run the key factoring algorithm, albeit on a trivially small number chosen for demonstration purposes. The researchers say there aren’t any physical restrictions that might hinder scalability: theoretically, more atoms could be added to the process and large numbers could be factored at lightning speed. That doesn’t make the engineering challenges easy, though.

RSA was first described in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman of the Massachusetts Institute of Technology. In this asymmetric cryptography, two different but mathematically linked keys, one public and one private, are used to encrypt and decrypt a message. The public key, which anyone can see and use to encrypt a message, is based on the product of two large primes plus an auxiliary exponent. Multiplying two large primes together is easy, but determining the original primes from the product alone, with no other information, is very difficult.

In 1994, Peter Shor, the Morss Professor of Applied Mathematics at MIT, came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. To actually run the algorithm, though, a quantum computer would require many qubits, or quantum bits. In conventional computers, operations that transform inputs into outputs work with bits, which can be 0s or 1s. Qubits are atomic-scale units that can be 0 and 1 at the same time – a state known as a superposition. What this means is that a quantum computer can essentially carry out two calculations in parallel. A system that works with qubits can be not twice but millions of times faster than a conventional computer.

Previously, scientists designed quantum computers that could factor the number 15 (primes 3 and 5), but these couldn’t be scaled to factor larger numbers. “The difficulty is to implement [the algorithm] in a system that’s sufficiently isolated that it can stay quantum mechanical for long enough that you can actually have a chance to do the whole algorithm,” says Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT. Chuang and colleagues at MIT and the University of Innsbruck in Austria claim they not only found a way to make a quantum system scalable, but also made it more efficient. Typically, it took 12 qubits to factor the number 15; the researchers factored the same number using only five qubits, or atoms. These five atoms are held in an ion trap, which removes an electron from each atom, thereby charging it. The system is stabilized by holding the atoms in place with a magnetic field. Logic-gate operations are performed using laser pulses on four of the atoms, while the fifth is used to store or extract results. Using the fifth atom to store information was the brilliant part. “Measuring a qubit knocks it out of superposition and thereby destroys the information it holds.
Restricting the measurement step to the fifth ion kept the four involved in the computation from being corrupted,” wrote Amy Nordrum in an article for IEEE. The number 15, albeit trivial to solve, is the smallest that can meaningfully demonstrate Shor’s algorithm. A working system developed at the University of Innsbruck factored the number with a confidence exceeding 99 percent, as reported in the journal Science. “In future generations, we foresee it being straightforwardly scalable, once the apparatus can trap more atoms and more laser beams can control the pulses,” Chuang says. “We see no physical reason why that is not going to be in the cards.” To decrypt a typical 1024-bit key, the same system would need thousands of qubits or simultaneous laser pulses. This is doable, but highly challenging, and it might take a long time before a quantum computer can break RSA. Moreover, many researchers are already aware of the limitations of current cryptosystems and are preparing for the future with “quantum-resistant public-key algorithms”. “Continued advances in quantum computing will draw broad attention to the threat it represents to all of today’s widely used public-key cryptosystems – the cryptography that underlies electronic commerce and secure communications on the Internet. The security community will begin planning the migration to new ‘quantum-resistant’ public-key cryptosystems for which quantum computers provide no computational advantage,” said Brian LaMacchia, Director, Security & Cryptography, Microsoft Research.
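To make the link between factoring and key recovery concrete, here is a minimal sketch of textbook RSA in Python. The primes p = 61 and q = 53 are classic teaching values, far too small for real use – which is exactly the point: trial division stands in here for Shor’s algorithm, and against a real 1024-bit modulus only a quantum attack of the kind described above would be feasible:

```python
# Toy RSA with tiny textbook primes (illustrative only, not secure).
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # 2753: private exponent (Python 3.8+ modular inverse)

m = 65                     # message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (n, e)
assert pow(c, d, n) == m   # decrypt with the private key d

# An attacker who can factor n recovers the private key immediately.
f = next(f for f in range(2, n) if n % f == 0)   # trial division "attack"
p2, q2 = f, n // f
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert d2 == d             # private key rebuilt from public information alone
```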
The original University of Chicago “uchicago news” article by Louise Lerner can be read here. Researchers used the U.S. Department of Energy’s Advanced Photon Source (APS) to help them invent an innovative way for different types of quantum technology to “talk” to each other using sound. The study, published in Nature Physics, is an important step in bringing quantum technology closer to reality.

Scientists are eyeing quantum systems, which tap the quirky behavior of the smallest particles, as the key to a fundamentally new generation of atomic-scale electronics for computation and communication. But a persistent challenge has been transferring information between different types of technology, such as quantum memories and quantum processors. “We approached this question by asking: Can we manipulate and connect quantum states of matter with sound waves?” said senior study author David Awschalom, the Liew Family Professor with the Institute for Molecular Engineering and senior scientist at Argonne National Laboratory.

One way to run a quantum computing operation is to use “spins” – a property of an electron that can be up, down or both. Scientists can use these like zeroes and ones in today’s binary computer programming language. But getting this information elsewhere requires a translator, and scientists thought sound waves could help. “The object is to couple the sound waves with the spins of electrons in the material,” said graduate student Samuel Whiteley, the co-first author on the Nature Physics paper. “But the first challenge is to get the spins to pay attention.” So they built a system with curved electrodes to concentrate the sound waves, like using a magnifying lens to focus a point of light.

The results were promising, but the researchers from The University of Chicago and Argonne National Laboratory needed more data. To get a better look at what was happening, they worked with scientists at the Center for Nanoscale Materials (CNM) at Argonne to observe the system in real time. Essentially, they used extremely bright, powerful x-rays from the CNM/X-ray Science Division 26-ID-C x-ray beamline at the Advanced Photon Source as a microscope to peer at the atoms inside the material as the sound waves moved through it at nearly 7,000 kilometers per second. (Both the CNM and the APS are Office of Science user facilities at Argonne.) “This new method allows us to observe the atomic dynamics and structure in quantum materials at extremely small length scales,” said Awschalom. “This is one of only a few locations worldwide with the instrumentation to directly watch atoms move in a lattice as sound waves pass through them.”

One of the many surprising results, the researchers said, was that the quantum effects of sound waves were more complicated than they’d first imagined. To build a comprehensive theory behind what they were observing at the subatomic level, they turned to Prof. Giulia Galli, the Liew Family Professor at the IME and a senior scientist at Argonne. Modeling the system involves marshalling the interactions of every single particle in the system, which grows exponentially, Awschalom said, “but Professor Galli is a world expert in taking this kind of challenging problem and interpreting the underlying physics, which allowed us to further improve the system.” It’s normally difficult to send quantum information for more than a few microns, said Whiteley – that’s the width of a single strand of spider silk. This technique could extend control across an entire chip or wafer.
“The results gave us new ways to control our systems, and opens venues of research and technological applications such as quantum sensing,” said postdoctoral researcher Gary Wolfowicz, the other co-first author of the study. The discovery is another from the University of Chicago’s world-leading program in quantum information science and engineering; Awschalom is currently leading a project to build a quantum “teleportation” network between Argonne and Fermi National Accelerator Laboratory to test principles for a potentially un-hackable communications system. The scientists pointed to the confluence of expertise, resources and facilities at the University of Chicago, Institute for Molecular Engineering and Argonne as key to fully exploring the technology. “No one group has the ability to explore these complex quantum systems and solve this class of problems; it takes state-of-the-art facilities, theorists and experimentalists working in close collaboration,” Awschalom said. “The strong connection between Argonne and the University of Chicago enables our students to address some of the most challenging questions in this rapidly moving area of science and technology.” See: Samuel J. Whiteley1, Gary Wolfowicz1,2, Christopher P. Anderson1, Alexandre Bourassa1, He Ma1, Meng Ye1, Gerwin Koolstra1, Kevin J. Satzinger1,3, Martin V. Holt4, F. Joseph Heremans1,4, Andrew N. Cleland1,4, David I. Schuster1, Giulia Galli1,4, and David D. Awschalom1,4*, “Spin–phonon interactions in silicon carbide addressed by Gaussian acoustics,” Nat. Phys., published on line 11 February 2019. DOI: 10.1038/s41567-019-0420-0 Author affiliations: 1The University of Chicago, 2Tohoku University, 3University of California, Santa Barbara, 4Argonne National Laboratory The devices and experiments were supported by the Air Force Office of Scientific Research; material for this work was supported by the U.S. Department of Energy (DOE). Use of the Center for Nanoscale Materials, an Office of Science user facility, was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. S.J.W. and K.J.S. were supported by the NSF GRFP, C.P.A. was supported by the Department of Defense through the NDSEG Program, and M.V.H., F.J.H., A.N.C., G.G. and D.D.A. were supported by the U.S. DOE Office of Science-Basic Energy Sciences. This work made use of the UChicago MRSEC (NSF DMR-1420709) and Pritzker Nanofabrication Facility, which receives support from the SHyNE, a node of the NSF’s National Nanotechnology Coordinated Infrastructure (NSF ECCS-1542205). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science. The U.S. 
Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.
The 2022 Nobel Prize for Physics was awarded to experimental physicists Alain Aspect, John Clauser, and Anton Zeilinger. The three pioneers conducted groundbreaking research using entangled quantum particles – subatomic particles that behave as if they are linked even when there is nothing between them – a phenomenon Albert Einstein famously called “spooky action at a distance”. “Quantum information science is a vibrant and rapidly developing field,” said Eva Olsson, a member of the Nobel Committee for Physics. “It has broad potential implications in areas such as secure information transfer, quantum computing, and sensing technology.”

But it wasn’t always like this. In fact, quantum physics itself was a fiercely debated field. In the 1930s, one of the fiercest clashes in physics history erupted between Albert Einstein on one hand and the pioneers of quantum mechanics, notably Niels Bohr, on the other (both Nobel laureates themselves). Einstein believed that everything had to be concrete and knowable at a fundamental level, whereas the pioneers of quantum mechanics argued that reality can be uncertain and that particles don’t possess certain properties until they are measured. John Clauser initially thought Einstein was right, and in the 1970s he set out clever experiments to settle the debate. But it didn’t go as planned: in fact, his experiments disproved Einstein and laid the groundwork for a deeper understanding of quantum mechanics – and in particular, quantum entanglement.

Quantum entanglement really is a bizarre process. It is a phenomenon in which particles (most commonly photons) are linked together in a way that persists no matter how far apart they are in space, so that their states cannot be described independently of each other. Physical properties such as position, momentum, spin, and polarization can be perfectly correlated between entangled particles even when they are miles away from each other. Basically, you can study one of the entangled particles and gain information about its linked partners as well – a phenomenon that has no equivalent in classical mechanics. “I would not call entanglement ‘one,’ but rather ‘the’ trait of quantum mechanics,” Thors Hans Hansson, a member of the Nobel Committee, quoted Schrödinger as writing in 1935. “The experiments performed by Clauser and Aspect opened the eyes of the physics community to the depth of Schrödinger’s statement, and provided tools for creating and manipulating and measuring states of particles that are entangled although they are far away.”

Einstein (and many other physicists) suspected that if the particles are linked, then there must be some “hidden variables” connecting them, something that would tie them together. Instead, experimental research by the three laureates showed that the entanglement is genuine and not owed to other factors. Ironically, Clauser, who now runs his own company in California, recalls how his advisor thought this field of research was a “waste of time”, advised him to focus on something else, and warned him against “ruining” his career. As it turns out, the very opposite happened. The trio’s experiments were also previously awarded the Wolf Prize, sometimes considered a precursor to the Nobel Prize. In fact, the three had been considered “favorites” for a Nobel Prize for a decade.
However, Zeilinger, who is currently a professor of physics at the University of Vienna, was eager to point out that the three did not work alone, and he dedicated the prize to the young people who helped do the work. “This prize is an encouragement to young people,” said Zeilinger. “It would not be possible without more than 100 young people who worked with me over the years.” Zeilinger also gave some advice to young researchers, echoing the thoughts of Dennis Sullivan, the 2022 Abel Prize laureate (in mathematics): “Do what you find interesting, and don’t care too much about possible applications.”

That said, the trio’s Nobel Prize also recognized the applications of their experiments. While the field of quantum mechanics seems ethereal and removed from everyday life, researchers are increasingly finding applications for this technology. For starters, the quantum computers that hold so much promise for solving complex problems are based on quantum processes studied by the three physicists. Another application is quantum communications, a technology whose security promises to be nigh-unbreakable. For instance, a research group from China managed to beam entangled pairs of photons up to a satellite, proving that entanglement can survive trips of over 1,000 kilometers – that group was spearheaded by one of Zeilinger’s former students. This type of quantum voyage paves the way for securing messages with a “quantum key” that is destroyed any time someone attempts to eavesdrop and intercept the information – essentially unbreakable cryptography. However, while the field is growing rapidly and has a lot of potential, there is much we still don’t know about entanglement. In theory, everything could be entangled, but in practice the process seems chaotic and random, and the largest experiments have entangled around a dozen photons. Another project has entangled around a thousand atoms with a single photon.

In 2021, the Nobel Prize for Physics was awarded to three researchers who study complex systems that are particularly important for climate science. Earlier this week, the Nobel committee awarded the Physiology or Medicine prize to Svante Pääbo for his many contributions “concerning the genomes of extinct hominins and human evolution.” All Nobel Prizes come with a cash reward worth 10 million Swedish krona ($920,000); if there are multiple laureates, the reward is shared.
The laws of physics, among the greatest discoveries of humankind, have emerged over many centuries in a process often influenced by the prominent thinkers of the time. This process has had a profound influence on the evolution of science and gives the impression that some laws could not have been discovered without the knowledge of earlier ages. Quantum mechanics, for example, is built on classical mechanics using various mathematical ideas that were prominent at the time. But perhaps there is another way of discovering the laws of physics that does not depend on the understanding we have already gained about the universe. Today Raban Iten, Tony Metger, and colleagues at ETH Zurich in Switzerland say they have developed just such a method and used it to discover laws of physics in an entirely novel way. And they say it may be possible to use this method to find wholly new formulations of physical laws.

First, some background. The laws of physics are simple representations that can be interrogated to provide information about more complex scenarios. Imagine setting a pendulum in motion and asking where the bob of the pendulum will be at some point in the future. One way to answer this is by measuring the position of the pendulum as it swings. This data can then be used as a kind of look-up table to find the answer. But the laws of motion provide a much easier way of discovering the answer: simply plug values for the various variables into the appropriate equation. That gives the correct answer too. That’s why the equation can be thought of as a compressed representation of reality.

This immediately suggests how neural networks might find these laws. Given some observations from an experiment – a swinging pendulum, for example – the goal is to find some simpler representation of this data. The idea from Iten, Metger, and co is to feed this data into the machine so that it learns how to make an accurate prediction of the position. Once the machine has learned this, it can then predict the position from any initial set of conditions. In other words, it has learned the relevant law of physics.

To find out whether this works, the researchers feed data from a swinging-pendulum experiment into a neural network they call SciNet. They go on to repeat this for experiments that include the collision of two balls, the results of a quantum measurement on a qubit, and even the positions of the planets and sun in the night sky. The results make for interesting reading. Using the pendulum data, SciNet is able to predict the future frequency of the pendulum with an error of less than 2 percent. What’s more, Iten, Metger, and co are able to interrogate SciNet to see how it arrives at the answer. This doesn’t reveal the precise equation, unfortunately, but it does show that the network uses only two variables to come up with the solution. That’s exactly the same number as in the relevant laws of motion.

But that isn’t all. SciNet also provides accurate predictions of the angular momentum of two balls after they have collided. That’s only possible using the conservation of momentum, a version of which SciNet appears to have discovered. It also predicts the measurement probabilities when a qubit is interrogated, clearly using some representation of the quantum world. Perhaps most impressive is that the network learns to predict the future position of Mars and the sun using the initial position as seen from Earth.
That’s only possible using a heliocentric model of the solar system, an idea that humans took centuries to hit on. And indeed, an interrogation of SciNet suggests it has learned just such a heliocentric representation. “SciNet stores the angles of the Earth and Mars as seen from the Sun in the two latent neurons – that is, it recovers the heliocentric model of the solar system,” say the researchers.

That’s impressive work, but it needs to be placed in perspective. This may be the first demonstration that an artificial neural network can compress data in a way that reveals aspects of the laws of physics. But it is not the first time that a computational approach has derived these laws. A few years ago, computer scientists at Cornell University used a genetic algorithm that exploits the process of evolution to derive a number of laws of physics from experimental data. These included conservation laws for energy and momentum. The system even spat out the equation itself, not just a hint about how it was calculating, as SciNet does. Clearly, evolutionary algorithms have the upper hand in the process of discovering the laws of physics from raw experimental data. (Given that evolution is the process that produced biological neural networks in the first place, it is arguable that it will forever be the more powerful approach.)

There is an interesting corollary to all this. It has taken humanity centuries to discover the laws of physics, often in ways that have depended crucially on previously discovered laws. For example, quantum mechanics is based on classical mechanics. Could there be better laws that can be derived from experimental data without any prior knowledge of physics? If so, this machine-learning approach or the one based on evolution should be exactly what’s needed to find them. Ref: arxiv.org/abs/1807.10300 : Discovering physical concepts with neural networks
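To give a flavour of the architecture, here is a minimal PyTorch sketch of the encoder–decoder idea described above. This is an illustration, not the authors’ code: the hidden-layer sizes, the two-neuron latent, and all names are assumptions chosen to mirror the pendulum example, and a full implementation would also need training data and a regularization pressure on the latent layer:

```python
import torch
import torch.nn as nn

class SciNetSketch(nn.Module):
    """Compress observations into a few latent neurons, then answer a
    question (e.g. "where is the pendulum at time t?") from that latent."""
    def __init__(self, obs_dim, question_dim=1, latent_dim=2, answer_dim=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),   # bottleneck: the learned "physical variables"
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + question_dim, 64), nn.ReLU(),
            nn.Linear(64, answer_dim),
        )

    def forward(self, observations, question):
        latent = self.encoder(observations)                      # compressed representation
        answer = self.decoder(torch.cat([latent, question], dim=-1))
        return answer, latent

# Training sketch: minimize prediction error on observed trajectories, then
# inspect `latent` to see how many variables the network actually uses.
model = SciNetSketch(obs_dim=50)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```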
Exploring the magnetism of a single atom

An EPFL-led research collaboration has shown for the first time the maximum theoretical limit of the energy needed to control the magnetization of a single atom. This fundamental work could have great implications for the future of magnetic research and technology.

Magnetic devices like hard drives, magnetic random access memories (MRAMs), molecular magnets, and quantum computers depend on the manipulation of magnetic properties. In an atom, magnetism arises from the spin and orbital momentum of its electrons. “Magnetic anisotropy” describes how an atom’s magnetic properties depend on the orientation of the electrons’ orbits relative to the structure of a material. It also provides directionality and stability to magnetization. Publishing in Science, researchers led by EPFL combine various experimental and computational methods to measure for the first time the energy needed to change the magnetic anisotropy of a single cobalt atom. Their methodology and findings can impact a range of fields, from fundamental studies of single-atom and single-molecule magnetism to the design of spintronic device architectures.

Magnetism is used widely in technologies from hard drives to magnetic resonance, and even in quantum computer designs. In theory, every atom or molecule has the potential to be magnetic, since this depends on the movement of its electrons. Electrons move in two ways: spin, which can loosely be thought of as spinning around their own axis, and orbit, which refers to an electron’s movement around the nucleus of its atom. The spin and orbital motion give rise to the magnetization, similar to an electric current circulating in a coil and producing a magnetic field. The spinning direction of the electrons therefore defines the direction of the magnetization in a material.

The magnetic properties of a material have a certain “preference” or “stubbornness” towards a specific direction. This phenomenon is referred to as “magnetic anisotropy”, and is described as the “directional dependence” of a material’s magnetism. Changing this “preference” requires a certain amount of energy. The total energy corresponding to a material’s magnetic anisotropy is a fundamental constraint on the downscaling of magnetic devices like MRAMs, computer hard drives and even quantum computers, which use different electron spin states as distinct information units, or “qubits”.

The team of Harald Brune at EPFL, working with scientists at ETH Zurich, the Paul Scherrer Institute, and the IBM Almaden Research Center, developed a method to determine the maximum possible magnetic anisotropy for a single cobalt atom. Cobalt, which is classed as a “transition metal”, is widely used in the fabrication of permanent magnets as well as in magnetic recording materials for data-storage applications. The researchers used a technique called inelastic electron tunneling spectroscopy to probe the quantum spin states of a single cobalt atom bound to a magnesium oxide (MgO) layer. The technique uses an atom-sized scanning tip that allows the passage (or “tunneling”) of electrons to the bound cobalt atom. When electrons tunneled through, they transferred energy to the cobalt atom, inducing changes in its spin properties. The experiments showed the maximum magnetic anisotropy energy of a single atom (~60 millielectronvolts) and the longest spin lifetime for a single transition-metal atom.
This large anisotropy leads to a remarkable magnetic moment, which has been determined with synchrotron-based measurements at the X-Treme beamline of the Swiss Light Source. Though fundamental, these findings open the way for a better understanding of magnetic anisotropy and present a single-atom model system that could conceivably be used as a future “qubit”. “Quantum computing uses quantum states of matter, and magnetic properties are such a quantum state,” says Harald Brune. “They have a lifetime, and you can use the individual surface-adsorbed atoms to make qubits. Our system is a model for such a state. It allows us to optimize the quantum properties, and it is easier than previous ones, because we know exactly where the cobalt atom is in relation to the MgO layer.” This work represents a collaboration between EPFL’s Laboratory of Nanostructures at Surfaces (LNS), IBM’s Almaden Research Center, ETH Zurich’s Department of Materials, the Paul Scherrer Institute’s Swiss Light Source, and Georgetown University’s Department of Physics. Rau IG, Baumann S, Rusponi S, Donati F, Stepanow S, Gragnaniello L, Dreiser J, Piamonteze C, Nolting F, Gangopadhyay S, Albertini OR, Macfarlane RM, Lutz CP, Jones B, Gambardella P, Heinrich AJ, Brune H. Reaching the magnetic anisotropy limit of a 3d metal atom. Science 08 March 2014. DOI: 10.1126/science.1252841
Radhika Iyer – 2022 Teddy Rocks Maths Essay Competition Commended Entry

Data transmission is often noisy. Information can easily get garbled, and imperfect information frequently has a cost associated with it. Coding theory is the field of mathematics that tries to make transmission more reliable by using error correcting codes, which are methods of detecting and correcting errors. Throughout this delve into error correcting codes, we will consider the transmission of strings of binary (base 2) digits, as every letter, symbol, or pixel of an image can be represented as a string of 0s and 1s. When a message of length k is sent, there are 2^k possible messages, as there are 2 options for each bit. Error correcting codes send more than k binary digits, with the extra digits, called parity digits, helping to detect and correct errors.

One example of an error correcting code is a repetition code, where we send each message multiple times. For example, if we sent 0011 twice, as 00110011, the receiver could compare the second block of four bits against the first. A recurrent term in coding theory is the information rate (R), which records how much information is carried on average by each bit that is sent. For repetition codes, the information rate is really low – here 0.5, and lower still if blocks are sent even more times. Therefore, in practice, we would prefer to use other error correction codes.

The weight of a binary sequence is the number of bits in the message that are equal to 1. Parity check codes work by adding a parity check bit at the end that makes the message have even (or odd) weight. For example, consider a scenario where we want even weight, and the message we are trying to send is 0011. There are two 1s (an even count) right now, so we add a 0 at the end to keep the weight even. If we were trying to send 1110, then we would add a 1. This is equivalent to making the parity digit equal to the sum of all the digits modulo 2. Then, when a message is received, if the parity bit does not match the weight of the received message, a bit must have flipped in transmission. Although we can’t be sure where an error occurred, and we can’t detect an even number of flips, the information rate for this code is 4/5 = 0.8. If we want a single-error-detecting code for a message of length 4 bits, we always need to add a parity bit, so an overall message of length 5 is the shortest possible length, and 0.8 is the highest possible information rate for a message of length 4. In general, parity check codes for messages of length n have information rate R = n/(1 + n).

So far, the focus has been on detecting errors, but we still need to correct them and recover the original message. I will be referring to codewords, which are the overall transmissions received that combine our original message and the added bits that help transmit messages with less likelihood of error. Another important definition is the Hamming distance, which is the total number of positions at which two codewords differ. For example, 1010 and 1001 have a Hamming distance of 2, as the last two bits are different. Let’s take a scenario where we are trying to send either a True or a False, with 1 for True and 0 for False. If an error occurred when sending this message, then we would never be able to know whether the original message was True or False.
Now, let’s say that 11 is True and 00 is False. We still would not be able to know what the original message was if we received 01, as this has a Hamming distance of 1 from both 00 and 11. Therefore, let’s say that 111 is True and 000 is False. If we received 101, 110, or 011, then we could assume that a single error had occurred and what was originally sent was True, as only one change is needed. Likewise, if we received 001, 010, or 100, then the original message was False. This is known as majority logic. This code is called the (3,1) repetition code, as three bits are sent to convey a message of length 1 (0 or 1).

There is a link with geometry here – this is where sphere packing comes in. Sphere packing concerns arranging non-overlapping spheres within a space so as to maximise the total volume of the spheres that fit within that space. If we consider (0,0,0) and (1,1,1) as points in a 3D space, then all the other possible received messages, where only one error has occurred, can be visualised as vertices of a cube. Two spheres of radius 1 can then be centred at (0,0,0) and (1,1,1), and the vertices contained within each sphere can be interpreted as received words that decode to the codeword at the centre of that sphere. In order to fit more original messages, we would like more of these spheres to be packed around points, so that more messages can be transmitted. Sphere packing, where all spheres are disjoint, can be used to find error correcting codes that can always locate the position of single errors.

Perfect Hamming codes are a well-known class of codes that can correct single-bit errors; they occur when all vertices, in however many dimensions of Euclidean space, can be contained within spheres of the smallest radius we can make possible. This means that they attain the Hamming bound, or sphere-packing bound. Another way of writing this is to say that a perfect code occurs when every vertex is either a codeword itself or only one edge (a Hamming distance of 1) away from a codeword. The best-known Hamming code is the (7,4) code, which uses a “generator matrix” to create three parity bits added to the four bits that make up the message, and it can detect and correct single errors. It has a relatively high information rate: R = 4/7 = 0.571 (3 significant figures), compared with 0.333 for the (3,1) code. What is interesting to note is that (7,4) is the first perfect Hamming code after (3,1). After this, the next perfect code is (15,11). A pattern you may have spotted is that the first number in all of these brackets is one less than a power of two. In order to explain this, let us consider the (7,4) scenario. If we consider codewords on a 7-dimensional hypercube, every codeword has 7 edges exiting it, and so, including itself, there are 8 vertices involved for every message. Now, as we are using messages of length 4, there are 16 possible messages, and 16 × 8 = 128, which is 2^7. On a hypercube in n dimensions, the total number of vertices is always 2^n. This makes an overall message length of 2^n − 1 a perfect scenario, as every single vertex is involved with exactly one codeword (2^n vertices for each possible message), so there is the most efficient possible use of spheres, or space.
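To make this concrete, here is a small Python sketch of the (7,4) Hamming code. It is an illustrative implementation of the standard construction, with parity bits at positions 1, 2, and 4, rather than code taken from the essay:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword."""
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(word):
    """Correct up to one flipped bit, then return the 4 data bits."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 if clean, else the error position
    if syndrome:
        w[syndrome - 1] ^= 1          # flip the offending bit back
    return [w[2], w[4], w[5], w[6]]

codeword = hamming74_encode(1, 0, 1, 1)
codeword[2] ^= 1                       # simulate a single transmission error
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```

Note how the three syndrome bits, read as a binary number, point directly at the position of the flipped bit – this is exactly the sphere-packing picture: every corrupted word lies inside the radius-1 sphere of exactly one codeword.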
Perfect Hamming codes are a method of efficiently correcting single-bit errors, but it is important to note that these processes cannot correct every pattern of corruption a data transmission might suffer. There are also many other famous codes that I have not delved into, including Reed-Muller codes, the famous Golay (23,12) code that can correct up to three errors, and the Leech lattice in 24-dimensional space. There are also many links between error correction and probability that have not been mentioned. With an increasing focus on quantum computing and how powerful this field can be, especially when we think about its impact on cryptography, it is interesting to think about qubits, which also transmit information. Qubits can be in any superposition of the states 0 and 1, and will also suffer errors, but recent research has shown that space-time could be involved in building error correcting codes for qubits and quantum computers. Perhaps this will be the area that coding theory focuses on next. Thompson, Thomas M. (2014) – From Error-Correcting Codes Through Sphere Packings to Simple Groups (Chapter 1) doi: 10.5948/UPO9781614440215.002
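As a closing illustration, the sphere-packing (Hamming) bound discussed earlier is easy to check numerically. This short sketch (assuming binary codes that correct up to t errors) verifies that the (7,4), (15,11), and Golay (23,12) codes all meet the bound with equality, which is what makes them perfect:

```python
from math import comb

def is_perfect(n, k, t):
    """Check 2^k * V(n, t) == 2^n, where V(n, t) is the number of words
    within Hamming distance t of a codeword (the "sphere" volume)."""
    sphere = sum(comb(n, i) for i in range(t + 1))
    return 2**k * sphere == 2**n

print(is_perfect(7, 4, 1))    # True: Hamming (7,4)
print(is_perfect(15, 11, 1))  # True: Hamming (15,11)
print(is_perfect(23, 12, 3))  # True: binary Golay code
print(is_perfect(8, 4, 1))    # False: not every code fills the space
```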
Topological insulators (TIs) are among the most puzzling quantum materials – a class of materials whose electrons cooperate in surprising ways to produce unexpected properties. The edges of a TI are electron superhighways where electrons flow with no loss, ignoring any impurities or other obstacles in their path, while the bulk of the material blocks electron flow. Scientists have studied these puzzling materials since their discovery just over a decade ago with an eye to harnessing them for things like quantum computing and information processing.

Now researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have invented a new, hands-off way to probe the fastest and most ephemeral phenomena within a TI and clearly distinguish what its electrons are doing on the superhighway edges from what they’re doing everywhere else. The technique takes advantage of a phenomenon called high harmonic generation, or HHG, which shifts laser light to higher energies and higher frequencies – much as pressing a guitar string produces a higher note – by shining it through a material. By varying the polarization of laser light going into a TI and analyzing the shifted light coming out, researchers got strong, separate signals that told them what was happening in each of the material’s two contrasting domains.

“What we found out is that the light coming out gives us information about the properties of the superhighway surfaces,” said Shambhu Ghimire, a principal investigator with the Stanford PULSE Institute at SLAC, where the work was carried out. “This signal is quite remarkable, and its dependence on the polarization of the laser light is dramatically different from what we see in conventional materials. We think we have a potentially novel approach for initiating and probing quantum behaviors that are supposed to be present in a broad range of quantum materials.” The research team reported the results today in Physical Review A.

Light in, light out

Starting in 2010, a series of experiments led by Ghimire and PULSE Director David Reis showed HHG can be produced in ways that were previously thought unlikely or even impossible: by beaming laser light into a crystal, a frozen argon gas or an atomically thin semiconductor material. Another study described how to use HHG to generate attosecond laser pulses, which can be used to observe and control the movements of electrons, by shining a laser through ordinary glass.

In 2018, Denitsa Baykusheva, a Swiss National Science Foundation Fellow with a background in HHG research, joined the PULSE group as a postdoctoral researcher. Her goal was to study the potential for generating HHG in topological insulators – the first such study in a quantum material. “We wanted to see what happens to the intense laser pulse used to generate HHG,” she said. “No one had actually focused such a strong laser light on these materials before.” But midway through those experiments, the COVID-19 pandemic hit and the lab shut down in March 2020 for all but essential research. So the team had to think of other ways to make progress, Baykusheva said. “In a new area of research like this one, theory and experiment have to go hand in hand,” she explained. “Theory is essential for explaining experimental results and also predicting the most promising avenues for future experiments. So we all turned ourselves into theorists” – first working with pen and paper and then writing code and doing calculations to feed into computer models.
An illuminating result

To their surprise, the results predicted that circularly polarized laser light, whose waves spiral around the beam like a corkscrew, could be used to trigger HHG in topological insulators. “One of the interesting things we observed is that circularly polarized laser light is very efficient at generating harmonics from the superhighway surfaces of the topological insulator, but not from the rest of it,” Baykusheva said. “This is something very unique and specific to this type of material. It can be used to get information about electrons that travel the superhighways and those that don’t, and it can also be used to explore other types of materials that can’t be probed with linearly polarized light.”

The results lay out a recipe for continuing to explore HHG in quantum materials, said Reis, who is a co-author of the study. “It’s remarkable that a technique that generates strong and potentially disruptive fields, which takes electrons in the material and jostles them around and uses them to probe the properties of the material itself, can give you such a clear and robust signal about the material’s topological states,” he said. “The fact that we can see anything at all is amazing, not to mention the fact that we could potentially use that same light to change the material’s topological properties.” Experiments at SLAC have resumed on a limited basis, Reis added, and the results of the theoretical work have given the team new confidence that they know exactly what they are looking for. Researchers from the Max Planck POSTECH/KOREA Research Initiative also contributed to this report. Major funding for the study came from the DOE Office of Science and the Swiss National Science Foundation.

Citation: Denitsa Baykusheva et al., Physical Review A, 2 February 2021 (10.1103/PhysRevA.103.023101)

For questions or comments, contact the SLAC Office of Communications.

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation. SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
Ultra-thin designer materials unlock quantum phenomena

A team of theoretical and experimental physicists has designed a new ultra-thin material and used it to create elusive quantum states. Called one-dimensional Majorana zero energy modes, these quantum states could have a huge impact on quantum computing.

At the core of a quantum computer is the qubit, which is used to make high-speed calculations. The qubits currently in use – for example, those in the Sycamore processor Google unveiled last year – are very sensitive to noise and interference from the computer's surroundings, which introduces errors into the calculations. A new type of qubit, called a topological qubit, could solve this issue, and 1D Majorana zero energy modes may be the key to making them.

'A topological quantum computer is based on topological qubits, which are supposed to be much more noise tolerant than other qubits. However, topological qubits have not been produced in the lab yet,' explains Professor Peter Liljeroth, the lead researcher on the project.

What are MZMs?

MZMs are groups of electrons bound together in a specific way so they behave like a particle called a Majorana fermion, a semi-mythical particle first proposed by the semi-mythical physicist Ettore Majorana in the 1930s. If Majorana's theoretical particles could be bound together, they would work as a topological qubit. One catch: no evidence for their existence has ever been seen, either in the lab or in astronomy. Instead of attempting to make a particle that no one has ever seen anywhere in the universe, researchers try to make regular electrons behave like them.

To make MZMs, researchers need incredibly small materials, an area in which Professor Liljeroth's group at Aalto University specialises. MZMs are formed by giving a group of electrons a very specific amount of energy and then trapping them together so they can't escape. To achieve this, the materials need to be 2-dimensional and as thin as physically possible. To create 1D MZMs, the team needed to make an entirely new type of 2D material: a topological superconductor.

Topological superconductivity is the property that occurs at the boundary of a magnetic electrical insulator and a superconductor. To create 1D MZMs, Professor Liljeroth's team needed to be able to trap electrons together in a topological superconductor; however, it's not as simple as sticking any magnet to any superconductor.

'If you put most magnets on top of a superconductor, you stop it from being a superconductor,' explains Dr. Shawulienu Kezilebieke, the first author of the study. 'The interactions between the materials disrupt their properties, but to make MZMs, you need the materials to interact just a little bit. The trick is to use 2D materials: they interact with each other just enough to make the properties you need for MZMs, but not so much that they disrupt each other.'

The property in question is the spin. In a magnetic material, the spins are all aligned in the same direction, whereas in a superconductor they are anti-aligned, with alternating directions. Bringing a magnet and a superconductor together usually destroys the alignment and anti-alignment of the spins. However, in 2D layered materials the interactions between the materials are just enough to "tilt" the spins of the atoms so that they create the specific spin state, called Rashba spin-orbit coupling, needed to make the MZMs.
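For readers who want the standard notation: Rashba spin-orbit coupling is usually written as an extra term in the electrons' Hamiltonian. Its textbook form (shown here for orientation; the paper's own model is more detailed) is

```latex
% Textbook Rashba spin-orbit term (illustrative; not taken from the paper)
H_{\mathrm{R}} = \alpha\,(\boldsymbol{\sigma} \times \mathbf{k})\cdot\hat{\mathbf{z}}
               = \alpha\,(k_y \sigma_x - k_x \sigma_y)
```

where k is the electron's momentum in the 2D plane, σ its spin (the Pauli matrices), and α a coupling strength set by the interface between the layers. The term ties the direction of each electron's spin to the direction it moves, which is exactly the kind of spin "tilting" described above.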
Finding the MZMs

The topological superconductor in this study is made of a layer of chromium bromide, a material which is still magnetic when only one atom thick. Professor Liljeroth's team grew one-atom-thick islands of chromium bromide on top of a superconducting crystal of niobium diselenide and measured their electrical properties using a scanning tunneling microscope.

At this point, they turned to the computer modelling expertise of Professor Adam Foster at Aalto University and Professor Teemu Ojanen, now at Tampere University, to understand what they had made.

'There was a lot of simulation work needed to prove that the signal we're seeing was caused by MZMs, and not other effects,' says Professor Foster. 'We needed to show that all the pieces fitted together to prove that we had produced MZMs.'

Now that the team is sure they can make 1D MZMs in 2-dimensional materials, the next step will be to attempt to make them into topological qubits. This step has so far eluded the teams that have already made 0-dimensional MZMs, and the Aalto team is unwilling to speculate on whether the process will be any easier with 1-dimensional MZMs. They are, however, optimistic about the future of 1D MZMs.

'The cool part of this paper is that we've made MZMs in 2D materials,' said Professor Liljeroth. 'In principle these are easier to make and easier to customise the properties of, and ultimately make into a usable device.'

The research collaboration included researchers from Tampere University in Finland and M. Curie-Skłodowska University in Poland.
Please read this guest post about the quantum Internet by Stephanie Wehner, Professor at the Delft University of Technology, the Netherlands.

In March 2017, we invited Stephanie Wehner, Professor at QuTech at the Delft University of Technology, to give a guest lecture to RIPE NCC staff about the Quantum Internet project. We were curious to learn about this new technology, its consequences for the "traditional" Internet, and how we can make the connection between cutting-edge research and the RIPE community.

The Technical Basics of Quantum Computing

The goal of the quantum Internet is to enable transmission of quantum bits (qubits) between any two points on earth in order to solve problems that are intractable classically. Qubits are very different from classical bits in that they can be "0" and "1" at the same time, and cannot be copied. Currently, it is possible to transmit qubits over distances of about 100 km and run a single application, known as quantum key distribution. The next challenge is to go long distance, and to connect small quantum processors to enable a larger range of applications. Thankfully, these quantum processors do not need to be large quantum computers: a handful of qubits are already enough to outperform classical communication.

The reason why quantum Internet nodes do not need many qubits to be useful (unlike quantum computers) is that a quantum Internet derives its advantages from quantum entanglement, for which even a single qubit can be enough. In contrast, a quantum computer always needs more qubits than can be simulated on a classical supercomputer to be useful. Use cases for quantum networking currently include:

- Secure communication with the help of quantum key distribution
- Clock synchronisation
- Combining distant telescopes to form one much more powerful telescope
- Advantages for classic problems in distributed systems, such as achieving consensus and agreement about data distributed in the cloud
- Sending exponentially fewer qubits than classical bits to solve some distributed computing problems
- Secure access to a powerful quantum computer using only very simple "desktop" quantum devices
- Combining small quantum computers to form a larger quantum computing cluster

In general, quantum networking exploits two essential features of quantum entanglement. First, quantum entanglement is inherently private – if two network nodes are maximally entangled, then this entanglement is completely shielded from anything else in the universe according to the laws of quantum mechanics. Second, quantum entanglement allows maximal coordination – measuring two qubits that are entangled always results in the same outcome no matter how far apart they are. It is this feature of perfect coordination that gives advantages in, for example, clock synchronisation – or even winning online bridge more often.

Dutch Test-bed Network

QuTech at the Delft University of Technology and TNO, in collaboration with the European Quantum Internet Alliance, is leading the effort to establish a quantum Internet, and aims to have a demonstration network in 2020 connecting four cities in the Netherlands. This network may be the first of its kind in 2020, and will allow the end-to-end transmission of qubits between any two network nodes consisting of few-qubit processors.

[Figure: The quantum network in the Netherlands]

Transmitting Qubits Over Long Distances

One may wonder why it is difficult to send qubits over long distances.
Roughly speaking, one qubit corresponds to just one photon, which is easily lost over distance. The technology needed to transmit qubits over long distances is called a quantum repeater. A quantum repeater works very differently from a classical repeater, exploiting the fact that qubits can be transmitted using quantum teleportation. Quantum teleportation works by first creating two entangled qubits between two network nodes. Once the entangled link is created, the qubit to be transmitted can be sent over it.

Imagine two network nodes that are 200 km apart – too far for direct transmission. A quantum repeater in the middle works as follows: first, two entangled qubits are created between the first endpoint and the repeater. This is possible since this endpoint and the repeater are only 100 km apart. Second, two entangled qubits are created between the repeater and the second endpoint. The repeater then uses quantum teleportation to transfer the qubit that is entangled with the first endpoint to the second endpoint. The end result is end-to-end entanglement between the two endpoints. Qubit data can now be transmitted using this entangled link. (A minimal numerical sketch of the teleportation step appears at the end of this article.)

[Figure: The concept of a quantum repeater]

Involvement with the RIPE Community

Once this research project is accomplished, industry partners from the RIPE community are needed to take over in order to scale the technology, increase its speed and add it to the "traditional" Internet as a parallel service. A quantum Internet also needs significant protocol development to define a networking stack adapted to the transmission of qubits and the management of entanglement. This requires the help of the RIPE community at large to develop a classical protocol stack to control a quantum Internet and implement protocols to route qubits.

Join us at the Open Day at QuTech

On 22 June 2017, QuTech is organising a presentation and a tour of the lab at QuTech in Delft, the Netherlands. Here is the programme for the day:

10:00 Presentation - Stephanie
11:00 Start lab tours
12:00 Light lunch & meet & greet - Stephanie

Please note that participation is limited to 25 people. Please register HERE if you are interested in participating. If you are interested to learn more, please join us at one of the events listed above, or get in touch with Stephanie and her team. You can also leave a comment below.
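As promised above, here is a minimal state-vector simulation of the teleportation step a repeater performs. It is a sketch under simplifying assumptions – perfect gates, no photon loss, no quantum memory – so it illustrates the logic of the protocol rather than real hardware.

```python
# Minimal simulation of quantum teleportation, the primitive a quantum
# repeater chains together. Assumes ideal gates and no loss; illustrative only.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 is the leftmost, most significant)."""
    dim = 2**n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1
    return U

# Qubit 0 holds the state to send; qubits 1 and 2 are an entangled pair
# shared by sender and receiver (e.g. endpoint and repeater).
alpha, beta = 0.6, 0.8                        # any normalized amplitudes
psi = np.kron([alpha, beta], np.array([1, 0, 0, 1]) / np.sqrt(2))

# Bell-basis measurement on qubits 0 and 1
psi = kron(H, I2, I2) @ (cnot(3, 0, 1) @ psi)
probs = np.abs(psi)**2
outcome = np.random.choice(8, p=probs)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the measured outcome; what remains is the receiver's qubit
keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
received = psi[keep] / np.linalg.norm(psi[keep])

# Classical corrections sent over an ordinary channel: X if m1, then Z if m0
received = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ received
print(received)   # [alpha, beta]: the state arrived without the photon itself
```

Note the two ingredients the article emphasises: pre-shared entanglement (qubits 1 and 2) and two classical bits (m0, m1) that must travel over the ordinary Internet before the receiver can finish the protocol.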
Wormhole

A wormhole, also known as an Einstein–Rosen bridge, is a hypothetical topological feature of spacetime that would fundamentally be a "shortcut" through spacetime. A wormhole is much like a tunnel with two ends, each in separate points in spacetime. For a simplified notion of a wormhole, visualize space as a two-dimensional (2D) surface.

Learning and training: statistics and myths

How effective is training? Laurie Bassi measured how well employees are trained and developed (Delahoussaye, et al., 2002). She writes that organizations that make large investments in people typically have lower employee turnover, which is associated with higher customer satisfaction, which in turn is a driver of profitability (p22). A second driver is manager proficiency – good managers determine if people stay or go, and this is also influenced by training and development. She further writes that the education and training variable is the most significant predictor of an organization's success as compared to price-to-earnings ratios, price-to-book statistics, and measures of risk and volatility. Bassi puts her theories to the test – she and a fellow partner launched an investment firm that buys stocks in companies that invest heavily in employee training.

New Wormhole Theory Uses Space Photon Energy "Fluid"

A new theory expands on other theories and adds photon energy "fluid" as a way to support wormholes. The introduction to the paper states that wormholes are hypothetical geometrical structures connecting two universes or two distant parts of the same universe. For a simple visual explanation of a wormhole, consider spacetime visualized as a two-dimensional (2D) surface. If this surface is folded along a third dimension, it allows one to picture a wormhole "bridge". "A possible cause of the late-time cosmic acceleration is an exotic fluid with an equation of state lying within the phantom regime, i.e., w = p/ρ < −1."

New data confirms: Neutrinos are still traveling faster than light

"It is worth pointing out, however, that the latest arXiv preprint lists 179 authors, while the original lists 174. Would you ever classify five people as "most of" 15? To make things more confusing . . . "four new people" have decided not to sign, according to Science. Now, none of the above numbers may match up . . ." The original 174 include a duplicate "F. …

World Economic Forum: 8 digital skills we must teach our children

The social and economic impact of technology is widespread and accelerating. The speed and volume of information have increased exponentially. Experts are predicting that 90% of the entire population will be connected to the internet within 10 years. With the internet of things, the digital and physical worlds will soon be merged. These changes herald exciting possibilities. But they also create uncertainty.

Gravitational-wave finding causes 'spring cleaning' in physics

Detlev van Ravenswaay/Science Photo Library: artist's rendering of 'bubble universes' within a greater multiverse – an idea that some experts say was bolstered by this week's discovery of gravitational waves. On 17 March, astronomer John Kovac of the Harvard-Smithsonian Center for Astrophysics presented long-awaited evidence of gravitational waves – ripples in the fabric of space – that originated from the Big Bang during a period of dramatic expansion known as inflation. By the time the Sun set that day in Cambridge, Massachusetts, the first paper detailing some of the discovery's consequences had already been posted online, by cosmologist David Marsh of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and his colleagues. Cosmologist Marc Kamionkowski of Johns Hopkins University in Baltimore, Maryland, agrees that some axion models no longer work, "because they require inflation to operate at a lower energy scale than the one indicated by BICEP2".

Quantum world record smashed (14 November 2013)

A normally fragile quantum state has been shown to survive at room temperature for a world-record 39 minutes, overcoming a key barrier towards building ultrafast quantum computers. An international team including Stephanie Simmons of Oxford University, UK, reports in this week's Science on a test performed by Mike Thewalt of Simon Fraser University, Canada, and colleagues.

New Experiments to Pit Quantum Mechanics Against General Relativity

It starts like a textbook physics experiment, with a ball attached to a spring. If a photon strikes the ball, the impact sets it oscillating very gently. But there's a catch. Before reaching the ball, the photon encounters a half-silvered mirror, which reflects half of the light that strikes it and allows the other half to pass through. What happens next depends on which of two extremely well-tested but conflicting theories is correct: quantum mechanics or Einstein's theory of general relativity; these describe the small- and large-scale properties of the universe, respectively. In a strange quantum mechanical effect called "superposition," the photon simultaneously passes through and reflects backward off the mirror; it then both strikes and doesn't strike the ball.

Most students don't know when news is fake

Preteens and teens may appear dazzlingly fluent, flitting among social-media sites, uploading selfies and texting friends. But they're often clueless about evaluating the accuracy and trustworthiness of what they find. Some 82% of middle-schoolers couldn't distinguish between an ad labeled "sponsored content" and a real news story on a website, according to a Stanford University study of 7,804 students from middle school through college. The study, set for release Tuesday, is the biggest so far on how teens evaluate information they find online. Many students judged the credibility of newsy tweets based on how much detail they contained or whether a large photo was attached, rather than on the source. More than two out of three middle-schoolers couldn't see any valid reason to mistrust a post written by a bank executive arguing that young adults need more financial-planning help.

Carver Mead's Spectator Interview

From American Spectator, Sep/Oct 2001, Vol. 34 Issue 7, p68. Once upon a time, a Nobel Laureate leader of the last great generation of physicists threw down the gauntlet to anyone rash enough to doubt the fundamental weirdness, the quark-boson-muon-strewn amusement park landscape of late 20th-century quantum physics: "Things on a very small scale behave like nothing you have direct experience about."

Visual learning

Visual thinking is a learning style where the learner better understands and retains information when ideas, words and concepts are associated with images. Research tells us that the majority of students in a regular classroom need to see information in order to learn it. Some common visual learning strategies include creating graphic organizers, diagramming, mind mapping, outlining and more.

New qubit control bodes well for future of quantum computing

(Phys.org) – Yale University scientists have found a way to observe quantum information while preserving its integrity, an achievement that offers researchers greater control in the volatile realm of quantum mechanics and greatly improves the prospects of quantum computing. Quantum computers would be exponentially faster than the most powerful computers of today. "Our experiment is a dress rehearsal for a type of process essential for quantum computing," said Michel Devoret, the Frederick William Beinecke Professor of Applied Physics & Physics at Yale and principal investigator of research published Jan. 11 in the journal Science. "What this experiment really allows is an active understanding of quantum mechanics."
By Glenn Roberts Jr.

A team led by physicists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley has successfully observed the scrambling of quantum information, which is thought to underlie the behavior of black holes, using qutrits: information-storing quantum units that can represent three separate states at the same time. Their efforts also pave the way for building a quantum information processor based upon qutrits.

The black hole information paradox

The new study, recently published in the journal Physical Review X, makes use of a quantum circuit that is inspired by the longstanding physics question: what happens to information when it enters a black hole?

Beyond the connection to cosmology and fundamental physics, the team's technical milestones that made the experiment possible represent important progress toward using more complex quantum processors for quantum computing, cryptography, and error detection, among other applications.

While black holes are considered one of the most destructive forces in the universe – matter and light cannot escape their pull, and are quickly and thoroughly scrambled once they enter – there has been considerable debate about whether and how information is lost after passing into a black hole.

The late physicist Stephen Hawking showed that black holes emit radiation – now known as Hawking radiation – as they slowly evaporate over time. In principle, this radiation could carry information about what's inside the black hole – even allowing the reconstruction of information that passes into the black hole. And by using a quantum property known as entanglement, it is possible to perform this reconstruction significantly more rapidly, as was shown in earlier work.

Quantum entanglement defies the rules of classical physics, allowing particles to remain correlated even when separated by large distances, so that the state of one particle will inform you about the state of its entangled partner. If you had two entangled coins, for example, knowing that one coin came up heads when you looked at it would automatically tell you that the other entangled coin was tails.

Most efforts in quantum computing seek to tap into this phenomenon by encoding information as entangled quantum bits, known as qubits (pronounced CUE-bits). Like a traditional computer bit, which can hold the value of zero or one, a qubit can also be either a zero or one. But in addition, a qubit can exist in a superposition that is both one and zero at the same time. In the case of a coin, it's like a coin flip that can represent either heads or tails, as well as the superposition of both heads and tails at the same time.

The power of 3: Introducing qutrits

Each qubit you add to a quantum computer doubles the size of the state space it can explore, and that exponential increase soars when you use quantum bits capable of storing more values, like qutrits (pronounced CUE-trits). Because of this, it takes far fewer qubits and even fewer qutrits or qudits – the term for quantum units with three or more states – to perform complex algorithms capable of demonstrating the ability to solve problems that cannot be solved using conventional computers.

That said, there are a number of technical hurdles to building quantum computers with a large number of quantum bits that can operate reliably and efficiently in solving problems in a truly quantum way.
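The arithmetic behind the qudit advantage mentioned above is simple: n quantum units with d levels each span a state space of dimension d^n, so the same space can be covered with fewer units as d grows. A quick illustration with arbitrary numbers:

```python
# State-space sizes: n qudits with d levels span d**n states, so fewer
# qutrits (d=3) than qubits (d=2) are needed for the same space. Illustrative.
import math

def qudits_needed(target_dim, d):
    """Smallest number of d-level units whose state space reaches target_dim."""
    return math.ceil(math.log(target_dim) / math.log(d))

target = 2**50                       # the state space of a 50-qubit processor
print(qudits_needed(target, 2))      # 50 qubits
print(qudits_needed(target, 3))      # 32 qutrits reach the same dimension
```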
In this latest study, researchers detail how they developed a quantum processor capable of encoding and transmitting information using a series of five qutrits, which can each simultaneously represent three states. And despite the typically noisy, imperfect, and error-prone environment of quantum circuitry, they found that their platform proved surprisingly resilient and robust.

Qutrits can have a value of zero, one, or two, holding all of these states in superposition. In the coin analogy, it's like a coin that has the possibility of coming up as heads, tails, or landing on its thin edge.

"A black hole is an extremely good encoder of information," said Norman Yao, a faculty scientist in Berkeley Lab's Materials Sciences Division and an assistant professor of physics at UC Berkeley who helped to lead the planning and design of the experiment. "It smears it out very quickly, so that any local noise has an extremely hard time destroying this information." But, he added, "The encoder is so darn good that it's also very hard to decode this information."

Creating an experiment to mimic quantum scrambling

The team set out to replicate the type of rapid quantum information smearing, or scrambling, in an experiment that used tiny devices called nonlinear harmonic oscillators as qutrits. These nonlinear harmonic oscillators are essentially sub-micron-sized weights on springs that can be driven at several distinct frequencies when subjected to microwave pulses.

A common problem in making these oscillators work as qutrits, though, is that their quantum nature tends to break down very quickly via a mechanism called decoherence, so it is difficult to distinguish whether the information scrambling is truly quantum or is due to this decoherence or other interference, noted Irfan Siddiqi, the study's lead author.

Siddiqi is director of Berkeley Lab's Advanced Quantum Testbed, a faculty scientist in the Lab's Computational Research and Materials Sciences divisions, and a professor of physics at UC Berkeley. The testbed, which began accepting proposals from the quantum science community in 2020, is a collaborative research laboratory that provides open, free access to users who want to explore how superconducting quantum processors can be used to advance scientific research. The demonstration of scrambling is one of the first results from the testbed's user program.

"In principle, an isolated black hole exhibits scrambling," Siddiqi said, "but any experimental system also exhibits loss from decoherence. In a laboratory, how do you distinguish between the two?"

A key to the study was in preserving the coherence, or orderly patterning, of the signal carried by the oscillators for long enough to confirm that quantum scrambling was occurring via the teleportation of a qutrit. While teleportation may conjure up sci-fi imagery of "beaming up" people or objects from a planet's surface onto a spaceship, in this case there is only the transmission of information – not matter – from one location to another via quantum entanglement.

Another essential piece was the creation of customized logic gates that enable the realization of "universal quantum circuits," which can be used to run arbitrary algorithms. These logic gates allow pairs of qutrits to interact with each other and were designed to handle three different levels of signals produced by the microwave pulses. One of the five qutrits in the experiment served as the input, and the other four qutrits were in entangled pairs.
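As an aside for the mathematically inclined: the qutrit analogues of the familiar qubit bit-flip and phase-flip gates are the generalized Pauli (Weyl) operators. These textbook forms are shown for orientation only; the team's customized gates are engineered for their specific hardware:

```latex
% Generalized Pauli operators for a qutrit (textbook forms, not the
% experiment's specific gates), with \omega = e^{2\pi i/3}:
X_3 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
Z_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega^{2} \end{pmatrix}
```

Here X_3 cycles a qutrit through its three levels (|0⟩→|1⟩→|2⟩→|0⟩) and Z_3 tags each level with a distinct phase – the three-level generalization of flipping and phase-shifting a qubit.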
Because of the way the qutrits were entangled, a joint measurement of one of the pairs of qutrits after the scrambling circuit ensured that the state of the input qutrit was teleported to another qutrit.

Mirrored black holes and wormholes

The researchers used a technique known as quantum process tomography to verify that the logic gates were working and that the information was properly scrambled, so that it was equally likely to appear in any given part of the quantum circuit.

Siddiqi said that one way to think about how the entangled qutrits transmit information is to compare it to a black hole. It's as if there is a black hole and a mirrored version of that black hole, so that information passing in one side of the mirrored black hole is transmitted to the other side via entanglement.

Looking forward, Siddiqi and Yao are particularly interested in tapping into the power of qutrits for studies related to traversable wormholes, which are theoretical passages connecting separate locations in the universe.

A scientist from the Perimeter Institute for Theoretical Physics in Canada also participated in the study, which received support from the U.S. Department of Energy's Office of Advanced Scientific Computing Research and Office of High Energy Physics, and from the National Science Foundation's Graduate Research Fellowship.

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 14 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab's facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy's Office of Science.

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.
A Chinese satellite has split pairs of "entangled photons" and transmitted them to separate ground stations 745 miles (1,200 kilometers) apart, smashing the previous distance record for such a feat and opening new possibilities in quantum communication.

In quantum physics, when particles interact with each other in certain ways they become "entangled." This essentially means they remain connected even when separated by large distances, so that an action performed on one affects the other. In a new study published online today (June 15) in the journal Science, researchers report the successful distribution of entangled photon pairs to two locations on Earth separated by 747.5 miles (1,203 km).

Quantum entanglement has interesting applications for testing the fundamental laws of physics, but also for creating exceptionally secure communication systems, scientists have said. That's because quantum mechanics states that measuring a quantum system inevitably disturbs it, so any attempt to eavesdrop is impossible to hide.

But it's hard to distribute entangled particles – normally photons – over large distances. When traveling through air or over fiber-optic cables, the environment interferes with the particles, so with greater distances the signal decays and becomes too weak to be useful.

In 2003, Pan Jianwei, a professor of quantum physics at the University of Science and Technology of China, started work on a satellite-based system designed to beam entangled photon pairs down to ground stations. The idea was that because most of the particles' journey would be through the vacuum of space, this system would introduce considerably less environmental interference.

"Many people then thought it [was] a crazy idea, because it was very challenging already doing the sophisticated quantum-optics experiments inside a well-shielded optical table," Pan told Live Science. "So how can you do similar experiments at thousand-kilometers distance scale and with the optical elements vibrating and moving at a speed of 8 kilometers per second [5 miles per second]?"

In the new study, researchers used China's Micius satellite, which was launched last year, to transmit the entangled photon pairs. The satellite features an ultrabright entangled photon source and a high-precision acquiring, pointing and tracking (APT) system that uses beacon lasers on the satellite and at three ground stations to line up the transmitter and receivers.

Once the photons reached the ground stations, the scientists carried out tests and confirmed that the particles were still entangled despite having traveled between 994 miles and 1,490 miles (1,600 and 2,400 km), depending on what stage of its orbit the satellite was positioned at. Only the lowest 6 miles (10 km) of Earth's atmosphere are thick enough to cause significant interference with the photons, the scientists said. This means the overall efficiency of their link was vastly higher than previous methods for distributing entangled photons via fiber-optic cables, according to the scientists.

"We have already achieved a two-photon entanglement distribution efficiency a trillion times more efficient than using the best telecommunication fibers," Pan said. "We have done something that was absolutely impossible without the satellite."
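A back-of-the-envelope calculation shows why the satellite wins so decisively. Standard telecom fiber attenuates light by roughly 0.2 dB per kilometer – a typical textbook figure, not one quoted in the study – and the satellite-link numbers below are likewise illustrative assumptions:

```python
# Rough photon-survival comparison over ~1,200 km. The 0.2 dB/km fiber loss
# is a typical telecom figure; the 65 dB satellite-link loss is an assumed
# round number. Illustrative arithmetic only.
fiber_loss_db = 0.2 * 1200                      # 240 dB end to end
fiber_transmission = 10 ** (-fiber_loss_db / 10)
print(f"fiber: {fiber_transmission:.0e}")       # ~1e-24: essentially nothing

satellite_loss_db = 65                          # diffraction + ~10 km of air
satellite_transmission = 10 ** (-satellite_loss_db / 10)
print(f"advantage: {satellite_transmission / fiber_transmission:.0e}x")
```

Sending one photon per second through 1,200 km of fiber, you would wait many times the age of the universe for a single photon to emerge; routing the photons through the vacuum of space simply removes most of the loss.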
Apart from carrying out experiments, one of the potential uses for this kind of system is "quantum key distribution," in which quantum communication systems are used to share an encryption key between two parties in a way that makes interception impossible without alerting the users. When combined with the correct encryption algorithm, this system is uncrackable even if encrypted messages are sent over normal communication channels, experts have said.

Artur Ekert, a professor of quantum physics at the University of Oxford in the United Kingdom, was the first to describe how entangled photons could be used to transmit an encryption key. "The Chinese experiment is quite a remarkable technological achievement," Ekert told Live Science. "When I proposed the entangled-based quantum key distribution back in 1991 when I was a student in Oxford, I did not expect it to be elevated to such heights!"

The current satellite is not quite ready for use in practical quantum communication systems, though, according to Pan. For one, its relatively low orbit means each ground station has coverage for only about 5 minutes each day, and the wavelength of photons used means it can only operate at night, he said. Boosting coverage times and areas will mean launching new satellites with higher orbits, Pan said, but this will require bigger telescopes, more precise tracking and higher link efficiency. Daytime operation will require the use of photons in the telecommunications wavelengths, he added.

But while developing future quantum communication networks will require considerable work, Thomas Jennewein, an associate professor at the University of Waterloo's Institute for Quantum Computing in Canada, said Pan's group has demonstrated one of the key building blocks. "I have worked in this line of research since 2000 and researched similar implementations of quantum-entanglement experiments from space, and I can therefore very much attest to the boldness, dedication and skills that this Chinese group has shown," he told Live Science.

Original article on Live Science.
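The core idea of entanglement-based key distribution can be sketched in a few lines. The toy model below is in the spirit of Ekert's 1991 proposal but heavily simplified: it models only the perfect correlations of ideal entangled pairs and omits the eavesdropper-detection (Bell test) step entirely.

```python
# Toy sketch of entanglement-based key distribution (heavily simplified:
# ideal pairs, no noise, and no eavesdropper test shown).
import random

def measure_pair():
    """An ideal entangled pair measured in the same basis yields perfectly
    correlated bits; here we simply model that correlation."""
    bit = random.randint(0, 1)
    return bit, bit

def sift_key(n_pairs):
    key_a, key_b = [], []
    for _ in range(n_pairs):
        basis_a = random.choice("xz")      # each side picks a basis at random
        basis_b = random.choice("xz")
        a, b = measure_pair()
        if basis_a == basis_b:             # keep only matching-basis rounds
            key_a.append(a)
            key_b.append(b)
    return key_a, key_b

ka, kb = sift_key(64)
assert ka == kb                            # identical secret keys on both ends
```

In the full protocol, the rounds where the bases do not match are not wasted: they are used to test the entanglement itself, which is what reveals any eavesdropper.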
St Johns Field is a massive helium reservoir and immense carbon storage basin located on 152,000 acres in Apache County, Arizona. Extensive third-party geological studies performed on the property indicate reserves of up to 33 billion cubic feet of helium in shallow, easily accessible reservoirs. Capable of producing one billion cubic feet of helium per year, it will be among the most prolific helium production sites in the world.

While most helium is extracted from natural gas deposits, the helium produced at St Johns is highly unusual in that it does not contain any hydrocarbons. The gas deposit is composed almost entirely of carbon dioxide, and as the helium is extracted in the production process, all of the excess CO2 will be reinjected into isolated geological formations and safely sequestered deep underground for millennia. As a result, the helium produced at St Johns is exceptionally clean and environmentally friendly, with a net zero carbon footprint.

Helium is the only element on the planet that is a completely non-renewable resource. It is both scarce and finite, with no commercially viable industrial process to replicate it. Helium is formed by the natural radioactive decay of uranium, and can be trapped underground if a halite or anhydrite cap exists above it. If helium is not trapped in this way, it escapes to the atmosphere and rises into space.

Helium has the lowest boiling point of any element, at about 4 kelvin, and has unique superfluid properties. It has many applications as a high-tech coolant, and is a critical component for nearly all modern technology systems. For example, liquid helium is used to cool the magnets in MRI systems, helping to optimize their function. It is also used to control the temperature of silicon in the semiconductor manufacturing process. Because helium is inert and non-flammable, it is used in space and satellite systems as a purge gas in hydrogen systems, and as a pressurizing agent for ground and flight fluid systems. Both NASA and SpaceX are major consumers of helium. Data centers use helium to encapsulate hard drives, which reduces friction and energy consumption – Google, Amazon, and Netflix are all major consumers. Quantum computing systems also use liquid helium in dilution refrigerators, providing temperatures as low as 2 mK.

In addition to its immense helium reserves, the geological characteristics of St Johns make it an ideal storage basin for carbon dioxide. With the ability to inject 22 million metric tons of CO2 per year and a total storage capacity of over 1 billion metric tons, St Johns is set to become one of the largest carbon capture sites in the world. Strategically located in the fast-growing American Southwest near several coal-fired power plants, Proton Green is well positioned to become a critical carbon sequestration hub in the region. The exceptionally well-suited geological storage structure, with its remote location, pipeline infrastructure, right of way, and Class VI storage permits (once granted), will present significant barriers to entry for competitors.

Hydrogen is steadily emerging as one of the most effective fossil fuel replacements and could become a lucrative opportunity for Proton Green as the global movement toward decarbonization and a net zero economy continues. Our processing plants are capable of producing large volumes of industrial-grade hydrogen while simultaneously sequestering the excess CO2 in underground storage basins, thereby qualifying as blue hydrogen.
The hydrogen we produce can then be sold into the California markets and will be eligible for Low Carbon Fuel Standard (LCFS) credits as we help drive the transition toward a sustainable fuel and energy source.

Proton Green will partner with government agencies, NGOs, research institutions, and startup companies to create a cutting-edge incubator and innovation center for emerging carbon-neutral technologies and processes like blue hydrogen, CO2-enhanced geothermal energy, biomass energy, and carbon fiber materials. The research center will be located in a designated Opportunity Zone in the extreme southwest corner of the property, and Proton Green will provide CO2 to support research and development activities. We are currently pursuing an opportunity to develop a bioenergy plant that will convert forest-wood waste into biofuel.

A seasoned independent oil and gas producer since 1982, Mr. Looper has extensive experience drilling and operating wells in Colorado, Kentucky, Louisiana, New Mexico, Oklahoma, Texas and Wyoming. He also has project-management experience in Botswana, Canada, South Africa and Zimbabwe. Since 1993, Mr. Looper has been focused on the development of large resource plays in West Texas at Riata Energy, Inc. and most recently in the Barnett Shale trend, where his capital providers achieved >100% rates of return. Mr. Looper is an alumnus of West Texas State University's T. Boone Pickens School of Business and participated in the Harvard Business School Executive Management Program from 2003 to 2007.

Mr. Coates is a highly experienced oil and gas professional with a career emphasis on large-scale, unconventional resource development. He is currently involved in helium development, carbon capture, oil and gas, and geothermal projects. His educational background in geology, geochemistry and engineering led to an initial career with Advanced Resources International, a domestic and international technical consulting firm at the forefront of unconventional resource development and carbon capture technology. He subsequently joined MCN Corp (now DTE Energy) in a senior management role to successfully develop a multi-TCF natural gas reserve base in the US. He also co-founded the E&P company Patrick Energy with the funding of a family office, which has led to a series of privately funded ($200MM capital) E&P companies built and sold over the past twenty years.
Encryption technologies are used to secure many applications and websites that you use daily. For example, online banking or shopping, email applications, and secure instant messaging use encryption. Encryption technologies secure information while it is in transit (e.g. connecting to a website) and while it is at rest (e.g. stored in encrypted databases). Many up-to-date operating systems, mobile devices, and cloud services offer built-in encryption, but what is encryption? How is it used? And what should you and your organization consider when using it?

What is encryption?

Figure 1: Encryption encodes (or scrambles) information, protecting its confidentiality by stopping unauthorized individuals – who don't have the key – from reading it.

Encryption encodes (or scrambles) information. Encryption protects the confidentiality of information by preventing unauthorized individuals from accessing it. For example, Alice wants to send Bob a message, and she wants to ensure only he can read it. To keep the information confidential and private, she encrypts the message using a secret key. Once encrypted, this message can only be read by someone who has the secret key to decode it. In this case, Bob has the secret key. Eve is intentionally trying to intercept the message and read it. However, the message is encrypted, and even if Eve gets a copy of it, she can't read it without acquiring the secret key. If an individual accidentally receives a message that includes encrypted information, they will be unable to read the encrypted contents without the key to decrypt the message.

How is encryption used?

Encryption is an important part of cyber security. It is used in a variety of ways to keep data confidential and private, such as in HTTPS websites, secure messaging applications, email services, and virtual private networks. Encryption is used to protect information while it is actively moving from one location to another (i.e. in transit) from sender to receiver. For example, when you connect to your bank's website using a laptop or a smartphone, the data that is transmitted between your device and the bank's website is encrypted.

Encryption is also used to protect information while it is at rest. For example, when information is stored in an encrypted database, it is stored in an unreadable format. Even if someone gains access to that database, there's an additional layer of security for the stored information.

Encryption is also used to protect personal information that you share with organizations. For example, when you share your personal information (e.g. birthdate, banking or credit card information) with an online retailer, you should make sure they are protecting your information with encryption by using secure browsing. Many cloud service providers offer encryption to protect your data while you are using cloud-based services. These services offer the ability to keep data encrypted when uploading or downloading files, as well as storing the encrypted data to keep it protected while at rest.

When properly implemented, encryption is a mechanism that you and your organization can use to keep data private. Encryption is seamlessly integrated into many applications to provide a secure user experience.

How can I use encryption?

Your organization likely already uses encryption for many applications, such as secure browsing and encrypted messaging applications.
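Under the hood, those applications perform the Alice-and-Bob exchange described above. As a minimal illustration, here is symmetric encryption using the Fernet recipe from the widely used Python cryptography package – one reasonable library choice among many, and a sketch rather than deployment guidance (real systems must also distribute and protect keys carefully):

```python
# Minimal symmetric-encryption sketch of the Alice/Bob/Eve example, using
# the Python "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the secret key Alice shares with Bob

token = Fernet(key).encrypt(b"Meet at noon")   # Alice encrypts her message
print(token)                           # what Eve sees: unreadable ciphertext

plaintext = Fernet(key).decrypt(token)         # Bob decrypts with the key
assert plaintext == b"Meet at noon"
```

Without the key, Eve cannot recover the message from the token; with it, Bob recovers the plaintext exactly. Protocols like HTTPS, discussed next, wrap this same idea in key exchange and authentication.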
If you access a website with a padlock icon and HTTPS in front of the web address, the communication (i.e. the data exchanged between your device and the website's servers) with the website is encrypted. To protect your organization's information and systems, we recommend that you use HTTPS wherever possible. To ensure that users are accessing only HTTPS-supported websites, your organization should implement the web security policy tool HTTP Strict Transport Security (HSTS). HSTS offers additional security by forcing users' browsers to load HTTPS-supported websites and ignore unsecured websites (e.g. HTTP).

Encrypted messaging applications

Most instant messaging applications offer a level of encryption to protect the confidentiality of your information. In some cases, messages are encrypted between your device and the cloud storage used by the messaging service provider. In other cases, the messages are encrypted from your device to the recipient's device (i.e. end-to-end encryption). When using end-to-end encryption services, not even the messaging service provider can read your encrypted messages. In deciding which tools to use, you need to consider both the functionality of the service and the security and privacy requirements of your information and activities. For further information, refer to Protect how you connect. Encryption is just one of many security controls necessary to protect the confidentiality of data.

What else should I consider?

Encryption is integrated into many products that are commonly used by individuals and organizations to run daily operations. When choosing a product that uses encryption, we recommend that you choose a product that is certified through the Common Criteria (CC) and the Cryptographic Module Validation Program (CMVP). The CC and the CMVP list cryptographic modules that conform to Federal Information Processing Standards. Although the CC and the CMVP are used to vet products for federal government use, we recommend that everyone use these certified products.

The CCCS recommends

When choosing a suitable encryption product for your organization, consider the following:

- Evaluate the sensitivity of your information (e.g. personal and proprietary data) to determine where it may be at risk and implement encryption accordingly.
- Choose a vendor that uses standardized encryption algorithms (e.g. CC and CMVP supported modules).
- Review your IT lifecycle management plan and budget to include software and hardware updates for your encryption products.
- Update and patch your systems frequently.

Prepare and plan for the quantum threat to cyber security. For more information, please see Addressing the quantum computing threat to cryptography (ITSE.00.017).

Encryption for highly sensitive data

Systems that contain highly sensitive information (e.g. financial, medical, and government institutions) require additional security considerations. Contact us for further guidance on cryptographic solutions for high-sensitivity systems and information: firstname.lastname@example.org.
Semiconductors are the drivers of modern electronics, and they are the main enablers of our communications, computing, energy, transport, and IoT systems, among many others. Almost every device we have around us has a semiconductor in it, so it is hard to overstate their importance in the world of technology. Today we're trying to break down the notion of semiconductors and discover what's inside this vital element and what trends are driving its development today.

A semiconductor, as the name implies, is a material whose electrical behavior lies between that of conductors and insulators. Conductors are substances that easily transmit electricity, while insulators transmit it poorly. The semiconductor industry uses silicon as its primary material. Pure silicon conducts electricity only weakly, and on its own it does not have the characteristics needed to make a useful transistor. To change this, manufacturers add impurities to the silicon crystal structure. Impurities are atoms that do not belong to the regular arrangement of the crystal lattice. By adding these impurities, manufacturers can control how easily electrons and holes move through the silicon (a numerical illustration appears at the end of this passage).

Silicon is the basis for all modern electronic devices. Transistor technology was first developed using germanium, a semiconductor with similar properties to silicon. Germanium is still used today, but silicon is much easier to work with. Because of this, silicon remains the dominant semiconductor material.

Semiconductors are classified as either intrinsic or extrinsic. Intrinsic means that there are no impurities present in the material; extrinsic means that the material has been doped to tailor its conductivity. Intrinsic semiconductors have no additional doping elements added to them, do not need to be externally treated before they conduct electricity, and are often referred to as bulk materials. Examples of intrinsic semiconductors are silicon (Si) and germanium (Ge).

Extrinsic semiconductors are those whose conductivity is set by doping. An example is gallium arsenide, which is commonly used in transistors. In doped materials, acceptor impurity atoms added to the crystal structure create states, called acceptor states, that act as electron traps and leave behind mobile positive charges (holes), making the semiconductor electrically conductive.

The IT industry cannot be separated from the development of the semiconductor industry. Examples of semiconductor devices are transistors, MOSFETs, ICs, and diodes. One of the semiconductor devices commonly used in digital (logic-based circuit) technology is the transistor. The invention of the transistor in 1947 helped second-generation computers become smaller, faster, more reliable, and more energy efficient than their predecessors. This was the era when transistors began their massive deployment, starting with Shockley's work and continuing through the birth of Fairchild Semiconductor, which is considered a pioneer among IC and transistor manufacturers.

In the early 1960s, successful second-generation computers began to emerge in business, universities, and government. These second-generation computers were fully transistorized. From here were born the next generations of computers built on LSI, VLSI, and ULSI hardware, up to supercomputers.
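To put a number on what doping does, here is the textbook mass-action relation for silicon at room temperature. The intrinsic carrier concentration used is a standard textbook figure; the donor density is an arbitrary example:

```python
# Rough illustration of how doping sets carrier concentrations in silicon,
# via the mass-action law n*p = n_i**2. n_i ~ 1e10 cm^-3 for Si at room
# temperature (textbook figure); the donor density is an arbitrary example.
n_i = 1e10          # intrinsic carrier concentration of Si [cm^-3]
N_D = 1e16          # donor (impurity) concentration [cm^-3]

n = N_D             # electrons: set almost entirely by the dopants
p = n_i**2 / n      # holes: suppressed by the same factor
print(f"{n:.0e} electrons/cm^3, {p:.0e} holes/cm^3")
# A millionfold increase in carriers from roughly 2 parts in 10 million of
# impurity atoms - which is why tiny, precisely controlled amounts of doping
# give manufacturers such fine control over conduction.
```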
The birth of computer networking technology and the Internet, both also built on semiconductor-based devices, brought IT into the modern state we know today. Semiconductors have revolutionized electronic hardware, especially since the invention of the transistor. Semiconductors make hardware more compact and give it better computing capabilities. The effect is that electronic components are now easier to obtain at affordable prices in the marketplace. This makes it easy for new developers to conduct research and innovation. LANARS provides hardware development services for creating new products and businesses, as well as for improving existing ones.

The semiconductor, commonly known as the chipset, is the most important component. Despite their small size, semiconductor chips are the brains of an electronic system. In digital devices, semiconductors are needed to increase the speed of digital signal processing, and they provide the memory for data storage. As we are now in the Industry 4.0 era, the need for semiconductor chips continues to grow. The semiconductor industry is also considered the lifeblood of digital transformation: the development of computers, the telecommunications industry, and automotive equipment – especially electric vehicles (EVs) – as well as digitalization in many sectors all require the semiconductor industry to supply the necessary resources.

In the midst of increasing demand for semiconductors, the global COVID-19 pandemic in 2020 hit almost every industry with lockdown policies. This also affected the supply of semiconductors, and the resulting shortage rippled into other industries, including computers, smart TVs, smartphones, tablets, game consoles, various electronic gadgets, and the automotive industry. On the other hand, the pandemic also increased the need for computers and gadgets, in line with school-from-home and work-from-home policies. These conditions pushed semiconductor prices upward from 2020 to the present.

As a result, in 2021 the major chipset players such as TSMC actually reaped profits from the global chipset shortage. According to a report from research firm TrendForce, the top 10 chipset manufacturers combined earned total revenue of US$127.4 billion in 2021, an increase of 48% over the previous year. For 2022, as reported by Deloitte, some observers expect semiconductor sales to grow by another 10% and to exceed US$600 billion for the first time. Semiconductors will continue to be needed by various industries, and although economic uncertainty is predicted, chipset availability is expected to recover in 2023.

Moore's Law – the prediction that the number of transistors in an integrated circuit (IC) doubles roughly every two years – is used as a reference by the semiconductor industry to set its research and development targets. This is evidenced by microprocessor capabilities that increase year after year. But even Moore's Law will eventually meet an impenetrable limit: increasing computer performance by adding transistors has so far been done by reducing the size of the transistor so that more of them fit in the same area.
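Stated quantitatively, the trend is just repeated doubling. A sketch with arbitrary round numbers, not any specific product's figures:

```python
# Illustrative Moore's-law projection: transistor counts doubling roughly
# every two years from an arbitrary 1-billion-transistor chip in 2010.
def projected_transistors(base_count, base_year, year, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2020, 2030):
    print(year, f"{projected_transistors(1e9, 2010, year):.1e}")
# 2010 1.0e+09, 2020 3.2e+10, 2030 1.0e+12 -- growth that must eventually
# collide with atomic-scale limits on how small a silicon transistor can be.
```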
A few years ago, physicist Michio Kaku noted that there is a point beyond which the silicon used to make transistors – or any substitute for it – cannot be shrunk any further. Several studies have therefore explored other materials for semiconductor development. Third-generation semiconductor materials, such as gallium nitride (GaN) and silicon carbide (SiC), promise high-temperature tolerance, high breakdown voltage, high-frequency and high-power operation, and high radiation resistance. For a long time, however, the use of these materials was limited to a narrow range of fields because of their complex processing methods and high cost. In recent years, breakthroughs in material growth and device fabrication have helped reduce the cost of third-generation semiconductor materials, enabling a wider range of applications. For example, SiC-based devices used in car inverters and GaN-based fast chargers have appeared on the market.

Semiconductor technology trends that are widely discussed as ways to improve chip capabilities also include parallel computing, quantum computing, and even protein computers that work with DNA.

A semiconductor is a material whose electrical properties lie between those of conductors and insulators. Semiconductors have brought drastic changes in humankind's technological development: from Shockley and Fairchild making transistors, to the large chipset manufacturers, to giants like Intel that use semiconductors to create technology that plays a very important role in the development of computers, gadgets, household appliances, automation, telecommunications, and so on. The growth proclaimed by Moore's Law has largely played out, and the practical limit on transistor density in a wafer is predicted to be reached as well. Various developments are therefore under way to push semiconductors further, such as the use of third-generation materials and quantum computing. Semiconductors will continue to be needed by various industries, and although economic uncertainty is predicted, chipset availability is expected to recover in 2023.
- Advances in quantum computing could help us simulate large complex molecules.
- These simulations could uncover new catalysts for carbon capture that are cheaper and more efficient than current models.
- We can currently simulate small molecules on up to a few dozen qubits but need to scale this to the order of 1 million.

Imagine being able to cheaply and easily "suck" carbon directly out of our atmosphere. Such a capability would be hugely powerful in the fight against climate change and would advance us towards the world's ambitious climate goals. Surely that's science fiction? Well, maybe not. Quantum computing may be just the tool we need to design such a clean, safe and easy-to-deploy innovation.

In 1995 I first learned that quantum computing might bring about a revolution akin to the agricultural, industrial and digital ones we've already had. Back then it seemed far-fetched that quantum mechanics could be harnessed to such momentous effect; given recent events, it seems much, much more likely. Much excitement followed Google's recent announcement of quantum supremacy: "[T]he point where quantum computers can do things that classical computers can't, regardless of whether those tasks are useful". The question now is whether we can develop the large-scale, error-corrected quantum computers that are required to realize profoundly useful applications.

The good news is we already know, concretely, how to use such fully fledged quantum computers for many important tasks across science and technology. One such task is the simulation of molecules to determine their properties, interactions and reactions with other molecules – a.k.a. chemistry – the very essence of the material world we live in. While simulating molecules may seem like an esoteric pastime for scientists, it does, in fact, underpin almost every aspect of the world and our activity in it. Understanding their properties unlocks powerful new pharmaceuticals, batteries, clean-energy devices and even innovations for carbon capture.

To date, we haven't found a way to simulate large complex molecules – and with conventional computers, we never will, because the problem grows exponentially with the size or complexity of the molecules being simulated. Crudely speaking, if simulating a molecule with 10 atoms takes a minute, a molecule with 11 takes two minutes, one with 12 atoms takes four minutes, and so on (the short script below works through the consequences). This exponential scaling quickly renders a traditional computer useless: simulating a molecule with just 70 atoms would take longer than the age of the universe (roughly 13.8 billion years).

This is infuriating, not just because we can't simulate the important molecules we find (and use) in nature – including within our own bodies – and thereby understand their behaviour, but also because there is an infinite number of new molecules that we could design for new applications. That's where quantum computers could come to our rescue, thanks to the late, great physicist Richard Feynman. Back in 1981, he recognized that quantum computers could do what would be impossible for classical computers when it comes to simulating molecules. Thanks to recent work by Microsoft and others we now have concrete recipes for performing these simulations.

A quantum catalyst to tackling climate change?

One area of urgent practical importance where quantum simulation could be hugely valuable is in meeting the SDGs – not only in health, energy, industry, innovation and infrastructure but also in climate action.
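The arithmetic behind that claim fits in a few lines of Python. This is purely the article's toy model – one minute for 10 atoms, doubling per extra atom – not a real chemistry benchmark:

```python
# Toy model from the text: a 10-atom molecule takes one minute to simulate
# and every additional atom doubles the runtime.
UNIVERSE_AGE_MINUTES = 13.8e9 * 365.25 * 24 * 60   # ~13.8 billion years

def minutes_to_simulate(n_atoms: int) -> float:
    return 2.0 ** (n_atoms - 10)

for n in (10, 20, 40, 70):
    t = minutes_to_simulate(n)
    print(f"{n} atoms: {t:.3g} min  (~{t / UNIVERSE_AGE_MINUTES:.2g} universe ages)")
```

At 70 atoms the toy model already needs over a hundred lifetimes of the universe, which is the point of the example.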
Examples include room-temperature superconductors (which could reduce the roughly 10% of energy production lost in transmission), more efficient processes for producing the nitrogen-based fertilizers that feed the world's population, and new, far more efficient batteries.

One very powerful application of molecular simulation is in the design of new catalysts that speed up chemical reactions. It is estimated that 90% of all commercially produced chemical products involve catalysts (in living systems, they're called enzymes). A catalyst for "scrubbing" carbon dioxide directly from the atmosphere could be a powerful tool in tackling climate change. Although CO2 is captured naturally, by oceans and trees, CO2 production has exceeded these natural capture rates for many decades. The best way to tackle CO2 is not to release more of it; the next best thing is capturing it. "While we can't literally turn back time, [it] is a bit like rewinding the emissions clock," according to Torben Daeneke at RMIT University.

There are known catalysts for carbon capture, but most contain expensive precious metals or are difficult or expensive to produce and/or deploy. "We currently don't know many cheap and readily available catalysts for CO2 reduction," says Ulf-Peter Apfel of Ruhr-University Bochum. Given the infinite number of candidate molecules available, we are right to be optimistic that there is a catalyst (or indeed many) to be found that will do the job cheaply and easily. Finding such a catalyst, however, is a daunting task without the ability to simulate the properties of candidate molecules. And that's where quantum computing could help. We might even find a cheap catalyst that enables efficient carbon dioxide recycling and produces useful by-products like hydrogen (a fuel) or carbon monoxide (a common source material in the chemical industry).

Quantum computing to the rescue – what will it take?

We can currently simulate small molecules on prototype quantum computers with up to a few dozen qubits (the quantum equivalent of classical computer bits). But scaling this to useful tasks, like discovering new CO2 catalysts, will require error correction and simulation on the order of 1 million qubits. It's a challenge I have long believed will only be met on any human timescale – certainly by the 2030 target for the SDGs – if we use the existing manufacturing capability of the silicon chip industry.

The path forward

At a meeting of the World Economic Forum's Global Future Councils last month, a team of experts from across industry, academia and beyond assembled to discuss how quantum computing can help address global challenges, as highlighted by the SDGs, and climate in particular. As co-chair of the Global Future Council on Quantum Computing, I was excited that we were unanimous in agreeing that the world should devote more resources, including in education, to developing the powerful quantum computing capability that could help tackle climate change, meet the SDGs more widely and much more. We enthusiastically called for more international cooperation to develop this important technology on the 2030 timescale, so that it can have an impact on delivering the SDGs, in particular on climate.

So the real question for me is: can we do it in time? Will we make sufficiently powerful quantum computers on that timeframe? I believe so. There are, of course, many other things we can and should do to tackle climate change, but developing large-scale, error-corrected quantum computers is a hedge we cannot afford to go without.
This article is republished from the World Economic Forum.
The drive to solve problems faster and more efficiently is never going to stop, and it has led to enhancements of existing technologies as well as the invention of several new ones. This hunger, combined with the competitive spirit of scientific research, has led humankind to a new era: the era of quantum computing. Quantum computers and quantum computing are as technical and complicated as they sound. Quantum computers have been in development for a long time but have yet to find practical, everyday use. Scientists believe these technological marvels will be significantly faster than your conventional desktop or even today's supercomputers. But how do quantum computers work? Read ahead to know all about quantum computing.

How do Quantum Computers work?

Quantum computers work by performing calculations on the probability of an object's state before it is measured. Classical computers perform operations on 0s and 1s – the binary states – which are definite values of a physical system. By working with the probabilities of an object's state instead, quantum computers can process far more data than classical computers.

Just as modern computing requires bits to process data, quantum computing requires qubits. A qubit is the quantum state of an object – a property that remains undefined until it is measured. Such properties include the spin of an electron, the state of a tossed coin in mid-air, or the polarisation of a photon. These quantum states can look random, but they are interrelated, or entangled. Superpositions relate mathematically to measurement outcomes, and by feeding quantum states through purpose-built algorithms we can make advances in fields never touched before. Quantum computers could help solve complex mathematical equations, improve machine-learning techniques, produce better security codes, and tackle scenarios too complex for today's machines.

Because of this potential to process data at very high speed and to solve complex equations, tech giants such as D-Wave Systems, IBM and Google are claiming to be very close to achieving quantum supremacy – showing that a programmable quantum device can solve a problem that classical computers practically cannot solve within a viable time.

The Race for Quantum Computing

D-Wave Systems is one of the leading quantum computer manufacturers and has been producing, selling and setting up quantum computers at organisations worldwide, such as the University of Southern California, Google, NASA and Los Alamos National Lab. D-Wave has already produced a 2048-qubit quantum computer and has announced a much bigger one: its fifth generation, a 5000-qubit machine named Advantage, slated for release in mid-2020. Advantage uses the company's latest Pegasus topology, which provides higher connectivity and helps in solving more complex problems than before.

On October 23, 2019, Google announced that it had achieved quantum supremacy. The company said it had successfully solved a problem that would take a considerable amount of time even on the most powerful supercomputer available today. Using a quantum computer named Sycamore, researchers at Google performed random circuit sampling – running a sequence of random operations on qubits, repeating the sequence many times, and then measuring the values of the qubits.
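As a rough illustration of what that procedure involves, here is a minimal statevector toy in Python/numpy – five qubits rather than Sycamore's 53, and generic random gates rather than Google's calibrated ones, so it shows the shape of the experiment, not its difficulty:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                  # toy register; Sycamore used 53 qubits
dim = 2 ** n

def random_unitary():
    """Random 2x2 unitary via QR decomposition with a phase fix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_1q(state, u, t):
    """Apply the 2x2 gate u to qubit t of an n-qubit statevector."""
    s = np.moveaxis(state.reshape([2] * n), t, 0).reshape(2, -1)
    return np.moveaxis((u @ s).reshape([2] * n), 0, t).reshape(dim)

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def apply_cz(state, qa, qb):
    """Apply a controlled-Z gate to qubits qa and qb."""
    s = np.moveaxis(state.reshape([2] * n), (qa, qb), (0, 1)).reshape(4, -1)
    return np.moveaxis((CZ @ s).reshape([2] * n), (0, 1), (qa, qb)).reshape(dim)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                         # start in |00000>
for _ in range(8):                     # eight layers of random operations
    for t in range(n):
        state = apply_1q(state, random_unitary(), t)
    for qa in range(0, n - 1, 2):      # entangle neighbouring pairs
        state = apply_cz(state, qa, qa + 1)

probs = np.abs(state) ** 2             # Born-rule output distribution
samples = rng.choice(dim, size=8, p=probs / probs.sum())
print([format(int(s), f"0{n}b") for s in samples])
```

At five qubits a laptop tracks all 32 amplitudes trivially; the memory needed doubles with every added qubit, which is why 53 qubits strained classical simulation.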
The researchers obtained a distribution of numbers that was close to random but still interrelated because of quantum effects. According to the team, performing these operations and all the associated calculations on the most powerful classical computing platform available would take around 10,000 years, while Sycamore completed them in 200 seconds. "With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored", wrote Google researchers John Martinis and Sergio Boixo in a Google AI blog.

But does the story stop there? Even before Google announced quantum supremacy, IBM published a report on October 21, 2019, claiming that the calculations of the 53- and 54-qubit Sycamore circuits could be done with classical algorithms within a couple of days.

IBM has also been working on its own quantum computer, which is now available through the cloud. The company has named it IBM Q System One, and organisations can pay to reserve time on the machine. Major businesses such as Goldman Sachs, Samsung and JPMorgan Chase & Co., among other big-wigs, are investing their time and wealth in System One to see how quantum computing can be used in real-life scenarios. IBM has been developing and increasing the number of qubits in IBM Q since May 2016, when it was first launched.

There has been a lot of development in this field, but we still haven't reached the stage where we can put this technology to daily use. There are many areas in which your laptop is far more powerful and efficient than a quantum computer. Even with continuous advancements, practical quantum computers remain a thing of the future; it will take at least a decade – if not more – for them to replace the computers we use today. Fitting in enough qubits to solve any problem thrown at them will take years of development. But if we do build a practical quantum computer, it could track down any information available, decode the security measures of any platform, mine cryptocurrency with ease, and search for a piece of information across a million databases within seconds. The possibilities are endless and might even be beyond our imagination, but the technology needs to evolve, and only time will tell what it has to offer.
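The qubit ideas this article leans on – amplitudes, superposition and interference – can also be made concrete in a few lines of numpy; this is a textbook illustration, not any vendor's API:

```python
import numpy as np

# A qubit as a 2-component complex vector; measurement probabilities are
# the squared magnitudes of the amplitudes (the Born rule).
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

superposed = H @ ket0                     # equal parts 0 and 1
print(np.abs(superposed) ** 2)            # -> [0.5 0.5]

# A second Hadamard makes the two paths interfere: the |1> amplitudes
# cancel and the qubit returns to |0> with certainty.
print(np.abs(H @ superposed) ** 2)        # -> [1. 0.]

# Two qubits: a joint state with four amplitudes present at once.
pair = np.kron(superposed, superposed)
print(np.abs(pair) ** 2)                  # -> [0.25 0.25 0.25 0.25]
```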
Nanoscale discovery could help cool overheating in electronics

A team of physicists at CU Boulder has solved the mystery behind a puzzling phenomenon in the nano realm: why some ultra-small heat sources cool faster if you move them closer together. The results, published today in the journal Proceedings of the National Academy of Sciences (PNAS), could one day help the tech industry design faster electronic devices that overheat less.

"Oftentimes, heat is a difficult consideration in electronics design. You build a device and then find it heats up faster than you want," said study co-author Joshua Knobloch, postdoctoral research associate at JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST). "Our goal is to understand the fundamental physics involved so that we can design future devices to effectively manage heat flow."

The research began with an unexplained observation: in 2015, researchers led by physicists Margaret Murnane and Henry Kapteyn at JILA were experimenting with metal bars several times thinner than the width of a human hair, laid on a silicon base. When they heated these bars with a laser, something strange happened. "They behaved in a very counterintuitive manner," Knobloch said. "These nanoscale heat sources don't usually dissipate heat efficiently. But if you pack them close together, they cool much faster."

Now the researchers know why. In the new study, they used computer simulations to track the passage of heat from their nanoscale bars. They found that when the heat sources were placed closer together, the energy vibrations they produced began to bounce off each other, dispersing the heat and cooling the bars.

The group's findings highlight a major challenge in designing the next generation of tiny devices, such as microprocessors or quantum computing chips: at very small scales, heat doesn't always behave the way you expect it to.

Atom by atom

Heat transmission in devices matters, the researchers added. Even tiny flaws in the design of electronics like computer chips can let temperature build up, increasing wear and tear on a device. As tech companies strive to produce ever smaller electronic devices, they will need to pay more attention than ever to phonons – the vibrations of atoms that carry heat in solids.

"Heat flow involves very complex processes, which makes it difficult to control," Knobloch said. "But if we can understand how phonons behave on a small scale, then we can tailor their transport, which allows us to build more efficient devices."

To do this, Murnane and Kapteyn and their team of experimental physicists joined forces with a group of theorists led by Mahmoud Hussein, professor in the Ann and H.J. Smead Department of Aerospace Engineering Sciences, whose group specializes in simulating, or modeling, the motion of phonons. "At the atomic scale, the very nature of heat transfer emerges in a new light," said Hussein, who also holds a courtesy appointment in the Department of Physics.

The researchers essentially recreated their experiment from several years earlier, but this time entirely on a computer. They modeled a series of silicon bars, laid side by side like the slats of a railroad track, and heated them. The simulations were so detailed, Knobloch said, that the team could track the behavior of every atom in the model – millions in all – from start to finish.
"We were really pushing the memory limits of the Summit supercomputer at CU Boulder," he said.

Direct the heat

The technique paid off. The researchers found, for example, that when they spread their silicon bars far enough apart, heat tended to escape from the bars in a predictable way: the energy leaked into the material below and dissipated in all directions. When the bars were moved closer together, however, something else happened. As the heat from these sources dispersed, it was effectively forced to flow more intensely away from the sources, like a crowd of people in a stadium jostling against each other as they push toward the exit. The team called this phenomenon "directional heat channeling". "This phenomenon increases the transport of heat down into the substrate and away from the heat sources," Knobloch said.

The researchers suspect that engineers may one day exploit this unusual behavior to gain better control over how heat flows through small electronic devices, directing that energy along a desired path instead of letting it spread unchecked.

For now, the researchers see the study as an example of what scientists from different disciplines can achieve when they work together. "This project was an exciting collaboration between science and engineering, where the advanced computational analysis methods developed by Mahmoud's group were essential for understanding the behavior of new materials discovered earlier by our group using new extreme-ultraviolet quantum light sources," said Murnane, also a professor of physics.

CU Boulder's other co-authors on the new research include Hossein Honarvar, postdoctoral researcher in aerospace engineering sciences and at JILA, and Brendan McBennett, a graduate student at JILA. Former JILA researchers Travis Frazer, Begoña Abad and Jorge Hernandez-Charpak also contributed to the study.

Reference: "Directional thermal channeling: Phenomenon triggered by tight compression of heat sources", Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2109056118
Sodium is a chemical element with the symbol Na (from the Latin natrium) and atomic number 11. It is a soft, silver-white, highly reactive metal: an alkali metal in Group 1 (first group, third period) of the periodic table, with an atomic mass of 22.9897 u, a low melting point and a relative density of 0.97.

At standard temperature and pressure, sodium is a soft silvery metal that combines with oxygen in the air to form grayish-white sodium oxide unless it is immersed in oil or inert gas – the conditions under which it is usually stored. Sodium metal can easily be cut with a knife and is a good conductor of electricity and heat, because it has only one electron in its valence shell, resulting in weak metallic bonding. In the upper atmosphere, sodium atoms are ionized mostly by charge transfer with the ambient NO+ and O2+ ions, with a small contribution from solar photoionization (Encyclopedia of Atmospheric Sciences, 2003). Down at the level of single atoms, different electron-microscopy imaging methods can directly observe oxygen atoms and sodium cations.

A sodium atom has 11 electrons arranged in three shells (2, 8, 1), so a shell diagram needs three rings around the symbol Na. Most atoms do not have eight electrons in their valence shell; a neutral sodium atom has just one valence electron and – since a neutral atom carries an equal number of protons and electrons – 11 protons balanced by 11 negatively charged electrons circling the nucleus. The sodium ion Na+ still has 11 protons but only 10 electrons, which leaves eight electrons in its outermost occupied shell. The octet rule, a consequence of trends in energies, is useful in explaining why atoms form the ions they do: a sodium atom in the presence of a chlorine atom readily gives up its lone valence electron. Sodium chloride, NaCl – ordinary table salt – consists of the ions Na+ and Cl−; it occurs abundantly in nature and is extracted from salt mines or by evaporation.

A classic exercise ties sodium's bulk density to the size of its atoms. Given: density ρ = 0.97 g/cm³ and molar mass M = 23 g/mol, find the radius r of a sodium atom. Formulas: (1) ρ = M·n / (a³·N_A), where a is the unit-cell edge and N_A is Avogadro's number; (2) for a body-centred cubic (bcc) unit cell, r = √3·a / 4. For a bcc lattice, the number of atoms per unit cell is n = 2.
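A short Python script carries out that arithmetic, using exactly the values quoted in the exercise:

```python
# Body-centred cubic estimate of the sodium atomic radius, using the
# exercise's numbers: rho = M*n / (a^3 * N_A)  and  r = sqrt(3)*a/4.
N_A = 6.022e23      # Avogadro's number, atoms/mol
M = 23.0            # molar mass of sodium, g/mol
rho = 0.97          # density, g/cm^3
n = 2               # atoms per bcc unit cell

a = (n * M / (rho * N_A)) ** (1 / 3)        # unit-cell edge in cm
r = 3 ** 0.5 * a / 4                        # bcc atomic radius in cm
print(f"a = {a * 1e8:.2f} angstrom, r = {r * 1e8:.2f} angstrom")
```

The result, about 1.86 Å, is close to sodium's tabulated metallic radius.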
A related laboratory routine: a 0.1 mol/L sodium hydroxide (NaOH) titrant can be standardized against KHP, because KHP has one acidic hydrogen atom and reacts with NaOH on a 1:1 stoichiometric basis.

The sodium spectrum is dominated by the bright doublet known as the sodium D-lines, at 588.9950 and 589.5924 nanometers. From the energy-level diagram it can be seen that these lines are emitted in a transition from the 3p to the 3s levels – the basis of the sodium Zeeman effect.

When we write sodium's electron configuration, we place all 11 electrons in orbitals around the nucleus. The first two electrons go in the 1s orbital; since 1s can hold only two electrons, the next two go in the 2s orbital; the next six electrons fill the 2p orbital; and the one remaining electron occupies the 3s orbital, giving 1s² 2s² 2p⁶ 3s¹.

Because 6.022 × 10²³ atoms of sodium weigh 23 grams, one atom weighs 23/(Avogadro's number) grams, i.e. about 3.8 × 10⁻²³ g. In covalent bonds, two atoms share pairs of electrons, while in ionic bonds one atom transfers electrons to another: a neutral sodium atom bonds ionically by losing its single valence electron.

Sodium (Na) is a chemical element of the alkali metal group (Group 1 [Ia]) of the periodic table – a very soft, silvery-white metal. It is the most common alkali metal and the sixth most abundant element on Earth, comprising 2.8 percent of Earth's crust. In summary:

Name: Sodium
Symbol: Na
Atomic number: 11
Atomic mass: 22.98977 amu
Melting point: 97.72 °C (370.87 K, 207.9 °F)
Boiling point: 883 °C (1156 K, 1621 °F)
Protons/electrons: 11
Neutrons: 12
Classification: alkali metal
Crystal structure: cubic
Density at 293 K: 0.971 g/cm³
Color: silvery

A sodium atom, then, has 11 protons and 12 neutrons in its nucleus and 11 electrons circling it. Like other light atoms such as carbon, sodium forms inside stars that are beginning to run out of fuel, and it scatters all over space when such a star explodes as a supernova. Sodium is soft, and you can cut it with a knife.

Counting atoms means working with Avogadro's number: 1 mole of atoms = 6.022 × 10²³ atoms. To give an idea of how large this number is, one mole of pennies would be enough money to pay all the expenses of every country on Earth for about the next billion years. Converting between a count of atoms and moles is simple dimensional analysis: moles = number of atoms / Avogadro's number.
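The same constant settles both conversions used here; a small sketch (the worked example that follows uses the same numbers):

```python
N_A = 6.022e23                                # atoms per mole

# Mass of one sodium atom, completing the arithmetic above:
print(f"{23.0 / N_A:.3e} g per atom")         # -> 3.819e-23 g

def atoms_to_moles(n_atoms: float) -> float:
    """Dimensional analysis: particle count divided by Avogadro's number."""
    return n_atoms / N_A

print(f"{atoms_to_moles(9.76e12):.4g} mol")   # -> 1.621e-11 mol
```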
For example: 9.76 × 10¹² atoms Na × (1 mol Na / 6.022 × 10²³ atoms Na) = 1.621 × 10⁻¹¹ mol Na. To get the mass in grams, multiply the moles of Na by the molar mass (23 g/mol). One last exercise: an atom of sodium-23 (Na-23) has a net charge of +1; identify the number of protons, neutrons and electrons in the atom. The atomic number gives 11 protons; the mass number minus the atomic number gives 23 − 11 = 12 neutrons; and the +1 charge means one electron fewer than protons, so 10 electrons.
When looking back into the deep past of the Universe – which means looking out over vast cosmological distances of space – we observe a peculiar set of galaxies emitting tremendous amounts of energy. These early galaxies, known variously as quasars, blazars, radio galaxies and radio-loud quasars, are all classified as active galactic nuclei. They are among the most energetic phenomena in the universe, as the name blazar already suggests.

Active galactic nuclei represent a confirmation of physicist Nassim Haramein's prediction that black holes are the spacetime structure that forms the seed around which galaxies and stars form. Indeed, it is now widely understood that the early formation of galaxies, producing active galactic nuclei, is driven by supermassive black holes – black holes of upwards of a million to a billion solar masses. The super-anatomy of these central galactic black holes is as intriguing as the enigmatic beacons they form in the deep field of space. Although all major galaxies probably host a supermassive black hole in their central region – since this is the structure that initiates galaxy formation in the first place – active galactic nuclei are thought to represent an early phase of this process, when the supermassive black holes were extremely active, emitting large amounts of energy (and probably matter as well) and forming the first galaxies. Additionally, as a consequence of accreting the pre-galactic material, massive amounts of matter were both gravitated into the central black holes and emitted from their poles. The inflowing matter forms an ultra-hot accretion disc around the equatorial region of the black hole, while relativistic jets (charged particles, or electron-positron plasma, moving at relativistic speeds) stream along the axis of rotation and can extend up to hundreds of thousands of light years.

"The implied alignment of the spin axes of massive black holes that give rise [to] the radio jets suggest the presence of large-scale spatial coherence in angular momentum" – A. Taylor & P. Jagannathan

These extremely energetic and massive structures are readily identified in deep-space images collected in the radio band of the electromagnetic spectrum. The scale of observation is grand: light is gathered from numerous galaxies across several million parsecs of space. Equally, the instrumentation used to gather light from such distant and vast sources is colossal – think of the Arecibo Radio Telescope, featured in films such as Contact, to get an idea of how massive these telescopes can be. One telescope under proposal, the Square Kilometre Array, will be one of the largest scientific instruments ever constructed, its collecting area essentially a square kilometre. When completed, this telescope will help determine fundamental cosmological parameters and probe the earliest epochs of galaxy formation.

In a recent study using the Giant Metrewave Radio Telescope, South African astronomers made a remarkable discovery when analyzing the alignment of the spin axes of 64 galaxies. The orientation of the rotation axis of an active galactic nucleus is directly observable because of the long plasma jets streaming from the poles of the central supermassive black hole, which emit strongly at radio frequencies.
Reported in the Monthly Notices of the Royal Astronomical Society, the team of astrophysicists analyzed the orientation of the radio-jet position angles and found that a surprisingly large number of the supermassive black holes had aligned spin axes. Statistical analysis revealed only a 0.1% probability of such an alignment occurring by chance (a toy version of this kind of significance test is sketched below) – strongly indicating that some as yet unseen influence is producing coherence among cosmological-scale objects. Moreover, this may imply that conditions during the earliest epochs of galactic formation deviated from complete isotropy – that is, from a perfectly uniform distribution of matter.

It has long been presumed that the universe is homogeneous and isotropic (the same in all locations), with no identifiable axis or orientation; this is known as the cosmological principle. Yet one of the 20th century's greatest minds, Kurt Gödel, provided an exact solution of the Einstein field equations that described a rotating universe. Commenting on Gödel's work, physicist Stephen Hawking said: "These models could well be a reasonable description of the universe that we observe, however observational data are compatible only with a very low rate of rotation. The quality of these observations improved continually up until Gödel's death, and he would always ask 'is the universe rotating yet?' and be told 'no, it isn't.'"

More recently, several findings have suggested that the universe is indeed not entirely homogeneous and isotropic. Examples include the so-called axis of evil identified in analyses of the microwave background radiation; dark flow; Shamir's report on the Sloan Digital Sky Survey showing that left-twisted galaxies are much more common than right-swirling ones; and structural mapping such as the BOSS Great Wall and Laniakea.

While the strong correlation of spin alignment among multiple supermassive black holes across cosmological distances may seem puzzling – under standard presumptions there should be little to no interaction between galactic nuclei across such vast distances – Haramein has long described dynamics and properties of spacetime that would naturally produce the correlated orientation and entanglement of objects observed in this latest study. Haramein has explained the structural and geometric properties of space and matter from the smallest to the largest scale, and it is in considering the largest-scale structure, the universe itself, that we glean an understanding of how and why these vast arrays of galaxies are uniformly aligned in their axes of rotation. Namely, just as indicated by the "axis of evil", dark flow, the great walls and great voids, the universe is not isotropic but has a definite orientation. Haramein has identified this large-scale structure as a double-toroidal, counter-rotating geometry. Thus phenomena like dark flow and the seemingly accelerated expansion of space are observed – and, as in the most recent discovery, so is a strong alignment of galaxies. The reason: the uniform spin of the universe, which has a strong correlating (entangling) effect on objects that are uniformly affected by the Coriolis forces of the spinning structure. Spin dynamics naturally produce strong coherence.
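The paper's statistics are more involved, but the logic of such a significance claim can be sketched generically: measure how concentrated the 64 jet position angles are, then ask how often 64 uniformly random axes would look at least that concentrated. The angles below are synthetic stand-ins, not the survey's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def concentration(angles_deg):
    """Mean resultant length of axial data: angles are doubled because a
    jet axis at 10 degrees is the same axis as one at 190 degrees."""
    theta = 2 * np.deg2rad(angles_deg)
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

observed = rng.normal(90, 25, size=64) % 180       # synthetic clustered axes
r_obs = concentration(observed)

# Null hypothesis: 64 position angles drawn uniformly over 0-180 degrees.
null = np.array([concentration(rng.uniform(0, 180, 64)) for _ in range(20_000)])
p_value = (np.sum(null >= r_obs) + 1) / (len(null) + 1)
print(f"concentration R = {r_obs:.2f}, Monte Carlo p = {p_value:.2g}")
```

A p-value of order 0.1% or below is what "unlikely to be a chance alignment" means in this context.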
From this theory, we see that spin is not the result of matter accretion in the early universe; rather, it is the intrinsic spin and high curvature of spacetime that engender the gravitational accretion of matter into the structures we observe. Since spin "came first", we would expect a remarkably high degree of correlation among the spin axes of primordial active galactic nuclei. In the paper The Origin of Spin: A Consideration of Torque and Coriolis Forces in Einstein's Field Equations and Grand Unification Theory, Haramein and Elizabeth Rauscher evaluate the inclusion of torque and Coriolis effects in Einstein's field equations of spacetime geometry (gravity). The main result is that spin is an intrinsic characteristic of spacetime itself, explaining galactic formations, polar jets, accretion discs, spiral arms and galactic halos without the need for exotic dark-matter constructs. Remarkably, this is an instrumental facet of a grand unification theory, as the torque and Coriolis effects of spacetime produce the bodies and particle interactions observed at the atomic and hadronic scale.

With further consideration, could there be additional forces that preserve such strong alignment over time? For instance, galactic magnetic field interactions, which have been observed at cosmological scales, could play a role in stabilizing the alignment of the polar radio jets of the supermassive black holes and maintaining the anisotropy over long periods. Instruments such as the Square Kilometre Array will allow the study of galactic magnetic field interactions, to see to what degree they are involved in large-scale galactic interactions.

There is another important interaction that may be involved in the strong correlation of the observed spin axes, and like the intrinsic spin of spacetime described by Haramein, it is an intriguing spacetime geometrical object. Known technically as Einstein-Rosen bridges (ER bridges), after the two physicists who first described their properties through maximally extended Schwarzschild solutions of Einstein's field equations, we know them more colloquially as wormholes. Haramein has long described how the black holes at the hearts of stellar and galactic objects are connected in a vast spacetime wormhole network – meaning that black holes will be entangled across vast spatial and temporal distances, much like what has been observed in the radio-jet spin alignment. Interestingly, more recent advances in unified physics have equated spacetime wormholes with the phenomenon of quantum entanglement, summarized by the statement that Einstein-Rosen bridges produce Einstein-Rosen correlations – expressed concisely as ER = EPR. This means that spacetime geometry entangles not only astronomical-scale black holes but miniature ones as well (what are referred to as fundamental particles). What we are observing in this latest study may very well be quantum entanglement at a cosmological scale, as a result of the fluid dynamics of spacetime, linking together the connected universe.
How do you stop light in midflight and hold on to it – even for a fraction of a second? This ability could be crucial to future quantum optical systems such as secure communications or new kinds of information technologies. A group led by Dr. Ofer Firstenberg at the Weizmann Institute of Science recently demonstrated a method in which individual particles of light – photons – are trapped and released on demand, in a way that might one day serve as memory for quantum information. A description of their quantum optical memory was recently published in Science Advances.

Photons can carry information in the same way that electrons do, explains Firstenberg, who is in the Institute's Physics of Complex Systems Department. In addition, they can travel long distances – for example, in optical fibers – without losing that information; so in future quantum memory and information technologies, photon-based systems may be better than electronic ones for certain kinds of communication and remote sensing. Like electronic systems, photon-based systems need to package and synchronize multiple bits of information, and to create such "photon packages" the timing of the photons must be controlled. Existing devices – photon sources – can shoot single photons, but they do so randomly: there is no way to predict exactly when a photon will leave the source or how much time will elapse until the next one. One way to deal with this lack of control is to capture the photons, hold them in one place and release them on demand – that is, to temporarily store particles of light.

Although Firstenberg and his group are not the first to store photons, they are the first to do so in a way that works at room temperature and is relatively fast, very efficient and noiseless (introducing no distortion into the information). They call their system FLAME, for Fast Ladder Memory. It consists of laser sources and a small amount of pure atomic gas – in this case, of the element rubidium. The electrons of the rubidium atoms act as the "photon memory", and strong laser pulses are used for the writing and reading processes. The flying photons are first stored in electrons that have been excited – that is, the electrons' orbit around the nuclei moves out a notch. Then, some tens of nanoseconds later – long enough to synchronize the output from many fast photon sources (a toy model below shows why even a short-lived memory helps) – the memory is read, returning the electrons to their ground state and the photons to their flight.

FLAME, the scientists explain, is considered almost completely free of noise – the unwanted disturbances that often plague such systems – because what goes in is what comes out. "The photons that are released from the electrons are identical to those we put in – with the exact same properties and propagation direction. So something like one in 10,000 might be a photon we did not put there. As a quantum memory, the system is fantastic," says PhD student Ran Finkelstein, who led this study together with Dr. Eilon Poem in Firstenberg's lab. These findings were published in Science Advances, together with the results of similar experiments conducted at Oxford University, UK. Today the experimental setup takes up a large table – mostly covered in lasers, mirrors and lenses – but the actual trapping takes place in a container the size of a thumb.
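Why does a memory lasting only tens of nanoseconds help with synchronization? A toy probability model makes it clear. All numbers here are invented for illustration – N sources that each deliver a photon with probability p per clock cycle, and a memory that can hold a photon for m cycles:

```python
# Toy model of memory-assisted synchronization of probabilistic photon sources.
# Hypothetical numbers: p = chance a source fires in a given clock cycle,
# N = number of sources to synchronize, m = cycles the memory holds a photon.
p, N, m = 0.05, 4, 20

no_memory = p ** N                        # all N must fire in the same cycle
with_memory = (1 - (1 - p) ** m) ** N     # each must fire once within m cycles

print(f"without memory: {no_memory:.2e} per attempt")   # -> 6.25e-06
print(f"with memory:    {with_memory:.2e} per attempt") # -> ~1.7e-01
```

Even a short-lived memory turns a one-in-160,000 coincidence into a better-than-one-in-six event in this toy setting.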
Eventually, the scientists hope to miniaturize the process: an atomic gas containing billions of atoms can be held in a sealed space of one cubic millimeter, and since the atoms return to their original state, it can be reused almost indefinitely. "We need only three elements – a photon source, a contained gas cloud and a strong laser," says Finkelstein. "This is not a delicate system that works only in ultrahigh vacuum or at very low temperatures. Eventually we'll be able to insert a system like this in something the size of a cell phone."

Farther in the future, the idea of using photons to convey information in such processes as quantum computing, communications or sensing could involve one of the stranger aspects of quantum physics – a phenomenon known as entanglement. Famously called "spooky action at a distance", entanglement means that a change to one of two entangled particles results in an instantaneous change in the other – information is somehow shared non-locally (that is, there is no way it could have been passed from one to the other by standard means). "If the trapped photons were first entangled with other photons some distance away, this would be quantum communication in the true sense of the word – really based on principles of quantum mechanics that we can't observe in the everyday world," says Poem. Quantum communication, if it could be developed, would be almost impossible to tamper with, and researchers therefore believe it could be especially useful for new kinds of encryption.

Firstenberg and his group plan to test entangled photons in the FLAME setup, and they have other ideas as well for new experiments with their quantum optical system. For example, they intend to create more complex components, such as logic gates for the information carried by the stored photons. "While we still don't know which future quantum information systems will prevail," says Firstenberg, "there are some things for which we know photons are best. For example, the recent discovery of gravitational waves in a distant galaxy relied on powerful optical sensors. Our communications are already sent by light waves through thin optic fibers; photon 'quantum bits' can travel in similar fibers. So quantum memory systems based on single photons may have applications in the not-too-distant future."

True quantum information processing with photons may be in the distant future, but the current research in Firstenberg's lab on efficient, noiseless optical quantum memory is bringing that future closer. Photons move at, well, the speed of light, and they are easily destroyed – each and every one of us constantly destroys photons as our eyes take in and absorb light. And they are extremely faint, so it takes a very fine "net" to catch them, even for a fraction of a second. So why do scientists search for ways of trapping and using single photons? Photons can be varied and manipulated in ways that electrons cannot, and if left undisturbed they can travel through transparent materials or vacuum practically forever without losing their strength. Since single photons obey the laws of quantum mechanics, researchers hope to apply some of that "quantum weirdness" to create new types of computation, memory and, especially, communications. Several ideas for using photons to secure communications have been suggested.
If a single photon were used as a "key", for example, anyone trying to intercept the transmission would destroy that key. Similarly, if photons at either end were entangled, a change in the photon at the receiving end would alert the recipient that tampering had occurred.

Dr. Ofer Firstenberg's research is supported by the Sir Charles Clore Research Prize; the Laboratory in Memory of Leon and Blacky Broder, Switzerland; and the European Research Council.
Quantum superposition has been used to compare data from two different sources more efficiently than is possible, even in principle, on a conventional computer. The scheme, called "quantum fingerprinting", has been demonstrated by physicists in China. It could ultimately lead to better large-scale integrated circuits and more energy-efficient communication.

Quantum fingerprinting offers a way of minimizing the amount of information that must be transferred between physically separated computers that are working together on a problem. It involves two parties – Alice and Bob – each sending a file containing n bits of data to a third-party referee, whose job is to judge whether or not the two files are identical. A practical example could be a security system that compares a person's fingerprint to a digital image.

Proposed theoretically in 2001, quantum fingerprinting makes this comparison exponentially more efficiently than is possible using conventional computers. While the only way to ensure a complete comparison is to send the two files in their entirety, a reasonably accurate comparison can be achieved classically by sending just the square root of the number of bits. Quantum mechanics allows comparisons with even less data, because a quantum bit (qubit) of information can exist not just as a zero or a one but, in principle at least, in an infinite number of intermediate states. The vast increase in the number of possible combinations of states for a given number of qubits means that the number of physical bits that need to be transmitted scales only logarithmically with the number of bits in the two files – an exponential reduction in transmitted data compared with classical algorithms.

The original proposal for quantum fingerprinting involved using log n highly entangled qubits, which Norbert Lütkenhaus of the University of Waterloo in Canada says is still many more qubits than today's technology can provide. In 2014 he and Juan Miguel Arrazola, now at the National University of Singapore, unveiled a more practical scheme. In it, Alice and Bob encode their n bits in the optical phases of a series of laser pulses and send those pulses to a beam splitter (the referee). The pairs of pulses arrive at the beam splitter one at a time: if the two pulses have the same phase they exit from one port, whereas opposite phases cause light to leave from a second port. The two files are judged identical if there is no signal at the second port (the toy simulation below illustrates the idea).

The ramp-up in efficiency comes from the fact that each pulse can be made from a tiny fraction of a single photon: on average the pulses contain less than one photon, which is achieved by attenuating the laser light, so n pulses can be encoded using just log n photons. As Lütkenhaus points out, the number of photons cannot be made arbitrarily small, because there must be a reasonable chance that a photon is detected when the phases are different – otherwise the referee cannot reach the right answer about whether the files are identical. "The scheme gives us an asymptotically accurate result," he says. "The more photons I put in, the closer I get to the black and white probability."

Last year, Lütkenhaus and Arrazola, working with Hoi-Kwong Lo, Feihu Xu and other physicists at the University of Toronto, put the scheme into practice by modifying a quantum-key-distribution system sold commercially by the firm ID Quantique in Geneva.
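A Monte Carlo toy of the beam-splitter comparison shows why a silent "difference" port signals identical files. The parameters (mean photon number, error fraction) are invented for illustration, and real implementations first spread the files into error-correcting codewords, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(2)

mu = 0.01            # mean photon number per pulse (well below one photon)
alpha = np.sqrt(mu)  # coherent amplitude; bit 0 -> +alpha, bit 1 -> -alpha

def clicks_at_difference_port(bits_a, bits_b):
    """Detector clicks at the port that lights up only for phase differences."""
    amp_a = (1 - 2 * bits_a) * alpha
    amp_b = (1 - 2 * bits_b) * alpha
    n2 = np.abs(amp_a - amp_b) ** 2 / 2      # mean photon number at port 2
    return rng.random(len(n2)) < 1 - np.exp(-n2)   # Poissonian click chance

n = 100_000
a = rng.integers(0, 2, n)
identical = a.copy()
flips = rng.random(n) < 0.01                 # corrupt 1% of the bits
different = a ^ flips

print("identical files:", clicks_at_difference_port(a, identical).sum(), "clicks")
print("differing files:", clicks_at_difference_port(a, different).sum(), "clicks")
```

Identical files produce essentially no clicks at the difference port, while a 1% disagreement produces a clear handful – even though each pulse carries only a hundredth of a photon on average.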
They showed that they could match files as large as 100 megabits using less information than is possible with the best-known classical protocol. They did admit, however, that their scheme, while more energy-efficient, took more time to carry out.

Now a group led by Jian-Wei Pan and Qiang Zhang of the University of Science and Technology of China in Hefei has beaten not only the best existing classical protocol but also the theoretical classical limit (which is some two orders of magnitude lower). The researchers did so by using more tailor-made equipment: in particular, they employed superconducting rather than standard avalanche photon detectors, which reduced the number of false-positive signals from the beam splitter and so improved the accuracy of the yes/no outputs, and they designed a novel kind of interferometer. Pan and colleagues successfully compared two roughly two-gigabit video files by transmitting just 1300 photons along 20 km of spooled fibre-optic cable, about half of what would be needed classically. Next, they plan to test their system by placing Alice, Bob and the referee at different points in a city such as Shanghai.

Despite Pan's demonstration, Lütkenhaus thinks that quantum fingerprinting probably won't be commercialized, because its superiority over classical systems depends on fairly artificial conditions, such as the referee being unable to talk back to Alice and Bob. However, he says that the research "opens the door" to other, potentially more useful, applications. One example is database searching in which the searcher doesn't have access to the whole database, while the owner of the database can't see the search terms. "For this, we have made a protocol but not the technology," he says. The work is reported on the arXiv preprint server.
While the word "quantum" has only started trending in the technology space during the last decade, many existing technologies already rely on our understanding of the quantum world – lasers, MRI imaging, electronic transistors and nuclear power among them. The reason quantum has become so popular lately is that researchers have become increasingly good at manipulating individual quantum particles (photons of light, electrons, atoms) in ways that weren't possible before. These advances let us harness more explicitly the unique and weird properties of the quantum world, and they could launch yet another quantum technology revolution in areas like sensing, computation and communication.

What's a Quantum Computer?

The power of quantum computers comes chiefly from the superposition principle. A classical bit can only be in a 0 or 1 state, while a quantum bit (qubit) can exist in several combinations of 0 and 1 states. When one measures the qubit, it collapses into just one of these combinations, each combination having a specific probability of occurring. While two classical bits can only be in one of four combinations at a time, two quantum bits can exist in all these combinations simultaneously before being observed. Qubits can therefore hold more information than classical bits, and the amount grows exponentially with each additional qubit: twenty qubits can already hold a million values simultaneously (2^20), and 300 qubits can represent more values than there are particles in the universe (2^300).

However, to harness this potential processing power, we must understand that probabilities in quantum mechanics do not work like conventional probabilities. The probability we learned about in school allows only for numbers between 0 and 1. Probabilities in quantum mechanics, by contrast, behave as waves with amplitudes that can be positive or negative – and, just like waves, quantum probabilities can interfere, reinforcing each other or cancelling each other out.

Quantum computers solve computational problems by harnessing such interference. A quantum algorithm choreographs a pattern of interference in which the combinations leading to a wrong answer cancel each other out, while the combinations leading to the correct answer reinforce each other. This process gives the computer a massive speed boost. We only know how to create such interference patterns for particular computational problems, so for most problems a quantum computer is no faster than a conventional one. One problem where quantum computers are much faster than classical ones, however, is finding the prime factors of very large numbers.

How Quantum Computers Threaten Conventional Cryptography

Today's digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods of encrypting data is RSA (Rivest-Shamir-Adleman, after the surnames of the algorithm's designers). RSA protocols encrypt messages with a key that results from multiplying two very large numbers together: only someone who knows the values of these two numbers can decode the message. RSA security relies on a mathematical asymmetry: multiplying two large numbers is computationally easy, but the opposite process – figuring out which large numbers were multiplied – is extremely hard, if not practically impossible, for a conventional computer.
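A toy version of RSA in pure Python shows that asymmetry. The primes here are absurdly small so that the "attack" is instant; real keys use primes hundreds of digits long:

```python
# Toy RSA with tiny primes -- insecure by construction, for illustration only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

message = 1234
cipher = pow(message, e, n)  # anyone who knows (n, e) can encrypt
plain = pow(cipher, d, n)    # only the holder of d can decrypt
assert plain == message

# The attacker's task: recover p and q from n alone. Trial division is
# instant for n = 3233 but hopeless for the 2048-bit moduli used in practice.
attacker_p = next(k for k in range(2, n) if n % k == 0)
print(attacker_p, n // attacker_p)       # -> 53 61
```

Everything the attacker needs hides in the difficulty of that last loop; Shor's algorithm, run on a large enough quantum computer, would collapse it.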
However, in 1994 the mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially more quickly than a conventional computer, and could thus break RSA encryption within hours or days.

While practical quantum computers are likely decades away from implementing Shor's algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety. In combination with private-key systems like AES, RSA encrypts most of the traffic on the Internet. Breaking RSA means that emails, online purchases, medical records, company data and military information, among much else, would all become more susceptible to attacks from malicious third parties. Quantum computers could also crack the digital signatures that ensure the integrity of updates to apps, browsers, operating systems and other software, opening a path for malware.

This security threat has led to heavy investment in new quantum-resistant encryption. Existing private-key systems used in the enterprise telecom sector, like AES-256, are already considered quantum-resistant. However, even if these methods are secure now, there is no guarantee that they will remain secure in the future: someone might discover a way to crack them, just as happened with RSA.

Quantum Key Distribution and its Impact on the Telecom World

Given these risks, arguably the most secure way to protect data and communications is to fight quantum with quantum: protect your data from quantum-computer hacking by using security protocols that harness the power of the laws of quantum physics. That's what quantum key distribution (QKD) does. QKD uses qubits to generate a secret cryptographic key protected by the phenomenon of quantum state collapse: if an attacker tries to eavesdrop and learn information about the key, they will distort the qubits irreversibly. The sender and receiver will see this distortion as errors in their qubit measurements and know that their key has been compromised (a sketch below shows the textbook version of this effect).

Quantum-safe encryption will enter people's day-to-day lives through upgrades to laptops, phones, browsers and other consumer products. Most of the burden of quantum-safe communication, however, will be carried by the businesses, governments and cloud service providers that must design and install these systems. It's a hugely complex change, on par with upgrading internet communications from IPv4 to IPv6. Even if practical quantum computers are not yet available, it's essential to begin investing in these changes, as Toshiba Chief Digital Officer Taro Shimada explains: "Sectors such as finance, health and government are now realizing the need to invest in technology that will prepare and protect them for the quantum economy of the future. Our business plan goes far deeper and wider than selling quantum cryptographic hardware. We are developing a quantum platform and services that will not only deliver quantum keys and a quantum network but ultimately enable the birth of a quantum internet." Toshiba expects the QKD market to grow to approximately $20 billion worldwide in FY 2035.
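A minimal simulation of the textbook BB84 protocol shows the effect numerically. This is a pedagogical sketch with idealized, noise-free channels – not Toshiba's product or any specific deployment:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)          # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

def measure(bits, prep_bases, meas_bases):
    """Correct outcome when bases match, a random one when they differ."""
    random_outcomes = rng.integers(0, 2, len(bits))
    return np.where(prep_bases == meas_bases, bits, random_outcomes)

def error_rate(eavesdrop):
    bits, bases = alice_bits, alice_bases
    if eavesdrop:                            # intercept-resend attack
        eve_bases = rng.integers(0, 2, n)
        bits = measure(bits, bases, eve_bases)
        bases = eve_bases
    bob_bits = measure(bits, bases, bob_bases)
    sift = alice_bases == bob_bases          # keep matching-basis rounds only
    return float(np.mean(bob_bits[sift] != alice_bits[sift]))

print(f"error rate without Eve: {error_rate(False):.3f}")   # -> 0.000
print(f"error rate with Eve:    {error_rate(True):.3f}")     # -> ~0.25
```

Without an eavesdropper, the sifted bits agree perfectly; an intercept-resend attack unavoidably stamps a roughly 25% error rate onto them, which is exactly the distortion the article describes.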
Quantum-safe encryption will enter people’s day-to-day lives through upgrades to laptops, phones, browsers, and other consumer products. However, most of the burden of quantum-safe communication will fall on the businesses, governments, and cloud service providers that must design and install these systems. It is a hugely complex change, on par with upgrading internet communications from IPv4 to IPv6. Even though practical quantum computers are not yet available, it is essential to begin investing in these changes now, as explained by Toshiba Chief Digital Officer Taro Shimada: “Sectors such as finance, health and government are now realizing the need to invest in technology that will prepare and protect them for the quantum economy of the future. Our business plan goes far deeper and wider than selling quantum cryptographic hardware. We are developing a quantum platform and services that will not only deliver quantum keys and a quantum network but ultimately enable the birth of a quantum internet”. Toshiba expects the QKD market to grow to approximately $20 billion worldwide in FY 2035.

How Photonics Impacts QKD

Qubits can be encoded in photons, electrons, atoms, or any other system that can exist in a quantum state. However, photons will likely dominate as the qubit of choice in the quantum communications and QKD application space: we have decades of experience manipulating photon properties, such as polarization and phase, to encode information.

Thanks to optical fiber, we also know how to send photons over long distances with relatively little loss. Moreover, optical fiber is already a fundamental component of modern telecommunication networks, so future quantum networks can run on existing fiber infrastructure. All these signs point towards a new era of quantum photonics.

Photonic QKD devices have been commercially available, in some shape or form, for over 15 years. Still, factors such as high cost, large size, and the inability to operate over long distances have slowed their widespread adoption.
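The distance limitation is rooted in fiber attenuation: a qubit-carrying photon cannot be amplified along the way without destroying its quantum state. A back-of-envelope sketch, assuming the typical loss of about 0.2 dB/km for standard single-mode fiber at 1550 nm:

```python
# Fraction of photons surviving a fiber link at ~0.2 dB/km attenuation.
# Unlike classical signals, qubits cannot be re-amplified mid-link,
# so this loss directly caps the achievable QKD key rate and distance.
LOSS_DB_PER_KM = 0.2

def transmittance(length_km: float) -> float:
    """Probability that a single photon survives the full link."""
    return 10 ** (-LOSS_DB_PER_KM * length_km / 10)

for km in (50, 100, 200, 400):
    print(f"{km:4d} km: {transmittance(km):.1e} of photons arrive")
```

At 400 km only about one photon in a hundred million arrives, which is a large part of why operating over long distances has been such a barrier for photonic QKD.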
Many R&D efforts in quantum photonics therefore aim at the size, weight, and power (SWaP) limitations. One way to overcome these limitations, and to reduce the cost per device, is to integrate every QKD function (generating, manipulating, and detecting photonic qubits) into a single chip. Many consider the further development of the integrated quantum photonics (IQP) chip a critical step in building the platform that will unlock quantum applications, in much the same way that integrated circuits transformed microelectronics. In the coming articles, we will discuss in more detail how to combine photonic integration with quantum technologies to address the challenges in quantum communications.

Source: https://effectphotonics.com/points-of-view/an-introduction-to-qkd/
From designing new polymers and pharmaceuticals to modeling climate change and cracking encryption, quantum computing’s potential applications have sparked a global quantum arms race.

What is Quantum Computing?

Since the birth of the single-chip microprocessor 50 years ago, computers have performed calculations by manipulating bits of information, ones and zeros, using tiny transistors baked into silicon chips. Modern processors cram tens of billions of transistors into a chip the size of a fingernail.

Quantum computing does away with transistors. Instead, the ones and zeros, dubbed “qubits”, are recorded by changing the state of quantum objects, for example by changing the magnetic orientation or “spin” of elementary particles like electrons. Today’s most powerful quantum computers can only string together a few dozen qubits, but at some tasks they are already putting the most powerful traditional supercomputers to shame.

It is not simply a question of raw processing power. While the electrical charge of a single transistor can represent either a one or a zero, a single qubit can represent both one and zero simultaneously thanks to the quirks of quantum mechanics. This allows quantum computers to process multiple outcomes simultaneously and dramatically reduce the number of steps required to tackle complex problems, solving some of them in minutes rather than millennia.

Who Is Leading the Way?

(Images: quantum computing hardware by IBM and Microsoft, Google’s Sycamore, and Alibaba’s quantum supercomputer.)

Using the building blocks of the universe to power the next generation of supercomputers might seem like science fiction, but quantum computing is already a reality. The US and China are pouring billions of dollars into research and development, Europe is investing heavily as well, and breakthroughs are occurring around the globe. Along with universities, private-sector tech giants such as IBM, Microsoft, Google, Amazon, Alibaba and Baidu are paving the way, while startups work to solve some of the challenges that must be overcome for quantum computing to reach its full potential.

In October 2019, Google’s Californian research lab became the first to achieve “quantum supremacy”, performing a calculation that would be practically impossible for even the most powerful classical supercomputer: its 53-qubit Sycamore processor completed in 200 seconds a calculation that Google estimated would have taken the world’s most powerful supercomputer 10,000 years. The University of Science and Technology of China achieved quantum supremacy only 14 months later, claiming its Jiuzhang quantum computer to be 10 billion times faster than Google’s at its own benchmark task.
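Some quick arithmetic (ours, not the article's) shows why a few dozen qubits already overwhelm classical supercomputers: simulating n qubits classically means tracking 2^n complex amplitudes, a number that doubles with every qubit added.

```python
# Memory a classical simulator needs for a full n-qubit state vector:
# 2**n amplitudes, 16 bytes each (one complex128 number per amplitude).
for n in (20, 30, 40, 53):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:2d} qubits: {amplitudes:.2e} amplitudes, ~{gib:,.2f} GiB")
```

At Sycamore's 53 qubits the state vector alone would occupy over a hundred million gibibytes, far beyond any machine's memory, so brute-force in-memory simulation simply stops being an option.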
What Challenges Lie Ahead?

While quantum supremacy is a major achievement, if quantum computing is a moonshot then quantum supremacy is only the equivalent of Yuri Gagarin’s first space flight. Many challenges still lie ahead, and fully fledged, fault-tolerant quantum computers may still be more than a decade away. So far, quantum supremacy has only been achieved using computers and calculations specially designed to demonstrate quantum computing’s strengths, not to solve real-world problems. A key milestone will be to achieve “practical” quantum supremacy when tackling real-world challenges, says Professor Andrea Morello, winner of the American Physical Society’s inaugural Rolf Landauer and Charles H. Bennett Award in Quantum Computing and leader of one of the University of New South Wales’ quantum computing research teams in Sydney, Australia.

Practical quantum supremacy may still be a decade away, Morello says. It is difficult to predict which problem will be solved first, but one possibility is calculating a chemical reaction in order to synthesize a new pharmaceutical.

Achieving practical quantum supremacy will require error correction and fault tolerance, just as traditional computing does. Error correction is especially challenging at the quantum level, where qubits are highly susceptible to interference and only remain stable for milliseconds, Morello says: “Google’s quantum supremacy was achieved using ‘uncorrected’ qubit gates and, while this is impressive, error correction becomes important when you’re aiming for practical quantum supremacy so you can trust the outcome enough to apply it to the real world. Quantum error correction has been demonstrated in the laboratory and right now a lot of resources are being invested into bringing it to fruition.”
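To see why error correction helps, here is a toy Python sketch (ours) of the classical idea behind the simplest quantum code, the three-qubit repetition code: encode each logical bit three times and decode by majority vote. Real quantum error correction is far more subtle, since qubits cannot simply be copied, but the redundancy principle is the same.

```python
import random

PHYSICAL_ERROR = 0.05   # chance that any single (qu)bit flips
TRIALS = 100_000

def flip(bit: int, p: float) -> int:
    """Return `bit`, flipped with probability `p` (a toy noise model)."""
    return bit ^ (random.random() < p)

def majority(a: int, b: int, c: int) -> int:
    """Decode three noisy copies by majority vote."""
    return int(a + b + c >= 2)

random.seed(0)
raw = sum(flip(0, PHYSICAL_ERROR) for _ in range(TRIALS))
encoded = sum(majority(flip(0, PHYSICAL_ERROR),
                       flip(0, PHYSICAL_ERROR),
                       flip(0, PHYSICAL_ERROR)) for _ in range(TRIALS))

print(f"unprotected error rate: {raw / TRIALS:.4f}")      # ~0.0500
print(f"majority-vote rate:     {encoded / TRIALS:.4f}")  # ~0.0073
```

Tripling the bits cuts the logical error rate from 5% to well under 1%; scalable quantum schemes such as the surface code push this much further, at the cost of many physical qubits per logical qubit.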
How Are Quantum Computers Used Today?

(Image: the Summit supercomputer. Credit: Oak Ridge National Laboratory.)

While progress continues towards practical quantum supremacy, intermediate quantum computers already offer an advantage over classical computers in certain optimized applications, says GlobalData graduate analyst Sam Holt: “Fully-fledged, universal and fault-tolerant quantum computers may be more than a decade away, but a flurry of recent partnerships have explored use cases on intermediate devices. In January 2021, for example, Roche announced a collaboration with Cambridge Quantum Computing to develop quantum simulations for new drug discovery for Alzheimer’s disease.” Roche relies on noisy intermediate-scale quantum (NISQ) algorithms, which lack error correction but are still useful for some tasks.

Another intermediate approach proposes installing low-qubit processors alongside traditional processors to act as “quantum accelerators”. This lets specific parts of a computation benefit from the quantum advantage, much as a CPU hands off certain tasks to a dedicated graphics card.

Even once practical quantum supremacy is achieved, Holt says, businesses in a wide range of industries will likely choose to rent time on cloud-based quantum computers rather than invest in their own hardware: “Quantum cloud offerings from companies such as IBM are enabling widespread quantum computing. Quantum computing’s primary applications are in simulation, optimization, linear algebra and factorisation. These capabilities are increasingly becoming key requirements across a wide array of industries. Companies in these fields that are not at least investigating how quantum may transform their business risk getting left behind.”

What Are the Applications for Quantum Computing?

Even when error correction and practical quantum supremacy are achieved, traditional computers will remain considerably smaller, cheaper and more practical for most calculations, Morello says: “Using a quantum computer to solve most problems is like using a 747 to go to the supermarket. Just like a jumbo jet, quantum computing proves its worth when you need to do the heavy lifting.”

Chemistry is shaping up as quantum computing’s first killer application, potentially helping humanity address some of its greatest challenges. Today the production of ammonia, the main ingredient of fertilizer, requires high-temperature furnaces that consume 2% of the world’s energy and produce 1% of its CO2 output. Bacteria, by contrast, produce ammonia at room temperature, and quantum computing may be the key to understanding and replicating this process.

In manufacturing, quantum computing could be used to develop new chemicals, polymers and alloys. Industrial manufacturing still struggles to duplicate many materials with astonishing properties that exist in nature, such as spider silk. By weight, spider silk rivals steel in tensile strength, yet silk is not forged in a furnace. Because spider silk is a protein built from instructions encoded in DNA, quantum computing’s superior ability to model matter at the subatomic level may unlock the ability to manufacture similar materials in an eco-friendly way, Morello says: “Quantum computing is a truly disruptive technology that can have gigantic value for science, for industry and for society. It’s such a genuinely transformational technology that the vast majority of its applications will be things we haven’t even thought of yet – quantum computing will help open up new worlds.”
Source: https://emag.directindustry.com/2021/09/28/the-race-to-become-the-worlds-first-quantum-computing-superpower-ibm-microsoft-google/

A small test dataset of quantum-computing-related web text, used to exercise a data sampling and filtering script.

Configuration for filtering:

dataset: "HuggingFaceFW/fineweb-edu"
output_file: "quantum_computing_entries.jsonl"
state_file: "quantum_computing_dataset_state.json"

# Processing parameters
keywords:
  - "quantum computing"
  - "qubit"
  - "quantum entanglement"
  - "quantum supremacy"
  - "quantum algorithm"
  - "quantum error correction"

max_entries: 1024
min_tokens: 1024
max_tokens: 2048
min_int_score: 4

# Shuffling configuration
shuffle: true
seed: 42
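
A minimal sketch of how the filtering script might apply this configuration (hypothetical code, not the actual script; it assumes the Hugging Face `datasets` library and the standard fineweb-edu column names such as `text`, `token_count`, and `int_score` seen in the entries above; the `state_file` checkpointing is omitted for brevity):

```python
import json
import random

from datasets import load_dataset  # pip install datasets

KEYWORDS = ["quantum computing", "qubit", "quantum entanglement",
            "quantum supremacy", "quantum algorithm",
            "quantum error correction"]

def keep(row: dict) -> bool:
    """Apply the keyword, token-count, and score filters from the config."""
    text = row["text"].lower()
    return (any(k in text for k in KEYWORDS)
            and 1024 <= row["token_count"] <= 2048   # min/max_tokens
            and row["int_score"] >= 4)               # min_int_score

# Stream the source dataset so it never has to fit in memory.
stream = load_dataset("HuggingFaceFW/fineweb-edu",
                      split="train", streaming=True)

entries = []
for row in stream:
    if keep(row):
        entries.append(row)
        if len(entries) >= 1024:  # max_entries
            break

random.Random(42).shuffle(entries)  # shuffle: true, seed: 42
with open("quantum_computing_entries.jsonl", "w") as f:
    for row in entries:
        f.write(json.dumps(row) + "\n")
```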