Could silicon become the bedrock of quantum computers?

Silicon, in the form of semiconductors, integrated chips and transistors, is the bedrock of modern classical computers – so much so that it lends its name to technological hubs around the world, beginning with Silicon Valley in the US. For quantum computers, the bedrock is still unknown, but a new platform developed by researchers in Australia suggests that silicon could play a role here, too.

Dubbed the 14|15 platform due to its elemental constituents, it combines a crystalline silicon substrate with qubits made from phosphorus atoms. By relying on only two types of atoms, team co-leader Michelle Simmons says the device “avoids the interfaces and complexities that plague so many multi-material platforms” while enabling “high-quality qubits with lower noise, simplicity of design and device stability”.

Boarding at platform 14|15

Quantum computers take registers of qubits, which store quantum information, and apply basic operations to them sequentially to execute algorithms. One of the primary challenges they face is scalability – that is, sustaining reliable, or high-fidelity, operations on an increasing number of qubits. Many of today’s platforms use only a small number of qubits, for which operations can be individually tuned for optimal performance. However, as the amount of hardware, complexity and noise increases, this hands-on approach becomes debilitating.

Silicon quantum processors may offer a solution. Writing in Nature, Simmons, Ludwik Kranz, and their team at Silicon Quantum Computing (a spinout from the University of New South Wales in Sydney) describe a system that uses the nuclei of phosphorus atoms as its primary qubit. Each nucleus behaves a little like a bar magnet with an orientation (north/south or up/down) that represents a 0 or 1.

These so-called spin qubits are particularly desirable because they exhibit relatively long coherence times, meaning information can be preserved for long enough to apply the numerous operations of an algorithm. Using monolithic, high-purity silicon as the substrate further benefits coherence since it reduces undesirable charge and magnetic noise arising from impurities and interfaces.

To make their quantum processor, the team deposited phosphorus atoms in small registers a few nanometres across. Within each register, the phosphorus nuclei do not interact strongly enough to generate the entangled states required for a quantum computation. The team remedies this by loading each cluster of phosphorus atoms with an electron that is shared between the atoms. The result is that so-called hyperfine interactions arise, wherein each nuclear spin couples to the electron like a pair of interacting bar magnets, providing the interaction necessary to entangle nuclear spins within each register.

By combining these interactions with control of individual nuclear spins, the researchers showed that they can generate Bell states (maximally entangled two-qubit states) between pairs of nuclei within a register with error rates as low as 0.5% – the lowest to date for semiconductor platforms.
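The Bell states mentioned above can be written down explicitly. As a quick illustration (not tied to the team’s hardware), the following sketch builds the two-qubit Bell state (|00⟩ + |11⟩)/√2 from the standard Hadamard-plus-CNOT circuit in plain NumPy; the gate matrices are textbook definitions, nothing specific to the 14|15 platform.

```python
import numpy as np

# Single-qubit basis state and gates (textbook definitions)
zero = np.array([1, 0], dtype=complex)
I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start both qubits in |0>, put the first in superposition, then entangle
state = np.kron(zero, zero)        # |00>
state = np.kron(H, I) @ state      # (|00> + |10>)/sqrt(2)
bell = CNOT @ state                # (|00> + |11>)/sqrt(2)

# Only |00> and |11> survive, each with probability 1/2
probs = np.abs(bell) ** 2
print(probs.round(3))
```

Running this reproduces the hallmark of a Bell state: measuring either qubit alone gives 0 or 1 at random, yet the two outcomes are always perfectly correlated.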

Scaling through repulsion

The team’s next step was to connect multiple processors – a step that exponentially increases their combined capacity. To understand how, consider two quantum processors, one with n qubits and the other with m qubits. Isolated from one another, they can collectively represent at most 2^n + 2^m states. Once they are entangled, however, they can represent 2^(n+m) states.
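The arithmetic behind that claim fits in a few lines. This sketch simply counts complex amplitudes: two isolated registers need 2^n + 2^m of them in total, while one entangled register of n + m qubits spans a 2^(n+m)-dimensional state space; the 4- and 5-qubit register sizes match those used in the experiment.

```python
# Counting complex amplitudes for two registers of n and m qubits:
# kept separate, the registers need 2^n + 2^m amplitudes in total;
# entangled, they form a single 2^(n+m)-dimensional state space.
def separate_dim(n: int, m: int) -> int:
    return 2**n + 2**m

def entangled_dim(n: int, m: int) -> int:
    return 2**(n + m)

# Registers of 4 and 5 qubits, as in the experiment
print(separate_dim(4, 5))   # → 48
print(entangled_dim(4, 5))  # → 512
```

The gap widens exponentially: for two 20-qubit registers it is roughly two million versus one trillion.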

Simmons says that silicon quantum processors offer an inherent advantage in scaling, too. Generating numerous registers on a single chip and using “naturally occurring” qubits, she notes, reduces their need for extraneous confinement gates and electronics as they scale.

The researchers showcased these scaling capabilities by entangling a register of four phosphorus atoms with a register of five, separated by 13 nm. The entanglement of these registers is mediated by the electron-exchange interaction, a phenomenon arising from the combination of Pauli’s exclusion principle and Coulomb repulsion when electrons are confined in a small region. By combining this with the other interactions and controls in their toolkit, the researchers generated entanglement across eight data qubits spanning the two registers.

Retaining such high-quality qubits and individual control of them despite their high density demonstrates the scaling potential of the platform. Future avenues of exploration include increasing the size of 2D arrays of registers to increase the number of qubits, but Simmons says the rest is “top secret”, adding “the world will know soon enough”.

The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.


We need a ‘Planetary Neural Network’ for AI-enabled space infrastructure protection

Illustration of space debris in Earth orbit from NORAD's satcat catalog

You may not see it with the naked eye, but in Earth’s orbit, a silent crisis is unfolding. With over 11,000 active satellites currently in orbit — a number expected to reach between 30,000 and 60,000 by 2030 — 40,500 tracked objects measuring 10 cm or more, 1.1 million pieces of space debris between 1 […]

The post We need a ‘Planetary Neural Network’ for AI-enabled space infrastructure protection appeared first on SpaceNews.


Is our embrace of AI naïve and could it lead to an environmental disaster?

According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including the winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the statement, Sam Altman of OpenAI, the firm behind ChatGPT, has stated that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI, and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?

AI has some practical benefits but too often is put to mostly meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without it. AI already uses 5% of all electricity in the US and by 2028 this figure is expected to be 15%, which would exceed a quarter of all US households’ electricity consumption. AI data servers are also around 50% more carbon intensive than the rest of the US’s electricity supply.

Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.

To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.

Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, ever more money and power are transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.

AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.

The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.


New sensor uses topological material to detect helium leaks

A new sensor detects helium leaks by monitoring how sound waves propagate through a topological material – no chemical reactions required. Developed by acoustic scientists at Nanjing University, China, the innovative, physics-based device is compact, stable, accurate and capable of operating at very low temperatures.

Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.

A two-dimensional acoustic topological material

The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles, with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are localized at the corners.

To test their setup, the researchers placed speakers under the corners that send sound waves into the structure and make the gas within it vibrate at a certain frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, changing the vibration frequency. Measuring this shift in frequency enabled the researchers to calculate the concentration of helium in the device.
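A rough back-of-the-envelope model shows why the frequency shift is so large. In an ideal gas the sound speed is v = sqrt(γRT/M), and a resonator’s frequency scales linearly with v, so swapping air (M ≈ 29 g/mol) for helium (M ≈ 4 g/mol) roughly triples the resonance frequency. The sketch below uses textbook gas constants and a made-up baseline frequency; it is only an illustration of the underlying physics, not the Nanjing group’s actual calibration model.

```python
import math

R, T = 8.314, 293.15  # gas constant (J/mol/K), room temperature (K)

def sound_speed(gamma: float, molar_mass: float) -> float:
    """Ideal-gas speed of sound, v = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / molar_mass)

v_air = sound_speed(1.40, 0.02897)   # diatomic-dominated air, ~343 m/s
v_he = sound_speed(1.67, 0.004003)   # monatomic helium, ~1008 m/s

# A resonance frequency scales with sound speed (e.g. f = v / 2L for a
# half-wave resonator), so pure helium shifts a hypothetical 4 kHz
# air resonance up by a factor of ~3.
f_air = 4000.0
f_he = f_air * v_he / v_air
print(round(v_air), round(v_he), round(f_he))
```

Intermediate helium concentrations give intermediate sound speeds, which is what lets the measured frequency shift be inverted into a concentration estimate.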

Many advantages over traditional gas sensors

Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.

As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.

Other gases could be detected

Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.

While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.

Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.

The post New sensor uses topological material to detect helium leaks appeared first on Physics World.


Encrypted qubits can be cloned and stored in multiple locations

Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could potentially allow quantum-secure cloud storage, in which data can be stored on multiple servers, thereby allowing for redundancy without compromising security. The research also has implications for quantum fundamentals.

Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”

No-cloning poses a challenge to those trying to create a quantum internet. On today’s Internet, storage of information on remote servers is common, and multiple copies of this information are usually stored in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably desire the same degree of information security, but the no-cloning theorem would apparently forbid this.

Signal and noise

In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.

Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”

Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits, therefore, allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
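A loose classical analogue may help build intuition here (it is emphatically not the quantum protocol, which relies on entanglement and measurement disturbance): encrypt a message by XOR-ing it with random noise, keep the noise as a separate record, and make decryption consume that record, so it can only ever unlock one copy.

```python
import secrets

def encrypt(message: bytes):
    """XOR the message with fresh random noise; return ciphertext and noise."""
    noise = secrets.token_bytes(len(message))         # the "noise record"
    ciphertext = bytes(m ^ n for m, n in zip(message, noise))
    return ciphertext, noise

def decrypt_once(ciphertext: bytes, record: list) -> bytes:
    """Decrypt with the noise record, destroying the record in the process."""
    noise = record.pop()                              # record is consumed here
    return bytes(c ^ n for c, n in zip(ciphertext, noise))

ciphertext, noise = encrypt(b"qubit A")
record = [noise]
plaintext = decrypt_once(ciphertext, record)
print(plaintext)   # → b'qubit A'
print(record)      # → [] : the spent record cannot unlock a second copy
```

The crucial difference in the quantum case is that the “record” is held in entangled noise qubits, and it is the physics of measurement – not a software convention – that guarantees the record can only be used once.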

Cloning clones

Kempf says that, working with IBM, the team has successfully demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor, and that they could even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.

The research is described in Physical Review Letters, and Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes: “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”

Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before. Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”

The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.
