Evo CT-Linac eases access to online adaptive radiation therapy

15 October 2025 at 14:15

Adaptive radiation therapy (ART) is a personalized cancer treatment in which a patient’s treatment plan can be updated throughout their radiotherapy course to account for any anatomical variations – either between fractions (offline ART) or immediately prior to dose delivery (online ART). Using high-fidelity images to enable precision tumour targeting, ART improves outcomes while reducing side effects by minimizing healthy tissue dose.

Elekta, the company behind the Unity MR-Linac, believes that in time, all radiation treatments will incorporate ART as standard. Towards this goal, it brings its broad knowledge base from the MR-Linac to the new Elekta Evo, a next-generation CT-Linac designed to improve access to ART. Evo incorporates AI-enhanced cone-beam CT (CBCT), known as Iris, to provide high-definition imaging, while its Elekta ONE Online software automates the entire workflow, including auto-contouring, plan adaptation and end-to-end quality assurance.

A world first

In February of this year, Matthias Lampe and his team at the private centre DTZ Radiotherapy in Berlin, Germany, became the first in the world to treat patients with online ART (delivering daily plan updates while the patient is on the treatment couch) using Evo. “To provide proper tumour control you must be sure to hit the target – for that, you need online ART,” Lampe tells Physics World.

Initiating online ART: the team at DTZ Radiotherapy in Berlin treated the first patient in the world using Evo. (Courtesy: Elekta)

The ability to visualize and adapt to daily anatomy enables reduction of the planning target volume, increasing safety for nearby organs-at-risk (OARs). “It is highly beneficial for all treatments in the abdomen and pelvis,” says Lampe. “My patients with prostate cancer report hardly any side effects.”

Lampe selected Evo to exploit the full flexibility of its C-arm design. He notes that for the increasingly prevalent hypofractionated treatments, a C-arm configuration is essential. “CT-based treatment planning and AI contouring opened up a new world for radiation oncologists,” he explains. “When Elekta designed Evo, they enabled this in an achievable way with an extremely reliable machine. The C-arm linac is the primary workhorse in radiotherapy, so you have the best of everything.”

Time considerations

While online ART can take longer than conventional treatments, Evo’s use of automation and AI limits the additional time requirement to just five minutes – extending the overall workflow time from 12 to 17 minutes and keeping it within the clinic’s standard time slots.

Elekta Evo: a next-generation CT-Linac designed to improve access to adaptive radiotherapy. (Courtesy: Elekta)

The workflow begins with patient positioning and CBCT imaging, with Evo’s AI-enhanced Iris imaging significantly improving image quality, crucial when performing ART. The radiation therapist then matches the cone-beam and planning CTs and performs any necessary couch shift.

Simultaneously, Elekta ONE Online performs AI auto-contouring of OARs, which are reviewed by the physician, and the target volume is copied in. The physicist then simulates the dose distribution on the new contours, followed by a plan review. “Then you can decide whether to adapt or not,” says Lampe. “This is an outstanding feature.” The final stage is treatment delivery and online dosimetry.

When DTZ Berlin first began clinical treatments with Evo, some of Lampe’s colleagues were apprehensive as they were attached to the conventional workflow. “But now, with CBCT providing the chance to see what will be treated, every doctor on my team has embraced the shift and wouldn’t go back,” he says.

The first treatments were for prostate cancer, a common indication that’s relatively easy to treat. “I also thought that if the Elekta ONE workflow struggled, I could contour this on my own in a minute,” says Lampe. “But this was never necessary; the process is very solid. Now we also treat prostate cancer patients with lymph node metastases and those with relapse after radiotherapy. It’s a real success story.”

Lampe says that older and frailer patients may benefit the most from online ART, pointing out that while published studies often include relatively young, healthy patients, “our patients are old, they have chronic heart disease, they’re short of breath”.

For prostate cancer, for example, patients are instructed to arrive with a full bladder and an empty rectum. “But if a patient is in his eighties, he may not be able to do this and the volumes will be different every day,” Lampe explains. “With online adaptive, you can tell patients: ‘if this is not possible, we will handle it, don’t stress yourself’. They are very thankful.”

Making ART available to all

At UMC Utrecht in the Netherlands, the radiotherapy team has also added CT-Linac online adaptive to its clinical toolkit.

UMC Utrecht is renowned for its development of MR-guided radiotherapy, with physicists Bas Raaymakers and Jan Lagendijk pioneering the development of a hybrid MR-Linac. “We come from the world of MR-guidance, so we know that ART makes sense,” says Raaymakers. “But if we only offer MR-guided radiotherapy, we miss out on a lot of patients. We wanted to bring it to the wider community.”

ART for all: the radiotherapy team at UMC Utrecht in the Netherlands has added CT-Linac online adaptive to its clinical toolkit. (Courtesy: UMC Utrecht)

At the time of speaking to Physics World, the team was treating its second patient with CBCT-guided ART, and had delivered about 30 fractions. Both patients were treated for bladder cancer, with future indications to explore including prostate, lung and breast cancers and bone metastases.

“We believe in ART for all patients,” says medical physicist Anette Houweling. “If you have MR and CT, you should be able to choose the optimal treatment modality based on image quality. For below the diaphragm, this is probably MR, while for the thorax, CT might be better.”

Ten-minute target for online ART

Houweling says that ART delivery has taken 19 minutes on average. “We record the CBCT, perform image fusion and then the table is moved, that’s all standard,” she explains. “Then the adaptive part comes in: delineation on the CBCT and creating a new plan with Elekta ONE Planning as part of Elekta ONE Online.”

When the team chooses to adapt, the plan adaptation takes roughly four minutes to create a clinical-grade volumetric-modulated arc therapy (VMAT) plan. With the soon-to-be-installed next-generation optimizer, generating a VMAT plan is expected to take less than one minute.

“As you start with the regular workflow, you can still decide not to choose adaptive treatment, and do a simple couch shift, up until the last second,” says Raaymakers. “It’s very close to the existing workflow, which makes adoption easier. Also, the treatment slots are comparable to standard slots. Now with CBCT it takes 19 minutes and we believe we can get towards 10. That’s one of the drivers for cone-beam adaptive.”

Shorter treatment times will impact the decision as to which patients receive ART. If fully automated adaptive treatment is deliverable in a 10-minute time slot, it could be available to all patients. “From the physics side, our goal is to have no technological limitations to delivering ART. Then it’s up to the radiation oncologists to decide which patients might benefit,” Raaymakers explains.

Future gazing

Looking to the future, Raaymakers predicts that simulation-free radiotherapy will be adopted for certain standard treatments. “Why do you need days of preparation if you can condense the whole process to the moment when the patient is on the table,” he says. “That would be very much helped by online ART.”

“Scroll forward a few years and I expect that ART will be automated and fast such that the user will just sign off the autocontours and plan in one, maybe tune a little, and then go ahead,” adds Houweling. “That will be the ultimate goal of ART. Then there’s no reason to perform radiotherapy the traditional way.”

The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.

Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour?

15 October 2025 at 12:00

Imagine you had a bad breakup in college. Your ex-partner is furious and self-publishes a book that names you in its title. You’re so humiliated that you only dimly remember this ex, though the book’s details and anecdotes ring true.

According to the book, you used to be inventive, perceptive and dashing. Then you started hanging out with the wrong crowd, and became competitive, self-involved and incapable of true friendship. Your ex struggles to turn you around; failing, they leave. The book, though, is so over-the-top that by the end you stop cringing and find it a hoot.

That’s how I think most Physics World readers will react to The Ant Mill: How Theoretical High-energy Physics Descended into Groupthink, Tribalism and Mass Production of Research. Its author and self-publisher is the Danish mathematician-physicist Jesper Grimstrup, whose previous book was Shell Beach: the Search for the Final Theory.

After receiving his PhD in theoretical physics at the Technical University of Vienna in 2002, Grimstrup writes, he was “one of the young rebels” embarking on “a completely unexplored area” of theoretical physics, combining elements of loop quantum gravity and noncommutative geometry. But there followed a decade of rejected articles and lack of opportunities.

Grimstrup became “disillusioned, disheartened, and indignant” and in 2012 left the field, selling his flat in Copenhagen to finance his work. Grimstrup says he is now a “self-employed researcher and writer” who lives somewhere near the Danish capital. You can support him through either Ko-fi or PayPal.

Fomenting fear

The Ant Mill opens with a copy of the first page of the letter that Grimstrup’s fellow Dane Niels Bohr sent in 1917 to the University of Copenhagen successfully requesting a four-storey building for his physics institute. Grimstrup juxtaposes this incident with the rejection of his funding request, almost a century later, by the Danish Council for Independent Research.

Today, he writes, theoretical physics faces a situation “like the one it faced at the time of Niels Bohr”, but structural and cultural factors have severely hampered it, making it impossible to pursue promising new ideas. These include Grimstrup’s own “quantum holonomy theory, which is a candidate for a fundamental theory”. The Ant Mill is his diagnosis of how this came about.

A major culprit, in Grimstrup’s eyes, was the Standard Model of particle physics. That completed a structure for which theorists were trained to be architects and should have led to the flourishing of a new crop of theoretical ideas. But it had the opposite effect. The field, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.

The biggest and most powerful is string theory, with loop quantum gravity its chief rival. Neither member of the coterie can make testable predictions, yet because they control jobs, publications and grants they intimidate young researchers and create what Grimstrup calls an “undercurrent of fear”. (I leave assessment of this claim to young theorists.)

Half the chapters begin with an anecdote in which Grimstrup describes an instance of rejection by a colleague, editor or funding agency. In the book’s longest chapter Grimstrup talks about his various rejections – by the Carlsberg Foundation, the European Physical Journal C, International Journal of Modern Physics A, Classical and Quantum Gravity, Reports on Mathematical Physics, Journal of Geometry and Physics, and the Journal of Noncommutative Geometry.

Grimstrup says that the reviewers and editors of these journals told him that his papers variously lacked concrete physical results, were exercises in mathematics, seemed the same as other papers, or lacked “relevance and significance”. Grimstrup sees this as the coterie’s handiwork, for such journals are full of string theory papers open to the same criticism.

“Science is many things,” Grimstrup writes at the end. “[S]imultaneously boring and scary, it is both Indiana Jones and anonymous bureaucrats, and it is precisely this diversity that is missing in the modern version of science”. What the field needs is “courage…hunger…ambition…unwillingness to compromise…anarchy”.

Grimstrup hopes that his book will have an impact, helping to inspire young researchers to revolt, and to make all the scientific bureaucrats and apparatchiks and bookkeepers and accountants “wake up and remember who they truly are”.

The critical point

The Ant Mill is an example of what I have called “rant literature” or rant-lit. Evangelical in tone – convinced that exposing the truth will make sinners come to their senses and change their evil ways – rant-lit can be fun to read, for it is passionate and full of florid metaphors.

Theoretical physicists, Grimstrup writes, have become “obedient idiots” and “technicians”. He slams theoretical physics for becoming a “kingdom”, a “cult”, a “hamster wheel” and an “ant mill”, in which the ants march around in a pre-programmed “death spiral”.

An attentive reader, however, may come away with a different lesson. Grimstrup calls falsifiability the “crown jewel of the natural sciences” and hammers away at theories lacking it. But his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”

In his 2013 book String Theory and the Scientific Method, for instance, the Stockholm University philosopher of science Richard Dawid suggested rescuing the scientific status of string theory by adding non-empirical criteria – such as clarity, coherence and lack of alternatives – to the evaluation of theories. It’s an approach that both rescues the formalistic approach to the scientific method and undermines it.

Dawid, you see, is making the formalism follow the practice rather than the other way around. In other words, he is able to reformulate how we make theories because he already knows how theorizing works – not because he only truly knows what it is to theorize after he gets the formalism right.

Grimstrup’s rant, too, might remind you of the birth of the Yang–Mills theory in 1954. Developed by Chen Ning Yang and Robert Mills, it was a theory of nuclear binding that integrated much of what was known about elementary particle theory but implied the existence of massless force-carrying particles that were then known not to exist. In fact, at one seminar Wolfgang Pauli unleashed a tirade against Yang for proposing so obviously flawed a theory.

The theory, however, became central to theoretical physics two decades later, after theorists learned more about the structure of the world. The Yang-Mills story, in other words, reveals that theory-making does not always conform to formal strictures and does not always require a testable prediction. Sometimes it just articulates the best way to make sense of the world apart from proof or evidence.

The lesson I draw is that becoming the target of a rant might not always make you feel repentant and ashamed. It might inspire you into deep reflection on who you are in a way that is insightful and vindicating. It might even make you more rather than less confident about why you’re doing what you’re doing.

Your ex, of course, would be horrified.

The post Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.

Further evidence for evolving dark energy?

15 October 2025 at 11:34

Dark energy – a term first used in 1998 – is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe, an observation that was recognized with the 2011 Nobel Prize in Physics.

Dark energy is now a well established concept and forms a key part of the standard model of Big Bang cosmology, the Lambda-CDM model.

The trouble is, we’ve never really been able to explain exactly what dark energy is, or why it has the value that it does.

Even worse, new data acquired by cutting-edge telescopes have suggested that dark energy might not even exist as we had imagined it.

This is where the new work by Mukherjee and Sen comes in. They combined two of these datasets, while making as few assumptions as possible, to understand what’s going on.

The first of these datasets came from baryon acoustic oscillations. These are patterns in the distribution of matter in the universe, created by sound waves in the early universe.

The second dataset is based on a survey of supernova data from the last five years. Both sets of data can be used to track the expansion history of the universe by measuring distances at different snapshots in time.

The team’s results are in tension with the Lambda-CDM model at low redshifts. Put simply, the results disagree with the current model at recent times. This provides further evidence for the idea that dark energy, previously considered to have a constant value, is evolving over time.
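
To make “evolving dark energy” concrete, it helps to see how the expansion history is parametrized. The equations below are a standard illustration only – the article does not specify which parametrization Mukherjee and Sen used – contrasting Lambda-CDM, where the dark-energy term is constant, with the widely used w₀–wₐ form, where it varies with redshift z:

$$H^2(z) = H_0^2\left[\Omega_m(1+z)^3 + \Omega_\Lambda\right] \qquad \text{(Lambda-CDM)}$$

$$H^2(z) = H_0^2\left[\Omega_m(1+z)^3 + \Omega_{\rm DE}\,(1+z)^{3(1+w_0+w_a)}\,e^{-3w_a z/(1+z)}\right] \qquad (w_0\text{–}w_a\ \text{model})$$

Here the equation of state is w(z) = w₀ + wₐ z/(1+z), and Lambda-CDM is recovered when w₀ = −1 and wₐ = 0, so a preference for other values is read as evidence that dark energy evolves.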

Evolving dark energy: the tension in the expansion rate is most evident at low redshifts. (Courtesy: P. Mukherjee)

This is far from the end of the story for dark energy. New observational data and new analyses such as this one are urgently required to provide a clearer picture.

However, where there’s uncertainty, there’s opportunity. Understanding dark energy could hold the key to understanding quantum gravity, the Big Bang and the ultimate fate of the universe.

The post Further evidence for evolving dark energy? appeared first on Physics World.

Searching for dark matter particles

15 October 2025 at 11:34

Dark matter is a hypothesised form of matter that does not emit, absorb, or reflect light, making it invisible to electromagnetic observations. Although we have never detected it, its existence is inferred from its gravitational effects on visible matter and the large-scale structure of the universe.

The Standard Model of particle physics does not contain any dark matter particles, but several extensions have been proposed that describe how they might be included. Several of these predict very low-mass particles such as the axion or the sterile neutrino.

Detecting these hypothesised particles is very challenging, however, due to the extreme sensitivity required.

Electromagnetic resonant systems, such as cavities and LC circuits, are widely used for this purpose, as well as to detect high-frequency gravitational waves.

When an external signal matches one of these systems’ resonant frequencies, the system responds with a large amplitude, making the signal detectable. However, there is always a trade-off between the sensitivity of the detector and the range of frequencies it is able to detect (its bandwidth).
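
This trade-off can be made concrete with a single driven resonator: increasing the quality factor Q raises the peak response but shrinks the bandwidth in proportion. The sketch below is purely illustrative – the frequencies and Q values are arbitrary choices, not parameters from the paper discussed here.

```python
import numpy as np

def resonator_response(f, f0, Q):
    """Amplitude response of a driven harmonic oscillator (arbitrary units)."""
    return 1.0 / np.sqrt((f0**2 - f**2)**2 + (f0 * f / Q)**2)

f = np.linspace(0.9e9, 1.1e9, 200001)  # frequencies scanned (Hz), arbitrary values
f0 = 1.0e9                             # resonant frequency (Hz), arbitrary value

for Q in (1e3, 1e5):
    resp = resonator_response(f, f0, Q)
    peak = resp.max()
    band = np.ptp(f[resp > peak / 2])  # width over which the response stays large
    print(f"Q = {Q:.0e}: peak = {peak:.1e} (arb.), bandwidth ~ {band:.1e} Hz")
```

The higher-Q resonator responds a hundred times more strongly but over a band a hundred times narrower, which is why a single-mode detector must choose between sensitivity and frequency coverage.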

A natural way to overcome this compromise is to consider multi-mode resonators, which can be viewed as coupled networks of harmonic oscillators. Their scan efficiency can be significantly enhanced beyond the standard quantum limit of simple single-mode resonators.

In a recent paper, the researchers demonstrated how multi-mode resonators can achieve the advantages of both sensitive and broadband detection. By connecting adjacent modes inside the resonant cavity and tuning these interactions to comparable magnitudes, off-resonant (i.e. unwanted) frequency shifts are effectively cancelled, increasing the overall response of the system.

Their method could allow physicists to search for these elusive dark matter particles in a faster, more efficient way.

A multi-mode detector design, where the first mode couples to dark matter and the last mode is read out. (Courtesy: Y. Chen)

The post Searching for dark matter particles appeared first on Physics World.

Physicists explain why some fast-moving droplets stick to hydrophobic surfaces

15 October 2025 at 10:00

What happens when a microscopic drop of water lands on a water-repelling surface? The answer is important for many everyday situations, including pesticides being sprayed on crops and the spread of disease-causing aerosols. Naively, one might expect it to depend on the droplet’s speed, with faster-moving droplets bouncing off the surface and slower ones sticking to it. However, according to new experiments, theoretical work and simulations by researchers in the UK and the Netherlands, it’s more complicated than that.

“If the droplet moves too slowly, it sticks,” explains Jamie McLauchlan, a PhD student at the University of Bath, UK who led the new research effort with Bath’s Adam Squires and Anton Souslov of the University of Cambridge. “Too fast, and it sticks again. Only in between is bouncing possible, where there is enough momentum to detach from the surface but not so much that it collapses back onto it.”

As well as this new velocity-dependent condition, the researchers also discovered a size effect in which droplets that are too small cannot bounce, no matter what their speed. This size limit, they say, is set by the droplets’ viscosity, which prevents the tiniest droplets from leaving the surface once they land on it.

Smaller-sized, faster-moving droplets

While academic researchers and industrialists have long studied single-droplet impacts, McLauchlan says that much of this earlier work focused on millimetre-sized drops that took place on millisecond timescales. “We wanted to push this knowledge to smaller sizes of micrometre droplets and faster speeds, where higher surface-to-volume ratios make interfacial effects critical,” he says. “We were motivated even further during the COVID-19 pandemic, when studying how small airborne respiratory droplets interact with surfaces became a significant concern.”

Working at such small sizes and fast timescales is no easy task, however. To record the outcome of each droplet landing, McLauchlan and colleagues needed a high-speed camera that effectively slowed down motion by a factor of 100 000. To produce the droplets, they needed piezoelectric droplet generators capable of dispensing fluid via tiny 30-micron nozzles. “These dispensers are highly temperamental,” McLauchlan notes. “They can become blocked easily by dust and fibres and fail to work if the fluid viscosity is too high, making experiments delicate to plan and run. The generators are also easy to break and expensive.”

Droplet modelled as a tiny spring

The researchers used this experimental set-up to create and image droplets of 30–50 µm in diameter as they struck water-repelling surfaces at speeds of 1–10 m/s. They then compared their findings with calculations based on a simple mathematical model that treats a droplet like a tiny spring, taking into account three main parameters in addition to its speed: the stickiness of the surface; the viscosity of the droplet liquid; and the droplet’s surface tension.
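
For a sense of scale, the competition between inertia, surface tension and viscosity in such droplets is usually captured by the Weber and Ohnesorge numbers. The snippet below is a rough, illustrative calculation for a water droplet in the quoted size and speed range; it is not the authors’ spring model, and the actual stick-or-bounce thresholds come from their experiments and theory.

```python
# Rough dimensionless-number estimate for a water microdroplet (illustrative only)
rho = 1000.0     # water density, kg/m^3
sigma = 0.072    # water surface tension, N/m
mu = 1.0e-3      # water dynamic viscosity, Pa*s

def weber(d, v):
    """Kinetic energy relative to surface tension."""
    return rho * v**2 * d / sigma

def ohnesorge(d):
    """Viscous damping relative to inertia and capillarity."""
    return mu / (rho * sigma * d) ** 0.5

for d, v in [(40e-6, 1.0), (40e-6, 10.0)]:   # a 40 micron droplet at 1 and 10 m/s
    print(f"d = {d*1e6:.0f} um, v = {v:4.1f} m/s: "
          f"We = {weber(d, v):5.1f}, Oh = {ohnesorge(d):.3f}")
```

At these sizes the Ohnesorge number is no longer negligible, which is one way of seeing why viscosity can stop the smallest droplets from leaving the surface again.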

Previous research had shown that on perfectly non-wetting surfaces, bouncing does not depend on velocity. Other studies showed that on very smooth surfaces, droplets can bounce on a thin air layer. “Our work has explored a broader range of hydrophobic surfaces, showing that bouncing occurs due to a delicate balance of kinetic energy, viscous dissipation and interfacial energies,” McLauchlan tells Physics World.

This is exciting, he adds, because it reveals a previously unexplored regime for bounce behaviour: droplets that are too small, or too slow, will always stick, while sufficiently fast droplets can rebound. “This finding provides a general framework that explains bouncing at the micron scale, which is directly relevant for aerosol science,” he says.

A novel framework for engineering microdroplet processes

McLauchlan thinks that by linking bouncing to droplet velocity, size and surface properties, the new framework could make it easier to engineer microdroplets for specific purposes. “In agriculture, for example, understanding how spray velocities interact with plant surfaces with different hydrophobicity could help determine when droplets deposit fully versus when they bounce away, improving the efficiency of crop spraying,” he says.

Such a framework could also be beneficial in the study of airborne diseases, since exhaled droplets frequently bump into surfaces while floating around indoors. While droplets that stick are removed from the air, and can no longer transmit disease via that route, those that bounce are not. Quantifying these processes in typical indoor environments will provide better models of airborne pathogen concentrations and therefore disease spread, McLauchlan says. For example, in healthcare settings, coatings could be designed to inhibit or promote bouncing, ensuring that high-velocity respiratory droplets from sneezes either stick to hospital surfaces or recoil from them, depending on which mode of potential transmission (airborne or contact-based) is being targeted.

The researchers now plan to expand their work on aqueous droplets to droplets with more complex soft-matter properties. “This will include adding surfactants, which introduce time-dependent surface tensions, and polymers, which give droplets viscoelastic properties similar to those found in biological fluids,” McLauchlan reveals. “These studies will present significant experimental challenges, but we hope they broaden the relevance of our findings to an even wider range of fields.”

The present work is detailed in PNAS.

The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.

Quantum computing on the verge: a look at the quantum marketplace of today

14 October 2025 at 17:40

“I’d be amazed if quantum computing produces anything technologically useful in ten years, twenty years, even longer.” So wrote University of Oxford physicist David Deutsch – often considered the father of the theory of quantum computing – in 2004. But, as he added in a caveat, “I’ve been amazed before.”

We don’t know how amazed Deutsch, a pioneer of quantum computing, would have been had he attended a meeting at the Royal Society in London in February on “the future of quantum information”. But it was tempting to conclude from the event that quantum computing has now well and truly arrived, with working machines that harness quantum mechanics to perform computations being commercially produced and shipped to clients. Serving as the UK launch of the International Year of Quantum Science and Technology (IYQ) 2025, it brought together some of the key figures of the field to spend two days discussing quantum computing as something like a mature industry, even if one in its early days.

Werner Heisenberg – who worked out the first proper theory of quantum mechanics 100 years ago – would surely have been amazed to find that the formalism he and his peers developed to understand the fundamental behaviour of tiny particles had generated new ways of manipulating information to solve real-world problems in computation. So far, quantum computing – which exploits phenomena such as superposition and entanglement to potentially achieve greater computational power than the best classical computers can muster – hasn’t tackled any practical problems that can’t be solved classically.

Although the fundamental quantum principles are well-established and proven to work, there remain many hurdles that quantum information technologies have to clear before this industry can routinely deliver resources with transformative capabilities. But many researchers think that moment of “practical quantum advantage” is fast approaching, and an entire industry is readying itself for that day.

Entangled marketplace

So what are the current capabilities and near-term prospects for quantum computing?

The first thing to acknowledge is that a booming quantum-computing market exists. Devices are being produced for commercial use by a number of tech firms, from the likes of IBM, Google, Canada-based D-Wave and Rigetti, which have been in the field for a decade or more, to relative newcomers like Nord Quantique (Canada), IQM (Finland), Quantinuum (UK and US), Orca (UK), PsiQuantum (US) and Silicon Quantum Computing (Australia).

The global quantum ecosystem

Map showing global investments in quantum computing. (Courtesy: QURECA)

We are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. These technologies include quantum computers; quantum sensing (ultra-high-precision clocks and sensors for medical diagnostics); and quantum communications (a quantum internet). Indeed, according to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. As of this year, worldwide investments in quantum tech – by governments and industry – exceed $55.7 billion, and the market is projected to reach $106 billion by 2040. With the multitude of ground-breaking capabilities that quantum technologies bring globally, it’s unsurprising that governments all over the world are eager to invest in the industry.

With data from a number of international reports and studies, quantum education and skills firm QURECA has summarized key programmes and efforts around the world. These include total government funding spent through 2025, as well as future commitments spanning 2–10 year programmes, varying by country. These initiatives generally represent government agencies’ funding announcements, related to their countries’ advancements in quantum technologies, excluding any private investments and revenues.

A supply chain is also developing organically, which includes manufacturers of specific hardware components, such as Oxford Instruments and Quantum Machines, and software developers like Riverlane, based in Cambridge, UK, and QC Ware in Palo Alto, California. Supplying the last link in this chain is a range of eager end-users, from finance companies such as JP Morgan and Goldman Sachs to pharmaceutical companies such as AstraZeneca and engineering firms like Airbus. Quantum computing is already big business, with around 400 active companies and current global investment estimated at around $2 billion.

But the immediate future of all this buzz is hard to assess. When the chief executive of computer giant Nvidia announced at the start of 2025 that “truly useful” quantum computers were still two decades away, the previously burgeoning share prices of some leading quantum-computing companies plummeted. They have since recovered somewhat, but such volatility reflects the fact that quantum computing has yet to prove its commercial worth.

The field is still new and firms need to manage expectations and avoid hype while also promoting an optimistic enough picture to keep investment flowing in. “Really amazing breakthroughs are being made,” says physicist Winfried Hensinger of the University of Sussex, “but we need to get away from the expectancy that [truly useful] quantum computers will be available tomorrow.”

The current state of play is often called the “noisy intermediate-scale quantum” (NISQ) era. That’s because the “noisy” quantum bits (qubits) in today’s devices are prone to errors for which no general and simple correction process exists. Current quantum computers can’t therefore carry out practically useful computations that could not be done on classical high-performance computing (HPC) machines. It’s not just a matter of better engineering either; the basic science is far from done.

Building up: quantum computing behemoth IBM says that by 2029, its fault-tolerant system should accurately run 100 million gates on 200 logical qubits, thereby truly achieving quantum advantage. (Courtesy: IBM)

“We are right on the cusp of scientific quantum advantage – solving certain scientific problems better than the world’s best classical methods can,” says Ashley Montanaro, a physicist at the University of Bristol who co-founded the quantum software company Phasecraft. “But we haven’t yet got to the stage of practical quantum advantage, where quantum computers solve commercially important and practically relevant problems such as discovering the next lithium-ion battery.” It’s no longer if or how, but when that will happen.

Pick your platform

As the quantum-computing business is such an emerging area, today’s devices use wildly different types of physical systems for their qubits. There is still no clear sign as to which of these platforms, if any, will emerge as the winner. Indeed many researchers believe that no single qubit type will ever dominate.

The top-performing quantum computers, like those made by Google (with its 105-qubit Willow chip) and IBM (which has made the 1121-qubit Condor), use qubits in which information is encoded in the wavefunction of a superconducting material. Until recently, the strongest competing platform seemed to be trapped ions, where the qubits are individual ions held in electromagnetic traps – a technology being developed into working devices by the US company IonQ, spun out from the University of Maryland, among others.

But over the past few years, neutral trapped atoms have emerged as a major contender, thanks to advances in controlling the positions and states of these qubits. Here the atoms are prepared in highly excited electronic states called Rydberg atoms, which can be entangled with one another over a few microns. A Harvard startup called QuEra is developing this technology, as is the French start-up Pasqal. In September a team from the California Institute of Technology announced a 6100-qubit array made from neutral atoms. “Ten years ago I would not have included [neutral-atom] methods if I were hedging bets on the future of quantum computing,” says Deutsch’s Oxford colleague, the quantum information theorist Andrew Steane. But like many, he thinks differently now.

Some researchers believe that optical quantum computing, using photons as qubits, will also be an important platform. One advantage here is that there is no need for complex conversion of photonic signals in existing telecommunications networks going to or from the processing units, which is also handy for photonic interconnections between chips. What’s more, photonic circuits can work at room temperature, whereas trapped ions and superconducting qubits need to be cooled. Photonic quantum computing is being developed by firms like PsiQuantum, Orca, and Xanadu.

Other efforts, for example at Intel and Silicon Quantum Computing in Australia, make qubits from either quantum dots (Intel) or precision-placed phosphorus atoms (SQC), both in good old silicon, which benefits from a very mature manufacturing base. “Small qubits based on ions and atoms yield the highest quality processors”, says Michelle Simmons of the University of New South Wales, who is the founder and CEO of SQC. “But only atom-based systems in silicon combine this quality with manufacturability.”

Spinning around: Intel’s silicon spin qubits are now being manufactured on an industrial scale. (Courtesy: Intel Corporation)

And it’s not impossible that entirely new quantum computing platforms might yet arrive. At the start of 2025, researchers at Microsoft’s laboratories in Washington State caused a stir when they announced that they had made topological qubits from semiconducting and superconducting devices, which are less error-prone than those currently in use. The announcement left some scientists disgruntled because it was not accompanied by a peer-reviewed paper providing the evidence for these long-sought entities. But in any event, most researchers think it would take a decade or more for topological quantum computing to catch up with the platforms already out there.

Each of these quantum technologies has its own strengths and weaknesses. “My personal view is that there will not be a single architecture that ‘wins’, certainly not in the foreseeable future,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC), which aims to facilitate the transition of quantum computing from basic research to an industrial concern. Cuthbert thinks the best platform will differ for different types of computation: cold neutral atoms might be good for quantum simulations of molecules, materials and exotic quantum states, say, while superconducting and trapped-ion qubits might be best for problems involving machine learning or optimization.

Measures and metrics

Given these pros and cons of different hardware platforms, one difficulty in assessing their merits is finding meaningful metrics for making comparisons. Should we be comparing error rates, coherence times (basically how long qubits remain entangled), gate speeds (how fast a single computational step can be conducted), circuit depth (how many steps a single computation can sustain), number of qubits in a processor, or what? “The metrics and measures that have been put forward so far tend to suit one or other platform more than others,” says Cuthbert, “such that it becomes almost a marketing exercise rather than a scientific benchmarking exercise as to which quantum computer is better.”

The NQCC evaluates the performance of devices using a factor known as the “quantum operation” (QuOp). This is simply the number of quantum operations that can be carried out in a single computation, before the qubits lose their coherence and the computation dissolves into noise. “If you want to run a computation, the number of coherent operations you can run consecutively is an objective measure,” Cuthbert says. If we want to get beyond the NISQ era, he adds, “we need to progress to the point where we can do about a million coherent operations in a single computation. We’re now at the level of maybe a few thousand. So we’ve got a long way to go before we can run large-scale computations.”
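
In spirit, the QuOp figure is just the ratio of how long the qubits stay coherent to how long one operation takes. The numbers in the sketch below are illustrative round figures for two hardware families, not NQCC benchmarks, but they show why “a few thousand” coherent operations is the right ballpark today and why a million is still a long way off.

```python
# Back-of-the-envelope "coherent operations per computation" (illustrative numbers only)
platforms = {
    # name: (coherence time in seconds, gate time in seconds) -- assumed, typical-order values
    "superconducting": (100e-6, 100e-9),
    "trapped ion":     (1.0,    50e-6),
}

for name, (t_coherence, t_gate) in platforms.items():
    quop = t_coherence / t_gate   # operations that fit inside the coherence window
    print(f"{name:15s}: ~{quop:,.0f} coherent operations per computation")
```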

One important issue is how amenable the platforms are to making larger quantum circuits. Cuthbert contrasts the issue of scaling up – putting more qubits on a chip – with “scaling out”, whereby chips of a given size are linked in modular fashion. Many researchers think it unlikely that individual quantum chips will have millions of qubits like the silicon chips of today’s machines. Rather, they will be modular arrays of relatively small chips linked at their edges by quantum interconnects.

Having made the Condor, IBM now plans to focus on modular architectures (scaling out) – a necessity anyway, since superconducting qubits are micron-sized, so a chip with millions of them would be “bigger than your dining room table”, says Cuthbert. But superconducting qubits are not easy to scale out because microwave frequencies that control and read out the qubits have to be converted into optical frequencies for photonic interconnects. Cold atoms are easier to scale up, as the qubits are small, while photonic quantum computing is easiest to scale out because it already speaks the same language as the interconnects.

To build so-called “fault-tolerant” quantum computers, quantum platforms must solve the issue of error correction, which will enable more extensive computations without the results becoming degraded into mere noise.

In part two of this feature, we will explore how this is being achieved and meet the various firms developing quantum software. We will also look into the potential high-value commercial uses for robust quantum computers – once such devices exist.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.

Physicists achieve first entangled measurement of W states

14 October 2025 at 14:15

Imagine two particles so interconnected that measuring one immediately reveals information about the other, even if the particles are light-years apart. This phenomenon, known as quantum entanglement, is the foundation of a variety of technologies such as quantum cryptography and quantum computing. However, entangled states are notoriously difficult to control. Now, for the first time, a team of physicists in Japan has performed a collective quantum measurement on a W state comprising three entangled photons. This allowed them to analyse the three entangled photons at once rather than one at a time. This achievement, reported in Science Advances, marks a significant step towards the practical development of quantum technologies.

Physicists usually measure entangled particles using a technique known as quantum tomography. In this method, many identical copies of a particle are prepared, and each copy is measured at a different angle. The results of these measurements are then combined to reconstruct its full quantum state. To visualize this, imagine being asked to take a family photo. Instead of taking one group picture, you have to photograph each family member individually and then combine all the photos into a single portrait. Now imagine instead taking the photo properly: one photograph of the entire family, all at once. This is essentially what happens in an entangled measurement: all particles are measured simultaneously rather than separately. This approach allows for significantly faster and more efficient measurements.

So far, for three-particle systems, entangled measurements have only been performed on Greenberger–Horne–Zeilinger (GHZ) states, in which the qubits (the quantum bits of the system) are in a superposition of all being in one state and all being in the other. Until now, no one had carried out an entangled measurement for a more complicated set of states known as W states, which do not share this all-or-nothing property. In their experiment, the researchers at Kyoto University and Hiroshima University specifically used the simplest type of W state, made up of three photons, where each photon’s polarization (horizontal or vertical) is represented by one qubit.
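
For concreteness, the two three-qubit families mentioned here have standard textbook forms (writing 0 and 1 for the two polarization states of each photon):

$$|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr), \qquad |W\rangle = \tfrac{1}{\sqrt{3}}\bigl(|001\rangle + |010\rangle + |100\rangle\bigr)$$

The GHZ state is an all-or-nothing superposition, whereas the W state shares a single excitation across the three photons – which is why measuring one photon of a W state can still leave the other two entangled.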

“In a GHZ state, if you measure one qubit, the whole superposition collapses. But in a W state, even if you measure one particle, entanglement still remains,” explains Shigeki Takeuchi, corresponding author of the paper describing the study. This robustness makes the W state particularly appealing for quantum technologies.

Fourier transformations

The team took advantage of the fact that different W states look almost identical but differ by a tiny phase shift, which acts as a hidden label that distinguishes one state from another. Using a tool called a discrete Fourier transform (DFT) circuit, the researchers were able to “decode” this phase and tell the states apart.

The DFT exploits a special type of symmetry inherent to W states. Since the method relies on symmetry, in principle it can be extended to systems containing any number of photons. The researchers prepared photons in controlled polarization states and ran them through the DFT, which provided each state’s phase label. Afterwards, the photons were sent through polarizing beam splitters that separated them into vertically and horizontally polarized groups. By counting both sets of photons, and combining this with information from the DFT, the team could identify the W state.
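
One way to picture the “hidden labels” is as relative phases between the three terms of the W state, stepping in multiples of 2π/3 – exactly the structure a discrete Fourier transform resolves. This is a sketch of the idea rather than the paper’s exact notation:

$$|W_k\rangle = \tfrac{1}{\sqrt{3}}\bigl(|100\rangle + \omega^{k}|010\rangle + \omega^{2k}|001\rangle\bigr), \qquad \omega = e^{2\pi i/3},\quad k = 0, 1, 2$$

The three states are mutually orthogonal, and the DFT circuit maps each phase label k onto a distinct output pattern, so a single collective measurement can tell them apart.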

The experiment identified the correct W state about 87% of the time, well above the 15% success rate typically achieved using tomography-based measurements. Maintaining this level of performance was a challenge, as tiny fluctuations in optical paths or photon loss can easily destroy the fragile interference pattern. The fact that the team could maintain stable performance long enough to collect statistically reliable data marks an important technical milestone.

Scalable to larger systems

“Our device is not just a single-shot measurement: it works with 100% efficiency,” Takeuchi adds. “Most linear optical protocols are probabilistic, but here the success probability is unity.” Although demonstrated with three photons, this procedure is directly scalable to larger systems, as the key insight is the symmetry that the DFT can detect.

“In terms of applications, quantum communication seems the most promising,” says Takeuchi. “Because our device is highly efficient, our protocol could be used for robust communication between quantum computer chips. The next step is to build all of this on a tiny photonic chip, which would reduce errors and photon loss and help make this technology practical for real quantum computers and communication networks.”

The post Physicists achieve first entangled measurement of W states appeared first on Physics World.

Physicists apply quantum squeezing to a nanoparticle for the first time

14 October 2025 at 10:00

Physicists at the University of Tokyo, Japan have performed quantum mechanical squeezing on a nanoparticle for the first time. The feat, which they achieved by levitating the particle and rapidly varying the frequency at which it oscillates, could allow us to better understand how very small particles transition between classical and quantum behaviours. It could also lead to improvements in quantum sensors.

Oscillating objects that are smaller than a few microns in diameter have applications in many areas of quantum technology. These include optical clocks and superconducting devices as well as quantum sensors. Such objects are small enough to be affected by Heisenberg’s uncertainty principle, which places a limit on how precisely we can simultaneously measure the position and momentum of a quantum object. More specifically, the product of the measurement uncertainties in the position and momentum of such an object must be greater than or equal to ħ/2, where ħ is the reduced Planck constant.

In these circumstances, the only way to decrease the uncertainty in one variable – for example, the position – is to boost the uncertainty in the other. This process has no classical equivalent and is called squeezing because reducing uncertainty along one axis of position-momentum space creates a “bulge” in the other, like squeezing a balloon.
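
In symbols, the constraint and the squeezing operation read as follows (standard notation, not specific to this experiment):

$$\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad (\Delta p)^2_{\rm squeezed} < (\Delta p)^2_{\rm ground} \;\;\text{at the cost of}\;\; (\Delta x)^2_{\rm squeezed} > (\Delta x)^2_{\rm ground}$$

The product of the uncertainties never drops below ħ/2; squeezing only redistributes it between the two variables.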

A charge-neutral nanoparticle levitated in an optical lattice

In the new work, which is detailed in Science, a team led by Kiyotaka Aikawa studied a single, charge-neutral nanoparticle levitating in a periodic intensity pattern formed by the interference of criss-crossed laser beams. Such patterns are known as optical lattices, and they are ideal for testing the quantum mechanical behaviour of small-scale objects because they can levitate the object. This keeps it isolated from other particles and allows it to sustain its fragile quantum state.

After levitating the particle and cooling it to its motional ground state, the team rapidly varied the intensity of the lattice laser. This had the effect of changing the particle’s oscillation frequency, which in turn changed the uncertainty in its momentum. To measure this change (and prove they had demonstrated quantum squeezing), the researchers then released the nanoparticle from the trap and let it propagate for a short time before measuring its velocity. By repeating these time-of-flight measurements many times, they were able to obtain the particle’s velocity distribution.

The telltale sign of quantum squeezing, the physicists say, is that the velocity distribution they measured for the nanoparticle was narrower than the uncertainty in velocity for the nanoparticle at its lowest energy level. Indeed, the measured velocity variance was narrower than that of the ground state by 4.9 dB, which they say is comparable to the largest mechanical quantum squeezing obtained thus far.
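
Under the usual decibel convention for squeezing, that figure corresponds to a variance reduced to roughly a third of the zero-point value:

$$\frac{(\Delta v)^2_{\rm squeezed}}{(\Delta v)^2_{\rm ground}} = 10^{-4.9/10} \approx 0.32$$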

“Our system will enable us to realize further exotic quantum states of motions and to elucidate how quantum mechanics should behave at macroscopic scales and become classical,” Aikawa tells Physics World. “This could allow us to develop new kinds of quantum devices in the future.”

The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.

Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize

13 October 2025 at 12:45

The theoretical physicist Michael Berry from the University of Bristol has won the 2025 Isaac Newton Medal and Prize for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics by an individual of any nationality”.

Born in 1941 in Surrey, UK, Berry earned a BSc in physics from the University of Exeter in 1962 and a PhD from the University of St Andrews in 1965. He then moved to Bristol, where he has remained for the rest of his career.

Berry is best known for his work in the 1980s in which he showed that, under certain conditions, quantum systems can acquire what is known as a geometric phase. He was studying quantum systems in which the Hamiltonian describing the system is slowly changed so that it eventually returns to its initial form.

Berry showed that the adiabatic theorem widely used to describe such systems was incomplete and that a system acquires a phase factor that depends on the path followed, but not on the rate at which the Hamiltonian is changed. This geometric phase factor is now known as the Berry phase.
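
In its standard form, the geometric phase acquired by the n-th eigenstate when the Hamiltonian’s parameters R are carried slowly around a closed loop C is

$$\gamma_n(C) = i\oint_C \langle n(\mathbf{R})\,|\,\nabla_{\mathbf{R}}\, n(\mathbf{R})\rangle \cdot d\mathbf{R}$$

which depends only on the geometry of the loop traced out in parameter space, not on how quickly it is traversed – the path dependence described above.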

Over his career, Berry has written some 500 papers across a wide range of topics. In physics, Berry’s ideas have applications in condensed matter, quantum information and high-energy physics, as well as optics, nonlinear dynamics, and atomic and molecular physics. In mathematics, meanwhile, his work forms the basis for research in analysis, geometry and number theory.

Berry told Physics World that the award is “unexpected recognition for six decades of obsessive scribbling…creating physics by seeking ‘claritons’ – elementary particles of sudden understanding – and evading ‘anticlaritons’ that annihilate them” as well as “getting insights into nature’s physics” such as studying tidal bores, tsunamis, rainbows and “polarised light in the blue sky”.

Over the years, Berry has won a number of other honours, including the IOP’s Dirac Medal and the Royal Medal from the Royal Society, both awarded in 1990. He was also given the Wolf Prize for Physics in 1998 and the 2014 Lorentz Medal from the Royal Netherlands Academy of Arts and Sciences. In 1996 he received a knighthood for his services to science.

Berry will also be a speaker at the IOP’s International Year of Quantum celebrations on 4 November.

Celebrating success

Berry’s latest honour forms part of the IOP’s wider 2025 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists. Other winners include Julia Yeomans, who receives the Dirac Medal and Prize for her work highlighting the relevance of active physics to living matter.

Lok Yiu Wu, meanwhile, receives the Jocelyn Bell Burnell Medal and Prize for her work on the development of a novel magnetic radical filter device, and for ongoing support of women and underrepresented groups in physics.

In a statement, IOP president Michele Dougherty congratulated all the winners. “It is becoming more obvious that the opportunities generated by a career in physics are many and varied – and the potential our science has to transform our society and economy in the modern world is huge,” says Dougherty. “I hope our winners appreciate they are playing an important role in this community, and know how proud we are to celebrate their successes.”

The full list of 2025 award winners is available here.

The post Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize appeared first on Physics World.

Phase shift in optical cavities could detect low-frequency gravitational waves

13 October 2025 at 10:00

A network of optical cavities could be used to detect gravitational waves (GWs) in an unexplored range of frequencies, according to researchers in the UK. Using technology already within reach, the team believes that astronomers could soon be searching for ripples in space–time across the milli-Hz frequency band, spanning roughly 10⁻⁵ Hz to 1 Hz.

GWs were first observed a decade ago and since then the LIGO–Virgo–KAGRA detectors have spotted GWs from hundreds of merging black holes and neutron stars. These detectors work in the 10 Hz–30 kHz range. Researchers have also had some success at observing a GW background at nanohertz frequencies using pulsar timing arrays.

However, GWs have yet to be detected in the milli-Hz band, which should include signals from binary systems of white dwarfs, neutron stars, and stellar-mass black holes. Many of these signals would emanate from the Milky Way.

Several projects are now in the works to explore these frequencies, including the space-based interferometers LISA, Taiji, and TianQin; as well as satellite-borne networks of ultra-precise optical clocks. However, these projects are still some years away.

Multidisciplinary effort

Joining these efforts was a collaboration called QSNET, which was part of the UK’s Quantum Technology for Fundamental Physics (QTFP) programme. “The QSNET project was a network of clocks for measuring the stability of fundamental constants,” explains Giovanni Barontini at the University of Birmingham. “This programme brought together physics communities that normally don’t interact, such as quantum physicists, technologists, high energy physicists, and astrophysicists.”

QTFP ended this year, but not before Barontini and colleagues had made important strides in demonstrating how milli-Hz GWs could be detected using optical cavities.

Inside an ultrastable optical cavity, light at specific resonant frequencies bounces constantly between a pair of opposing mirrors. When this resonant light is produced by a specific atomic transition, the frequency of the light in the cavity is very precise and can act as the ticking of an extremely stable clock.
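
The “ticking” relies on the resonance condition of a two-mirror cavity, given here as general background rather than as the detection scheme itself: only light whose half-wavelength fits a whole number of times between the mirrors is supported, so

$$\nu_N = \frac{Nc}{2L}, \qquad \frac{\delta\nu}{\nu} = -\frac{\delta L}{L}$$

meaning any fractional change in the cavity’s optical length maps directly onto a fractional shift of its resonant frequency – which is what makes ultrastable cavities such precise frequency and phase references.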

“Ultrastable cavities are a main component of modern optical atomic clocks,” Barontini explains. “We demonstrated that they have reached sufficient sensitivities to be used as ‘mini-LIGOs’ and detect gravitational waves.”

When such a GW passes through an optical cavity, the spacing between its mirrors does not change in any detectable way. However, QSNET results have led Barontini’s team to conclude that milli-Hz GWs alter the phase of the light inside the cavity. What is more, they conclude that this effect would be detectable in the most precise optical cavities currently available.

“Methods from precision measurement with cold atoms can be transferred to gravitational-wave detection,” explains team member Vera Guarrera. “By combining these toolsets, compact optical resonators emerge as credible probes in the milli-Hz band, complementing existing approaches.”

Ground-based network

Their compact detector would comprise two optical cavities at 90° to each other – each operating at a different frequency – and an atomic reference at a third frequency. The phase shift caused by a passing gravitational wave is revealed in a change in how the three frequencies interfere with each other. The team proposes linking multiple detectors to create a global, ground-based network. This, they say, could detect a GW and also locate the position of its source in the sky.

By harnessing this existing technology, the researchers now hope that future studies could open up a new era of discovery of GWs in the milli-Hz range, far sooner than many projects currently in development.

“This detector will allow us to test astrophysical models of binary systems in our galaxy, explore the mergers of massive black holes, and even search for stochastic backgrounds from the early universe,” says team member Xavier Calmet at the University of Sussex. “With this method, we have the tools to start probing these signals from the ground, opening the path for future space missions.”

Barontini adds, “Hopefully this work will inspire the build-up of a global network of sensors that will scan the skies in a new frequency window that promises to be rich in sources – including many from our own galaxy.”

The research is described in Classical and Quantum Gravity.

 

The post Phase shift in optical cavities could detect low-frequency gravitational waves appeared first on Physics World.

The physics behind why cutting onions makes us cry

10 octobre 2025 à 16:30

Researchers in the US have studied the physics of how cutting onions can produce a tear-jerking reaction.

While it is known that a volatile chemical released from the onion – propanethial S-oxide – irritates the nerves in the cornea to produce tears, how the chemical-laden droplets reach the eyes and whether they are influenced by the knife or cutting technique remain less clear.

To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades.

“No one had visualized or quantified this process,” Jung told Physics World. “That curiosity led us to explore the mechanics of droplet ejection during onion cutting using high-speed imaging and strain mapping.”

They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets.

The most energetic droplets were released during the initial contact between the blade and the onion’s skin.

When they began varying the sharpness of the blade and the cutting speed, they discovered that a greater number of droplets were released by blunter blades and faster cutting speeds.

“That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.”

Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared to unchilled vegetables.

So if you want to reduce the chances of welling up when making dinner, sharpen your knives, cut slowly and perhaps don’t keep the bulbs in the fridge.

The researchers say there are many more layers to the work and now plan to study how different onion varieties respond to cutting as well as how cutting could influence the spread of airborne pathogens such as salmonella.

The post The physics behind why cutting onions makes us cry appeared first on Physics World.

Motion blur brings a counterintuitive advantage for high-resolution imaging

10 octobre 2025 à 10:00
Three pairs of greyscale images, showing text, a pattern of lines, and an image. The left images are blurred, the right images are clearer
Blur benefit: Images on the left were taken by a camera that was moving during exposure. Images on the right used the researchers’ algorithm to increase their resolution with information captured by the camera’s motion. (Courtesy: Pedro Felzenszwalb/Brown University)

Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.

“We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”

Conventional techniques to reconstruct high-resolution images from low-resolution ones involve relating low-res to high-res via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, as they produce only relatively small increases in resolution. If the initial image is blurred by camera motion, this further limits the maximum resolution achievable.

Exploiting the “tracks” left by small points of light

Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.
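
The Brown group’s full reconstruction algorithm is not reproduced here, but the core idea – that known motion encodes sub-pixel information which can be recovered by inverting a forward model – can be illustrated with a deliberately simple one-dimensional sketch in Python (all sizes, shifts and noise levels below are illustrative):

import numpy as np

# Toy 1-D illustration of multi-frame super-resolution from known camera motion.
# This is a generic sketch, not the Brown group's algorithm.
rng = np.random.default_rng(0)
n_hi, factor = 64, 4                  # fine-grid length and downsampling factor
n_lo = n_hi // factor
x_true = np.zeros(n_hi)
x_true[[10, 11, 30, 47]] = [1.0, 0.8, 1.2, 0.9]   # fine details no single frame can resolve

def sampling_matrix(shift):
    """One low-res frame: sample the fine grid every `factor` pixels, offset by a known shift."""
    A = np.zeros((n_lo, n_hi))
    for i in range(n_lo):
        A[i, i * factor + shift] = 1.0
    return A

shifts = [0, 1, 2, 3]                                    # sub-pixel offsets produced by the motion
A = np.vstack([sampling_matrix(s) for s in shifts])      # stacked forward model
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])  # observed low-res frames, with noise

x_rec, *_ = np.linalg.lstsq(A, y, rcond=None)            # invert the model on the fine grid
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))

Because the four shifted frames together sample every fine-grid position, the least-squares inversion recovers the sub-pixel detail to within the noise level – something no single static frame could do.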

“There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”

Application scenarios

When they tried the algorithm out, they found that it could indeed exploit the camera motion to produce images with much higher resolution than those without the motion. In one experiment, they used a standard camera to capture a series of images in a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path. These scenarios, they note, could be applicable to aerial or satellite imaging. In both, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.

“Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”

The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.

“While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”

The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.

The post Motion blur brings a counterintuitive advantage for high-resolution imaging appeared first on Physics World.

Hints of a boundary between phases of nuclear matter found at RHIC

9 octobre 2025 à 17:30

In a major advance for nuclear physics, scientists on the STAR detector at the Relativistic Heavy Ion Collider (RHIC) in the US have spotted subtle but striking fluctuations in the number of protons emerging from high-energy gold–gold collisions. The observation might be the most compelling sign yet of the long-sought “critical point” marking a boundary separating different phases of nuclear matter. This is similar to how water can exist in liquid or vapour phases depending on temperature and pressure.

Team member Frank Geurts at Rice University in the US tells Physics World that these findings could confirm that the “generic physics properties of phase diagrams that we know for many chemical substances apply to our most fundamental understanding of nuclear matter, too.”

A phase diagram maps how a substance transforms between solid, liquid, and gas. For everyday materials like water, the diagram is familiar, but the behaviour of nuclear matter under extreme heat and pressure remains a mystery.

Atomic nuclei are made of protons and neutrons tightly bound together. These protons and neutrons are themselves made of quarks that are held together by gluons. When nuclei are smashed together at high energies, the protons and neutrons “melt” into a fluid of quarks and gluons called a quark–gluon plasma. This exotic high-temperature state is thought to have filled the universe just microseconds after the Big Bang.

Smashing gold ions

The quark–gluon plasma is studied by accelerating heavy ions like gold nuclei to nearly the speed of light and smashing them together. “The advantage of using heavy-ion collisions in colliders such as RHIC is that we can repeat the experiment many millions, if not billions, of times,” Geurts explains.

By adjusting the collision energy, researchers can control the temperature and density of the fleeting quark–gluon plasma they create. This allows physicists to explore the transition between ordinary nuclear matter and the quark–gluon plasma. Within this transition, theory predicts the existence of a critical point where gradual change becomes abrupt.

Now, the STAR Collaboration has focused on measuring the minute fluctuations in the number of protons produced in each collision. These “proton cumulants,” says Geurts, are statistical quantities that “help quantify the shape of a distribution – here, the distribution of the number of protons that we measure”.

In simple terms, the first two cumulants correspond to the average and width of that distribution, while higher-order cumulants describe its asymmetry and sharpness. Ratios of these cumulants are tied to fundamental properties known as susceptibilities, which become highly sensitive near a critical point.
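
As a rough illustration of what these quantities measure – using generic statistics rather than the STAR analysis chain – the first few cumulants and their ratios can be estimated directly from a list of event-by-event proton counts. For a pure Poisson source all cumulants are equal, so the ratios sit near one, which is the baseline against which critical-point signals are sought:

import numpy as np

# Hypothetical event-by-event proton multiplicities from a Poisson source (illustrative only).
rng = np.random.default_rng(1)
protons = rng.poisson(lam=12.0, size=200_000)

mu = protons.mean()
d = protons - mu
c1 = mu                          # mean
c2 = np.mean(d**2)               # variance (width of the distribution)
c3 = np.mean(d**3)               # related to skewness (asymmetry)
c4 = np.mean(d**4) - 3 * c2**2   # related to kurtosis (sharpness of the tails)

# Ratios such as these are what get compared with susceptibility ratios from theory;
# near a critical point they are expected to deviate strongly from the Poisson value of ~1.
print("C2/C1 =", c2 / c1, "  C3/C2 =", c3 / c2, "  C4/C2 =", c4 / c2)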

Unexpected discovery

Over three years of experiments, the STAR team studied gold–gold collisions at a wide range of energies, using sophisticated detectors to track and identify the protons and antiprotons created in each event. By comparing how the number of these particles changed with energy, the researchers discovered something unexpected.

As the collision energy decreased, the fluctuations in proton numbers did not follow a smooth trend. “STAR observed what it calls non-monotonic behaviour,” Geurts explains. “While at higher energies the ratios appear to be suppressed, STAR observes an enhancement at lower energies.” Such irregular changes, he says, are consistent with what might happen if the collisions pass near the critical point – the boundary separating different phases of nuclear matter.

For Volodymyr Vovchenko, a physicist at the University of Houston who was not involved in the research, the new measurements represent “a major step forward”. He says that “the STAR Collaboration has delivered the most precise proton-fluctuation data to date across several collision energies”.

Still, interpretation remains delicate. The corrections required to extract pure physical signals from the raw data are complex, and theoretical calculations lag behind in providing precise predictions for what should happen near the critical point.

“The necessary experimental corrections are intricate,” Vovchenko says, and some theoretical models “do not yet implement these corrections in a fully consistent way.” That mismatch, he cautions, “can blur apples-to-apples comparisons.”

The path forward

The STAR team is now studying new data from lower-energy collisions, focusing on the range where the signal appears strongest. The results could reveal whether the observed pattern marks the presence of a nuclear matter critical point or stems from more conventional effects.

Meanwhile, theorists are racing to catch up. “The ball now moves largely to theory’s court,” Vovchenko says. He emphasizes the need for “quantitative predictions across energies and cumulants of various order that are appropriate for apples-to-apples comparisons with these data.”

Future experiments, including RHIC’s fixed-target program and new facilities such as the FAIR accelerator in Germany, will extend the search even further. By probing lower energies and producing vastly larger datasets, they aim to map the transition between ordinary nuclear matter and quark–gluon plasma with unprecedented precision.

Whether or not the critical point is finally revealed, the new data are a milestone in the exploration of the strong force and the early universe. As Geurts puts it, these findings trace “landmark properties of the most fundamental phase diagram of nuclear matter,” bringing physicists one step closer to charting how everything – from protons to stars – first came to be.

The research is described in Physical Review Letters.

The post Hints of a boundary between phases of nuclear matter found at RHIC appeared first on Physics World.

From quantum curiosity to quantum computers: the 2025 Nobel Prize for Physics

9 octobre 2025 à 15:50

This year’s Nobel Prize for Physics went to John Clarke, Michel Devoret and John Martinis “for the discovery of macroscopic quantum mechanical tunnelling and energy quantization in an electric circuit”.

That circuit was a superconducting device called a Josephson junction and their work in the 1980s led to the development of some of today’s most promising technologies for quantum computers.

To chat about this year’s laureates, and the wide-reaching scientific and technological consequences of their work, I am joined by Ilana Wisby – a quantum physicist, deep-tech entrepreneur and former CEO of UK-based Oxford Quantum Circuits. We discuss the trio’s breakthrough and its influence on today’s quantum science and technology.

(Courtesy: American Elements)

This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.

The post From quantum curiosity to quantum computers: the 2025 Nobel Prize for Physics appeared first on Physics World.

The power of physics: what can a physicist do in the nuclear energy industry?

9 octobre 2025 à 11:00

Nuclear power in the UK is on the rise – and so too are the job opportunities for physicists. Whether it’s planning and designing new reactors, operating existing plants safely and reliably, or dealing with waste management and decommissioning, physicists play a key role in the burgeoning nuclear industry.

The UK currently has nine operational reactors across five power stations, which together provided 12% of the country’s electricity in 2024. But the government wants that figure to reach 25% by 2050 as part of its goal to move away from fossil fuels and reach net zero. Some also think that nuclear energy will be vital for powering data centres for AI in a clean and efficient way.

While many see fusion as the future of nuclear power, it is still in the research and development stages, so fission remains where most job opportunities lie. Although eight of the current fleet of nuclear reactors are to be retired by the end of this decade, the first of the next generation are already under construction. At Hinkley Point C in Somerset, two new reactors are being built with costs estimated to reach £46bn; and in July 2025, Sizewell C in Suffolk got the final go-ahead.

Rolls-Royce, meanwhile, has just won a government-funded bid to develop small modular reactors (SMRs) in the UK. Although the technology is currently unproven, the hope is that SMRs will be cheaper and quicker to build than traditional plants, with proponents saying that each reactor could produce enough affordable emission-free energy to power about 600,000 homes for at least 60 years.

The renaissance of the nuclear power industry has led to employment in the sector growing by 35% between 2021 and 2024, with the workforce reaching over 85,000. However – as highlighted in a 2025 members survey by the Nuclear Institute – there are concerns about a skills shortage. In fact, the Nuclear Skills Delivery Group drew up the Nuclear Skills Plan in 2024 with the aim of addressing this problem.

Supported by an investment of £763m by 2030 from the UK government and industry, the plan’s objectives include quadrupling the number of PhDs in nuclear fission, and doubling the number of graduates entering the workforce. It also aims to provide opportunities for people to “upskill” and join the sector mid-career. The overall hope is to fill 40,000 new jobs by the end of the decade.

Having a degree in physics can open the door to any part of the nuclear-energy industry, from designing, operating or decommissioning a reactor, to training staff, overseeing safety or working as a consultant. We talk to six nuclear experts who all studied physics at university but now work across the sector, for a range of companies – including EDF Energy and Great British Energy–Nuclear. They give a quick snapshot of their “nuclear journeys”, and offer advice to those thinking of following in their footsteps.

Design and construction

Michael Hodgson
(Courtesy: Michael Hodgson)

Michael Hodgson, lead engineer, Rolls-Royce SMR

My interest in nuclear power started when I did a project on energy at secondary school. I learnt that there were significant challenges around the world’s future energy demands, resource security, and need for clean generation. Although at the time these were not topics commonly talked about, I could see they were vital to work on, and thought nuclear would play an important role.

I went on to study physics at the University of Surrey, with a year at Michigan State University in the US and another at CERN. After working for a couple of years, I returned to Surrey to do a part-time masters in radiation detection and instrumentation, followed a few years later by a PhD in radiation-hard semiconductor neutron detectors.

Up until recently, my professional work has mainly been in the supply chain for nuclear applications, working for Thermo Fisher Scientific, Centronic and Exosens. Nuclear power isn’t made by one company; it’s a combination of thousands of suppliers and sub-suppliers, the majority of which are small to medium-sized enterprises that need to operate across multiple industries. My job was primarily to act as a technical design authority for manufacturers of radiation detectors and instruments, which are used in applications such as reactor power monitoring, health physics, industrial controls and laboratory equipment, to name but a few. Now I work at Rolls-Royce SMR as a lead engineer for the control and instrumentation team. This role involves selecting and qualifying the thousands of different detectors and control instruments that will support the operation of small modular reactors.

Logical, evidence-based problem solving is the cornerstone of science and a powerful tool in any work setting

Beyond the technical knowledge I’ve gained throughout my education, studying physics has also given me two important skills. Firstly, learning how to learn – this is critical in academia but it also helps you step into any professional role. The second skill is the logical, evidence-based problem solving that is the cornerstone of science, which is a powerful tool in any work setting.

A career in nuclear energy can take many forms. The industry comprises a range of sectors and thousands of organizations that together form a complex support structure. My advice for any role is that knowledge is important, but experience is critical. While studying, try to look for opportunities to gain professional experience – this may be industry placements, research projects, or even volunteering. And it doesn’t have to be in your specific area of interest – cross-disciplinary experience breeds novel thinking. Utilizing these opportunities can guide your professional interests, set your CV apart from your peers, and bring pragmatism to your future roles.

Reactor operation

Katie Barber
(Courtesy: Katie Barber)

Katie Barber, nuclear reactor operator and simulator instructor at Sizewell B, EDF

I studied physics at the University of Leicester simply because it was a subject I enjoyed – at the time I had no idea what I wanted to do for a career. I first became interested in nuclear energy when I was looking for graduate jobs. The British Energy (now EDF) graduate scheme caught my eye because it offered a good balance of training and on-the-job experience. I was able to spend time in multiple different departments at different power stations before I decided which career path was right for me.

At the end of my graduate scheme, I worked in nuclear safety for several years. This involved reactor physics testing and advising on safety issues concerning the core and fuel. It was during that time I became interested in the operational response to faults. I therefore applied for the company’s reactor operator training programme – a two-year course that was a mixture of classroom and simulator training. I really enjoyed being a reactor operator, particularly during outages when the plant would be shut down, cooled, depressurized and disassembled for refuelling before reversing the process to start up again. But after almost 10 years in the control room, I wanted a new challenge.

Now I develop and deliver the training for the control-room teams. My job, which includes simulator and classroom training, covers everything from operator fundamentals (such as reactor physics and thermodynamics) and normal operations (e.g. start up and shutdown), through to accident scenarios.

My background in physics gives me a solid foundation for understanding the reactor physics and thermodynamics of the plant. However, there are also a lot of softer skills essential for my role. Teaching others requires the ability to present and explain technical material; to facilitate a constructive debrief after a simulator scenario; and to deliver effective coaching and feedback. The training focuses as much on human performance as it does technical knowledge, highlighting the importance of effective teamwork, error prevention and clear communications.

A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines

With Hinkley Point C construction progressing well and the recent final investment decision for Sizewell C, now is an exciting time to join the nuclear industry. A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines, before making the decision about which area is right for you.

Nuclear safety

Jacob Plummer
(Courtesy: Jacob Plummer)

Jacob Plummer, principal nuclear safety inspector, Office for Nuclear Regulation

I’d been generally interested in nuclear science throughout my undergraduate physics degree at the University of Manchester, but this really accelerated after studying modules in applied nuclear and reactor physics. The topic was engaging, and the nuclear industry offered a way to explore real-world implementation of physics concepts. This led me to do a masters in nuclear science and technology, also at Manchester (under the Nuclear Technology Education Consortium), to develop the skills the UK nuclear sector required.

My first job was as a graduate nuclear safety engineer at Atkins (now AtkinsRealis), an engineering consultancy. It opened my eyes to the breadth of physics-related opportunities in the industry. I worked on new and operational power station projects for Hitachi-GE and EDF, as well as a variety of defence new-build projects. I primarily worked in hazard analysis, using modelling and simulation tools to generate evidence on topics like fire, blast and flooding to support safety case claims and inform reactor designs. I was also able to gain experience in project management, business development, and other energy projects, such as offshore wind farms. The analytical and problem-solving skills I had developed during my physics studies really helped me to adapt to all of these roles.

Currently I work as a principal nuclear safety inspector at the Office for Nuclear Regulation. My role is quite varied. Day to day I might be assessing safety case submissions from a prospective reactor vendor; planning and delivering inspections at fuel and waste sites; or managing fire research projects as part of an international programme. A physics background helps me to understand complex safety arguments and how they link to technical evidence; and to make reasoned and logical regulatory judgements as a result.

Physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more

It’s a great time to join the nuclear industry with a huge amount of activity and investment across the nuclear lifecycle. I’d advise early-career professionals to cast the net wide when looking for roles. There are some obvious physics-related areas such as health physics, fuel and core design, and criticality safety, but physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more. Don’t be limited by the physicist label.

Waste and decommissioning

Becky Houghton
(Courtesy: Egis)

Becky Houghton, principal consultant, Galson Sciences Ltd

My interest in a career in nuclear energy sparked mid-way through my degree in physics and mathematics at the University of Sheffield, when I was researching “safer nuclear power” for an essay. Several rabbit holes later, I had discovered a myriad of opportunities in the sector that would allow me to use the skills and knowledge I’d gained through my degree in an industrial setting.

My first job in the field was as a technical support advisor on a graduate training scheme, where I supported plant operations on a nuclear licensed site. Next, I did a stint working in strategy development and delivery across the back end of the fuel cycle, before moving into consultancy. I now work as a principal consultant for Galson Sciences Ltd, part of the Egis group. Egis is an international multi-disciplinary consulting and engineering firm, within which Galson Sciences provides specialist nuclear decommissioning and waste management consultancy services to nuclear sector clients worldwide.

Ultimately, my role boils down to providing strategic and technical support to help clients make decisions. My focus these days tends to be around radioactive waste management, which can mean anything from analysing radioactive waste inventories to assessing the environmental safety of disposal facilities.

In terms of technical skills needed for the role, data analysis and the ability to provide high-quality reports on time and within budget are at the top of the list. Physics-wise, an understanding of radioactive decay, criticality mechanisms and the physico-chemical properties of different isotopes is a fairly fundamental requirement. Meanwhile, as a consultant, some of the most important soft skills are being able to lead, teach and mentor less experienced colleagues; develop and maintain strong client relationships; and look after the well-being and deployment of my staff.

Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference

My advice to anyone looking to go into nuclear energy is to go for it. There are lots of really interesting things happening right now across the industry, all the way from building new reactors and operating the current fleet, to decommissioning, site remediation and waste management activities. Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference, whether that’s by cleaning up the legacy of years gone by or by helping to meet the UK’s energy demands. Don’t be afraid to say “yes” to opportunities even if they’re outside your comfort zone, keep learning, and keep being curious about the world around you.

Uranium enrichment

Mark Savage
(Courtesy: Mark Savage)

Mark Savage, nuclear licensing manager, Urenco UK

As a child, I remember going to the visitors’ centre at the Sellafield nuclear site – a large nuclear facility in the north-west of England that’s now the subject of a major clean-up and decommissioning operation. At the centre, there was a show about splitting the atom that really sparked my interest in physics and nuclear energy.

I went on to study physics at Durham University, and did two summer placements at Sellafield, working with radiometric instruments. I feel these placements helped me get a place on the Rolls-Royce nuclear engineering graduate scheme after university. From there I joined Urenco, an international supplier of uranium enrichment services and fuel cycle products for the civil nuclear industry.

While at Urenco, I have undertaken a range of interesting roles in nuclear safety and radiation physics, including criticality safety assessment and safety case management. Highlights have included being the licensing manager for a project looking to deploy a high-temperature gas-cooled reactor design, and presenting a paper at a nuclear industry conference in Japan. These roles have allowed me to directly apply my physics background – such as using Monte Carlo radiation transport codes to model nuclear systems and radiation sources – as well as develop broader knowledge and skills in safety, engineering and project management.

My current role is nuclear licensing manager at the Capenhurst site in Cheshire, where we operate a number of nuclear facilities including three uranium enrichment plants, a uranium chemical deconversion facility, and waste management facilities. I lead a team who ensure the site complies with regulations, and achieves the required approvals for our programme of activities. Key skills for this role include building relationships with internal and external stakeholders; being able to understand and explain complex technical issues to a range of audiences; and planning programmes of work.

I would always recommend anyone interested in working in nuclear energy to look for work experience

Some form of relevant experience is always advantageous, so I would always recommend anyone interested in working in nuclear energy to look for work experience visits, summer placements or degree schemes that include working with industry.

Skills initiatives

Saralyn Thomas
(Courtesy: Great British Energy – Nuclear)

Saralyn Thomas, skills lead, Great British Energy – Nuclear

During my physics degree at the University of Bristol, my interest in energy led me to write a dissertation on nuclear power. This inspired me to do a masters in nuclear science and technology at the University of Manchester under the Nuclear Technology Education Consortium. The course opened doors for me, such as a summer placement with the UK National Nuclear Laboratory, and my first role as a junior safety consultant with Orano.

I worked in nuclear safety for roughly 10 years, progressing to principal consultant with Abbott Risk Consulting, but decided that this wasn’t where my strengths and passions lay. During my career, I volunteered for the Nuclear Institute (NI), and worked with the society’s young members group – the Young Generation Network (YGN). I ended up becoming chair of the YGN and a trustee of the NI, which involved supporting skills initiatives including those feeding into the Nuclear Skills Plan. Having a strategic view of the sector and helping to solve its skills challenges energized me in a new way, so I chose to change career paths and moved to Great British Energy – Nuclear (GBE-N) as skills lead. In this role I plan for what skills the business and wider sector will need for a nuclear new build programme, as well as develop interventions to address skills gaps.

GBE-N’s current remit is to deliver Europe’s first fleet of small modular reactors, but there is relatively limited experience of building this technology. Problem-solving skills from my background in physics have been essential to understanding what assumptions we can put in place at this early stage, learning from other nuclear new builds and major infrastructure projects, to help set us up for the future.

The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap

To anyone interested in nuclear energy, my advice is to get involved now. The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap. Nuclear offers a lifelong career with challenging, complex projects – ideal for physicists who enjoy solving problems and making a difference.

 

The post The power of physics: what can a physicist do in the nuclear energy industry? appeared first on Physics World.

A record-breaking anisotropic van der Waals crystal?

9 octobre 2025 à 10:13

In general, when you measure material properties such as optical permittivity, your measurement doesn’t depend on the direction in which you make it.

However, recent research has shown that this is not the case for all materials. In some cases, their optical permittivity is directional. This is commonly known as in-plane optical anisotropy. A larger difference between optical permittivity in different directions means a larger anisotropy.
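
In quantitative terms – using the standard definitions rather than anything specific to the new study – the in-plane anisotropy is characterized by the difference between the permittivities, or equivalently the refractive indices, along the two principal in-plane directions:

\Delta\varepsilon = \varepsilon_x - \varepsilon_y , \qquad \Delta n = n_x - n_y

The larger the magnitude of \Delta n (the in-plane birefringence), the stronger the anisotropy.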

Materials with very large anisotropies have applications in a wide range of fields, from photonics and electronics to medical imaging. However, for most materials available today, the value remains relatively low.

These potential applications, combined with the current limitation, have driven a large amount of research into novel anisotropic materials.

In this latest work, a team of researchers studied the quasi-one-dimensional van der Waals crystal Ta2NiSe5.

Van der Waals (vdW) crystals are made up of chains, ribbons, or layers of atoms that stick together through weak van der Waals forces.

In quasi-one-dimensional vdW crystals, the atoms are strongly connected along one direction, while the connections in the other directions are much weaker, making their properties very direction-dependent.

This structure makes quasi-one-dimensional vdW crystals a good place to search for large optical anisotropy values. The researchers studied the new crystal using a range of measurement techniques, such as ellipsometry and spectroscopy, as well as state-of-the-art first-principles computer simulations.

The results show that Ta2NiSe5 has a record-breaking in-plane optical anisotropy across the visible-to-infrared spectral region – the highest value reported among van der Waals materials to date.

The study therefore has large implications for next-generation devices in photonics and beyond.

Read the full article

Giant in-plane anisotropy in novel quasi-one-dimensional van der Waals crystal – IOPscience

Zhou et al. 2025 Rep. Prog. Phys. 88 050502

 

The post A record-breaking anisotropic van der Waals crystal? appeared first on Physics World.

Unlocking the limits of quantum security

9 octobre 2025 à 10:12

In quantum information theory, secret-key distillation is a crucial process for enabling secure communication across quantum networks. It works by extracting confidential bits from shared quantum states or channels using local operations and limited classical communication, ensuring privacy even over insecure links.

A bipartite quantum state is a system shared between two parties (often called Alice and Bob) that may exhibit entanglement. If they successfully distil a secret key, they can encrypt and decrypt messages securely, using the key like a shared password known only to them.

To achieve this, Alice and Bob use point-to-point quantum channels and perform local operations, meaning each can only manipulate their own part of the system. They also rely on one-way classical communication, where Alice sends messages to Bob, but Bob cannot reply. This constraint reflects realistic limitations in quantum networks and helps researchers identify the minimum requirements for secure key generation.

This paper investigates how many secret bits can be extracted under these conditions. The authors introduce a resource-theoretic framework based on unextendible entanglement which is a form of entanglement that cannot be shared with additional parties. This framework allows them to derive efficiently computable upper bounds on secret-key rates, helping determine how much security is achievable with limited resources.

Their results apply to both one-shot scenarios, where the quantum system is used only once, and asymptotic regimes, where the same system is used repeatedly and statistical patterns emerge. Notably, they extend their approach to quantum channels assisted by forward classical communication, resolving a long-standing open problem about the one-shot forward-assisted private capacity.

Finally, they show that error rates in private communication can decrease exponentially with repeated channel use, offering a scalable and practical path toward building secure quantum messaging systems.

Read the full article

Extendibility limits quantum-secured communication and key distillation

Vishal Singh and Mark M Wilde 2025 Rep. Prog. Phys. 88 067601

Do you want to learn more about this topic?

Distribution of entanglement in large-scale quantum networks by S Perseguers, G J Lapeyre Jr, D Cavalcanti, M Lewenstein and A Acín (2013)

The post Unlocking the limits of quantum security appeared first on Physics World.

Optical gyroscope detects Earth’s rotation with the highest precision yet

8 octobre 2025 à 18:11

As the Earth moves through space, it wobbles. Researchers in Germany have now directly observed this wobble with the highest precision yet thanks to a large ring laser gyroscope they developed for this purpose. The instrument, which is located in southern Germany and operates continuously, represents an important advance in the development of super-sensitive rotation sensors. If further improved, such sensors could help us better understand the interior of our planet and test predictions of relativistic effects, including the distortion of space-time due to Earth’s rotation.

The Earth rotates once every day, but there are tiny fluctuations, or wobbles, in its axis of rotation. These fluctuations are caused by several factors, including the gravitational forces of the Moon and Sun and, to a lesser extent, the neighbouring planets in our Solar System. Other, smaller fluctuations stem from the exchange of momentum between the solid Earth and the oceans, atmosphere and ice sheets. The Earth’s shape, which is not a perfect sphere but is flattened at the poles and thickened at the equator, also contributes to the wobble.

These different types of fluctuations produce effects known as precession and nutation that cause the extension of the Earth’s axis to trace a wrinkly circle in the sky. At the moment, this extended axis is aligned precisely with the North Star. In the future, it will align with other stars before returning to the North Star again in a cycle that lasts 26,000 years.

Most studies of the Earth’s rotation involve combining data from many sources. These sources include very long baseline radio-astronomy observations of quasars; global satellite navigation systems (GNSS); and GNSS observations combined with satellite laser ranging (SLR) and Doppler orbitography and radiopositioning integrated by satellite (DORIS). These techniques are based on measuring the travel time of light, and because it is difficult to combine them, only one such measurement can be made per day.

An optical interferometer that works using the Sagnac effect

The new gyroscope, which is detailed in Science Advances, is an optical interferometer that operates using the Sagnac effect. At its heart is an optical cavity that guides a light beam around a square path 16 m long. Depending on the rate of rotation it experiences, this cavity selects two different frequencies from the beam to be coherently amplified. “The two frequencies chosen are the only ones that have an integer number of waves around the cavity,” explains team leader Ulrich Schreiber of the Technische Universität München (TUM). “And because of the finite velocity of light, the co-rotating beam ‘sees’ a slightly larger cavity, while the anti-rotating beam ‘sees’ a slightly shorter one.”
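
The quantity the instrument reads out is the Sagnac beat frequency between the two counter-propagating beams, which in its textbook form (quoted here as background; the instrument’s precise calibration involves further corrections) is

\Delta f = \frac{4 A}{\lambda P}\, \hat{n} \cdot \vec{\Omega}

where A is the area enclosed by the beam path, P its perimeter, \lambda the laser wavelength, \hat{n} the normal to the ring plane and \vec{\Omega} the rotation vector. The beat frequency is therefore directly proportional to the rotation rate projected onto the ring’s axis.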

The frequency shift in the interference pattern produced by the co-rotating beam is projected onto an external detector and is strictly proportional to the Earth’s rotation rate. Because the accuracy of the measurement depends, in part, on the mechanical stability of the set-up, the researchers constructed their gyroscope from a glass ceramic that does not expand much with temperature. They also set it up horizontally in an underground laboratory, the Geodetic Observatory Wettzell in southern Bavaria, to protect it as much as possible from external vibrations.

The instrument can sense the Earth’s rotation to within an accuracy of 48 parts per billion (ppb), which corresponds to a rotation rate of just a few picoradians per second. “This is about a factor of 100 better than any other rotation sensor,” says Schreiber, “and, importantly, is less than an order of magnitude away from the regime in which relativistic effects can be measured – but we are not quite there yet.”

An increase in the measurement accuracy and stability of the ring laser by a factor of 10 would, Schreiber adds, allow the researchers to measure the space-time distortion caused by the Earth’s rotation. For example, it would permit them to conduct a direct test for the Lense-Thirring effect — that is, the “dragging” of space by the Earth’s rotation – right at the Earth’s surface.

To reach this goal, the researchers say they would need to amend several details of their sensor design. One example is the composition of the thin-film coatings on the mirrors inside their optical interferometer. “This is neither easy nor straightforward,” explains Schreiber, “but we have some ideas to try out and hope to progress here in the near future.

“In the meantime, we are working towards implementing our measurements into a routine evaluation procedure,” he tells Physics World.

The post Optical gyroscope detects Earth’s rotation with the highest precision yet appeared first on Physics World.

Susumu Kitagawa, Richard Robson and Omar Yaghi win the 2025 Nobel Prize for Chemistry

8 octobre 2025 à 12:01

Susumu Kitagawa, Richard Robson and Omar Yaghi have been awarded the 2025 Nobel Prize for Chemistry “for developing metal-organic frameworks”.

The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.

The prize was announced this morning by members of the Royal Swedish Academy of Science. Speaking on the phone during the press conference, Kitagawa noted that he was “deeply honoured and delighted” that his research had been recognized. 

A new framework

Beginning in the late 1980s and for the next couple of decades, the trio, who are all trained chemists, developed a new form of molecular architecture in which metal ions function as cornerstones that are linked by long organic carbon-based molecules.

Together, the metal ions and molecules form crystals that contain large cavities through which gases and other chemicals can flow.

“It’s a little like Hermione’s handbag – small on the outside, but very large on the inside,” noted Heiner Linke, chair of the Nobel Committee for Chemistry.

Yet the trio had to overcome several challenges before the frameworks could be used, such as making them stable and flexible, which Kitagawa noted “was very tough”.

These porous materials are now called metal-organic frameworks (MOFs). By varying the building blocks used in the MOFs, researchers can design them to capture and store specific substances as well as drive chemical reactions or conduct electricity.

“Metal-organic frameworks have enormous potential, bringing previously unforeseen opportunities for custom-made materials with new functions,” added Linke.

Following the laureates’ work, chemists have built tens of thousands of different MOFs.

3D MOFs are an important class of materials that could be used in applications as diverse as sensing, gas storage, catalysis and optoelectronics.  

MOFs are now able to capture water from air in the desert, sequester carbon dioxide from industry effluents, store hydrogen gas, recover rare-earth metals from waste, break down oil contamination as well as extract “forever chemicals” such as PFAS from water.

“My dream is to capture air and to separate air into CO2, oxygen and water and convert them to usable materials using renewable energy,” noted Kitagawa. 

Their 2D versions might even be used as flexible material platforms to realize exotic quantum phases, such as topological and anomalous quantum Hall insulators.

Life scientific

Kitagawa was born in 1951 in Kyoto, Japan. He obtained a PhD from Kyoto University, Japan, in 1979 and then held positions at Kindai University before joining Tokyo Metropolitan University in 1992. He then joined Kyoto University in 1998 where he is currently based.

Robson was born in 1937 in Glusburn, UK. He obtained a PhD from the University of Oxford in 1962. After postdoc positions at the California Institute of Technology and Stanford University, he moved in 1966 to the University of Melbourne, where he remained for the rest of his career.

Yaghi was born in 1965 in Amman, Jordan. He obtained a PhD from the University of Illinois Urbana-Champaign, US, in 1990. He then held positions at Arizona State University, the University of Michigan and the University of California, Los Angeles, before joining the University of California, Berkeley, in 2012 where he is currently based.

The post Susumu Kitagawa, Richard Robson and Omar Yaghi win the 2025 Nobel Prize for Chemistry appeared first on Physics World.

Machine learning optimizes nanoparticle design for drug delivery to the brain

8 octobre 2025 à 10:30

Neurodegenerative diseases affect millions of people worldwide, but treatment of such conditions is limited by the blood–brain barrier (BBB), which blocks the passage of drugs to the brain. In the quest for more effective therapeutic options, a multidisciplinary research team has developed a novel machine learning-based technique to predict the behaviour of nanoparticles as drug delivery systems.

The work focuses on nanoparticles that can cross the BBB and provide a promising platform for enhancing drug transport into the brain. But designing specific nanoparticles to target specific brain regions is a complex and time-consuming task; there’s a need for improved design frameworks to identify potential candidates with desirable bioactivity profiles. For this, the team – comprising researchers from the University of the Basque Country (UPV/EHU) in Spain and Tulane University in the USA, led by the multicentre CHEMIF.PTML Lab – turned to machine learning.

Machine learning uses molecular and clinical data to detect trends that may lead to novel drug delivery strategies with improved efficiency and reduced side effects. In contrast to slow and costly trial-and-error or physical modelling approaches, machine learning could provide efficient initial screening of large combinations of nanoparticle compositions. Traditional machine learning, however, can be hindered by the lack of suitable data sets.

To address this limitation, the CHEMIF.PTML Lab team developed the IFE.PTML method – an approach that integrates information fusion, Python-based encoding and perturbation theory with machine learning algorithms. The model is described in Machine Learning: Science and Technology.

“The main advantage of our IFE.PTML method lies in its ability to handle heterogeneous nanoparticle data,” corresponding author Humberto González-Díaz explains. “Standard machine learning approaches often struggle with disperse and multi-source datasets from nanoparticle experiments. Our approach integrates information fusion to combine diverse data types – such as physicochemical properties, bioassays and so on – and applies perturbation theory to model these uncertainties as probabilistic perturbations around baseline conditions. This results in more robust, generalizable predictions of nanoparticle behaviour.”

To build the predictive models, the researchers created a database containing physicochemical and bioactivity parameters for 45 different nanoparticle systems across 41 different cell lines. They used these data to train IFE.PTML models with three machine learning algorithms – random forest, extreme gradient boosting and decision tree – to predict the drug delivery behaviour of various nanomaterials. The random forest-based model showed the best overall performance, with accuracies of 95.1% and 89.7% on training and testing data sets, respectively.
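
For readers unfamiliar with this type of model, the train-and-evaluate step described above can be sketched in a few lines with scikit-learn. The feature matrix, labels and hyperparameters below are placeholders, not the actual IFE.PTML descriptors or settings:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder table standing in for the fused nanoparticle/cell-line descriptors:
# each row is one (nanoparticle system, cell line, assay) case, each column an encoded feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))                    # hypothetical descriptor matrix
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=2000) > 0).astype(int)  # 1 = desired bioactivity

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

print("train accuracy:", accuracy_score(y_train, clf.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, clf.predict(X_test)))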

Experimental demonstration

To illustrate the real-world applicability of the random forest-based IFE.PTML model, the researchers synthesized two novel magnetite nanoparticle systems (the 31 nm-diameter Fe3O4_A and the 26 nm-diameter Fe3O4_B). Magnetite-based nanoparticles are biocompatible, can be easily functionalized and have a high surface area-to-volume ratio, making them efficient drug carriers. To make them water soluble, the nanoparticles were coated with either PMAO (poly(maleic anhydride-alt-1-octadecene)) or PMAO plus PEI (poly(ethyleneimine)).

Nanoparticle preparation process
Preparation process Functionalization of Fe3O4 nanoparticles with PMAO and PEI polymers. (Courtesy: Mach. Learn.: Sci. Technol. 10.1088/2632-2153/ae038a)

The team characterized the structural, morphological and magnetic properties of the four nanoparticle systems and then used the optimized model to predict their likelihood of favourable bioactivity for drug delivery in various human brain cell lines, including models of neurodegenerative disease, brain tumour models and a cell line modelling the BBB.

As inputs for their model, the researchers used a reference function based on the bioactivity parameters for each system, plus perturbation theory operators for various nanoparticle parameters. The IFE.PTML model calculated key bioactivity parameters, focusing on indicators of toxicity, efficacy and safety. These included the 50% cytotoxic, inhibitory, lethal and toxic concentrations (at which 50% of the biological effect is observed) and the zeta potential, which affects the nanoparticles’ capacity to cross the BBB. For each parameter, the model output a binary result: “0” for undesired and “1” for desired bioactivities.

The model identified PMAO-coated nanoparticles as the most promising candidates for BBB and neuronal applications, due to their potentially favourable stability and biocompatibility. Nanoparticles with PMAO-PEI coatings, on the other hand, could prove optimal for targeting brain tumour cells.

The researchers point out that, where comparisons were possible, the trends predicted by the RF-IFE.PTML model agreed with the experimental findings, as well as with previous studies reported in the literature. As such, they conclude that their model is efficient and robust and offers valuable predictions on nanoparticle–coating combinations designed to act on specific targets.

“The present study focused on the nanoparticles as potential drug carriers. Therefore, we are currently implementing a combined machine learning and deep learning methodology with potential drug candidates for neurodegenerative diseases,” González-Díaz tells Physics World.

The post Machine learning optimizes nanoparticle design for drug delivery to the brain appeared first on Physics World.

Advances in quantum error correction showcased at Q2B25

7 octobre 2025 à 17:00

This year’s Q2B meeting took place at the end of last month in Paris at the Cité des Sciences et de l’Industrie, a science museum in the north-east of the city. The event brought together more than 500 attendees and 70 speakers – world-leading experts from industry, government institutions and academia. All major quantum technologies were highlighted: computing, AI, sensing, communications and security.

Among the quantum computing topics was quantum error correction (QEC) – something that will be essential for building tomorrow’s fault-tolerant machines. Indeed, it could even be the technology’s most important and immediate challenge, according to the speakers on the State of Quantum Error Correction Panel: Paul Hilaire of Telecom Paris/IP Paris, Michael Vasmer of Inria, Quandela’s Boris Bourdoncle, Riverlane’s Joan Camps and Christophe Vuillot from Alice & Bob.

As was clear from the conference talks, quantum computers are undoubtedly advancing in leaps and bounds. One of their most important weak points, however, is that their fundamental building blocks (quantum bits, or qubits) are highly prone to errors. These errors are caused by interactions with the environment – also known as noise – and correcting them will require innovative software and hardware. Today’s machines are only capable of running on average a few hundred operations before an error occurs, but in the future we will have to develop quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp) or even a trillion error-free operations (a TeraQuOp).

QEC works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits, such as superconducting circuits or trapped atoms. Each physical qubit is noisy, but they work together to preserve the quantum state of the logical qubit – at least for long enough to perform a calculation. It was Peter Shor who first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A technique known as syndrome decoding is then used to diagnose which error was the likely source of corruption on an encoded state. The error can then be reversed by applying a corrective operation depending on the syndrome.
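
The logic of syndrome decoding is easiest to see in the simplest possible case: a three-bit repetition code protecting one logical bit against bit flips. The short classical simulation below is a deliberately stripped-down sketch – far simpler than Shor’s nine-qubit code or the codes discussed later – in which two parity checks identify which bit to flip back:

import random

# Three-bit repetition code against bit flips (classical toy model of syndrome decoding).
def encode(bit):
    return [bit, bit, bit]

def apply_noise(codeword, p):
    return [b ^ (random.random() < p) for b in codeword]   # flip each bit with probability p

def syndrome(codeword):
    # Parities of neighbouring bits, analogous to measuring Z0Z1 and Z1Z2 stabilizers.
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))  # bit implicated by the syndrome
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

def decode(codeword):
    return int(sum(codeword) >= 2)   # majority vote

random.seed(0)
p, trials, failures = 0.05, 100_000, 0
for _ in range(trials):
    if decode(correct(apply_noise(encode(0), p))) != 0:
        failures += 1
# Failure requires two or more flips, so the logical rate is ~3p^2, well below the physical rate p.
print("physical error rate:", p, " logical error rate:", failures / trials)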

Prototype quantum computer from NVIDIA
Computing advances A prototype quantum computer from NVIDIA that makes use of seven qubits. (Courtesy: Isabelle Dumé)

While error correction should become more effective as the number of physical qubits in a logical qubit increases, adding more physical qubits to a logical qubit also adds more noise. Much progress has been made in addressing this and other noise issues in recent years, however.

“We can say there’s a ‘fight’ when increasing the length of a code,” explains Hilaire. “Doing so allows us to correct more errors, but we also introduce more sources of errors. The goal is thus being able to correct more errors than we introduce. What I like with this picture is the clear idea of the concept of a fault-tolerant threshold below which fault-tolerant quantum computing becomes feasible.”
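
For surface-code-like codes, this fight is often summarized by the familiar below-threshold scaling (a standard rule of thumb, not a result presented at the panel): once the physical error rate p is below the threshold p_th, the logical error rate p_L is suppressed exponentially in the code distance d,

p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}

where A is a code-dependent prefactor. Above threshold, adding more qubits only makes things worse.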

Developments in QEC theory

Speakers at the Q2B25 meeting shared a comprehensive overview of the most recent advancements in the field – and they are varied. First up, concatenated error correction codes. Prevalent in the early days of QEC, these fell by the wayside in favour of codes like the surface code, but are making a return, as recent work has shown. Concatenated codes can achieve constant encoding rates, and a quantum computer operating on a linear, nearest-neighbour connectivity was recently put forward. Directional codes, the likes of which are being developed by Riverlane, are also being studied. These leverage native transmon qubit logic gates – for example, iSWAP gates – and could potentially outperform surface codes in some aspects.

The panellists then described bivariate bicycle codes, being developed by IBM, which offer better encoding rates than surface codes. While their decoding can be challenging for real-time applications, IBM’s “relay belief propagation” (relay BP) has made progress here by simplifying decoding strategies that previously involved combining BP with post-processing. Usefully, this decoder is very general and works for all “low-density parity check codes” – one of the most studied classes of high-performance QEC codes, which also includes, for example, surface codes and directional codes.

There is also renewed interest in decoders that can be parallelized and operate locally within a system, they said. These have shown promise for codes like the 1D repetition code, which could revive the concept of self-correcting or autonomous quantum memory. Another possibility is the increased use of the graphical language ZX calculus as a tool for optimizing QEC circuits and understanding spacetime error structures.

Hardware-specific challenges

The panel stressed that to achieve robust and reliable quantum systems, we will need to move beyond so-called hero experiments. For example, the demand for real-time decoding at megahertz frequencies with microsecond latencies is an important and unprecedented challenge. Indeed, breaking the decoding problem down into smaller, manageable pieces has proven difficult so far.

There are also issues with qubit platforms themselves that need to be addressed: trapped ions and neutral atoms allow for high fidelities and long coherence times, but they are roughly 1000 times slower than superconducting and photonic qubits and therefore require algorithmic or hardware speed-ups. And that is not all: solid-state qubits (such as superconducting and spin qubits) suffer from a “yield problem”, with dead qubits on manufactured chips. Improved fabrication methods will thus be crucial, said the panellists.


Collaboration between academia and industry

The discussions then moved towards the subject of collaboration between academia and industry. In the field of QEC, such collaboration is highly productive today, with joint PhD programmes and shared conferences like Q2B, for example. Large companies also now boast substantial R&D departments capable of funding high-risk, high-reward research, blurring the lines between fundamental and application-oriented research. Both sectors also use similar foundational mathematics and physics tools.

At the moment there’s an unprecedented degree of openness and cooperation in the field. This situation might change, however, as commercial competition heats up, noted the panellists. In the future, for example, researchers from both sectors might be less inclined to share experimental chip details.

Last, but certainly not least, the panellists stressed the urgent need for more PhDs trained in quantum mechanics to address the talent deficit in both academia and industry. So, if you were thinking of switching fields, perhaps now is the time to make the jump.

The post Advances in quantum error correction showcased at Q2B25 appeared first on Physics World.

A low vibration wire scanner fork for free electron lasers

7 octobre 2025 à 16:51
High performance, proven, wire scanner for transverse beam profile measurement for the latest generation of low emittance accelerators and FELs. (Courtesy: UHV Design)

UK-based firm UHV Design has developed a new high-performance wire scanner fork that the latest generation of free electron lasers (FELs) can use to measure beam profiles. Produced using technology licensed from the Paul Scherrer Institute (PSI) in Switzerland, the device could be customized for different FELs and low-emittance accelerators around the world. It builds on the company's PLSM range, which allows heavy objects to be moved very smoothly and with minimal vibration.

The project began 10 years ago, when PSI was starting to build the Swiss Free Electron Laser and equipping the facility, explains Jonty Eyres. The remit for UHV Design was to provide a stiff, very smooth, bellows-sealed, ultra-high-vacuum-compatible linear actuator that could move a wire fork without vibrating it adversely. The fork, designed by PSI, can hold wires in two directions and can therefore scan the intensity of the beam profile in both the X and Y planes using just one device, as opposed to the two or more required by previous such structures.

“We decided to employ an industrial integrated ball screw and linear slide assembly with a very stiff frame around it, the construction of which provides the support and super smooth motion,” he says. “This type of structure is generally not used in the ultra-high vacuum industry.”

The position of the wire fork is determined by a radiation-hard, side-mounted linear optical encoder in conjunction with PSI's own motor and gearbox assembly. A power-off brake is also incorporated to avoid any issues with back-driving under vacuum load if electrical power were to be lost to the PLSM. All electrical connections are terminated with UTO-style connectors to PSI specification.

Long-term reliability was important to avoid costly and unnecessary downtime, particularly between planned FEL maintenance shutdowns. The industrial ball-screw and slide assembly was therefore a natural choice, used in conjunction with a bellows assembly rated for 500,000 cycles, with an option to increase this to 1 million cycles.

Eyres and his UHV Design team began by building a prototype that PSI tested itself with a high-speed camera. Once the design was validated, the UHV engineers built a batch of 20 identical units to prove that the device could be replicated within the required constraints and tolerances.

The real challenge in constructing this device, says Eyres, was minimizing the amount of vibration on the wire, which, for PSI, is typically between 5 and 25 microns thick. This is only possible if the vibration of the wire during a scan is small compared with the cross-section of the wire – about a micron for a 25-micron wire. "Otherwise, you are just measuring noise," explains Eyres. "The small vibration we achieved can be corrected for in calculations, so providing an accurate value for the beam profile intensity."
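The article does not spell out how that correction is done, but one common approach – offered here purely as a hedged sketch, not as UHV Design's or PSI's actual procedure – assumes that the beam profile, the finite wire width and the residual vibration each contribute approximately Gaussian blur, so the true beam size can be recovered by subtracting the instrumental widths in quadrature. All numbers are assumed for illustration.

```python
import math

# Sketch: recover the true beam size from a wire-scan measurement by removing
# the (assumed Gaussian) blur from the wire width and residual vibration.

def true_beam_sigma(sigma_measured_um, wire_diameter_um, vibration_rms_um):
    sigma_wire = wire_diameter_um / math.sqrt(12)  # rms width of a uniform wire
    return math.sqrt(
        sigma_measured_um**2 - sigma_wire**2 - vibration_rms_um**2
    )

# Assumed example: 25-micron wire, ~1 micron residual vibration.
print(f"true beam sigma ~ {true_beam_sigma(30.0, 25.0, 1.0):.1f} um")
```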

UHV Design holds the intellectual property rights for the linear actuator and PSI those for the fork. Following the success of the project and a subsequent agreement between the two, it was recently decided that UHV Design would buy the licence to promote the wire fork, allowing the company to sell the device, or a version of it, to any institution or company operating a FEL or low-emittance accelerator. "The device is customizable and can be adapted to different types of fork, wires, motors or encoders," says Eyres. "The heart of the design remains the same: a very stiff structure and its integrated ball screw and linear slide assembly. But it can be tailored to meet the requirements of different beam lines in terms of stroke size, specific wiring and the components employed."

UHV Design’s linear actuator was installed on the Swiss FEL in 2016 and has been performing very well since, says Eyres.

A final and important point to note, he adds, is that UHV Design built an identical copy of its actuator when it took on the licence agreement, to prove that it could still reproduce the same performance. "We built an exact copy of the wire scanner, including the PSI fork assembly, and sent it to PSI, who then used the very same high-speed camera rig that they'd employed in 2015 to directly compare the new actuator with the original ones supplied. They reported that the results were indeed comparable, meaning that if fitted to the Swiss FEL today, it would perform in the same way."

For more information: https://www.uhvdesign.com/products/linear-actuators/wire-scanner/

The post A low vibration wire scanner fork for free electron lasers appeared first on Physics World.

Rapid calendar life screening of electrolytes for silicon anodes using voltage holds

7 octobre 2025 à 15:48


Silicon-based lithium-ion batteries exhibit severe time-based degradation, resulting in poor calendar lives. In this webinar, we will talk about how calendar aging is measured, why traditional measurement approaches are time-intensive, and why new approaches are needed to optimize materials for next-generation silicon-based systems. Using this new approach, we also screen multiple new electrolyte systems that can lead to calendar-life improvements in Si-containing batteries.

An interactive Q&A session follows the presentation.

Ankit Verma

Ankit Verma's expertise is in physics-based and data-driven modeling of lithium-ion and next-generation lithium-metal batteries. His interests lie in unraveling the coupled reaction-transport-mechanics behavior in these electrochemical systems, with experiment-driven validation, to provide predictive insights for practical advancements. Predominantly, he is working on improving silicon anode energy density and calendar life as part of the Silicon Consortium Project, and on understanding solid-state battery limitations and the upcycling of end-of-life electrodes as part of the ReCell Center.

Verma's past work includes the optimization of lithium-ion battery anodes and cathodes for high-power and fast-charge applications, and understanding electrodeposition stability in metal anodes.

 


The post Rapid calendar life screening of electrolytes for silicon anodes using voltage holds appeared first on Physics World.

John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics

7 octobre 2025 à 11:52

John Clarke, Michel Devoret and John Martinis share the 2025 Nobel Prize for Physics “for the discovery of macroscopic quantum mechanical tunnelling and energy quantization in an electric circuit”. 

The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.

The prize was announced this morning by members of the Royal Swedish Academy of Sciences. Olle Eriksson of Uppsala University, chair of the Nobel Committee for Physics, commented, "There is no advanced technology today that does not rely on quantum mechanics."

Göran Johansson of Chalmers University of Technology explained that the three laureates took quantum tunnelling from the microscopic world and onto superconducting chips, allowing physicists to study quantum physics and ultimately create quantum computers.

Speaking on the telephone, John Clarke said of his win, “To put it mildly, it was the surprise of my life,” adding “I am completely stunned. It had never occurred to me that this might be the basis of a Nobel prize.” On the significance of the trio’s research, Clarke said, “The basis of quantum computing relies to quite an extent on our discovery.”

As well as acknowledging the contributions of Devoret and Martinis, Clarke said that their research was made possible by Anthony Leggett and Brian Josephson, who laid the groundwork for work on tunnelling in superconducting circuits. Both Leggett and Josephson are previous Nobel laureates.

As well as having scientific significance, the trio’s work has led to the development of nascent commercial quantum computers that employ superconducting circuits. Physicist and tech entrepreneur Ilana Wisby, who co-founded Oxford Quantum Circuits, told Physics World, “It’s such a brilliant and well-deserved recognition for the community”.

A life in science

Clarke was born in 1942 in Cambridge, UK. He received his BA in physics from the University of Cambridge in 1964 before completing a PhD there in 1968. He then moved to the University of California, Berkeley, for a postdoc before joining the physics faculty in 1969, where he has remained since.

Devoret was born in Paris, France in 1953. He graduated from Ecole Nationale Superieure des Telecommunications in Paris in 1975 before earning a PhD from the University of Paris, Orsay, in 1982. He then moved to the University of California, Berkeley, to work in Clarke’s group collaborating with Martinis who was a graduate student at the time. In 1984 Devoret returned to France to start his own research group at the Commissariat à l’Energie Atomique in Saclay (CEA-Saclay) before heading to the US to Yale University in 2002. In 2024 he moved to the University of California, Santa Barbara, and also became chief scientist at Google Quantum AI.

Martinis was born in the US in 1958. He received a BS in physics in 1980 and a PhD in physics, both from the University of California, Berkeley. He then carried out postdocs at CEA-Saclay in France and the National Institute of Standards and Technology in Boulder, Colorado, before moving to the University of California, Santa Barbara, in 2004. In 2014 Martinis and his team joined Google with the aim of building the first useful quantum computer, before he moved to Australia in 2020 to join the start-up Silicon Quantum Computing. In 2022 he co-founded the company Qolab, where he is currently the chief technology officer.

The trio did its prizewinning work in the mid-1980s at the University of California, Berkeley. At the time Devoret was a postdoctoral fellow and Martinis was a graduate student – both working for Clarke. They were looking for evidence of macroscopic quantum tunnelling (MQT) in a device called a Josephson junction. This comprises two pieces of superconductor that are separated by an insulating barrier. In 1962 the British physicist Brian Josephson predicted how the Cooper pairs of electrons that carry current in a superconductor can tunnel across the barrier unscathed. This Josephson effect was confirmed experimentally in 1963.

Single wavefunction

The lowest-energy (ground) state of a superconductor is a macroscopic quantum state in which all Cooper pairs are described by a single quantum-mechanical wavefunction. In the late 1970s, the British–American physicist Anthony Leggett proposed that the tunnelling of this entire macroscopic state could be observed in a Josephson junction.

The idea is to put the system into a metastable state in which electrical current flows without resistance across the junction – resulting in zero voltage across the junction. If the system is indeed a macroscopic quantum state, then it should be able to occasionally tunnel out of this metastable state, resulting in a voltage across the junction.

This tunnelling can be observed by increasing the current through the junction and measuring the current at which a voltage occurs – obtaining an average value over many such measurements. As the temperature of the device is reduced, this average current increases – something that is expected regardless of whether the system is in a macroscopic quantum state.

However, at very low temperatures the average current becomes independent of temperature, which is the signature of macroscopic quantum tunnelling that Martinis, Devoret and Clarke were seeking. Their challenge was to reduce the noise in their experimental apparatus, because noise has a similar effect to tunnelling on their measurements.
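To see why temperature independence signals tunnelling rather than thermal escape, it helps to compare the two channels. The sketch below uses textbook expressions for thermal activation over the barrier and for the MQT rate, with assumed junction parameters (plasma frequency and barrier height); it illustrates the scaling only and is not a reconstruction of the trio's analysis.

```python
import math

# Thermal activation freezes out as exp(-dU/kT), while macroscopic quantum
# tunnelling (MQT) is temperature independent, so below a crossover
# temperature T_cr ~ hbar*w_p/(2*pi*k_B) the escape rate - and hence the
# average switching current - stops changing with temperature.

hbar = 1.054571817e-34            # J s
kB = 1.380649e-23                 # J / K

omega_p = 2 * math.pi * 10e9      # assumed junction plasma frequency: 10 GHz
delta_U = 5 * hbar * omega_p      # assumed barrier height: 5 plasma quanta

T_cr = hbar * omega_p / (2 * math.pi * kB)
gamma_mqt = (omega_p / (2 * math.pi)) * math.exp(-7.2 * delta_U / (hbar * omega_p))

print(f"crossover temperature ~ {T_cr * 1e3:.0f} mK")
for T in (1.0, 0.5, 0.2, 0.05):   # kelvin
    gamma_thermal = (omega_p / (2 * math.pi)) * math.exp(-delta_U / (kB * T))
    # The faster of the two channels dominates the observed escape rate.
    print(f"T = {T:4.2f} K: thermal {gamma_thermal:9.3e} /s, MQT {gamma_mqt:9.3e} /s")
```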

Multilevel system

As well as observing the signature of tunnelling, they were also able to show that the macroscopic quantum state exists in several different energy states. Such a multilevel system is essentially a macroscopic version of an atom or nucleus, with its own spectroscopic structure.

The noise-control techniques developed by the trio to observe MQT and the fact that a Josephson junction can function as a macroscopic multilevel quantum system have led to the development of superconducting quantum bits (qubits) that form the basis of some nascent quantum computers.

The post John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics appeared first on Physics World.
