Michele Dougherty steps aside as president of the Institute of Physics

29 January 2026 at 16:00

The space physicist Michele Dougherty has stepped aside as president of the Institute of Physics, which publishes Physics World. The move was taken to avoid any conflicts of interest given her position as executive chair of the Science and Technology Facilities Council (STFC) – one of the main funders of physics research in the UK.

Dougherty, who is based at Imperial College London, spent two years as IOP president-elect from October 2023 before becoming president in October 2025. She was appointed executive chair of the STFC in January 2025 and in July that year was also announced as the next Astronomer Royal – the first woman to hold the position.

The changes at the IOP come in the wake of UK Research and Innovation (UKRI) stating last month that it will be adjusting how it allocates government funding for scientific research and infrastructure. Spending on curiosity-driven research will remain flat from 2026 to 2030, with UKRI prioritising funding in three key areas or “buckets”.

The three buckets are: curiosity-driven research, which will be the largest; strategic government and societal priorities; and supporting innovative companies. There will also be a fourth “cross-cutting” bucket with funding for infrastructure, facilities and talent. In the four years to 2030, UKRI’s budget will be £38.6bn.

While the detailed implications of the funding changes are still to be worked out, the IOP says its “top priority” is understanding and responding to them. With the STFC being one of nine research councils within UKRI, Dougherty is stepping aside as IOP president to ensure the IOP can play what it says is “a leadership role in advocating for physics without any conflict of interest”.

In her role as STFC executive chair, Dougherty yesterday wrote to the UK’s particle physics, astronomy and nuclear physics community, asking researchers to identify by March how their projects would respond to flat cash as well as reductions of 20%, 40% and 60% – and to “identify the funding point at which the project becomes non-viable”. The letter says that a “similar process” will happen for facilities and labs.

In her letter, Dougherty says that the UK’s science minister Lord Vallance and UKRI chief executive Ian Chapman want to protect curiosity-driven research, which they say is vital, and grow it “as the economy allows”. However, she adds, “the STFC will need to focus our efforts on a more concentrated set of priorities, funded at a level that can be maintained over time”.

Tom Grinyer, chief executive officer of the IOP, says that the IOP is “fully focused on ensuring physics is heard clearly as these serious decisions are shaped”. He says the IOP is “gathering insight from across the physics community and engaging closely with government, UKRI and the research councils so that we can represent the sector with authority and evidence”.

Grinyer warns, however, that UKRI’s shift in funding priorities and the subsequent STFC funding cuts will have “severe consequences” for physics. “The promised investment in quantum, AI, semiconductors and green technologies is welcome but these strengths depend on a stable research ecosystem,” he says.

“I want to thank Michele for her leadership, and we look forward to working constructively with her in her capacity at STFC as this important period for physics unfolds,” adds Grinyer.

Next steps

The nuclear physicist Paul Howarth, who has been IOP president-elect since September, will now take on Dougherty’s responsibilities – as prescribed by the IOP’s charter – with immediate effect, with the IOP Council discussing its next steps at its February 2026 meeting.

With a PhD in nuclear physics, Howarth has had a long career in the nuclear sector working on the European Fusion Programme and at British Nuclear Fuels, as well as co-founding the Dalton Nuclear Institute at the University of Manchester.

He was a non-executive board director of the National Physical Laboratory and until his retirement earlier this year was chief executive officer of the National Nuclear Laboratory.

In response to the STFC letter, Howarth says that the projected cuts “are a devastating blow for the foundations of UK physics”.

“Physics isn’t a luxury we can afford to throw away through confusion,” says Howarth. “We urge the government to rethink these cuts, listen to the physics community, and deliver a 10-year strategy to secure physics for the future.”

The post Michele Dougherty steps aside as president of the Institute of Physics appeared first on Physics World.

AI-based tool improves the quality of radiation therapy plans for cancer treatment

29 January 2026 at 14:36

This episode of the Physics World Weekly podcast features Todd McNutt, who is a medical physicist at Johns Hopkins University and the founder of Oncospace. In a conversation with Physics World’s Tami Freeman, McNutt explains how an artificial intelligence-based tool called Plan AI can help improve the quality of radiation therapy plans for cancer treatments.

As well as discussing the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres, they examine its evolution from an idea developed by an academic collaboration to a clinical product offered today by Sun Nuclear, a US manufacturer of radiation equipment and software.

This podcast is sponsored by Sun Nuclear.

The post AI-based tool improves the quality of radiation therapy plans for cancer treatment appeared first on Physics World.

The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’

29 January 2026 at 10:00

Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.

As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.

By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.

The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.

Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.

Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid 2070s.

But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.

And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.

While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.

But Europe’s ascendant populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and likely US withdrawal from Europe.

So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.

According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC“, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.

But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?

Other options

CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow“.

Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.

The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.

Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could produce only a few thousand Higgs bosons annually, though it would also enable important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensities can be achieved, it would allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.

As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.

In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.

I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.

The future of CERN – and the bracing spirit of CERN – rests in your hands.

The post The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’ appeared first on Physics World.

Ion-clock transition could benefit quantum computing and nuclear physics

28 January 2026 at 16:07
Nuclear effect The deformed shape of the ytterbium-173 nucleus (right) makes it possible to excite the clock transition with a relatively low-power laser. The same transition is forbidden (left) if the nucleus is not deformed. (Courtesy: Physikalisch-Technische Bundesanstalt (PTB))

An atomic transition in ytterbium-173 could be used to create an optical multi-ion clock that is both precise and stable. That is the conclusion of researchers in Germany and Thailand who have characterized a clock transition that is enhanced by the non-spherical shape of the ytterbium-173 nucleus. As well as applications in timekeeping, the transition could be used in quantum computing. Furthermore, the interplay between atomic and nuclear effects in the transition could provide insights into the physics of deformed nuclei.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. These clocks play crucial roles in technologies that require precision timing – such as global navigation satellite systems and communications networks. Currently, the international definition of the second is given by the frequency of caesium-based clocks, which deliver microwave time signals.

Today’s best clocks, however, work at higher optical frequencies and are therefore much more precise than microwave clocks. Indeed, at some point in the future metrologists will redefine the second in terms of an optical transition – but the international metrology community has yet to decide which transition will be used.

Broadly speaking, there are two types of optical clock. One uses an ensemble of atoms that are trapped and cooled to ultralow temperatures using lasers; the other involves a single atomic ion (or a few ions) held in an electromagnetic trap. Clocks that use one ion are extremely precise, but lack stability; whereas clocks that use many atoms are very stable, but sacrifice precision.

Optimizing performance

As a result, some physicists are developing clocks that use multiple ions with the aim of creating a clock that optimizes precision and stability.

Now, researchers at PTB and NIMT (the national metrology institutes of Germany and Thailand respectively) have characterized a clock transition in ions of ytterbium-173, and have shown that the transition could be used to create a multi-ion clock.

“This isotope has a particularly interesting transition,” explains PTB’s Tanja Mehlstäubler – who is a pioneer in the development of multi-ion clocks.

The ytterbium-173 nucleus is highly deformed with a shape that resembles a rugby ball. This deformation affects the electronic properties of the ion, which should make it much easier to use a laser to excite a specific transition that would be very useful for creating a multi-ion clock.

Stark effect

This clock transition can also be excited in ytterbium-171 and has already been used to create a single-ion clock. However, excitation in a ytterbium-171 clock requires an intense laser pulse, which creates a strong electric field that shifts the clock frequency (called the AC Stark effect). This is a particular problem for multi-ion clocks because the intensity of the laser (and hence the clock frequency) can vary across the region in which the ions are trapped.
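
To put that in context, the size of this light shift scales with the laser intensity. In textbook form (a general expression quoted here for orientation, not a detail taken from the new paper), the AC Stark shift of a clock level is

$$\Delta E_{\mathrm{AC}} = -\tfrac{1}{2}\,\alpha(\omega)\,\langle E^{2}\rangle \;\propto\; I,$$

where $\alpha(\omega)$ is the dynamic polarizability of the level at the laser frequency $\omega$ and $I$ is the local laser intensity. If $I$ varies across the trap, each ion’s clock frequency is shifted by a different amount.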

To show that a much lower laser intensity can be used to excite the clock transition in ytterbium-173, the team studied a “Coulomb crystal” in which three ions were trapped in a line and separated by about 10 μm. They illuminated the ions with laser light that was not uniform in intensity across the crystal. They were able to excite the transition at a relatively low laser intensity, which resulted in very small AC Stark shifts between the frequencies of the three ions.

According to the team, this means that as many as 100 trapped ytterbium-173 ions could be used to create a clock that could serve as a time standard; to redefine the second; and also to make very precise measurements of the Earth’s gravitational field.

As well as being useful for creating an optical ion clock, this multi-ion capability could also be exploited to create quantum-computing architectures based on multiple trapped ions. And because the observed effect is a result of the shape of the ytterbium-173 nucleus, further studies could provide insights into nuclear physics.

The research is described in Physical Review Letters.


The post Ion-clock transition could benefit quantum computing and nuclear physics appeared first on Physics World.

The power of a poster

28 January 2026 at 12:00

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Secret to success Kevin McGuigan discovered that one research poster can lead to generous funding contributions. (Courtesy: Antonio Jaen Osuna)

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.

The post The power of a poster appeared first on Physics World.

ATLAS narrows the hunt for dark matter

28 January 2026 at 10:04

Researchers at the ATLAS collaboration have been searching for signs of new particles in the dark sector of the universe, a hidden realm that could help explain dark matter. In some theories, this sector contains dark quarks (fundamental particles) that undergo a shower and hadronization process, forming long-lived dark mesons (dark quarks and antiquarks bound by a new dark strong force), which eventually decay into ordinary particles. These decays would appear in the detector as unusual “emerging jets”: bursts of particles originating from displaced vertices relative to the primary collision point.

Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. The first is a cut-based strategy relying on high-level jet observables – including track-, vertex- and jet-substructure-based selections – which enables a straightforward reinterpretation for alternative theoretical models. The second is a machine-learning approach employing a per-jet tagger with a transformer architecture trained on low-level tracking variables, which maximizes sensitivity for the specific models studied.
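
The paper contains the full details, but as a rough sketch of what a per-jet transformer tagger over low-level track features could look like (all layer sizes, feature counts and the pooling choice below are illustrative assumptions, not the ATLAS configuration):

```python
# Minimal sketch of a per-jet "emerging jet" tagger: a transformer encoder over
# a jet's tracks, pooled into a single jet-level score. Dimensions and feature
# choices are illustrative assumptions, not the ATLAS model.
import torch
import torch.nn as nn

class EmergingJetTagger(nn.Module):
    def __init__(self, n_track_features=8, d_model=64, n_heads=4, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(n_track_features, d_model)   # per-track embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                    # jet-level logit

    def forward(self, tracks, pad_mask):
        # tracks: (batch, n_tracks, n_track_features); pad_mask: True where padding
        x = self.embed(tracks)
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        x = x.masked_fill(pad_mask.unsqueeze(-1), 0.0).sum(dim=1)   # masked mean-pool
        x = x / (~pad_mask).sum(dim=1, keepdim=True).clamp(min=1)
        return torch.sigmoid(self.head(x)).squeeze(-1)              # P(emerging jet)

tagger = EmergingJetTagger()
tracks = torch.randn(2, 30, 8)                  # 2 jets, up to 30 tracks, 8 features each
pad_mask = torch.zeros(2, 30, dtype=torch.bool)
print(tagger(tracks, pad_mask))                 # one score per jet, in [0, 1]
```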

No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.

Read the full article

Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment

The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801

Do you want to learn more about this topic?

Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)

The post ATLAS narrows the hunt for dark matter appeared first on Physics World.

How do bacteria produce entropy?

28 January 2026 at 10:02

Active matter is matter composed of large numbers of active constituents, each of which consumes chemical energy in order to move or to exert mechanical forces.

This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids or robotic swarms, also fall under this umbrella.

Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.

Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.

The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.

One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the individual particle nature, leading to incorrect results for entropy production.

The new paper details a substantial improvement on this method. By making use of Doi-Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.

The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.
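
For a single overdamped degree of freedom, this calculation has a well-known closed form (a standard result of stochastic thermodynamics, quoted here for orientation rather than taken from the new paper): if the probability density $P(x,t)$ obeys a Fokker-Planck equation with probability current $J(x,t)$ and diffusion constant $D$, then, in units where $k_B = 1$, the entropy production rate is

$$\dot{S}(t) = \int \mathrm{d}x\, \frac{J(x,t)^{2}}{D\,P(x,t)} \;\geq\; 0,$$

which vanishes only when the current does, i.e. in equilibrium. The field-theoretic framework generalizes this kind of expression to interacting, reacting many-particle systems.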

The authors demonstrated their method with numerous examples, including systems of active Brownian particles, showing its broad usefulness. The big challenge going forward, though, is to extend their framework to non-Markovian systems – ones whose future evolution depends on their past history as well as their present state.

Read the full article

Field theories of active particle systems and their entropy production

G Pruessner and R Garcia-Millan 2025 Rep. Prog. Phys. 88 097601

The post How do bacteria produce entropy? appeared first on Physics World.

Einstein’s recoiling slit experiment realized at the quantum limit

28 January 2026 at 10:00

Quantum mechanics famously limits how much information about a system can be accessed at once in a single experiment. The more precisely a particle’s path can be determined, the less visible its interference pattern becomes. This trade-off, known as Bohr’s complementarity principle, has shaped our understanding of quantum physics for nearly a century. Now, researchers in China have brought one of the most famous thought experiments surrounding this principle to the quantum limit, using a single atom as a movable slit.

The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.

For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit was sensitive enough to register the momentum kick from a single photon.

A slit that kicks back

To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
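
Those scales follow from textbook formulas: the ground-state momentum spread of a harmonic oscillator, Δp = √(mħω/2), and the photon momentum p = h/λ. A quick back-of-the-envelope check (the 780 nm wavelength below is an illustrative choice, not a number from the paper):

```python
# Back-of-the-envelope check of the scales quoted above.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
h    = 6.62607015e-34    # Planck constant, J s

# 1 g object modelled as a 100 kHz harmonic oscillator (e.g. a mirror on a spring)
m, omega = 1e-3, 2 * math.pi * 100e3
dp_mirror = math.sqrt(m * hbar * omega / 2)   # ground-state momentum spread, kg m/s

# an optical photon; 780 nm is an illustrative wavelength (rubidium D2 line)
p_photon = h / 780e-9

print(f"mirror momentum spread ~ {dp_mirror:.1e} kg m/s")    # ~ 2e-16
print(f"photon momentum        ~ {p_photon:.1e} kg m/s")     # ~ 8e-28
print(f"ratio ~ 10^{math.log10(dp_mirror / p_photon):.0f}")  # ~ 11 orders of magnitude
```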

Experimental realization To perform Einstein’s thought experiment in the lab, the researchers used a single trapped atom as a movable slit. Photon paths become correlated with the atom’s motion, allowing researchers to probe the trade-off between interference and which-path information. (Courtesy: Y-C Zhang et al. Phys. Rev. Lett. 135 230202)

In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.

Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.

Watching interference fade 

As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.

This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.

“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”

Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.

Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.

This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.

The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.

European Space Agency unveils first images from Earth-observation ‘sounder’ satellite

27 January 2026 at 19:26

The European Space Agency has released the first images from the Meteosat Third Generation-Sounder (MTG-S) satellite. They show variations in temperature and humidity over Europe and northern Africa in unprecedented detail, with further data from the mission set to improve weather-forecasting models and measurements of air quality over Europe.

Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit about 36 000 km above Earth’s surface, and is able to provide coverage of Europe and part of northern Africa on a 15-minute repeat cycle.

The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.

The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.

The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.

The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.

Whole-Earth image showing cloud formation
(Courtesy: EUMETSAT)

“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”

ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.

The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.

Uranus and Neptune may be more rocky than icy, say astrophysicists

27 January 2026 at 14:00

Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”
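
The hydrostatic-equilibrium condition in the first criterion amounts to requiring dP/dr = −ρ(r)g(r), with g(r) = Gm(r)/r², throughout the planet. As a toy illustration of the kind of consistency check involved (a one-dimensional sketch with a made-up density profile, not the authors’ actual model):

```python
# Toy check of hydrostatic equilibrium for a trial density profile rho(r):
# integrate the enclosed mass m(r), then the pressure P(r) inward from the surface.
import numpy as np

G = 6.674e-11                             # gravitational constant, m^3 kg^-1 s^-2
R = 2.5e7                                 # illustrative planetary radius, m
r = np.linspace(1e3, R, 100_000)
rho = 5000.0 * (1 - 0.9 * (r / R) ** 2)   # made-up trial density profile, kg/m^3

# enclosed mass m(r) = 4*pi * integral of rho(r') r'^2 dr'
m = 4 * np.pi * np.cumsum(rho * r**2) * (r[1] - r[0])
g = G * m / r**2

# integrate dP/dr = -rho*g from the surface (P = 0) inward
dP = rho * g * (r[1] - r[0])
P = np.cumsum(dP[::-1])[::-1]             # pressure grows toward the centre

print(f"total mass ~ {m[-1]:.2e} kg, central pressure ~ {P[0]:.2e} Pa")
```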

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of 0.04–3.92 for Uranus and 0.20–1.78 for Neptune.

Slices of different pies: According to models developed with “agnostic” initial assumptions, Uranus (top) and Neptune (bottom) could be composed mainly of water ice (blue areas), but they could also contain substantial amounts of silicon dioxide rock (brown areas). (Courtesy: Luca Morf)

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.

The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.

String-theory concept boosts understanding of biological networks

27 January 2026 at 10:35

Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization, following a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.
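
Stated schematically (a simplified formulation of the idea, not the paper’s full string-field-theoretic machinery), the comparison is between two optimization principles over the same set of connected branches:

$$\min \sum_i \ell_i \;\;\text{(length/volume minimization)} \qquad \text{versus} \qquad \min \sum_i A_i \;\;\text{(surface minimization)},$$

where $\ell_i$ is the length of link $i$ and $A_i$ is the area of the membrane-like surface bounding it. The data favour the second principle.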

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works”.

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.

The post String-theory concept boosts understanding of biological networks appeared first on Physics World.

The secret life of TiO₂ in foams

26 January 2026 at 17:31

Porous carbon foams are an exciting area of research because they are lightweight, electrically conductive, and have extremely high surface areas. Coating these foams with TiO₂ makes them chemically active, enabling their use in energy storage devices, fuel cells, hydrogen production, CO₂‑reduction catalysts, photocatalysis, and thermal management systems. While many studies have examined the outer surfaces of coated foams, much less is known about how TiO₂ coatings behave deep inside the foam structure.

In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.

They found that the TiO₂ coating on the outer surface is dense, correctly composed and crystalline (mainly anatase with a small amount of rutile) – ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain the same energy, meaning particle quantity decreases with depth but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.

Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.

Read the full article

A comprehensive multi-scale study on the growth mechanisms of magnetron sputtered coatings on open-cell 3D foams

Loris Chavée et al 2026 Prog. Energy 8 015002

Do you want to learn more about this topic?

Advances in thermal conductivity for energy applications: a review by Qiye Zheng et al. (2021)

The post The secret life of TiO₂ in foams appeared first on Physics World.

Laser processed thin NiO powder coating for durable anode-free batteries

26 January 2026 at 17:30

Traditional lithium‑ion batteries use a thick graphite anode, where lithium ions move in and out of the graphite during charging and discharging. In an anode‑free lithium metal battery, there is no anode material at the start, only a copper foil. During the first charge, lithium leaves the cathode and deposits onto the copper as pure lithium metal, effectively forming the anode. Removing the anode increases energy density dramatically by reducing weight, and it also simplifies and lowers the cost of manufacturing. Because of this, anode‑free batteries are considered to have major potential for next‑generation energy storage. However, a key challenge is that lithium deposits unevenly on bare copper, forming long needle‑like dendrites that can pierce the separator and cause short circuits. This uneven growth also leads to rapid capacity loss, so anode‑free batteries typically fail after only a few hundred cycles.

In this research, the scientists coated the copper foil with NiO powder and used a CO₂ laser (λ = 10.6 µm) in a rapid scanning mode to heat and transform the coating. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions diffuse more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared to bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically. The modified cells lasted 400 cycles at room temperature and 700 cycles at 40 °C, compared with only 150 cycles for uncoated copper.

This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.

Read the full article

Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery

Supriya Kadam et al 2025 Prog. Energy 7 045003

Do you want to learn more about this topic?

Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)

The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.

Planning a sustainable water future in the United States

26 January 2026 at 17:28

Within 45 years, water demand in the United States is predicted to double, while climate change is expected to strain freshwater supplies, with 44% of the country already experiencing some form of drought. One way to expand water resources is desalination, where salt is removed from seawater or brackish groundwater to make clean, usable water. Brackish groundwater contains far less salt than seawater, making it much easier and cheaper to treat, and the United States has vast reserves of it in deep aquifers. The challenge is that desalination traditionally requires a lot of energy and produces a concentrated brine waste stream that is difficult and costly to dispose of. As a result, desalination currently provides only about 1% of the nation’s water supply, even though it is a major source of drinking water in regions such as the Middle East and North Africa.

Researchers Vasilis Fthenakis (left) and Zhuoran Zhang (right) from Columbia University taken at Nassau Point in Long Island (Courtesy: Zhuoran Zhang, Columbia University)

In this work, the researchers show how desalination of brackish groundwater can be made genuinely sustainable and economically viable for addressing the United States’ looming water shortages. A key part of the solution is zero‑liquid‑discharge, which avoids brine disposal by extracting more freshwater and recovering salts such as sodium, calcium, and magnesium for reuse. Crucially, the study demonstrates that when desalination is powered by low‑cost solar and wind energy, the overall process becomes far more affordable. By 2040, solar photovoltaics paired with optimised battery storage are projected to produce electricity at lower cost than the grid in the states facing the largest water deficits, making renewable‑powered desalination a competitive option.

The researchers also show that advanced technologies, such as high‑recovery reverse osmosis and crystallisation, can achieve zero‑liquid‑discharge without increasing costs, because the extra water and salt recovery offsets the expense of brine management. Their modelling indicates that a full renewable‑powered zero‑liquid‑discharge pathway can produce freshwater at an affordable cost, while reducing environmental impacts and avoiding brine disposal altogether. Taken together, this work outlines a realistic, sustainable pathway for large‑scale desalination in the United States, offering a credible strategy for securing future water supplies in increasingly water‑stressed regions.

Progress diagram adapted from article (Courtesy: Zhuoran Zhang, Columbia University)

Do you want to learn more about this topic?

Review of solar-enabled desalination and implications for zero-liquid-discharge applications by Vasilis Fthenakis et al. (2024)


The post Planning a sustainable water future in the United States appeared first on Physics World.

Could silicon become the bedrock of quantum computers?

26 January 2026 at 17:00

Silicon, in the form of semiconductors, integrated chips and transistors, is the bedrock of modern classical computers – so much so that it lends its name to technological hubs around the world, beginning with Silicon Valley in the US. For quantum computers, the bedrock is still unknown, but a new platform developed by researchers in Australia suggests that silicon could play a role here, too.

Dubbed the 14|15 platform due to its elemental constituents, it combines a crystalline silicon substrate with qubits made from phosphorus atoms. By relying on only two types of atoms, team co-leader Michelle Simmons says the device “avoids the interfaces and complexities that plague so many multi-material platforms” while enabling “high-quality qubits with lower noise, simplicity of design and device stability”.

Boarding at platform 14|15

Quantum computers take registers of qubits, which store quantum information, and apply basic operations to them sequentially to execute algorithms. One of the primary challenges they face is scalability – that is, sustaining reliable, or high-fidelity, operations on an increasing number of qubits. Many of today’s platforms use only a small number of qubits, for which operations can be individually tuned for optimal performance. However, as the amount of hardware, complexity and noise increases, this hands-on approach becomes debilitating.

Silicon quantum processors may offer a solution. Writing in Nature, Simmons, Ludwik Kranz, and their team at Silicon Quantum Computing (a spinout from the University of New South Wales in Sydney) describe a system that uses the nuclei of phosphorus atoms as its primary qubit. Each nucleus behaves a little like a bar magnet with an orientation (north/south or up/down) that represents a 0 or 1.

These so-called spin qubits are particularly desirable because they exhibit relatively long coherence times, meaning information can be preserved for long enough to apply the numerous operations of an algorithm. Using monolithic, high-purity silicon as the substrate further benefits coherence since it reduces undesirable charge and magnetic noise arising from impurities and interfaces.

To make their quantum processor, the team deposited phosphorus atoms in small registers a few nanometres across. Within each register, the phosphorus nuclei do not interact strongly enough on their own to generate the entangled states required for a quantum computation. The team remedies this by loading each cluster of phosphorus atoms with an electron that is shared between the atoms. This gives rise to so-called hyperfine interactions, in which each nuclear spin couples to the electron much as one bar magnet couples to another, providing the interaction needed to entangle the nuclear spins within each register.

By combining these interactions with control of individual nuclear spins, the researchers showed that they can generate Bell states (maximally entangled two-qubit states) between pairs of nuclei within a register with error rates as low as 0.5% – the lowest to date for semiconductor platforms.
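To put that figure in context, here is a minimal sketch of how a two-qubit error rate translates into Bell-state fidelity under a simple depolarising-noise model. Interpreting the 0.5% figure as a depolarising probability is an illustrative assumption, not the team's own error analysis.

```python
import numpy as np

# Illustrative only: relate a two-qubit "error rate" to Bell-state fidelity
# under a simple depolarising-noise model (not the model used in the paper).

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)
rho_ideal = np.outer(phi_plus, phi_plus.conj())      # ideal Bell-state density matrix

p_error = 0.005                                      # 0.5% error per entangling operation
rho_noisy = (1 - p_error) * rho_ideal + p_error * np.eye(4) / 4   # depolarised state

fidelity = np.real(phi_plus.conj() @ rho_noisy @ phi_plus)
print(f"Bell-state fidelity under 0.5% depolarising error: {fidelity:.4f}")
# ~0.9963: even a sub-percent error rate keeps the state very close to ideal
```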

Scaling through repulsion

The team’s next step was to connect multiple processors – a step that exponentially increases their combined capacity. To understand how, consider two quantum processors, one with n qubits and the other m qubits. Isolated from one another, they can collectively represent at most 2^n + 2^m states. Once they are entangled, however, they can represent 2^(n+m) states.
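The arithmetic behind that claim can be checked directly: describing a product state of two separate registers requires 2^n + 2^m amplitudes, whereas a general entangled state of the combined register requires 2^(n+m). The short snippet below simply tabulates these numbers for a few register sizes, including the four- and five-qubit registers used in the experiment.

```python
# Amplitude counting behind the scaling claim: a product state of two separate
# registers (n and m qubits) is specified by 2**n + 2**m amplitudes, whereas a
# general entangled state of the combined register needs 2**(n + m).

for n, m in [(2, 2), (4, 5), (10, 10)]:
    separate = 2**n + 2**m
    entangled = 2**(n + m)
    print(f"n={n}, m={m}: {separate} amplitudes separately vs {entangled} when entangled")
# n=4, m=5 (as in the experiment): 48 vs 512
```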

Simmons says that silicon quantum processors offer an inherent advantage in scaling, too. Generating numerous registers on a single chip and using “naturally occurring” qubits, she notes, reduces their need for extraneous confinement gates and electronics as they scale.

The researchers showcased these scaling capabilities by entangling a register of four phosphorus atoms with a register of five, separated by 13 nm. The entanglement between the registers is mediated by the electron-exchange interaction, a phenomenon arising from the combination of the Pauli exclusion principle and Coulomb repulsion when electrons are confined in a small region. By combining this with the other interactions and controls in their toolkit, the researchers generated entanglement across eight data qubits spanning the two registers.

Retaining such high-quality qubits and individual control of them despite their high density demonstrates the scaling potential of the platform. Future avenues of exploration include increasing the size of 2D arrays of registers to increase the number of qubits, but Simmons says the rest is “top secret”, adding “the world will know soon enough”.

The post Could silicon become the bedrock of quantum computers? appeared first on Physics World.

Is our embrace of AI naïve and could it lead to an environmental disaster?

26 janvier 2026 à 12:00

According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including a co-winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.

Despite signing the statement, Sam Altman of OpenAI, the firm behind ChatGPT, has declared that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.

AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?

AI has some practical benefits but too often is put to mostly meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without it. AI already uses 5% of all electricity in the US, and by 2028 this figure is expected to reach 15% – equivalent to more than a quarter of the electricity consumed by all US households. AI data servers are more than 50% as carbon intensive as the rest of the US’s electricity supply.

Those energy needs are why some tech companies are rushing to build AI data centres – often under confidential, opaque agreements – for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil fuels – completely contrary to the Paris Agreement to limit global warming. We must allocate Earth’s strictly limited resources wisely, with what is currently wasted on AI going instead towards vital needs.

To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.

Playing with fire

AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.

Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, ever more money and power are being transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed, knowledgeable and have influence.

AI is like using a hammer to crack a nut. Sometimes a hammer may be needed but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing about the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.

Wise reflection today on our relationship with AI is more important than ever.

The post Is our embrace of AI naïve and could it lead to an environmental disaster? appeared first on Physics World.

New sensor uses topological material to detect helium leaks

26 janvier 2026 à 10:00

A new sensor detects helium leaks by monitoring how sound waves propagate through a topological material – no chemical reactions required. Developed by acoustic scientists at Nanjing University, China, the innovative, physics-based device is compact, stable, accurate and capable of operating at very low temperatures.

Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.

A two-dimensional acoustic topological material

The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are localized at the corners.

To test their setup, the researchers placed speakers under the corners to send sound waves into the structure, making the gas within it vibrate at a characteristic frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, shifting this resonance frequency. Measuring the shift enabled the researchers to calculate the concentration of helium in the device.
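For a sense of scale, the shift follows from the ideal-gas speed of sound, c = sqrt(γRT/M): helium's low molar mass makes sound travel roughly three times faster than in air, so a cavity resonance moves up by a similar factor. The back-of-envelope estimate below uses textbook gas properties and room temperature as assumptions; it is not the authors' calibration, which would also have to handle gas mixtures and the device geometry.

```python
import numpy as np

# Back-of-envelope estimate (not the authors' calibration): an acoustic
# resonance frequency scales with the speed of sound c = sqrt(gamma*R*T/M),
# so replacing air with helium in a fixed cavity raises the frequency.

R, T = 8.314, 293.0                                        # gas constant (J/mol/K), room temperature (K)
gas = {"air": (1.40, 0.0290), "helium": (1.667, 0.0040)}   # (gamma, molar mass in kg/mol)

def sound_speed(gamma, M):
    return np.sqrt(gamma * R * T / M)

c_air = sound_speed(*gas["air"])
c_he = sound_speed(*gas["helium"])
print(f"c(air) = {c_air:.0f} m/s, c(helium) = {c_he:.0f} m/s")
print(f"resonance shift factor ~ {c_he / c_air:.2f}x")     # roughly 2.9x higher frequency
```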

Many advantages over traditional gas sensors

Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.

As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.

Other gases could be detected

Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.

While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.

Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.

The post New sensor uses topological material to detect helium leaks appeared first on Physics World.

Encrypted qubits can be cloned and stored in multiple locations

24 janvier 2026 à 16:09

Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could potentially allow quantum-secure cloud storage, in which data can be stored on multiple servers, thereby allowing for redundancy without compromising security. The research also has implications for quantum fundamentals.

Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”

No-cloning poses a challenge to those trying to create a quantum internet. On today’s internet, storing information on remote servers is common, and multiple copies of this information are usually kept in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably want the same degree of information security, but the no-cloning theorem would apparently forbid this.

Signal and noise

In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.

Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”

Therefore, a user with all of the noise qubits knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits, therefore, allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
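A loose classical analogy, heavily caveated, may help: in a one-time pad you can store as many copies of the ciphertext as you like, while the single-use key plays a role loosely analogous to the noise record. The Python sketch below illustrates only that division of labour – ciphertext copies everywhere, one consumable key – and none of the quantum mechanics of the actual protocol.

```python
import secrets

# Loose classical analogy only (the real protocol is quantum and entanglement-based):
# a one-time pad lets you store as many copies of the ciphertext as you like,
# while the single-use key plays a role loosely analogous to the "noise" record.
# Decrypting one copy consumes the key, so no second plaintext can be recovered.

def encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(message))                  # the consumable key
    ciphertext = bytes(m ^ k for m, k in zip(message, key))  # message drowned in noise
    return ciphertext, key

message = b"qubit A"
ciphertext, key = encrypt(message)

backups = [ciphertext] * 3            # cloned encrypted copies on different "servers"
recovered = bytes(c ^ k for c, k in zip(backups[0], key))
key = None                            # decryption consumes the key (cf. measuring the noise qubits)

print(recovered)                      # b'qubit A' -- but only recoverable once
```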

Cloning clones

Kempf says that, working with IBM, the researchers have successfully demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor, and have even shown that they can clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.

The research is described in Physical Review Letters, and Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes. “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”

Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”

The post Encrypted qubits can be cloned and stored in multiple locations appeared first on Physics World.

Cosmic time capsules: the search for pristine comets

23 janvier 2026 à 14:40

In this episode of Physics World Stories, host Andrew Glester explores the fascinating hunt for pristine comets – icy bodies that preserve material from the solar system’s beginnings and even earlier. Unlike more familiar comets that repeatedly swing close to the Sun and transform, these frozen relics act as time capsules, offering unique insights into our cosmic history.

Interstellar comet 3I/ATLAS is seen in this composite image captured on 6 November 2025 by the Europa Ultraviolet Spectrograph instrument on NASA’s Europa Clipper spacecraft. (Courtesy: NASA/JPL-Caltech/SWRI)

The first guest is Tracy Becker, deputy principal investigator for the Ultraviolet Spectrograph on NASA’s Europa Clipper mission. Becker describes how the Jupiter-bound spacecraft recently turned its gaze to 3I/ATLAS, an interstellar visitor that appeared last July. Mission scientists quickly reacted to this unique opportunity, which also enabled them to test the mission’s instruments before it arrives at the icy world of Europa.

Michael Küppers then introduces the upcoming Comet Interceptor mission, set for launch in 2029. This joint ESA–JAXA mission will “park” in space until a suitable comet arrives from the outer reaches of the solar system. It will then deploy two probes to study the comet from multiple angles – offering a first-ever close look at material untouched since the solar system’s birth.

From interstellar wanderers to carefully orchestrated intercepts, this episode blends pioneering missions and cosmic detective work. Keep up to date with all the latest space and astronomy developments in the dedicated section of the Physics World website.

The post Cosmic time capsules: the search for pristine comets appeared first on Physics World.


Hot ancient galaxy cluster challenges current cosmological models

23 janvier 2026 à 12:30

As with people, age in cosmology is not always a reliable guide to behaviour. An early-career politician may be more likely to win a debate with a student than with a seasoned diplomat, but put all three in a room with a toddler and the toddler will almost certainly get their own way – they are following a different set of rules. A team of global collaborators noticed a similar phenomenon when peering at a cluster of developing galaxies from a time when the universe was just a tenth of its current age.

Cosmological theories suggest that such infant clusters should host much cooler and less abundant gas than more mature clusters. But what the researchers saw was at least five times hotter than expected – apparently not abiding by those rules.

“That’s a massive surprise and forces us to rethink how large structures actually form and evolve in the universe,” says first author Dazhi Zhou, a PhD candidate at the University of British Columbia.

Eyes on the past

Looking into distant outer space allows us to peer into the past. The protocluster of developing galaxies that Zhou and collaborators investigated – known as SPT2349–56 – is 12.4 billion light years away, so the light observed from it left home when the universe was just 1.4 billion years old. Light from so far away is faint and hard to detect by the time it reaches us, so the researchers used the Atacama Large Millimeter/submillimeter Array (ALMA) to study SPT2349–56 via a special type of shadow.

As this type of protocluster develops, Zhou explains, the gas around its galaxies becomes so hot that electrons in the gas interact with, and confer some of their energy upon, passing photons. This leaves light passing through the gas with more photons at the higher-energy end of the spectrum and fewer at the lower end. When viewing the cosmic microwave background radiation – the “afterglow” left behind by the Big Bang – this results in a shadow at low energies. This energy shift, discovered by physicists Rashid Sunyaev and Yakov Zeldovich, not only reveals the presence of the protocluster; the strength of the signature also indicates the thermal energy of the gas within it.
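The strength of that shadow is usually expressed through the Compton y-parameter of the textbook thermal Sunyaev–Zeldovich effect, y = (σ_T/m_e c²)∫P_e dl, with a low-frequency decrement ΔT/T ≈ −2y. The snippet below evaluates this with round-number gas properties chosen purely for illustration; they are not the values measured for SPT2349–56.

```python
# Textbook thermal Sunyaev-Zeldovich relation (illustrative, not the study's numbers):
# Compton parameter  y = (sigma_T / m_e c^2) * (electron pressure integrated along the line of sight),
# and in the Rayleigh-Jeans (low-frequency) limit the CMB decrement is dT/T ~ -2y.

sigma_T = 6.652e-29      # Thomson cross-section (m^2)
m_e_c2 = 8.187e-14       # electron rest energy (J)
k_B = 1.381e-23          # Boltzmann constant (J/K)

# assumed, round-number cluster gas properties for illustration only
n_e = 1e3                # electron density (m^-3), ~10^-3 cm^-3
T_e = 5e7                # electron temperature (K), tens of millions of kelvin
L = 3e22                 # path length through the hot gas (m), ~1 Mpc

y = (sigma_T / m_e_c2) * n_e * k_B * T_e * L
print(f"Compton y ~ {y:.1e}, CMB decrement dT/T ~ {-2*y:.1e}")
# y of order 1e-5: a 'shadow' of roughly 0.1 mK against the 2.7 K background
```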

The team’s observations were not easy. “This shadow is actually pretty tiny,” Zhou explains. In addition, there is thermal emission from the dust inside galaxies at radio wavelengths, originally estimated to be 20 times stronger than the Sunyaev–Zeldovich signature. “It really is like finding a needle in a haystack,” he adds. Nonetheless, the team did identify a definite Sunyaev–Zeldovich signature from SPT2349–56, with a thermal energy indicating that it was at least five times hotter than expected – thousands of times hotter than the surface of our Sun.

Time to upgrade?

SPT2349–56 has some quirks that may help explain its high thermal energy, including three supermassive black holes shooting out jets of high-energy matter – a known but rare phenomenon. However, even simulations that include these outbursts as a heating mechanism – one that is more efficient and kicks in much earlier than the heating from gravitational collapse assumed in current models – still do not reproduce the high temperatures observed, perhaps pointing to gaps in our knowledge of the underlying physics.

Eiichiro Komatsu from the Max-Planck-Institut für Astrophysik describes the work as “a wonderful measurement”. Although not directly involved in this research, Komatsu has also looked at what the Sunyaev–Zeldovich effect can reveal about the cosmos. “The amount of thermal energy measured by the authors is staggering, yet its origin is a mystery,” he tells Physics World. He suggests these results will motivate further observations of other systems in the early universe.

“We need to be cautious rather than making any big claim,” adds Zhou. This is the first Sunyaev–Zeldovich detection of a protocluster from the first three billion years of the universe’s existence. Next, he aims to study similar protoclusters, and he hopes others will also work to corroborate the observations.

The research is reported in Nature.

The post Hot ancient galaxy cluster challenges current cosmological models appeared first on Physics World.

Laser fusion: Focused Energy charts a course to commercial viability

22 janvier 2026 à 16:01

This episode of the Physics World Weekly podcast features a conversation with the plasma physicist Debbie Callahan, who is chief strategy officer at Focused Energy – a fusion-energy start-up based in California and Germany. Prior to that, she spent 35 years at Lawrence Livermore National Laboratory in the US, where she worked on the National Ignition Facility (NIF).

Focused Energy is developing a commercial system for generating energy from the laser-driven fusion of hydrogen isotopes. Callahan describes LightHouse, which is the company’s design for a laser-fusion power plant, and Pearl, which is the firm’s deuterium–tritium fuel capsule.

Callahan talks about the challenges and rewards of working in the fusion industry and also calls on early-career physicists to consider careers in this burgeoning sector.

The post Laser fusion: Focused Energy charts a course to commercial viability appeared first on Physics World.

Fuel cell catalyst requirements for heavy-duty vehicle applications

22 janvier 2026 à 12:25

Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.

TCO analysis suggests that, because trucks typically operate over very high mileages (around a million miles), the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely governed by fuel-cell efficiency and by the durability of the catalysts.
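A rough worked example shows why fuel dominates. All the numbers below – hydrogen price, fuel economy and stack cost – are assumptions for illustration only, not figures from the article.

```python
# Rough illustration of why fuel cost dominates HDV total cost of ownership.
# All numbers below are assumptions for the sketch, not figures from the article.

lifetime_miles = 1_000_000        # typical long-haul truck lifetime mileage
fuel_economy = 8.0                # miles per kg of hydrogen (assumed)
h2_price = 5.0                    # $ per kg of hydrogen (assumed)
stack_capex = 30_000.0            # $ for the fuel-cell stack (assumed)

fuel_cost = lifetime_miles / fuel_economy * h2_price
print(f"Lifetime fuel cost : ${fuel_cost:,.0f}")      # $625,000 with these assumptions
print(f"Stack CapEx        : ${stack_capex:,.0f}")
print(f"Fuel / CapEx ratio : {fuel_cost / stack_capex:.0f}x")
# A few percent gain in efficiency (or in catalyst durability that preserves
# efficiency over life) therefore saves far more than shaving the stack cost.
```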

This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.

We provide our perspective on four major approaches currently being pursued, namely mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight their key development needs for the future.

Nagappan Ramaswamy

Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University, Boston, specialising in fuel-cell electrocatalysis.

He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors, also in Michigan, focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy on the development of proton-exchange membrane fuel cells for automotive applications.

At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.

The post Fuel cell catalyst requirements for heavy-duty vehicle applications appeared first on Physics World.

Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’

22 janvier 2026 à 12:00

What skills do you use every day in your job?

Much of my time is spent trying to build and refine models in quantum optics, usually with just a pencil, paper and a computer. This requires an ability to sit with difficult concepts for a long time, sometimes far longer than is comfortable, until they finally reveal their structure.

Good communication is equally essential – I teach students; collaborate with colleagues from different subfields; and translate complex ideas into accessible language for the broader public. Modern physics connects with many different fields, so being flexible and open-minded matters as much as knowing the technical details. Above all, curiosity drives everything. When I don’t understand something, that uncertainty becomes my strongest motivation to keep going.

What do you like best and least about your job?

What I like the best is the sense of discovery – the moment when a problem that has evaded understanding for weeks suddenly becomes clear. Those flashes of insight feel like hearing the quiet whisper of nature itself. They are rare, but they bring along a joy that is hard to find elsewhere.

I also value the opportunity to guide the next generation of physicists, whether in the university classroom or through public science communication. Teaching brings a different kind of fulfilment: witnessing students develop confidence, curiosity and a genuine love for physics.

What I like the least is the inherent uncertainty of research. Questions do not promise favourable answers, and progress is rarely linear. Fortunately, I have come to see this lack of balance not as a weakness but as a source of power that forces growth, new perspectives, and ultimately deeper understanding.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had known that feeling lost is not a sign of inadequacy but a natural part of doing physics at a high level. Not understanding something can be the greatest motivator, provided one is willing to invest time and effort. Passion and curiosity matter far more than innate brilliance. If I had realized earlier that steady dedication can carry you farther than talent alone, I would have embraced uncertainty with much more confidence.

The post Ask me anything: Mažena Mackoit-Sinkevičienė – ‘Above all, curiosity drives everything’ appeared first on Physics World.

Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement

22 janvier 2026 à 10:30

“God does not play dice.”

With this famous remark at the 1927 Solvay Conference, Albert Einstein set the tone for one of physics’ most enduring debates. At the heart of his dispute with Niels Bohr lay a question that continues to shape the foundations of physics: does the apparently probabilistic nature of quantum mechanics reflect something fundamental, or is it simply due to lack of information about some “hidden variables” of the system that we cannot access?

Physicists at University College London (UCL) in the UK have now addressed this question via the concept of quantum state diffusion (QSD). In QSD, the wavefunction does not collapse abruptly. Instead, wavefunction collapse is modelled as a continuous interaction with the environment that causes the system to evolve gradually toward a definite state, restoring some degree of intuition to the counterintuitive quantum world.

A quantum coin toss

To appreciate the distinction (and the advantages it might bring), imagine tossing a coin. While the coin is spinning in midair, it is neither fully heads nor fully tails – its state represents a blend of both possibilities. This mirrors a quantum system in superposition.

When the coin eventually lands, the uncertainty disappears and we obtain a definite outcome. In quantum terms, this corresponds to wavefunction collapse: the superposition resolves into a single state upon measurement.

In the standard interpretation of quantum mechanics, wavefunction collapse is considered instantaneous. However, this abrupt transition is challenging from a thermodynamic perspective because uncertainty is closely tied to entropy. Before measurement, a system in superposition carries maximal uncertainty, and thus maximum entropy. After collapse, the outcome is definite and our uncertainty about the system is reduced, thereby reducing the entropy.
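One simple way to quantify that bookkeeping is the Shannon entropy of the measurement outcomes, which is maximal for an equal superposition and zero once the outcome is definite. The toy calculation below illustrates only this counting argument; it is not the thermodynamic analysis carried out in the paper.

```python
import numpy as np

# Outcome (Shannon) entropy of a measurement, in bits: maximal for an equal
# superposition, zero once the wavefunction has collapsed to a definite state.
# A toy illustration of the bookkeeping, not the paper's thermodynamic analysis.

def shannon_entropy(probs):
    probs = np.array([p for p in probs if p > 0])
    return float(-np.sum(probs * np.log2(probs)))

before = [0.5, 0.5]   # equal superposition: heads/tails still undecided
after = [1.0, 0.0]    # after collapse: outcome definite

print(f"entropy before measurement: {shannon_entropy(before):.1f} bit")   # 1.0
print(f"entropy after measurement : {shannon_entropy(after):.1f} bit")    # 0.0
```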

This apparent reduction in entropy immediately raises a deeper question. If the system suddenly becomes more ordered at the moment of measurement, where does the “missing” entropy go?

From instant jumps to continuous flows

Returning to the coin analogy, imagine that instead of landing cleanly and instantly revealing heads or tails, the coin wobbles, leans, slows and gradually settles onto one face. The outcome is the same, but the transition is continuous rather than abrupt.

This gradual settling captures the essence of QSD. Instead of an instantaneous “collapse”, the quantum state unfolds continuously over time. This makes it possible to track various parameters of thermodynamic change, including a quantity called environmental stochastic entropy production that measures how irreversible the process is.

Another benefit is that whereas standard projective measurements describe an abrupt “yes/no” outcome, QSD models a broader class of generalized or “weak” measurements, revealing the subtle ways quantum systems evolve. It also allows physicists to follow individual trajectories rather than just average outcomes, uncovering details that the standard framework smooths over.
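To see what such a trajectory looks like, the sketch below integrates a single quantum-state-diffusion trajectory for one qubit under continuous measurement of σ_z, using a standard form of the stochastic Schrödinger equation. The measurement strength, time step and initial state are all assumptions for illustration, and this is not the authors' code. Starting from an equal superposition, the expectation value of σ_z wanders stochastically before settling at +1 or −1 – a gradual collapse rather than an instantaneous jump.

```python
import numpy as np

# Minimal sketch of one quantum-state-diffusion trajectory (assumed standard
# form of the stochastic Schroedinger equation for continuous measurement of
# sigma_z; illustrative parameters, not the authors' model).

rng = np.random.default_rng(1)
sz = np.array([[1, 0], [0, -1]], dtype=complex)      # measured observable
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition

gamma, dt, steps = 1.0, 1e-3, 20_000                 # measurement strength, time step, duration
for step in range(steps):
    exp_z = np.real(psi.conj() @ sz @ psi)           # current <sigma_z>
    dW = rng.normal(scale=np.sqrt(dt))               # Wiener increment
    dev = sz - exp_z * np.eye(2)                     # fluctuation operator (sigma_z - <sigma_z>)
    drift = -0.5 * gamma * dev @ dev @ psi * dt
    diffusion = np.sqrt(gamma) * dev @ psi * dW
    psi = psi + drift + diffusion
    psi /= np.linalg.norm(psi)                       # keep the state normalised
    if step % 5000 == 0:
        print(f"t = {step*dt:5.1f}  <sigma_z> = {exp_z:+.3f}")

print(f"final <sigma_z> = {np.real(psi.conj() @ sz @ psi):+.3f}")   # close to +1 or -1
```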

“The QSD framework helps us understand how unpredictable environmental influences affect quantum systems,” explains Sophia Walls, a PhD student at UCL and the first author of a paper in Physical Review A on the research. Environmental noise, Walls adds, is particularly important for quantum technologies, making the study’s insights valuable for quantum error correction, control protocols and feedback mechanisms.

Bridging determinism and probability

At first glance, QSD might seem to resemble decoherence, which also arises from system–environment interactions such as noise. But the two differ in scope. “Decoherence explains how a system becomes a classical mixed state,” Walls clarifies, “but not how it ultimately purifies into a single eigenstate.” QSD, with its stochastic term, describes this final purification – the point where the coin’s faint shimmer sharpens into heads or tails.

In this view, measurement is not a single act but a continuous, entropy-producing flow of information between system and environment – a process that gradually results in manifestation of one of the possible quantum states, rather than an abrupt “collapse”.

“Standard quantum mechanics separates two kinds of dynamics – the deterministic Schrödinger evolution and the probabilistic, instantaneous collapse,” Walls notes. “QSD connects both in a single dynamical equation, offering a more unified description of measurement.”

This continuous evolution makes otherwise intractable quantities, such as entropy production, measurable and meaningful. It also breathes life into the wavefunction itself. By simulating individual realizations, QSD distinguishes between two seemingly identical mixed states: one genuinely entangled with its environment, and another that simply represents our ignorance. Only in the first case does the system dynamically evolve – a distinction invisible in the orthodox picture.

A window on quantum gravity?

Could this diffusion-based framework also illuminate other fundamental questions beyond the nature of measurement? Walls thinks it’s possible. Recent work suggests that stochastic processes could provide experimental clues about how gravity behaves at the quantum scale. QSD may one day offer a way to formalize or test such ideas. “If the nature of quantum gravity can be studied through a diffusive or stochastic process, then QSD would be a relevant framework to explore it,” Walls says.

The post Modelling wavefunction collapse as a continuous flow yields insights on the nature of measurement appeared first on Physics World.
