
New cosmic map will put dark-matter theories to the test

Astronomers have created the most detailed map to date of the vast structures of dark matter that appear to permeate the universe. Using the James Webb Space Telescope (JWST), the team, led by Diana Scognamiglio at NASA’s Jet Propulsion Laboratory, exploited gravitational lensing to map dark-matter filaments and clusters with unprecedented resolution. As a result, physicists have new and robust data with which to test theories of dark matter.

Dark matter is a hypothetical substance that appears to account for about 85% of the mass in the universe – yet it has never been observed directly. Physicists invoke dark matter to explain the dynamics and evolution of large-scale structures in the universe, including the gravitational formation of galaxy clusters and the cosmic filaments that connect them over distances of more than 100 million light-years.

Light from very distant objects lying beyond these structures is deflected by the gravitational tug of the dark matter within the clusters and filaments. From Earth, this is observed as gravitational lensing: the images of the distant objects are distorted and their observed brightness is altered. These effects can be used to determine the dark-matter content of the clusters and filaments.

In 2007, the Cosmic Evolution Survey (COSMOS) used the Hubble Space Telescope to create a map of cosmic filaments in an area of the sky about nine times larger than that occupied by the Moon.

“The COSMOS field was published by Richard Massey and my advisor, Jason Rhodes,” Scognamiglio recounts. “It has a special place in the history of dark-matter mapping, with the first wide-area map of space-based weak lensing mass.”

However, Hubble’s limited resolution meant that many smaller-scale features remained invisible in COSMOS. In a new survey called COSMOS-Web, Scognamiglio’s team harnessed the vastly improved imaging capabilities of the JWST, which offers over twice the resolution of its predecessor.

Sharp and sensitive

“We used JWST’s exceptional sharpness and sensitivity to measure the shapes of many more faint, distant galaxies in the COSMOS-Web field – the central part of the original COSMOS field,” Scognamiglio describes. “This allowed us to push weak gravitational lensing into a new regime, producing a much sharper and more detailed mass map over a contiguous area.”

With these improvements, the team could measure the shapes of 129 galaxies per square arcminute in an area of sky the size of 2.5 full Moons. With thorough mathematical analysis, they could then determine how these galaxies had been distorted by dark-matter lensing.
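To give a sense of the statistics involved: in weak lensing, each galaxy’s observed shape is the sum of a random intrinsic ellipticity and a small coherent shear imprinted by the intervening mass, so averaging over many galaxies in a patch of sky beats down the random part and leaves an estimate of the shear, which is then converted into a mass map. The snippet below is a minimal, illustrative sketch of that averaging step – it is not the COSMOS-Web pipeline, and the shear value, shape-noise level and patch size are invented for illustration.

```python
import numpy as np

# Minimal illustration of the averaging step behind weak-lensing mass
# mapping (not the COSMOS-Web pipeline): intrinsic galaxy ellipticities
# are random, so averaging many of them in a patch of sky leaves an
# estimate of the small coherent shear imprinted by foreground mass.

rng = np.random.default_rng(1)
n_gal = 129 * 4        # ~129 galaxies per square arcminute over a 2 x 2 arcmin patch
true_shear = 0.02      # hypothetical coherent shear in this patch

# observed ellipticity ~ intrinsic ellipticity + shear (weak-lensing regime)
intrinsic = rng.normal(0.0, 0.26, n_gal)   # intrinsic "shape noise"
observed = intrinsic + true_shear

shear_estimate = observed.mean()
shear_error = observed.std(ddof=1) / np.sqrt(n_gal)
print(f"estimated shear = {shear_estimate:.3f} +/- {shear_error:.3f}")
```

The error estimate shows why galaxy number density matters: the noise on the recovered shear falls roughly as one over the square root of the number of galaxies, which is why JWST’s deeper, sharper imaging translates directly into a finer mass map.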

“The map revealed fine structure in the cosmic web, including filaments and mass concentrations that were not visible in previous space-based maps,” Scognamiglio says.

Peak star formation

The map allowed the team to identify lensing structures out to distances of roughly 5 billion light-years, corresponding to the universe’s peak era of star formation. Beyond this point, galaxies became too sparse and dim for their shapes to be measured reliably, placing a new limit on the COSMOS-Web map’s resolution.

With this unprecedented resolution, the team could also identify features as small as the dark matter halos encircling small clusters of galaxies, which were invisible in the original COSMOS survey. The astronomers hope their result will set a new, higher-resolution benchmark for future studies using JWST’s observations to probe the elusive nature of dark matter, and its intrinsic connection with the formation and evolution of the universe’s largest structures.

“It also sets the stage for current and future missions like ESA’s Euclid and NASA’s Nancy Grace Roman Space Telescope, which will extend similar dark matter mapping techniques to much larger areas of the sky,” Scognamiglio says.

The observations are described in Nature Astronomy.

The post New cosmic map will put dark-matter theories to the test appeared first on Physics World.


Top-cited authors from India and North America share their tips for early-career researchers

Some 20 papers from researchers based in North America have been recognized with a Top Cited Paper award for 2025 from IOP Publishing, which publishes Physics World.

The prize is given to corresponding authors of papers published between 2022 and 2024 in IOP Publishing and partner journals that are in the top 1% of the most-cited papers.

Meanwhile, 29 papers from India have been recognized with a Top Cited Paper award for 2025.

Below, some of the winners of the 2025 top-cited paper award from India and North America outline their tips for early-career researchers who are looking to boost the impact of their work.

Answers have been edited for clarity and brevity.

Shikhar Mittal from Tata Institute of Fundamental Research in Mumbai: Early-career researchers, especially PhD students, often underestimate the importance of presentation and visibility when it comes to their work. While doing high-quality research is, of course, essential, it is equally important to write your paper clearly and professionally. Even the tiniest of details, such as consistent scientific notation, clean figures, correct punctuation and avoiding typos can make a big difference. A paper full of careless errors may not be taken seriously, even if it contains strong scientific results.

Another crucial aspect is visibility. It is important to actively advertise your research by presenting your work at conferences and reaching out to researchers who are working on related topics. If someone misses citing your relevant work, a polite message can often lead to recognition and even collaboration. Being proactive in how you communicate and share your research can significantly improve its impact.

Sandip Mondal from the Indian Institute of Technology Bombay: Don’t try to solve everything at once. Pick a focused, well-motivated question and go deep into it. It’s tempting to jump on “hot topics”, but the work that lasts – and gets cited – is methodologically sound, reproducible and well-characterized. Even incremental advances, if rigorously done, can be very impactful.

Another tip is to work with people who bring complementary skills — whether in theory, device fabrication or characterization. Collaboration isn’t just about co-authors; it’s about deepening the quality of your work. And once your paper is published, your job isn’t done. Promote it as visibility breeds engagement, which leads to impact.

Sarika Jalan from the Indian Institute of Technology Indore: Try to go in-depth into the problem you are working on. Publications alone cannot give visibility; it’s understanding and creativity that will matter in the long run.

Marcia Rieke from the University of Arizona: Write clearly and concisely. I would also suggest being careful with your choice of journal – high-impact-factor journals can be great but may lead to protracted refereeing while other journals are very reputable and sometimes have faster publication rates.

Dan Scolnic from Duke University: At some point there needs to be a transition from thinking about “number of papers” to “number of citations” instead. Graduate students typically talk about writing as many papers as possible – that’s the metric. But at some point scientists start getting judged on the impact of their papers, which is most easily understood with citations. I’m not saying one should e-mail anyone with a paper to cite them, but rather, to think about what one wants to put time in to work on. One should say “I’d like to work on this because I think it can have a big impact”.

P Veeresha from CHRIST University in Bangalore: Build a strong foundation in the fundamentals and always think critically about what society truly needs. Also focus on how your research can be different, novel, and practically useful. It’s important to present your work in a simple and clear way so that it connects with both the academic community and real-world applications.

Parasuraman Swaminathan from the Indian Institute of Technology Madras: Thoroughness is critical for good-quality research; be bold and try to push the boundaries of your chosen topic.

Arnab Pal from the Institute of Mathematical Sciences in Chennai: Focus on asking meaningful, well-motivated questions rather than just solving technically difficult problems. Write clearly and communicate your ideas with simplicity and purpose. Engage with the research community early through talks, preprints and collaborations. Above all, be patient and consistent; impactful work often takes time to be recognized.

Steven Finkelstein from the University of Texas at Austin: Work on topics that you find interesting and that others find interesting too, and above all work with people you trust.

The post Top-cited authors from India and North America share their tips for early-career researchers appeared first on Physics World.


Twenty-three nominations, yet no Nobel prize: how Chien-Shiung Wu missed out on the top award in physics

The facts seem simple enough. In 1957 Chen Ning Yang and Tsung-Dao Lee won the Nobel Prize for Physics “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. The idea that parity is violated shocked physicists, who had previously assumed that every process in nature remains the same if you reverse all three spatial co-ordinates.

Thanks to the work of Lee and Yang, who were Chinese-American theoretical physicists, it now appeared that this fundamental physics concept wasn’t true (see box below). As Yang once told Physics World columnist and historian of science Robert Crease, the discovery of parity violation was like having the lights switched off and being so confused that you weren’t sure you’d be in the same room when they came back on.

But one controversy has always surrounded the prize.

Lee and Yang published their findings in a paper in October 1956 (Phys. Rev. 104 254), meaning that their Nobel prize was one of the rare occasions that satisfied Alfred Nobel’s will, which says the award should go to work done “during the preceding year”. However, the first verification of parity violation was published in February 1957 (Phys. Rev. 105 1413) by a team of experimental physicists led by Chien-Shiung Wu at Columbia University, where Lee was also based. (Yang was at the Institute for Advanced Study in Princeton at the time.)

Surely Wu, an eminent experimentalist (see box below “Chien-Shiung Wu: a brief history”), deserved a share of the prize for contributing to such a fundamental discovery? In her paper, entitled “Experimental Test of Parity Conservation in Beta Decay”, Wu says she had “inspiring discussions” with Lee and Yang. Was gender bias at play, did her paper miss the deadline, or was she simply never nominated?

The Wu experiment

Wu's parity conservation experimental results
(Courtesy: IOP Publishing)

Parity is a property of elementary particles that says how they behave when reflected in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. In 1956 Tsung-Dao Lee and Chen Ning Yang realized that while parity conservation had been confirmed in electromagnetic and strong interactions, there was no compelling evidence that it should also hold in weak interactions, such as radioactive decay. In fact, Lee and Yang thought parity violation could explain the peculiar decay patterns of K mesons, which are governed by the weak interaction.

In 1957 Chien-Shiung Wu suggested an experiment to check this based on unstable cobalt-60 nuclei radioactively decaying into nickel-60 while emitting beta rays (electrons). Working at very low temperatures to ensure almost no random thermal motion – and thereby enabling a strong magnetic field to align the cobalt nuclei with their spins parallel – Wu found that far more electrons were emitted in a downward direction than upward.

In the figure, (a) shows how a mirror image of this experiment should also produce more electrons going down than up. But when the experiment was repeated, with the direction of the magnetic field reversed to change the direction of the spin as it would be in the mirror image, Wu and colleagues found that more electrons were produced going upwards (b). The fact that the real-life experiment with reversed spin direction behaved differently from the mirror image proved that parity is violated in the weak interaction of beta decay.
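For readers who want the textbook formulation behind this description (it is not spelled out in the article): the angular distribution of the emitted electrons in an experiment of this type is usually written as W(θ) ∝ 1 + A P (v/c) cos θ, where P is the degree of nuclear polarization, v the electron speed, θ the angle between the nuclear spin and the electron momentum, and A the asymmetry parameter. A parity transformation reverses the electron momentum (cos θ → −cos θ) but leaves the nuclear spin, an axial vector, unchanged, so any non-zero A signals parity violation. Wu’s cobalt-60 data showed a large negative asymmetry, consistent with parity being violated essentially maximally in the weak interaction.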

Back then, the Nobel statutes stipulated that all details about who had been nominated for a Nobel prize – and why the winners were chosen by the Nobel committee – were to be kept secret forever. Later, in 1974, the rules were changed, allowing the archives to be opened 50 years after an award had been made. So why did the mystery not become clear in 2007, half a century after the 1957 prize?

The reason is that there is a secondary criterion for prizes awarded by the Royal Swedish Academy of Sciences – in physics and chemistry – which is that the archive must stay shut for as long as a laureate is still alive. Lee and Yang were in their early 30s when they were awarded the prize and both went on to live very long lives. Lee died on 24 August 2024 aged 97 and it was not until the death of Yang on 18 October 2025 at 103 that the chance to solve the mystery finally arose.

Chien-Shiung Wu: a brief history

Overlooked for a Nobel Chien-Shiung Wu in 1963 at Columbia University, by which time she had already received the first three of her 23 known nominations for a Nobel prize. (Courtesy: Smithsonian Institution)

Born on 31 May 1912 in Jiangsu province in eastern China, Chien-Shiung Wu graduated with a degree in physics from National Central University in Nanjing. After a few years of research in China, she moved to the US, gaining a PhD at the University of California at Berkeley in 1940. Three years later Wu took up a teaching job at Princeton University in New Jersey – a remarkable feat given that women were not then even allowed to study at Princeton.

During the Second World War, Wu joined the Manhattan atomic-bomb project, working on radiation detectors at Columbia University in New York. After the conflict was over, she started studying beta decay – one of the weak interactions associated with radioactive decay. Wu famously led a crucial experiment studying the beta decay of cobalt-60 nuclei, which confirmed a prediction made in October 1956 by her Columbia colleague Tsung-Dao Lee and Chen Ning Yang in Princeton that parity can be violated in the weak interaction.

Lee and Yang went on to win the 1957 Nobel Prize for Physics but the Nobel Committee was not aware that Lee had in fact consulted Wu in spring 1956 – several months before their paper came out – about potential experiments to prove their prediction. As she was to recall in 1973, studying the decay of cobalt-60 was “a golden opportunity” to test their ideas that she “could not let pass”.

The first woman in the Columbia physics department to get a tenured position and a professorship, Wu remained at Columbia for the rest of her career. Taking an active interest in physics well into retirement, she died on 16 February 1997 at the age of 84. Only now, with the publication of this Physics World article, has it become clear that despite receiving 23 nominations from 18 different physicists in 16 years between 1958 and 1974, she never won a Nobel prize.

Entering the archives

As two physicists based in Stockholm with a keen interest in the history of science, we had already examined the case of Lise Meitner, another female physicist who never won a Nobel prize – in her case for fission. We’d published our findings about Meitner in the December 2023 issue of Fysikaktuellt – the journal of the Swedish Physical Society. So after Yang died, we asked the Center for History of Science at the Royal Swedish Academy of Sciences if we could look at the 1957 archives.

A previous article in Physics World from 2012 by Magdolna Hargittai, who had spoken to Anders Bárány, former secretary of the Nobel Committee for Physics, seemed to suggest that Wu wasn’t awarded the 1957 prize because her Physical Review paper had been published in February of that year. This was after the January cut-off and therefore too late to be considered on that occasion (although the trio could have been awarded a joint prize in a subsequent year).

History in the making Left image: Mats Larsson (centre) and Ramon Wyss (left) at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on 13 November 2025, where they became the first people to view the archive containing information about the nominations for the 1957 Nobel Prize for Physics. They are shown here in the company of centre director Karl Grandin (right). Right image: Larsson and Wyss with their hands on the archives, on which this Physics World article is based. (Courtesy: Anne Miche de Malleray)

After receiving permission to access the archives, we went to the centre on Thursday 13 November 2025, where – with great excitement – we finally got our hands on the thick, black, hard-bound book containing information about the 1957 Nobel prizes in physics and chemistry. About 500 pages long, the book revealed that there were a total of 58 nominations for the 1957 Nobel Prize for Physics – but none at all for Wu that year. As we shall go on to explain, she did, however, receive a total of 23 nominations over the next 16 years.

Lee and Yang, we discovered, received just a single nomination for the 1957 prize, submitted by John Simpson, an experimental physicist at the University of Chicago in the US. His nomination reached the Nobel Committee on 29 January 1957, just before the deadline of 31 January. Simpson clearly had a lot of clout with the committee, which commissioned two reports from its members – both Swedish physicists – based on his recommendation. One was by Oskar Klein on the theoretical aspects of the prize and the other by Erik Hulthén on the experimental side of things.

Report revelations

Klein devotes about half of his four-page report to the Hungarian-born theorist Eugene Wigner, who – we discovered – received seven separate nominations for the 1957 prize. In his opening remarks, Klein notes that Wigner’s work on symmetry principles in physics, first published in 1927, had gained renewed relevance in light of recent experiments by Wu, Leon Lederman and others. According to Klein, these experiments cast a new light on the fundamental symmetry principles of physics.

Klein then discusses three important papers by Wigner and concludes that he, more than any other physicist, established the conceptual background on symmetry principles that enabled Lee and Yang to clarify the possibilities of experimentally testing parity non-conservation. Klein also analyses Lee and Yang’s award-winning Physical Review paper in some detail and briefly mentions subsequent articles of theirs as well as papers by two future Nobel laureates – Lev Landau and Abdus Salam.

Klein does not end his report with an explicit recommendation, but identifies Lee, Yang and Wigner as having made the most important contributions. It is noteworthy that every physicist mentioned in Klein’s report – apart from Wu – eventually went on to receive a Nobel Prize for Physics. Wigner did not have to wait long, winning the 1963 prize together with Maria Goeppert Mayer and Hans Jensen, who had also been nominated in 1957.

As for Hulthén’s experimental report, it acknowledges that Wu’s experiment started after early discussions with Lee and Yang. In fact, Lee had consulted Wu at Columbia on the subject of parity conservation in beta-decay before Lee and Yang’s famous paper was published. According to Wu, she mentioned to Lee that the best way would be to use a polarized cobalt-60 source for testing the assumption of parity violation in beta-decay.

Many physicists were aware of Lee and Yang’s paper, but it was widely seen as highly speculative; Wu, however, recognized the opportunity to test the far-reaching consequences of parity violation. Since she was not a specialist in low-temperature nuclear alignment, she contacted Ernest Ambler at the National Bureau of Standards in Washington DC, who was a co-author on her Physical Review paper of 15 February 1957.

Hulthén describes in detail the severe technical challenges that Wu’s team had to overcome to carry out the experiment. These included achieving an exceptionally low temperature of 0.001 K, placing the detector inside the cryostat, and mitigating perturbations from the crystalline field that weakened the magnetic field’s effectiveness.

Despite these difficulties, the experimentalists managed to obtain a first indication of parity violation, which they presented on 4 January 1957 at the regular lunch that took place at Columbia every Friday. The news of these preliminary results spread like wildfire throughout the physics community, prompting other groups to follow suit immediately.

Hulthén mentions, for example, a measurement of the magnetic moment of the mu (μ) meson (now known as the muon) that Richard Garwin, Leon Lederman and Marcel Weinrich performed at Columbia’s cyclotron almost as soon as Lederman had obtained information about Wu’s work. He also cites work at the University of Leiden in the Netherlands led by C J Gorter that apparently had started to look into parity violation independently of Wu’s experiment (Physica 23 259).

Wu’s nominations

It is clear from Hulthén’s report that the Nobel Physics Committee was well informed about the experimental work carried out in the wake of Lee and Yang’s paper of October 1956, in particular the groundbreaking results of Wu. However, it is not clear from a subsequent report dated 20 September 1957 (see box below) from the Nobel Committee why Wigner was not awarded a share of the 1957 prize, despite his seven nominations. Nor is there any suggestion of postponing the prize a year in order to include Wu. The report was discussed on 23 October 1957 by members of the “Physics Class” – a group of physicists in the academy who always consider the committee’s recommendations – who unanimously endorsed it.

The Nobel Committee report of 1957

Sheet from a Nobel report written on 20 September 1957 by the Nobel Committee for Physics
(Courtesy: The Nobel Archive, The Royal Swedish Academy of Sciences, Stockholm)

This image is the final page of a report written on 20 September 1957 by the Nobel Committee for Physics about who should win the 1957 Nobel Prize for Physics. Dated 20 September 1957 and published here for the first time since it was written, the English translation is as follows. “Although much experimental and theoretical work remains to be done to fully clarify the necessary revision of the parity principle, it can already be said that a discovery with extremely significant consequences has emerged as a result of the above-mentioned study by Lee and Yang. In light of the above, the committee proposes that the 1957 Nobel Prize in Physics be awarded jointly to: Dr T D Lee, New York, and Dr C N Yang, Princeton, for their profound investigation of the so-called parity laws, which has led to the discovery of new properties of elementary particles.” The report was signed by Manne Siegbahn (chair), Gudmund Borelius, Erik Hulthén, Oskar Klein, Erik Rudberg and Ivar Waller.

Most noteworthy with regard to this meeting of the Physics Class was that Meitner – who had also been overlooked for the Nobel prize – took part in the discussions. Meitner, who was Austrian by birth, had been elected a foreign member of the Royal Swedish Academy of Sciences in 1945, becoming a “Swedish member” after taking Swedish citizenship in 1951. In the wake of these discussions, the academy decided on 31 October 1957 to award the 1957 Nobel Prize for Physics to Lee and Yang. We do not know, though, if Meitner argued for Wu to be awarded a share of that year’s prize.


Although Wu did not receive any nominations in 1957, she was nominated the following year by the 1955 Nobel laureates in physics, Willis Lamb and Polykarp Kusch. In fact, after Lee and Yang won the prize, nominations to give a Nobel prize to Wu reached the committee on 10 separate years out of the next 16 (see graphic below). She was nominated by a total of 18 leading physicists, including various Nobel-prize winners and Lee himself. In fact, Lee nominated Wu for a Nobel prize on three separate occasions – in 1964, 1971 and 1972.

However, it appears she was never nominated by Yang (at the time of writing, we only have archive information up to 1974). One reason for Lee’s support and Yang’s silence could be the early discussions that Lee had with Wu, which influenced the famous Lee and Yang paper and of which Yang may not have been aware. It is also not clear why Lee and Yang never acknowledged their discussion with Wu about the cobalt-60 experiment that was proposed in their paper; further research may shed more light on this topic.

Following Wu’s nomination in 1958, the Nobel Committee simply re-examined the investigations already carried out by Klein and Hulthén. The same procedure was repeated in subsequent years, but no new investigations into Wu’s work were carried out until 1971 when she received six nominations – the highest number she got in any one year.

Nominations for Wu from 1958 to 1974

Diagram showing the nominations for Wu from 1958 to 1974
(Courtesy: IOP Publishing)

Our examination of the newly released Nobel archive from 1957 indicates that although Chien-Shiung Wu was not nominated for that year’s prize, which was won by Chen Ning Yang and Tsung-Dao Lee, she did receive a total of 23 nominations over the next 16 years (1974 being the last open archive at the time of writing). Those 23 nominations were made by 18 different physicists, with Lee nominating Wu three times and Herwig Schopper, Emilio Segrè and Ryoya Utiyama each doing so twice. The peak year for nominations for her was 1971 when she received six nominations. The archives also show that in October 1957 Werner Heisenberg submitted a nomination for Lee (but not Yang); it was registered as a nomination for 1958. The nomination is very short and it is not clear why Heisenberg did not nominate Yang.

In 1971 the committee decided to ask Bengt Nagel, a theorist at KTH Royal Institute of Technology, to investigate the theoretical importance of Wu’s experiments. The nominations she received for the Nobel prize concerned three experiments. In addition to her 1957 paper on parity violation there was a 1949 article she’d written with her Columbia colleague R D Albert verifying Enrico Fermi’s theory of beta decay (Phys. Rev. 75 315) and another she wrote in 1963 with Y K Lee and L W Mo on the conserved vector current, which is a fundamental hypothesis of the Standard Model of particle physics (Phys. Rev. Lett. 10 253).

After pointing out that four of the 1971 nominations came from Wu’s colleagues at Columbia, which to us may have hinted at a kind of lobbying campaign for her, Nagel stated that the three experiments had “without doubt been of great importance for our understanding of the weak interaction”. However, he added, “the experiments, at least the last two, have been conducted to certain aspects as commissioned or direct suggestions of theoreticians”.

In Nagel’s view, Wu’s work therefore differed significantly from, for example, James Cronin and Val Fitch’s famous discovery in 1964 of charge-parity (CP) violation in the decay of neutral K mesons. They had made their discovery under their own steam, whereas (Nagel suggested) Wu’s work had been carried out only after being suggested by theorists. “I feel somewhat hesitant whether their theoretical importance is a sufficient motivation to render Wu the Nobel prize,” Nagel concluded.

Missed opportunity

The Nobel archives are currently not open beyond 1974, so we don’t know if Wu received any further nominations over the next 23 years until her death in 1997. Of course, had Wu not carried out her experimental test of parity violation, it is perfectly possible that another physicist or group of physicists would have done something similar in due course.

Nevertheless, to us it was a missed opportunity not to include Wu as the third prize winner alongside Lee and Yang. Sure, she could not have won the prize in 1957 as she was not nominated for it and her key publication did not appear before the January deadline. But it would simply have been a case of waiting a year and giving Wu and her theoretical colleagues the prize jointly in 1958.

Another possible course of action would have been to single out the theoretical aspects of symmetry violation and award the prize to Lee, Wigner and Yang, as Klein had suggested in his report. Unfortunately, full details of the physics committee’s discussions are not contained in the archives, which means we don’t know if this was a genuine possibility being considered at the time.

But what is clear is that the Nobel committee knew full well the huge importance of Wu’s experimental confirmation of parity violation following the bold theoretical insights of Lee and Yang. Together, their work opened a new chapter in the world of physics. Without Wu’s interest in parity violation and her ingenious experimental knowledge, Lee and Yang would never have won the Nobel prize.

The post Twenty-three nominations, yet no Nobel prize: how Chien-Shiung Wu missed out on the top award in physics appeared first on Physics World.


Multi-ion cancer therapy tackles the LET trilemma

Cancer treatments using heavy ions offer several key advantages over conventional proton therapy: a sharper Bragg peak and small lateral scattering for precision tumour targeting, as well as high linear energy transfer (LET). High-LET radiation induces complex DNA damage in cancer cells, enabling effective treatment of even hypoxic, radioresistant tumours. A team at the National Institutes for Quantum Science and Technology (QST) in Japan is now exploring the potential benefits of multi-ion therapy combining beams of carbon, oxygen and neon ions.

“Different ions exhibit distinct physical and biological characteristics,” explains QST researcher Takamitsu Masuda. “Combining them in a way that is tailored to the specific characteristics of a tumour and its environment allows us to enhance tumour control while reducing damage to surrounding healthy tissues.”

The researchers are using multi-ion therapy to increase the dose-averaged LET (LETd) within the tumour, performing a phase I trial at the QST Hospital to evaluate the safety and feasibility of this LETd escalation for head-and-neck cancers. But while high LETd prescriptions can improve treatment efficacy, increasing LETd can also degrade plan robustness. This so-called “LET trilemma” – a complex trade-off between target dose homogeneity, range robustness and high LETd – is a major challenge in particle therapy optimization.
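For reference, the dose-averaged LET at a given point – a quantity the article uses throughout but does not define – is conventionally the LET of each contribution weighted by the dose it deposits: LETd = Σi(di × Li) / Σi di, where di and Li are the dose and LET contributed by the i-th particle or beam component. Raising LETd in the tumour therefore means shifting dose towards higher-LET contributions, which is what adding oxygen- and neon-ion beams to a carbon-ion plan makes possible.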

In their latest study, reported in Physics in Medicine & Biology, Masuda and colleagues evaluated the impact of range and setup uncertainties on LETd-optimized multi-ion treatment plans, examining strategies that could potentially overcome this LET trilemma.

Robustness evaluation

The team retrospectively analysed the data of six patients who had previously been treated with carbon-ion therapy. Patients 1, 2 and 3 had small, medium and large central tumours, respectively, and adjacent dose-limiting organs-at-risk (OARs); and patients 4, 5 and 6 had small, medium and large peripheral tumours and no dose-limiting OARs.

Multi-ion therapy plans Reference dose and LETd distributions for patients 1, 2 and 3 for multi-ion therapy with a target LETd of 90 keV/µm. The GTV, clinical target volume (CTV) and OARs are shown in cyan, green and magenta, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

For each case, the researchers first generated baseline carbon-ion therapy plans and then incorporated oxygen- or neon-ion beams and tuned the plans to achieve a target LETd of 90 keV/µm to the gross tumour volume (GTV).

Particle therapy plans can be affected by both range uncertainties and setup variations. To assess the impact of these uncertainties, the researchers recalculated the multi-ion plans to incorporate range deviations of +2.5% (overshoot) and –2.5% (undershoot) and various setup uncertainties, evaluating their combined effects on dose and LETd distributions.
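In code, this kind of scenario-based robustness evaluation boils down to recalculating the plan for each error scenario and taking the envelope of the results. The sketch below illustrates only that bookkeeping – the dose “engine”, voxel doses and scenario magnitudes are crude stand-ins, not the QST team’s treatment-planning system.

```python
import numpy as np

# Schematic robustness evaluation: recompute the target dose for each
# error scenario and report the envelope (uncertainty band). The dose
# "engine" below is a crude stand-in for a real recalculation.

def recalc_dose(nominal, range_error, setup_shift_mm):
    """Hypothetical surrogate for a full dose recalculation."""
    # range errors scale the dose; setup shifts add a smaller perturbation
    return nominal * (1.0 + 2.0 * range_error + 0.005 * setup_shift_mm)

nominal_dose = np.full(1000, 60.0)                   # hypothetical GTV voxel doses (Gy)
range_errors = (-0.025, 0.0, +0.025)                 # undershoot, nominal, overshoot
setup_shifts = (-2.0, 0.0, +2.0)                     # mm

doses = np.array([recalc_dose(nominal_dose, re, sh)
                  for re in range_errors for sh in setup_shifts])
lower, upper = doses.min(axis=0), doses.max(axis=0)  # per-voxel envelope

band = 100 * (upper - lower).mean() / nominal_dose.mean()
print(f"mean uncertainty band: {band:.1f}% of the nominal dose")
```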

They found that range uncertainty was the main contributor to degraded plan quality. In general, range overshoot increased the dose to the target, while undershoot decreased it. Range uncertainties had the largest effect on small and central tumours: patient 1 exhibited a dose deviation of around ±6% from the reference, while patient 3 showed a deviation of just ±1%. Robust target coverage was maintained in all large or peripheral tumours, but deteriorated in patient 1, leading to an uncertainty band of roughly 11%.

“Wide uncertainty bands indicate a higher risk that the intended dose may not be accurately delivered,” Masuda explains. “In particular, a pronounced lower band for the GTV suggests the potential for cold spots within the tumour, which could compromise local tumour control.”

The team also observed that range undershoot increased LETd and overshoot decreased it, although absolute differences in LETd within the entire target were small. Importantly, all OAR dose constraints were satisfied even in the largest error scenarios, with uncertainty bands comparable to those of conventional carbon-ion treatment plans.

Addressing the LET trilemma

To investigate strategies to improve plan robustness, the researchers created five new plans for patient 1, who had a small, central tumour that was particularly susceptible to uncertainties. They modified the original multi-ion plan (carbon- and oxygen-ion beams delivered at 70° and 290°) in five ways: expanding the target; altering the beam angles to orthogonal or opposing arrangements; increasing the number of irradiation fields to a four-field arrangement; and using oxygen ions for both beam ports (“heavier-ion selection”).

The heavier-ion selection plan proved the most effective in mitigating the effects of range uncertainty, substantially narrowing the dose uncertainty bands compared with the original plan. The team attribute this to the inherently higher LETd in heavier ions, making the 90 keV/µm target easier to achieve with oxygen-ion beams alone. The other plan modifications led to limited improvements.

Improving robustness Dose–volume histograms for patient 1, for the original multi-ion plan and the heavier-ion selection plan, showing the combined effects of range and setup uncertainties. Solid, dashed and dotted curves represent the reference plans, and upper and lower uncertainty scenarios, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

These findings suggest that strategically employing heavier ions to enhance plan robustness could help control the balance among range robustness, uniform dose and high LETd – potentially offering a practical strategy to overcome the LET trilemma.

“Clinically, this strategy is particularly well-suited for small, deep-seated tumours and complex, variable sites such as the nasal cavity, where range uncertainties are amplified by depth, steep dose gradients and daily anatomical changes,” says Masuda. “In such cases, the use of heavier ions enables robust dose delivery with high LETd.”

The researchers are now exploring the integration of emerging technologies – such as robust optimization, arc therapy, dual-energy CT, in-beam PET and online adaptation – to minimize uncertainties. “This integration is highly desirable for applying multi-ion therapy to challenging cases such as pancreatic cancer, where uncertainties are inherently large, or hypofractionated treatments, where even a single error can have a significant impact,” Masuda tells Physics World.

The post Multi-ion cancer therapy tackles the LET trilemma appeared first on Physics World.


New project takes aim at theory-experiment gap in materials data

Condensed-matter physics and materials science have a silo problem. Although researchers in these fields have access to vast amounts of data – from experimental records of crystal structures and conditions for synthesizing specific materials to theoretical calculations of electron band structures and topological properties – these datasets are often fragmented. Integrating experimental and theoretical data is a particularly significant challenge.

Researchers at the Beijing National Laboratory for Condensed Matter Physics and the Institute of Physics (IOP) of the Chinese Academy of Sciences (CAS) recently decided to address this challenge. Their new platform, MaterialsGalaxy, unifies data from experiment, computation and scientific literature, making it easier for scientists to identify previously hidden relationships between a material’s structure and its properties. In the longer term, their goal is to establish a “closed loop” in which experimental results validate theory and theoretical calculations guide experiments, accelerating the discovery of new materials by leveraging modern artificial intelligence (AI) techniques.

Physics World spoke to team co-leader Quansheng Wu to learn more about this new tool and how it can benefit the materials research community.

How does MaterialsGalaxy work?

The platform works by taking the atomic structure of materials and mathematically mapping it into a vast, multidimensional vector space. To do this, every material – regardless of whether its structure is known from experiment, from a theoretical calculation or from simulation – must first be converted into a unique structural vector that acts like a “fingerprint” for the material.

Then, when a MaterialsGalaxy user focuses on a material, the system automatically identifies its nearest neighbours in this vector space. This allows users to align heterogeneous data – for example, linking a synthesized crystal in one database with its calculated topological properties in another – even when different data sources define the material slightly differently.

The vector-based approach also enables the system to recommend “nearest neighbour” materials (analogs) to fill knowledge gaps, effectively guiding researchers from known data into unexplored territories. It does this by performing real-time vector similarity searches to dynamically link relevant experimental records, theoretical calculations and literature information. The result is a comprehensive profile for the material.
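The core mechanics described above – reduce every entry to a fixed-length fingerprint vector, then rank other entries by vector similarity – can be sketched in a few lines. The toy example below is illustrative only: the fingerprints are random placeholders rather than real structural descriptors, the entry names are invented, and cosine similarity is just one common choice of metric (MaterialsGalaxy’s actual featurization and search backend may differ).

```python
import numpy as np

# Toy illustration of fingerprint-based material matching: every entry,
# whatever its source, is mapped to a fixed-length vector, and a query
# returns its nearest neighbours in that vector space.

rng = np.random.default_rng(0)
DIM = 64  # length of the (hypothetical) structural fingerprint

database = {                                   # invented entry names
    "experiment/entry-001": rng.normal(size=DIM),
    "theory/entry-002":     rng.normal(size=DIM),
    "literature/entry-003": rng.normal(size=DIM),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbours(query_vec, db, k=2):
    """Rank database entries by similarity to the query fingerprint."""
    return sorted(db.items(),
                  key=lambda item: cosine_similarity(query_vec, item[1]),
                  reverse=True)[:k]

query = rng.normal(size=DIM)                   # fingerprint of the material being viewed
for name, vec in nearest_neighbours(query, database):
    print(name, round(cosine_similarity(query, vec), 3))
```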

Where does data for MaterialsGalaxy come from?

We aggregated data from three primary channels: public databases; our institute’s own high-quality internal experimental records (known as the MatElab platform); and the scientific literature. All data underwent rigorous standardization using tools such as the pymatgen (Python Materials Genomics) materials analysis code and the spglib crystal structure library to ensure consistent definitions for crystal structures and physical properties.

Who were your collaborators on this project?

This project is a multi-disciplinary effort involving a close-knit collaboration among several research groups at the IOP, CAS and other leading institutions. My colleague Hongming Weng and I supervised the core development and design under the strategic guidance of Zhong Fang, while Tiannian Zhu (the lead author of our Chinese Physics B paper about MaterialsGalaxy) led the development of the platform’s architecture and core algorithms, as well as its technical implementation.

We enhanced the platform’s capabilities by integrating several previously published AI-driven tools developed by other team members. For example, Caiyuan Ye contributed the Con-CDVAE model for advanced crystal structure generation, while Jiaxuan Liu contributed VASPilot, which automates and streamlines first-principles calculations. Meanwhile, Qi Li contributed PXRDGen, a tool for simulating and generating powder X-ray diffraction patterns.

Finally, much of the richness of MaterialsGalaxy stems from the high-quality data it contains. This came from numerous collaborators, including Weng (who contributed the comprehensive topological materials database, Materiae), Youguo Shi (single-crystal growth), Shifeng Jin (crystal structure and diffraction), Jinbo Pan (layered materials), Qingbo Yan (2D ferroelectric materials), Yong Xu (nonlinear optical materials), and Xingqiu Chen (topological phonons). My own contribution was a library of AI-generated crystal structures produced by the Con-CDVAE model.

What does MaterialsGalaxy enable scientists to do that they couldn’t do before?

One major benefit is that it prevents researchers from becoming stalled when data for a specific material is missing. By leveraging the tool’s “structural analogs” feature, they can look to the properties or growth paths of similar materials for insights – a capability not available in traditional, isolated databases.

We also hope that MaterialsGalaxy will offer a bridge between theory and experiment. Traditionally, experimentalists tend to consult the Inorganic Crystal Structure Database while theorists check the Materials Project. Now, they can view the entire lifecycle of a material – from how to grow a single crystal (experiment) to its topological invariants (theory) – on a single platform.

Beyond querying known materials, MaterialsGalaxy also allows researchers to use integrated generative AI models to create new structures. These can be immediately compared against the known database to assess synthesis feasibility and potential performance throughout the “vertical comparison” workflow.

What do you plan to do next?

We’re focusing on enhancing the depth and breadth of the tool’s data fusion. For example, we plan to develop representations based on graph neural networks (GNNs) to better handle experimental data that may contain defects or disorder, thereby improving matching accuracy.

We’re also interested in moving beyond crystal structure by introducing multi-modal anchors such as electronic band structures, X-ray diffraction (XRD) patterns and spectroscopic data. To do this, we plan to utilize technologies derived from CLIP (contrastive language-image pre-training) to enable cross-modal retrieval, for example searching for theoretical band data by uploading an experimental XRD pattern.

Separately, we want to continue to expand our experimental data coverage, specifically targeting synthesis recipes and “failed” experimental records, which are crucial for training the next generation of “AI-enabled” scientists. Ultimately, we plan to connect an even wider array of databases, establishing robust links between them to realize a true Materials Galaxy of interconnected knowledge.

The post New project takes aim at theory-experiment gap in materials data appeared first on Physics World.


The pros and cons of patenting

For any company or business, it’s important to recognize and protect intellectual property (IP). In the case of novel inventions, which can include machines, processes and even medicines, a patent offers IP protection and lets firms control how those inventions are used. Patents, which in most countries can be granted for up to 20 years, give the owner exclusive rights so that others can’t directly copy the creation. A patent essentially prevents others from making, using or selling your invention.

But there are more reasons for holding a patent than IP protection alone. In particular, patents go some way to protecting the investment that may have been necessary to generate the IP in the first place, such as the cost of R&D facilities, materials, labour and expertise. Those factors need to be considered when you’re deciding if patenting is the right approach or not.


Patents are in effect a form of currency. Counting as tangible assets that add to the overall value of a company, they can be sold to other businesses or licensed for royalties to provide regular income. Some companies, in fact, build up or acquire significant patent portfolios, which can be used for bargaining with competitors, potentially leading to cross-licensing agreements where both parties agree to use each other’s technology.

Patents also say something about the competitive edge of a company, by demonstrating technical expertise and market position through the control of a specific technology. Essentially, patents give credibility to a company’s claims of technical know-how: a patent shows investors that a firm has a unique, protected asset, making the business more attractive to further investment.

However, it’s not all one-way traffic and there are obligations on the part of the patentee. Firstly, a patent holder has to reveal to the world exactly how their invention works. Governments favour this kind of public disclosure as it encourages broader participation in innovation. The downside is that whilst your competitors cannot directly copy you, they can enhance and improve upon your invention, provided those changes aren’t covered by the original patent.

It’s also worth bearing in mind that a patent holder is responsible for patent enforcement and any ensuing litigation; a patent office will not do this for you. So you’ll have to monitor what your competitors are up to and decide on what course of action to take if you suspect your patent’s been infringed. Trouble is, it can sometimes be hard to prove or disprove an infringement – and getting the lawyers in can be expensive, even if you win.

Money talks

Probably the biggest consideration of all is the cost and time involved in making a patent application. Filing a patent requires a rigorous understanding of “prior art” – the existing body of relevant knowledge on which novelty is judged. You’ll therefore need to do a lot of work finding out about relevant established patents, any published research and journal articles, along with products or processes publicly disclosed before the patent’s filing date.

Before it can be filed with a patent office, a patent needs to be written up as a legal description, which includes an abstract, background, detailed specifications, drawings and the claims of the invention. Once filed, an expert in the relevant technical field will be assigned to assess the worth of the claim; this examiner must be satisfied that the application is both novel and “non-obvious” before it’s granted.

Even when the invention is judged to be technically novel, in order to be non-obvious, it must also involve an “inventive step” that would not be obvious to a person with “ordinary skill” in that technical field at the time of filing. The assessment phase can result in significant to-ing and fro-ing between the examiner and the applicant to determine exactly what is patentable. If insufficient evidence is found, the patent application will be refused.

Patents are only ever granted in a particular country or region, such as Europe, and the application process has to be repeated for each new place (although the information required is usually pretty similar). Translations may be required for some countries, there are fees for each application and, even if a patent is granted, you have to pay an additional annual bill to maintain the patent (which in the UK rises year on year).


Patent applications, in other words, can be expensive and can take years to process. That’s why many companies pay specialized firms to support their patent applications. Those firms employ patent attorneys – legal experts with a technical background who help inventors and companies manage their IP rights by drafting patent applications, navigating patent office procedures and advising on IP strategy. Attorneys can also represent their clients in disputes or licensing deals, thereby acting as a crucial bridge between science/engineering and law.

Perspiration and aspiration

It’s impossible to write about patents without mentioning the impact that Thomas Edison had as an inventor. During the 20th century, he became the world’s most prolific inventor with a staggering 1093 US patents granted in his lifetime. This monumental achievement remained unsurpassed until 2003, when it was overtaken by the Japanese inventor Shunpei Yamazaki and, more recently, by the Australian “patent titan” Kia Silverbrook in 2008.

Edison clearly saw there was a lot of value in patents, but how did he achieve so much? His approach was grounded in systematic problem solving, which he accomplished through his Menlo Park lab in New Jersey. Dedicated to technological development and invention, it was effectively the world’s first corporate R&D lab. And whilst Edison’s name appeared on all the patents, they were often primarily the work of his staff; he was effectively being credited for inventions made by his employees.


I will be honest; I have a love–hate relationship with patents or at least the process of obtaining them. As a scientist or engineer, it’s easy to think all the hard work is getting an invention over the line, slogging your guts out in the lab. But applying for a patent can be just as expensive and time-consuming, which is why you need to be clear on what and when to patent. Even Edison grew tired of being hailed a genius, stating that his success was “1% inspiration and 99% perspiration”.

Still, without the sweat of patents, your success might be all but 99% aspiration.

The post The pros and cons of patenting appeared first on Physics World.


Practical impurity analysis for biogas producers

Biogas is a renewable energy source formed when bacteria break down organic materials such as food waste, plant matter, and landfill waste in an oxygen‑free (anaerobic) process. It contains methane and carbon dioxide, along with trace amounts of impurities. Because of its high methane content, biogas can be used to generate electricity and heat, or to power vehicles. It can also be upgraded to almost pure methane, known as biomethane, which can directly replace natural fossil gas.

Strict rules apply to the amount of impurities allowed in biogas and biomethane, as these contaminants can damage engines, turbines, and catalysts during upgrading or combustion. EN 16723 is the European standard that sets maximum allowable levels of siloxanes and sulfur‑containing compounds for biomethane injected into the natural gas grid or used as vehicle fuel. These limits are extremely low, meaning highly sensitive analytical techniques are required. However, most biogas plants do not have the advanced equipment needed to measure these impurities accurately.

Researchers from the Paul Scherrer Institute, Switzerland: Julian Indlekofer (left) and Ayush Agarwal (right), with the Liquid Quench Sampling System (Courtesy: Markus Fischer/Paul Scherrer Institute PSI)

Researchers at the Paul Scherrer Institute in Switzerland have developed a new, simpler method to sample and analyse biogas using GC-ICP-MS. Gas chromatography (GC) separates the chemical compounds in a gas mixture based on how quickly they travel through a column; inductively coupled plasma mass spectrometry (ICP-MS) then detects the elements within those compounds at very low concentrations. Crucially, this combined method can measure both siloxanes and sulfur compounds simultaneously. It avoids the matrix effects that can limit other detectors and cause biased or ambiguous results, and it achieves the very low detection limits required by EN 16723.

The sampling approach and centralized measurement enable biogas plants to meet regulatory standards using an efficient, less complex, and more cost-effective method with fewer errors. Overall, this research provides a practical, high-accuracy tool that makes reliable biogas impurity monitoring accessible to plants of all sizes, strengthening biomethane quality, protecting infrastructure, and accelerating the transition to cleaner energy systems.

Read the full article

Sampling to analysis: simultaneous quantification of siloxanes and sulfur compounds in biogas for cleaner energy

Ayush Agarwal et al 2026 Prog. Energy 8 015001

Do you want to learn more about this topic?

Household biogas technology in the cold climate of low-income countries: a review of sustainable technologies for accelerating biogas generation Sunil Prasad Lohani et al. (2024)

The post Practical impurity analysis for biogas producers appeared first on Physics World.


Cavity-based X-ray laser delivers high-quality pulses

Physicists in Germany have created a new type of X-ray laser that uses a resonator cavity to improve the output of a conventional X-ray free electron laser (XFEL). Their proof-of-concept design delivers X-ray pulses that are more monochromatic and coherent than those from existing XFELs.

In recent decades, XFELs have delivered pulses of monochromatic and coherent X-rays for a wide range of science including physics, chemistry, biology and materials science.

Despite their name, XFELs do not work like conventional lasers. In particular, there is no gain medium or resonator cavity. Instead, XFELs rely on the fact that when a free electron is accelerated, it will emit electromagnetic radiation. In an XFEL, pulses of high-energy electrons are sent through an undulator, which deflects the electrons back and forth. These wiggling electrons radiate X-rays at a specific energy. As the X-rays and electrons travel along the undulator, they interact in such a way that the emitted X-ray pulse has a high degree of coherence.

While these XFELs have proven very useful, they do not deliver radiation that is as monochromatic or as coherent as radiation from conventional lasers. One reason why conventional lasers perform better is that the radiation is reflected back and forth many times in a mirrored cavity that is tuned to resonate at a specific frequency – whereas XFEL radiation only makes one pass through an undulator.

Practical X-ray cavities, however, are difficult to create. This is because X-rays penetrate deep into materials, where they are usually absorbed – making reflection with conventional mirrors impossible.

Crucial overlap

Now, researchers working at the European XFEL at DESY in Germany have created a proof-of-concept hybrid system that places an undulator within a mirrored resonator cavity. X-ray pulses that are created in the undulator are directed at a downstream mirror and reflected back to a mirror upstream of the undulator. The X-ray pulses are then reflected back downstream through the undulator. Crucially, a returning X-ray pulse overlaps with a subsequent electron pulse in the undulator, amplifying the X-ray pulse. As a result, the X-ray pulses circulating within the cavity quickly become more monochromatic and more coherent than pulses created by an undulator alone.

The team solved the mirror challenge by using diamond crystals that achieve the Bragg reflection of X-rays with a specific frequency. These are used at either end of the cavity in conjunction with Kirkpatrick–Baez mirrors, which help focus the reflected X-rays back into the cavity.

Some of the X-ray radiation circulating in the cavity is allowed to escape downstream, providing a beam of monochromatic and coherent X-ray pulses. The researchers call their system an X-ray free-electron laser oscillator (XFELO). The cavity is about 66 m long.
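A quick back-of-the-envelope calculation shows why the cavity length matters: a 66 m cavity gives a round-trip path of roughly 132 m, which light covers in about 132 m ÷ (3 × 10⁸ m/s) ≈ 0.44 µs. For each returning X-ray pulse to meet a fresh electron bunch in the undulator, the spacing between successive bunches from the accelerator therefore has to be matched to this round-trip time (or an integer fraction of it).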

Narrow frequency range

DESY accelerator scientist Patrick Rauer explains, “With every round trip, the noise in the X-ray pulse gets less and the concentrated light more defined”. Rauer pioneered the design of the cavity in his PhD work and is now the DESY lead on its implementation. “It gets more stable and you start to see this single, clear frequency – this spike.” Indeed, the frequency width of XFELO X-ray pulses is about 1% that of pulses created by the undulators alone.

Ensuring the overlap of electron and X-ray pulses within the cavity was also a significant challenge. This required a high degree of stability within the accelerator that provides electron pulses to the XFELO. “It took years to bring the accelerator to that state, which is now unique in the world of high-repetition-rate accelerators”, explains Rauer.

Team member Harald Sinn says, “The successful demonstration shows that the resonator principle is practical to implement”. Sinn is head of European XFEL’s instrumentation department and he adds, “In comparison with methods used up to now, it delivers X-ray pulses with a very narrow wavelength as well as a much higher stability and coherence.”

The team will now work towards improving the stability of XFELO so that in the future it can be used to do experiments by European XFEL’s research community.

XFELO is described in Nature.

The post Cavity-based X-ray laser delivers high-quality pulses appeared first on Physics World.

  •  

The physics of an unethical daycare model that uses illness to maximize profits

When I had two kids going through daycare, or nursery as we call it in the UK, every day seemed like a constant fight with germs and illness. After all, at such a young age kids still have a developing immune system and are not exactly hot on personal hygiene.

A similar dilemma faces mathematician Lauren Smith of the University of Auckland, whose two children attend a “wonderful daycare centre” but often fall ill. As many parents juggling work and childcare will understand, Smith frequently has to judge whether her kids are well enough to attend.

Smith then wondered how an unethical daycare centre might exploit this to maximize its profits, under the assumption that absent children still pay fees, while staff are sent home without pay when too few children attend, and receive no sick pay themselves.

“It occurred to me that a sick kid attending daycare could actually be financially beneficial to the centre, while clearly being a detriment to the wellbeing of the other children as well as the staff and the broader community,” Smith told Physics World.

For a hypothetical daycare centre that is solely focused on making as much money as possible, Smith realized that neither extreme is financially optimal: full attendance of sick children requires maximal staffing at all times, while zero attendance of sick children gives the disease no opportunity to spread, so no other children are sent home.

But in between these two extremes, Smith thought there should be an optimal attendance rate so that the disease is still able to spread and some children – and staff – are sent home. “As a mathematician I knew I had the tools to find it,” adds Smith.

Model behaviour

Using the so-called susceptible–infected–recovered (SIR) model for 100 children, with a teacher-to-child ratio of 1:6 and a recovery time of 10 days, Smith found that the more infectious the disease, the lower the optimal attendance rate for sick children, and so the greater the savings the unethical daycare centre can make.

In other words, the more infectious a disease, the fewer ill children need to attend to spread it around, so the centre can keep more of them (and, importantly, more staff) at home while still making sure the infection reaches the non-infected kids.

For a measles outbreak with a basic reproduction number of 12–18, for example, the model yielded a potential saving of 90 staff working days, whereas for seasonal flu, with a basic reproduction number of 1.2–1.3, the potential saving is 4.4 days.
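
For readers who want to play with the idea, here is a bare-bones sketch of an SIR-type calculation in Python using the headline parameters above (100 children, a 1:6 teacher-to-child ratio, a 10-day recovery time). The transmission rate, the simple staffing rule and the attendance fractions are illustrative assumptions for this sketch, not Smith's actual model.

```python
import numpy as np

# Toy SIR model of a daycare outbreak (illustrative; not Smith's actual model).
# Parameters from the article: 100 children, 1:6 teacher-to-child ratio,
# 10-day recovery time. The transmission rate and the attendance rule for
# sick children are assumptions made for this sketch.

N = 100                 # number of children
gamma = 1 / 10          # recovery rate (10-day recovery time)
R0 = 12                 # measles-like basic reproduction number
beta = R0 * gamma       # transmission rate among attending children

def staff_days_saved(attendance, days=60, dt=0.5):
    """Crude estimate of staff-days saved when a fraction `attendance`
    of infected children keep coming to daycare."""
    S, I, R = N - 1.0, 1.0, 0.0
    full_staff = np.ceil(N / 6)               # teachers needed at full attendance
    saved = 0.0
    for _ in range(int(days / dt)):
        present = S + R + attendance * I      # children on site
        staff_needed = np.ceil(present / 6)
        saved += (full_staff - staff_needed) * dt   # unpaid staff-days accrued
        # transmission only involves infected children who actually attend
        new_inf = min(beta * S * (attendance * I) / N * dt, S)
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return saved

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"attendance of sick children = {a:.2f}: "
          f"~{staff_days_saved(a):.1f} staff-days saved")
```

Even in a toy model like this, the qualitative point comes through: the extremes (no sick children attending, or all of them attending) save essentially nothing, while intermediate attendance rates let the infection spread and keep part of the workforce unpaid at home.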

Smith writes in the paper that the work is “not intended as a recipe” for an unethical daycare centre, but rather illustrates the financial incentive that exists for daycare centres to propagate diseases among children, which would lead to more infections of at-risk populations in the wider community.

“I hope that as well as being an interesting topic, it can show that mathematics itself is interesting and is useful for describing the real world,” adds Smith.

The post The physics of an unethical daycare model that uses illness to maximize profits appeared first on Physics World.

  •  

Saving the Titanic: the science of icebergs and unsinkable ships

When the Titanic was built, her owners famously described her as “unsinkable”. A few days into her maiden voyage, an iceberg in the North Atlantic proved them wrong. But what if we could make ships that really are unsinkable? And what if we could predict exactly how long a hazardous iceberg will last before it melts?

These are the premises of two separate papers published independently this week by Chunlei Guo and colleagues at the University of Rochester, and by Daisuke Noto and Hugo N Ulloa of the University of Pennsylvania, both in the US. The Rochester group’s paper, which appears in Advanced Functional Materials, describes how applying a superhydrophobic coating to an open-ended metallic tube can make it literally unsinkable – a claim supported by extensive tests in a water tank. Noto and Ulloa’s research, which they describe in Science Advances, likewise involved a water tank. Theirs, however, was equipped with cameras, lasers and thermochromic liquid crystals that enabled them to track a freely floating miniature iceberg as it melted.

Imagine a spherical iceberg

Each study is surprising in its own way. For the iceberg paper, arguably the biggest surprise is that no-one had ever done such experiments before. After all, water and ice are readily available. Fancy tanks, lasers, cameras and temperature-sensitive crystals are less so, yet surely someone, somewhere, must have stuck some ice in a tank and monitored what happened to it?

Noto and Ulloa’s answer is, in effect, no. “Despite the relevance of melting of floating ice in calm and energetic environments…most experimental and numerical efforts to examine this process, even to date, have either fixed or tightly constrained the position and posture of ice,” they write. “Consequently, the relationships between ice dissolution rate and background fluid flow conditions inferred from these studies are meaningful only when a one-way interaction, from the liquid to the solid phase, dominates the melting dynamics.”

The problem, they continue, is that eliminating these approximations “introduces a significant technical challenge for both laboratory experiments and numerical simulations” thanks to a slew of interactions that would otherwise get swept under the rug. These interactions, in turn, lead to complex dynamics such as drifting, spinning and even flipping that must be incorporated into the model. Consequently, they write, “fundamental questions persist: ‘How long does an ice body last?’”

  • Tracking a melting iceberg: This side view of the experiment shows fluid motions as moving particles and temperature distributions as colours of the thermochromic liquid crystal particles. A melt plume (dark colour) formed beneath the floating ice plunges down, penetrating through the thermally stratified layer (red: cold, blue: warm). Note: this video has no sound. (Courtesy: Noto and Ulloa, Science Advances 12 5 DOI: 10.1126/sciadv.ady352)

To answer this question, Noto and Ulloa used their water-tank observations (see video) to develop a model that incorporates the thermodynamics of ice melting and mass balance conservation. Based on this model, they correctly predict both the melting rate and the lifespan of freely floating ice under self-driven convective flows that arise from interactions between the ice and the calm, fresh water surrounding it. Though the behaviour of ice in tempestuous salty seas is, they write, “beyond our scope”, their model nevertheless provides a useful upper bound on iceberg longevity, with applications for climate modelling as well as (presumably) shipping forecasts for otherwise-doomed ocean liners.

The tube that would not sink

In the unsinkable tube study, the big surprise is that a metal tube, divided in the middle but open at both ends, can continue to float after being submerged, corroded with salt, tossed about on a turbulent sea and peppered with holes. How is that even possible?

“The inside of the tube is superhydrophobic, so water can’t enter and wet the walls,” Guo explains. “As a result, air remains trapped inside, providing buoyancy.”

Importantly, this buoyancy persists even if the tube is damaged. “When the tube is punctured, you can think of it as becoming two, three, or more smaller sections,” Guo tells Physics World. “Each section will work in the same way of preventing water from entering inside, so no matter how many holes you punch into it, the tube will remain afloat.”
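
The floating condition itself is a simple density comparison: as long as the superhydrophobic interior keeps the enclosed air dry, the whole outer volume of the tube counts towards displacement, and the tube floats whenever the mass of its aluminium walls is less than the mass of water that volume would displace. The sketch below runs the numbers for an arbitrary centimetre-scale tube; the dimensions are illustrative guesses, not the geometry tested in Rochester.

```python
import math

# Toy buoyancy check for an open aluminium tube whose superhydrophobic
# interior traps air. The dimensions below are assumptions for illustration,
# not the geometry used in the Rochester study.

rho_water = 1000.0    # kg/m^3
rho_al = 2700.0       # kg/m^3, ordinary aluminium

length = 0.05         # 5 cm tube
r_outer = 0.01        # 1 cm outer radius
wall = 0.001          # 1 mm wall thickness
r_inner = r_outer - wall

v_outer = math.pi * r_outer**2 * length                  # volume the tube occupies
v_shell = math.pi * (r_outer**2 - r_inner**2) * length   # aluminium walls only

mass_tube = rho_al * v_shell          # trapped air mass is negligible
max_buoyancy = rho_water * v_outer    # water displaced if the interior stays dry

print(f"tube mass:            {mass_tube * 1e3:.1f} g")
print(f"displaced water mass: {max_buoyancy * 1e3:.1f} g")
print("floats" if mass_tube < max_buoyancy else "sinks")
```

Punching holes barely changes the wall mass and, so long as the interior surfaces stay dry, leaves the displaced volume essentially unchanged, which is consistent with the behaviour Guo describes.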

So, is there anything that could make these superhydrophobic structures sink? “I can’t think of any realistic real-world challenges more severe than what we have put them through experimentally,” he says.

We aren’t in unsinkable ship territory yet: the largest structure in the Rochester study was a decidedly un-Titanic-like raft a few centimetres across. But Guo doesn’t discount the possibility. He points out that the tubes are made from ordinary aluminium, with a simple fabrication process. “If suitable applications call for it, I believe [human-scale versions] could become a reality within a decade,” he concludes.

The post Saving the Titanic: the science of icebergs and unsinkable ships appeared first on Physics World.

  •  

Scientists quantify behaviour of micro- and nanoplastics in city environments

Abundance and composition of atmospheric plastics
Measuring atmospheric plastics Abundance and composition of microplastics (MP) and nanoplastics (NP) in aerosols and estimated fluxes across atmospheric compartments in semiarid (Xi’an) and humid subtropical (Guangzhou) urban environments. (TSP: total suspended particles) (Courtesy: Institute of Earth Environment, CAS)

Plastic has become a global pollutant concern over the last couple of decades: it is widespread in society, not often disposed of effectively, and generates both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm) that have infiltrated many ecosystems – including being found inside humans and animals.

Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size has become a problem because they increasingly find their way into waterways and urban environments, polluting them, and are now even being transported to remote polar and high-altitude regions.

This poses potential health risks around the world. While the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, they are thought to be transported by transcontinental and transoceanic winds, drawing plastic into the global carbon cycle.

However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.

A global team of researchers has now developed a semi-automated microanalytical method that can quantify atmospheric plastic particles present in air, dustfall, rain, snow and resuspended dust. The research was performed across two Chinese megacities, Guangzhou and Xi’an.

“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”

Quantitative detection

For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.

“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”

The method could detect and quantify plastic particles as small as 200 nm, and revealed airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. For both microplastics and nanoplastics, these values are two to six orders of magnitude higher than the fluxes reported previously using visual methods.

The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.

The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.

“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”

The research was published in Science Advances.

The post Scientists quantify behaviour of micro- and nanoplastics in city environments appeared first on Physics World.

  •  

Michele Dougherty steps aside as president of the Institute of Physics

The space physicist Michele Dougherty has stepped aside as president of the Institute of Physics, which publishes Physics World. The move was taken to avoid any conflicts of interest given her position as executive chair of the Science and Technology Facilities Council (STFC) – one of the main funders of physics research in the UK.

Dougherty, who is based at Imperial College London, spent two years as IOP president-elect from October 2023 before becoming president in October 2025. Dougherty was appointed executive chair of the STFC in January 2025 and in July that year was also announced as the next Astronomer Royal – the first woman to hold the position.

The changes at the IOP come in the wake of UK Research and Innovation (UKRI) stating last month that it will be adjusting how it allocates government funding for scientific research and infrastructure. Spending on curiosity-driven research will remain flat from 2026 to 2030, with UKRI prioritising funding in three key areas or “buckets”.

The three buckets are: curiosity-driven research, which will be the largest; strategic government and societal priorities; and supporting innovative companies. There will also be a fourth “cross-cutting” bucket with funding for infrastructure, facilities and talent. In the four years to 2030, UKRI’s budget will be £38.6bn.

While the detailed implications of the funding changes are still to be worked out, the IOP says its “top priority” is understanding and responding to them. With the STFC being one of nine research councils within UKRI, Dougherty is stepping aside as IOP president to ensure the IOP can play what it says is “a leadership role in advocating for physics without any conflict of interest”.

In her role as STFC executive chair, Dougherty yesterday wrote to the UK’s particle physics, astronomy and nuclear physics community, asking researchers to identify by March how their projects would respond to flat cash as well as reductions of 20%, 40% and 60% – and to “identify the funding point at which the project becomes non-viable”. The letter says that a “similar process” will happen for facilities and labs.

In her letter, Dougherty says that the UK’s science minister Lord Vallance and UKRI chief executive Ian Chapman want to protect curiosity-driven research, which they say is vital, and grow it “as the economy allows”. However, she adds, “the STFC will need to focus our efforts on a more concentrated set of priorities, funded at a level that can be maintained over time”.

Tom Grinyer, chief executive officer of the IOP, says that the IOP is “fully focused on ensuring physics is heard clearly as these serious decisions are shaped”. He says the IOP is “gathering insight from across the physics community and engaging closely with government, UKRI and the research councils so that we can represent the sector with authority and evidence”.

Grinyer warns, however, that UKRI’s shift in funding priorities and the subsequent STFC funding cuts will have “severe consequences” for physics. “The promised investment in quantum, AI, semiconductors and green technologies is welcome but these strengths depend on a stable research ecosystem,” he says.

“I want to thank Michele for her leadership, and we look forward to working constructively with her in her capacity at STFC as this important period for physics unfolds,” adds Grinyer.

Next steps

The nuclear physicist Paul Howarth, who has been IOP president-elect since September, will now take on Dougherty’s responsibilities – as prescribed by the IOP’s charter – with immediate effect, with the IOP Council discussing its next steps at its February 2026 meeting.

With a PhD in nuclear physics, Howarth has had a long career in the nuclear sector working on the European Fusion Programme and at British Nuclear Fuels, as well as co-founding the Dalton Nuclear Institute at the University of Manchester.

He was a non-executive board director of the National Physical Laboratory and until his retirement earlier this year was chief executive officer of the National Nuclear Laboratory.

In response to the STFC letter, Howarth says that the projected cuts “are a devastating blow for the foundations of UK physics”.

“Physics isn’t a luxury we can afford to throw away through confusion,” says Howarth. “We urge the government to rethink these cuts, listen to the physics community, and deliver a 10-year strategy to secure physics for the future.”

The post Michele Dougherty steps aside as president of the Institute of Physics appeared first on Physics World.

  •  

AI-based tool improves the quality of radiation therapy plans for cancer treatment

This episode of the Physics World Weekly podcast features Todd McNutt, who is a medical physicist at Johns Hopkins University and the founder of Oncospace. In a conversation with Physics World’s Tami Freeman, McNutt explains how an artificial intelligence-based tool called Plan AI can help improve the quality of radiation therapy plans for cancer treatments.

As well as discussing the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres, they examine its evolution from an idea developed by an academic collaboration to a clinical product offered today by Sun Nuclear, a US manufacturer of radiation equipment and software.

This podcast is sponsored by Sun Nuclear.

The post AI-based tool improves the quality of radiation therapy plans for cancer treatment appeared first on Physics World.

  •  

The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’

Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.

As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.

By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.

The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.

Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.

Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid 2070s.

But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.

And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.

While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.

But Europe’s growing populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and a likely US withdrawal from Europe.

So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.

According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC”, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.

But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?

Other options

CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow”.

Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.

The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.

Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually and would allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensity can be achieved, it will allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.

As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.

In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.

I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.

The future of CERN – and the bracing spirit of CERN – rests in your hands.

The post The Future Circular Collider is unduly risky – CERN needs a ‘Plan B’ appeared first on Physics World.

  •  

Ion-clock transition could benefit quantum computing and nuclear physics

Schematic showing how the shape of ytterbium-173 nucleus affects the clock transition
Nuclear effect The deformed shape of the ytterbium-173 nucleus (right) makes it possible to excite the clock transition with a relatively low-power laser. The same transition is forbidden (left) if the nucleus is not deformed. (Courtesy: Physikalisch-Technische Bundesanstalt (PTB))

An atomic transition in ytterbium-173 could be used to create an optical multi-ion clock that is both precise and stable. That is the conclusion of researchers in Germany and Thailand who have characterized a clock transition that is enhanced by the non-spherical shape of the ytterbium-173 nucleus. As well as applications in timekeeping, the transition could be used in quantum computing. Furthermore, the interplay between atomic and nuclear effects in the transition could provide insights into the physics of deformed nuclei.

The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. These clocks play crucial roles in technologies that require precision timing – such as global navigation satellite systems and communications networks. Currently, the international definition of the second is given by the frequency of caesium-based clocks, which deliver microwave time signals.

Today’s best clocks, however, work at higher optical frequencies and are therefore much more precise than microwave clocks. Indeed, at some point in the future metrologists will redefine the second in terms of an optical transition – but the international metrology community has yet to decide which transition will be used.

Broadly speaking, there are two types of optical clock. One uses an ensemble of atoms that are trapped and cooled to ultralow temperatures using lasers; the other involves a single atomic ion (or a few ions) held in an electromagnetic trap. Clocks that use one ion are extremely precise but lack stability, whereas clocks that use many atoms are very stable but sacrifice precision.

Optimizing performance

As a result, some physicists are developing clocks that use multiple ions with the aim of creating a clock that optimizes precision and stability.

Now, researchers at PTB and NIMT (the national metrology institutes of Germany and Thailand respectively) have characterized a clock transition in ions of ytterbium-173, and have shown that the transition could be used to create a multi-ion clock.

“This isotope has a particularly interesting transition,” explains PTB’s Tanja Mehlstäubler – who is a pioneer in the development of multi-ion clocks.

The ytterbium-173 nucleus is highly deformed with a shape that resembles a rugby ball. This deformation affects the electronic properties of the ion, which should make it much easier to use a laser to excite a specific transition that would be very useful for creating a multi-ion clock.

Stark effect

This clock transition can also be excited in ytterbium-171 and has already been used to create a single-ion clock. However, excitation in a ytterbium-171 clock requires an intense laser pulse, which creates a strong electric field that shifts the clock frequency (called the AC Stark effect). This is a particular problem for multi-ion clocks because the intensity of the laser (and hence the clock frequency) can vary across the region in which the ions are trapped.

To show that a much lower laser intensity can be used to excite the clock transition in ytterbium-173, the team studied a “Coulomb crystal” in which three ions were trapped in a line, separated by about 10 µm. They illuminated the ions with laser light whose intensity was not uniform across the crystal, yet were able to excite the transition at a relatively low laser intensity, which resulted in very small AC Stark shifts across the frequencies of the three ions.

According to the team, this means that as many as 100 trapped ytterbium-173 ions could be used to create a clock that could serve as a time standard, help redefine the second, and make very precise measurements of the Earth’s gravitational field.
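
The appeal of scaling up to many ions is largely statistical: for a clock limited by quantum projection noise, the fractional frequency instability averages down as 1/√N when N ions are interrogated together. The snippet below illustrates that generic scaling; the single-ion instability used as a reference is a placeholder value, not a figure reported by the PTB and NIMT team.

```python
import math

# Generic quantum-projection-noise scaling for an ion clock (illustrative).
# The single-ion instability value below is an assumed placeholder, not a
# number reported by the PTB/NIMT team.

sigma_single = 1e-15   # assumed fractional instability of a 1-ion clock at 1 s

for n_ions in (1, 3, 100):
    sigma = sigma_single / math.sqrt(n_ions)
    print(f"{n_ions:3d} ions: instability ~ {sigma:.1e} at 1 s "
          f"({math.sqrt(n_ions):.1f}x better than a single ion)")
```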

As well as being useful for creating an optical ion clock, this multi-ion capability could also be exploited to create quantum-computing architectures based on multiple trapped ions. And because the observed effect is a result of the shape of the ytterbium-173 nucleus, further studies could provide insights into nuclear physics.

The research is described in Physical Review Letters.

 

The post Ion-clock transition could benefit quantum computing and nuclear physics appeared first on Physics World.

  •  

The power of a poster

Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.

For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.

Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.

The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.

A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.

So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.

The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.

Kevin McGuigan
Secret to success Kevin McGuigan discovered that one research poster can lead to generous funding contributions. (Courtesy: Antonio Jaen Osuna)

About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.

The list started small and grew steadily more ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that he had already been very generous, so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.

In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.

So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.

The post The power of a poster appeared first on Physics World.

  •  

ATLAS narrows the hunt for dark matter

Researchers at the ATLAS collaboration have been searching for signs of new particles in the dark sector of the universe, a hidden realm that could help explain dark matter. In some theories, this sector contains dark quarks (fundamental particles) that undergo a shower and hadronization process, forming long-lived dark mesons (dark quarks and antiquarks bound by a new dark strong force), which eventually decay into ordinary particles. These decays would appear in the detector as unusual “emerging jets”: bursts of particles originating from displaced vertices relative to the primary collision point.

Using 51.8 fb⁻¹ of proton–proton collision data at 13.6 TeV collected in 2022–2023, the ATLAS team looked for events containing two such emerging jets. They explored two possible production mechanisms: a vector mediator (Z′) produced in the s‑channel and a scalar mediator (Φ) exchanged in the t‑channel. The analysis combined two complementary strategies. A cut-based strategy, relying on high-level jet observables that include track-, vertex- and jet-substructure-based selections, enables straightforward reinterpretation for alternative theoretical models. A machine-learning approach employs a per-jet tagger, based on a transformer architecture trained on low-level tracking variables, to discriminate emerging jets from Standard Model jets, maximizing sensitivity for the specific models studied.
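
In spirit, the cut-based strategy boils down to asking whether a jet's tracks point back to the primary collision point or to displaced vertices. The toy selection below illustrates that logic on a few hypothetical per-jet observables; the variable names and thresholds are invented for this sketch and are not the selections used by ATLAS.

```python
from dataclasses import dataclass

# Toy illustration of a cut-based "emerging jet" selection.
# The observables and thresholds here are hypothetical stand-ins,
# not the actual ATLAS selections.

@dataclass
class Jet:
    pt: float                     # transverse momentum in GeV
    prompt_track_fraction: float  # fraction of tracks consistent with the
                                  # primary vertex (low for emerging jets)
    n_displaced_vertices: int     # reconstructed displaced vertices in the jet

def is_emerging(jet: Jet) -> bool:
    """Flag jets with few prompt tracks and at least one displaced vertex."""
    return (jet.pt > 100.0
            and jet.prompt_track_fraction < 0.1
            and jet.n_displaced_vertices >= 1)

def select_event(jets: list[Jet]) -> bool:
    """Keep events with at least two emerging-jet candidates."""
    return sum(is_emerging(j) for j in jets) >= 2

# Example: one Standard Model-like jet plus two emerging-jet candidates
event = [Jet(150.0, 0.9, 0), Jet(220.0, 0.05, 2), Jet(180.0, 0.02, 1)]
print("event selected:", select_event(event))
```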

No emerging‑jet signal excess was found, but the search set the first direct limits on emerging‑jet production via a Z′ mediator and the first constraints on t‑channel Φ production. Depending on the model assumptions, Z′ masses up to around 2.5 TeV and Φ masses up to about 1.35 TeV are excluded. These results significantly narrow the space in which dark sector particles could exist and form part of a broader ATLAS programme to probe dark quantum chromodynamics. The work sharpens future searches for dark matter and advances our understanding of how a dark sector might behave.

Read the full article

Search for emerging jets in pp collisions at √s = 13.6 TeV with the ATLAS experiment

The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 097801

Do you want to learn more about this topic?

Dark matter and dark energy interactions: theoretical challenges, cosmological implications and observational signatures by B Wang, E Abdalla, F Atrio-Barandela and D Pavón (2016)

The post ATLAS narrows the hunt for dark matter appeared first on Physics World.

  •  

How do bacteria produce entropy?

Active matter is matter composed of large numbers of active constituents, each of which consumes chemical energy in order to move or to exert mechanical forces.

This type of matter is commonly found in biology: swimming bacteria and migrating cells are both classic examples. In addition, a wide range of synthetic systems, such as active colloids and robotic swarms, also fall under this umbrella.

Active matter has therefore been the focus of much research over the past decade, unveiling many surprising theoretical features and suggesting a plethora of applications.

Perhaps most importantly, these systems’ ability to perform work leads to sustained non-equilibrium behaviour. This is distinctly different from that of relaxing equilibrium thermodynamic systems, commonly found in other areas of physics.

The concept of entropy production is often used to quantify this difference and to calculate how much useful work can be performed. If we want to harvest and utilise this work, however, we need to understand the small-scale dynamics of the system. And it turns out this is rather complicated.

One way to calculate entropy production is through field theory, the workhorse of statistical mechanics. Traditional field theories simplify the system by smoothing out details, which works well for predicting densities and correlations. However, these approximations often ignore the individual particle nature, leading to incorrect results for entropy production.

The new paper details a substantial improvement on this method. By making use of Doi-Peliti field theory, the authors are able to keep track of microscopic particle dynamics, including reactions and interactions.

The approach starts from the Fokker-Planck equation and provides a systematic way to calculate entropy production from first principles. It can be extended to include interactions between particles and produces general, compact formulas that work for a wide range of systems. These formulas are practical because they can be applied to both simulations and experiments.

The authors demonstrated their method with numerous examples, including systems of active Brownian particles, showing its broad usefulness. The big challenge going forward, though, is to extend the framework to non-Markovian systems: those in which future evolution depends not only on the present state but also on the system's past.
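
As a concrete, if heavily simplified, illustration of the sort of system the framework targets, the sketch below simulates non-interacting active Brownian particles in two dimensions and estimates their entropy production rate using the textbook trajectory-based expression for this model, ⟨σ⟩ = v²/D_t per particle in units of k_B (under the common convention that treats self-propulsion as a force). This is a toy estimate for orientation only, not the Doi-Peliti field-theory calculation described in the paper.

```python
import numpy as np

# Minimal 2D active Brownian particle simulation with a toy estimate of the
# entropy production rate (in units of k_B). The estimator used here,
# sigma ~ (v/D_t) <e . dr/dt>, is the standard textbook expression for free
# ABPs, NOT the Doi-Peliti field-theory calculation in the paper.

rng = np.random.default_rng(0)

n, steps, dt = 200, 20000, 1e-3   # particles, time steps, step size
v, D_t, D_r = 1.0, 0.1, 1.0       # self-propulsion speed, diffusivities

theta = rng.uniform(0, 2 * np.pi, n)       # orientation angles
pos = np.zeros((n, 2))
entropy = 0.0                               # accumulates (v/D_t) e.dr

for _ in range(steps):
    e = np.column_stack((np.cos(theta), np.sin(theta)))
    dr = v * e * dt + np.sqrt(2 * D_t * dt) * rng.standard_normal((n, 2))
    entropy += (v / D_t) * np.sum(e * dr)   # entropy produced this step (k_B)
    pos += dr
    theta += np.sqrt(2 * D_r * dt) * rng.standard_normal(n)

epr_per_particle = entropy / (n * steps * dt)
print(f"measured entropy production rate: {epr_per_particle:.2f} k_B per unit time")
print(f"analytical expectation v^2/D_t:   {v**2 / D_t:.2f} k_B per unit time")
```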

Read the full article

Field theories of active particle systems and their entropy production

G. Pruessner and R. Garcia-Millan, 2025 Rep. Prog. Phys. 88 097601

The post How do bacteria produce entropy? appeared first on Physics World.

  •  

Einstein’s recoiling slit experiment realized at the quantum limit

Quantum mechanics famously limits how much information about a system can be accessed at once in a single experiment. The more precisely a particle’s path can be determined, the less visible its interference pattern becomes. This trade-off, known as Bohr’s complementarity principle, has shaped our understanding of quantum physics for nearly a century. Now, researchers in China have brought one of the most famous thought experiments surrounding this principle to the quantum limit, using a single atom as a movable slit.

The thought experiment dates back to the 1927 Solvay Conference, where Albert Einstein proposed a modification of the double-slit experiment in which one of the slits could recoil. He argued that if a photon caused the slit to recoil as it passed through, then measuring that recoil might reveal which path the photon had taken without destroying the interference pattern. Conversely, Niels Bohr argued that any such recoil would entangle the photon with the slit, washing out the interference fringes.

For decades, this debate remained largely philosophical. The challenge was not about adding a detector or a label to track a photon’s path. Instead, the question was whether the “which-path” information could be stored in the motion of the slit itself. Until now, however, no physical slit was sensitive enough to register the momentum kick from a single photon.

A slit that kicks back

To detect the recoil from a single photon, the slit’s momentum uncertainty must be comparable to the photon’s momentum. For any ordinary macroscopic slit, its quantum fluctuations are significantly larger than the recoil, washing out the which-path information. To give a sense of scale, the authors note that even a 1 g object modelled as a 100 kHz oscillator (for example, a mirror on a spring) would have a ground-state momentum uncertainty of about 10⁻¹⁶ kg m s⁻¹, roughly 11 orders of magnitude larger than the momentum of an optical photon (approximately 10⁻²⁷ kg m s⁻¹).
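
Both momentum scales quoted above are straightforward to reproduce: the zero-point momentum spread of a harmonic oscillator is √(ħmω/2), while a photon carries momentum h/λ. The short calculation below assumes a 780 nm wavelength, typical of rubidium transitions though not specified in the article, and recovers the ~10⁻¹⁶ and ~10⁻²⁷ kg m s⁻¹ figures.

```python
import math

# Reproducing the momentum scales quoted in the article.
# Assumption: a 780 nm optical wavelength (typical for rubidium), which is
# not stated explicitly in the article.

hbar = 1.054571817e-34     # J s
h = 2 * math.pi * hbar     # J s
m = 1e-3                   # 1 g test object, kg
omega = 2 * math.pi * 1e5  # 100 kHz oscillator, rad/s
wavelength = 780e-9        # m

dp_oscillator = math.sqrt(hbar * m * omega / 2)   # ground-state momentum spread
p_photon = h / wavelength                         # optical photon momentum

print(f"1 g, 100 kHz oscillator: dp ~ {dp_oscillator:.1e} kg m/s")
print(f"780 nm photon:           p  ~ {p_photon:.1e} kg m/s")
print(f"ratio: ~{dp_oscillator / p_photon:.0e}")
```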

Illustration showing the experimental realization
Experimental realization To perform Einstein’s thought experiment in the lab, the researchers used a single trapped atom as a movable slit. Photon paths become correlated with the atom’s motion, allowing researchers to probe the trade-off between interference and which-path information. (Courtesy: Y-C Zhang et al. Phys. Rev. Lett. 135 230202)

In their study, published in Physical Review Letters, Yu-Chen Zhang and colleagues from the University of Science and Technology of China overcame this obstacle by replacing the movable slit with a single rubidium atom held in an optical tweezer and cooled to its three-dimensional motional ground state. In this regime, the atom’s momentum uncertainty reaches the quantum limit, making the recoil from a single photon directly measurable.

Rather than using a conventional double-slit geometry, the researchers built an optical interferometer in which photons scattered off the trapped atom. By tuning the depth of this optical trap, the researchers were able to precisely control the atom’s intrinsic momentum uncertainty, effectively adjusting how “movable” the slit was.

Watching interference fade 

As the researchers decreased the atom’s momentum uncertainty, they observed a loss of interference in the scattered photons. Increasing the atom’s momentum uncertainty caused the interference to reappear.

This behaviour directly revealed the trade-off between interference and which-path information at the heart of the Einstein–Bohr debate. The researchers note that the loss of interference arose not from classical noise, but from entanglement between the photon and the atom’s motion.

“The main challenge was matching the slit’s momentum uncertainty to that of a single photon,” says corresponding author Jian-Wei Pan. “For macroscopic objects, momentum fluctuations are far too large – they completely hide the recoil. Using a single atom cooled to its motional ground state allows us to reach the fundamental quantum limit.”

Maintaining interferometric phase stability was equally demanding. The team used active phase stabilization with a reference laser to keep the optical path length stable to within a few nanometres (roughly 3 nm) for over 10 h.

Beyond settling a historical argument, the experiment offers a clean demonstration of how entanglement plays a key role in Bohr’s complementarity principle. As Pan explains, the results suggest that “entanglement in the momentum degree-of-freedom is the deeper reason behind the loss of interference when which-path information becomes available”.

This experiment opens the door to exploring quantum measurement in a new regime. By treating the slit itself as a quantum object, future studies could probe how entanglement emerges between light and matter. Additionally, the same set-up could be used to gradually increase the mass of the slit, providing a new way to study the transition from quantum to classical behaviour.

The post Einstein’s recoiling slit experiment realized at the quantum limit appeared first on Physics World.

  •  

European Space Agency unveils first images from Earth-observation ‘sounder’ satellite

The European Space Agency has released the first images from the Meteosat Third Generation-Sounder (MTG-S) satellite. They show variations in temperature and humidity over Europe and northern Africa in unprecedented detail, with further data from the mission set to improve weather-forecasting models and sharpen measurements of air quality over Europe.

Launched on 1 July 2025 from the Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket, MTG-S operates from a geostationary orbit about 36,000 km above Earth’s surface and provides coverage of Europe and part of northern Africa on a 15-minute repeat cycle.
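
The 36,000 km altitude follows from requiring the orbital period to match one sidereal day. As a quick check, using textbook values for Earth's gravitational parameter and radius (not quoted in the article):

```python
import math

# Quick check of the geostationary altitude. Constants below are standard
# textbook values, not taken from the article.

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6        # mean Earth radius, m
T = 86164.1              # sidereal day, s

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius from Kepler's third law
altitude = r - R_earth

print(f"geostationary altitude ~ {altitude / 1e3:.0f} km")   # roughly 35,800 km
```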

The satellite carries a hyperspectral sounding instrument that uses interferometry to capture data on temperature and humidity as well as being able to measure wind and trace gases in the atmosphere. It can scan nearly 2,000 thermal infrared wavelengths every 30 minutes.

The data will eventually be used to generate 3D maps of the atmosphere and help improve the accuracy of weather forecasting, especially for rapidly evolving storms.

The “temperature” image, above, was taken in November 2025 and shows heat (red) from the African continent, while a dark blue weather front covers Spain and Portugal.

The “humidity” image, below, was captured using the sounder’s medium-wave infrared channel. Blue colours represent regions in the atmosphere with higher humidity, while red colours correspond to lower humidity.

Whole-Earth image showing cloud formation
(Courtesy: EUMETSAT)

“Seeing the first infrared sounder images from MTG-S really brings this mission and its potential to life,” notes Simonetta Cheli, ESA’s director of Earth observation programmes. “We expect data from this mission to change the way we forecast severe storms over Europe – and this is very exciting for communities and citizens, as well as for meteorologists and climatologists.”

ESA is expected to launch a second Meteosat Third Generation-Imaging satellite later this year following the launch of the first one – MTG-I1 – in December 2022.

The post European Space Agency unveils first images from Earth-observation ‘sounder’ satellite appeared first on Physics World.

  •  

Uranus and Neptune may be more rocky than icy, say astrophysicists

Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”
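
To give a flavour of the first criterion, the sketch below integrates the hydrostatic-equilibrium relation dP/dr = −Gm(r)ρ/r² for a toy two-layer planet with a constant-density rock core and water mantle. The layer densities and radii are round illustrative numbers, not values from the UZH models, which use full equations of state and iterate against the observed gravitational moments.

```python
import numpy as np

# Toy hydrostatic-equilibrium integration for a two-layer planet
# (constant-density rock core plus water mantle). Densities and radii are
# illustrative round numbers, not values from the UZH models.

G = 6.674e-11                          # gravitational constant, SI units
R_planet = 2.5e7                       # ~25,000 km outer radius, m
R_core = 1.0e7                         # rock-core radius, m
rho_rock, rho_water = 5000.0, 1200.0   # assumed layer densities, kg/m^3

r = np.linspace(1.0, R_planet, 200_000)          # radial grid (avoid r = 0)
dr = r[1] - r[0]
rho = np.where(r < R_core, rho_rock, rho_water)  # density profile

mass = np.cumsum(4 * np.pi * r**2 * rho * dr)    # enclosed mass m(r)
dP_dr = -G * mass * rho / r**2                   # hydrostatic equilibrium
pressure = -np.cumsum(dP_dr[::-1] * dr)[::-1]    # integrate inward, P(surface) = 0

print(f"total mass:       {mass[-1]:.2e} kg")
print(f"central pressure: {pressure[0] / 1e9:.0f} GPa")
```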

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of between 0.04 and 3.92 for Uranus and between 0.20 and 1.78 for Neptune.

Diagrams showing possible "slices" of Uranus and Neptune. Four slices are shown, two for each planet. Each slice is filled with brown areas representing silicon dioxide rock and blue areas representing water ice, plus smaller areas of tan colouring for hydrogen-helium mixtures and (for Neptune only) grey areas representing iron. Two slices are mostly blue, while the other two contain large fractions of brown.
Slices of different pies: According to models developed with “agnostic” initial assumptions, Uranus (top) and Neptune (bottom) could be composed mainly of water ice (blue areas), but they could also contain substantial amounts of silicon dioxide rock (brown areas). (Courtesy: Luca Morf)

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat-transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.

The post Uranus and Neptune may be more rocky than icy, say astrophysicists appeared first on Physics World.

  •  

String-theory concept boosts understanding of biological networks

Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization, a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.

Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.

“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.

Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.

From graphs to surfaces

To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.

To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.

Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”

Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area than by minimizing total length.
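
A toy calculation shows how the two principles can pull a network's geometry in different directions. The Python sketch below places a single Y-junction of cylindrical tubes between a root and two terminals and compares the branch-point position that minimizes total length with the one that minimizes total lateral surface area. The geometry and radii are arbitrary choices, and summing cylinder areas is only a crude stand-in for the minimal-surface machinery used in the paper.

import numpy as np

# Toy comparison, not the authors' model: a single Y-junction of cylindrical
# tubes connecting a root to two terminals, with the branch point swept along
# the axis of symmetry.

root = np.array([0.0, 0.0])
tips = np.array([[4.0, 2.0], [4.0, -2.0]])    # two terminals to be supplied
r_parent, r_daughter = 1.0, 0.6               # parent tube thicker than daughters

def objectives(x_branch):
    """Total length and total lateral surface (2*pi*r*L) of the three tube
    segments when the branch point sits at (x_branch, 0)."""
    branch = np.array([x_branch, 0.0])
    L_parent = np.linalg.norm(branch - root)
    L_daughters = np.linalg.norm(tips - branch, axis=1).sum()
    total_length = L_parent + L_daughters
    total_surface = 2.0 * np.pi * (r_parent * L_parent + r_daughter * L_daughters)
    return total_length, total_surface

xs = np.linspace(0.01, 3.99, 2000)            # sweep the branch point along the axis
lengths, surfaces = np.array([objectives(x) for x in xs]).T

print(f"length-minimizing branch point:  x = {xs[lengths.argmin()]:.2f}")
print(f"surface-minimizing branch point: x = {xs[surfaces.argmin()]:.2f}")

In this toy set-up, with the parent tube thicker than its daughters, the surface-minimizing branch point (x ≈ 1.0) sits much closer to the root than the length-minimizing Steiner point (x ≈ 2.8), so the two principles predict visibly different junction shapes.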

Complex junctions

One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.

“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.

These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally, allowing neurons to form synapses with less membrane material overall and enabling plant roots to probe their environment more effectively.

Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”

Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works”.

Interdisciplinary leap

The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.

Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.

Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, who are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.

The research is described in Nature.

The post String-theory concept boosts understanding of biological networks appeared first on Physics World.

  •  

The secret life of TiO₂ in foams

Porous carbon foams are an exciting area of research because they are lightweight, electrically conductive, and have extremely high surface areas. Coating these foams with TiO₂ makes them chemically active, enabling their use in energy storage devices, fuel cells, hydrogen production, CO₂‑reduction catalysts, photocatalysis, and thermal management systems. While many studies have examined the outer surfaces of coated foams, much less is known about how TiO₂ coatings behave deep inside the foam structure.

In this study, researchers deposited TiO₂ thin films onto carbon foams using magnetron sputtering and applied different bias voltages to control ion energy, which in turn affects coating density, crystal structure, thickness, and adhesion. They analysed both the outer surface and the interior of the foam using microscopy, particle‑transport simulations, and X‑ray techniques.

They found that the TiO₂ coating on the outer surface is dense, correctly composed, and crystalline (mainly anatase with a small amount of rutile), making it ideal for catalytic and energy applications. They also discovered that although fewer particles reach deep inside the foam, those that do retain the same energy, meaning particle quantity decreases with depth but particle energy does not. Because devices like batteries and supercapacitors rely on uniform coatings, variations in thickness or structure inside the foam can lead to poorer performance and faster degradation.
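
The flux-versus-energy behaviour is easy to picture with a toy Monte Carlo model, sketched in Python below. It is not the particle-transport simulation used in the study: particles simply have a fixed chance of being intercepted in each layer of an idealized foam, and the interception probability and energy distribution are arbitrary. Survivors keep their initial energy, so the flux falls with depth while the mean energy of the particles that do arrive stays flat.

import numpy as np

# Illustrative Monte Carlo sketch, not the authors' simulation: sputtered
# particles enter an idealized layered foam and, in each layer, stick to a
# strut with fixed probability.  Survivors keep their initial energy.

rng = np.random.default_rng(0)
n_particles = 100_000
n_layers = 10
p_intercept = 0.3                                  # chance of sticking per layer (arbitrary)

energies = rng.normal(loc=10.0, scale=2.0, size=n_particles)   # arbitrary energy scale (eV)
alive = np.ones(n_particles, dtype=bool)

for depth in range(1, n_layers + 1):
    alive &= rng.random(n_particles) > p_intercept
    flux = alive.mean()
    mean_E = energies[alive].mean() if alive.any() else float("nan")
    print(f"layer {depth:2d}: surviving flux = {flux:6.3f}, mean energy = {mean_E:5.2f} eV")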

Overall, this research provides a much clearer understanding of how TiO₂ coatings grow inside complex 3D foams, showing how thickness, density, and crystal structure evolve with depth and how bias voltage can be used to tune these properties. By revealing how plasma particles move through the foam and validating models that predict coating behaviour, it enables the design of more reliable, higher‑performing foam‑based devices for energy and catalytic applications.

Read the full article

A comprehensive multi-scale study on the growth mechanisms of magnetron sputtered coatings on open-cell 3D foams

Loris Chavée et al 2026 Prog. Energy 8 015002

Do you want to learn more about this topic?

Advances in thermal conductivity for energy applications: a review by Qiye Zheng et al. (2021)

The post The secret life of TiO₂ in foams appeared first on Physics World.

  •  

Laser processed thin NiO powder coating for durable anode-free batteries

Traditional lithium‑ion batteries use a thick graphite anode, where lithium ions move in and out of the graphite during charging and discharging. In an anode‑free lithium metal battery, there is no anode material at the start, only a copper foil. During the first charge, lithium leaves the cathode and deposits onto the copper as pure lithium metal, effectively forming the anode. Removing the anode increases energy density dramatically by reducing weight, and it also simplifies and lowers the cost of manufacturing. Because of this, anode‑free batteries are considered to have major potential for next‑generation energy storage. However, a key challenge is that lithium deposits unevenly on bare copper, forming long needle‑like dendrites that can pierce the separator and cause short circuits. This uneven growth also leads to rapid capacity loss, so anode‑free batteries typically fail after only a few hundred cycles.

In this research, the scientists coated the copper foil with NiO powder and used a rapidly scanned CO₂ laser (λ = 10.6 µm) to heat and transform the coating. The laser‑treated NiO becomes porous and strongly adherent to the copper, helping lithium spread out more evenly. The process is fast, energy‑efficient, and can be done in air. As a result, lithium ions move more easily across the surface, reducing dendrite formation. The exchange current density also doubled compared to bare copper, indicating better charge‑transfer behaviour. Overall, battery performance improved dramatically. The modified cells lasted 400 cycles at room temperature and 700 cycles at 40°C, compared with only 150 cycles for uncoated copper.
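
To see why a doubled exchange current density signals better charge transfer, the back-of-the-envelope Python sketch below evaluates the standard Butler–Volmer expression at a fixed overpotential for two hypothetical exchange current densities in a 2:1 ratio. The absolute values and transfer coefficients are assumptions, not figures from the paper; only the 2x ratio reflects the result quoted above.

import numpy as np

# Butler-Volmer kinetics: i = i0 * [exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T))],
# so doubling the exchange current density i0 doubles the current available
# at any given overpotential eta.  All numerical values below are assumptions.

F, R, T = 96485.0, 8.314, 298.0          # Faraday constant, gas constant, room temperature
alpha_a = alpha_c = 0.5                  # symmetric transfer coefficients (assumption)

def butler_volmer(eta, i0):
    """Current density (A/cm^2) at overpotential eta (V)."""
    return i0 * (np.exp(alpha_a*F*eta/(R*T)) - np.exp(-alpha_c*F*eta/(R*T)))

eta = 0.05                                # 50 mV overpotential
i_bare   = butler_volmer(eta, i0=0.5e-3)  # hypothetical bare-copper value
i_coated = butler_volmer(eta, i0=1.0e-3)  # hypothetical doubled value for NiO-coated copper

print(f"current at {eta*1e3:.0f} mV: bare Cu {i_bare*1e3:.2f} mA/cm^2, "
      f"NiO-coated {i_coated*1e3:.2f} mA/cm^2 (ratio {i_coated/i_bare:.1f}x)")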

This simple, rapid, and scalable technique offers a powerful way to improve anode‑free lithium metal batteries, one of the most promising next‑generation battery technologies.

Read the full article

Microgradient patterned NiO coating on copper current collector for anode-free lithium metal battery

Supriya Kadam et al 2025 Prog. Energy 7 045003

Do you want to learn more about this topic?

Lithium aluminum alloy anodes in Li-ion rechargeable batteries: past developments, recent progress, and future prospects by Tianye Zheng and Steven T Boles (2023)

The post Laser processed thin NiO powder coating for durable anode-free batteries appeared first on Physics World.

  •  