
From rabbits and foxes to the human gut microbiome, physics is helping us understand the natural world

This episode of the Physics World Weekly podcast is a conversation with two physicists, Ada Altieri and Silvia De Monte, who are using their expertise in statistical physics to understand the behaviour of ecological communities.

A century ago, pioneering scientists such as Alfred Lotka and Vito Volterra showed that statistical physics techniques could explain – and even predict – patterns that ecologists observe in nature. At first, this work focused on simple ecosystems containing just one or two species (such as rabbits and foxes), which are relatively easy to model.
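For readers curious about just how simple such two-species models are, here is a minimal sketch in Python of the classic Lotka–Volterra predator–prey equations. The parameter values are arbitrary and purely illustrative, not taken from the podcast:

```python
# Minimal sketch of the Lotka-Volterra predator-prey model;
# parameter values below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    prey, predators = y  # e.g. rabbits and foxes
    dprey = alpha * prey - beta * prey * predators              # births minus predation
    dpredators = delta * prey * predators - gamma * predators   # growth from predation minus deaths
    return [dprey, dpredators]

solution = solve_ivp(lotka_volterra, t_span=(0, 50), y0=[40, 9], dense_output=True)
t = np.linspace(0, 50, 500)
prey, predators = solution.sol(t)
print(f"prey range: {prey.min():.1f} to {prey.max():.1f}; "
      f"predator range: {predators.min():.1f} to {predators.max():.1f}")
```

The two populations oscillate out of phase – booms in prey feed booms in predators, which then crash the prey population – exactly the kind of pattern Lotka and Volterra showed simple models could reproduce.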

Nowadays, though, researchers such as Altieri and De Monte are turning their attention to far more complex communities. One example is the collection of unicellular organisms known as protists that live among plankton in the ocean. Another, closer to home, is the “microbiome” in the human gut, which may contain hundreds or even thousands of species of bacteria.

Modelling these highly interconnected communities is hugely challenging. But as Altieri and De Monte explain, the potential rewards – from identifying “tipping points” in fragile ecosystems to developing new treatments for gut disorders such as irritable bowel syndrome and Crohn’s disease – are great.

This discussion is based on a Perspective article that Altieri (an associate professor at the Laboratory for Matter and Complex Systems at the Université Paris Cité, France) and De Monte (a senior research scientist at the Institute of Biology in the École Normale Supérieure in Paris and the Max Planck Institute for Evolutionary Biology in Ploen, Germany) wrote for the journal EPL, which sponsors this episode of the podcast.

The post From rabbits and foxes to the human gut microbiome, physics is helping us understand the natural world appeared first on Physics World.


Scientists decry ‘scientific injustice’ over lack of climate data in developing regions

A shortage of data is hampering efforts to establish the role of climate change in extreme-weather events in the tropics and global south. So say an international team of scientists, who claim the current situation is a “scientific injustice” and call for more investment in climate science and weather monitoring in poorer countries.

The researchers, who are part of World Weather Attribution, made the call after analysing the role of climate change in an episode of torrential rain in June that triggered a landslide in Colombia, killing 27 people, as well as devastating floods in Venezuela that displaced thousands.

Their study reported that the Colombian Andes were unusually wet from April to June, while the part of Venezuela where the floods occurred experienced its five wettest days of the year. In the current climate, such weather events would be expected every 10 years in Colombia and every three years in Venezuela.

According to the researchers, there is a high level of uncertainty in the study due to a lack of long-term observational data in the region and high uncertainties in global climate models when assessing the tropics. Colombia and Venezuela have complex tropical climates that are under-researched, with some data even suggesting that rainfall in the region is becoming less intense.

But the group says that the possibility of heavier rainfall linked to climate change should not be ruled out in the region, particularly on shorter, sub-daily timescales, which they could not investigate. They add that Colombia and Venezuela are almost certainly facing increased heatwave, drought and wildfire risk.

Mariam Zachariah at the Centre for Environmental Policy at Imperial College London, who was involved with the work, says that the combination of mountains, coasts, rainforests and complex weather systems in many tropical countries means “rainfall is varied, intense and challenging to capture in climate models”.

“Many countries with tropical climates have limited capacity to do climate science, meaning we don’t have a good understanding of how they are being affected by climate change,” says Zachariah. “Our recent study on the deadly floods in the Democratic Republic of Congo in May is another example. Once again, our results were inconclusive.”

Climate scientist Paola Andrea Arias Gómez at the University of Antioquia in Colombia, who was also involved in the study, says that extreme weather is “non-stop” in Colombia and Venezuela. “One year we face devastating flash floods; the next, severe droughts and wildfires,” she adds. “Unfortunately, extreme weather is not well understood in northern South America. We urgently need more investment in climate science to understand shifting risks and prepare for what’s ahead. More science will save lives.”

The post Scientists decry ‘scientific injustice’ over lack of climate data in developing regions appeared first on Physics World.


Hints of a 3D quantum spin liquid revealed by neutron scattering

New experimental evidence for a quantum spin liquid – a material with spins that remain in constant fluctuation at extremely low temperatures – has been unveiled by an international team of scientists. The researchers used neutron scattering to reveal photon-like collective spin excitations in a crystal of cerium zirconate.

When most magnetic materials are cooled to nearly absolute zero, their spin magnetic moments will align into an ordered pattern to minimize the system’s energy. Yet in 1973, the future Nobel laureate Philip Anderson proposed an alternative class of magnetic materials in which this low-temperature order does not emerge.

Anderson considered the spins of atoms that interact with each other in an antiferromagnetic way. This is when the spin of each atom seeks to point in the opposite direction of its nearest neighbours. If the spins in a lattice are able to adopt this orientation, the lowest energy state is an ordered antiferromagnet with zero overall magnetism.

Geometrical frustration

In a tetrahedral lattice, however, the geometrical arrangement of nearest neighbours means that it is impossible for spins to arrange themselves in this way. This is called frustration, and the result is a material with multiple low-energy spin configurations, which are disordered.

So far, this behaviour has been observed in materials called spin ices – where one of the many possible spin configurations is frozen into place at ultralow temperatures. However, Anderson envisioned that a related class of materials could exist in a more exotic phase that constantly fluctuates between different, equal-energy states, all the way down to absolute zero.

Called quantum spin liquids (QSLs), such materials have evaded experimental confirmation – until now. “They behave like a liquid form of magnetism – without any fixed ordering,” explains team member Silke Bühler-Paschen at Austria’s Vienna University of Technology. “That’s exactly why a real breakthrough in this area has remained elusive for decades. We studied cerium zirconate, which forms a three-dimensional network of spins and shows no magnetic ordering, even at temperatures as low as 20 mK.” This material was chosen because it has a pyrochlore lattice, which is based on corner-sharing tetrahedra.

Collective magnetic excitations

The team looked for collective magnetic excitations that are predicted to exist in QSLs. These excitations are expected to have linear energy–momentum relationships, similar to the dispersion of conventional photons. As a result, these particle-like excitations are called emergent photons.
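Schematically – this is the generic picture for emergent photons in QSLs, not a detail taken from the paper – the excitation energy rises linearly with momentum:

$$E(\mathbf{k}) \approx \hbar\, c_{\mathrm{em}}\, |\mathbf{k}|$$

where $c_{\mathrm{em}}$ is an emergent “speed of light” set by the strength of the spin interactions, far below the speed of light in vacuum.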

The team used polarized neutron scattering experiments to search for evidence of emergent photons. When neutrons strike a sample, they can exchange energy and momentum with the lattice. This exchange can involve magnetic excitations in the material, and the team used scattering experiments to map out the energies and momenta of these excitations at temperatures in the 33–50 mK range.
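This mapping rests on conservation laws: if a neutron arrives with energy and wavevector $(E_{\mathrm{i}}, \mathbf{k}_{\mathrm{i}})$ and leaves with $(E_{\mathrm{f}}, \mathbf{k}_{\mathrm{f}})$, the excitation it creates carries

$$\hbar\omega = E_{\mathrm{i}} - E_{\mathrm{f}}, \qquad \mathbf{Q} = \mathbf{k}_{\mathrm{i}} - \mathbf{k}_{\mathrm{f}}$$

so measuring the scattered neutrons directly traces out the excitation spectrum.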

“For the first time, we were able to detect signals that strongly indicate a 3D quantum spin liquid – particularly, the presence of so-called emergent photons,” Bühler-Paschen says. “The discovery of these emergent photons in cerium zirconate is a very strong indication that we have indeed found a QSL.”

As well as providing evidence for Anderson’s idea, the research paves the way for the further exploration of other potential QSLs and their applications. “We plan to conduct further experiments, but from our perspective, cerium zirconate is currently the most convincing candidate for a quantum spin liquid,” Bühler-Paschen says.

The research could have important implications for our understanding of high-temperature superconductivity. In his initial theory, Anderson predicted that QSLs could be precursors to high-temperature superconductors.

The research is described in Nature Physics.

The post Hints of a 3D quantum spin liquid revealed by neutron scattering appeared first on Physics World.


Earth-shaking waves from Greenland mega-tsunamis imaged for the first time

In September 2023, seismic detectors around the world began picking up a mysterious signal. Something – it wasn’t clear what – was causing the entire Earth to shake every 90 seconds. After a period of puzzlement, and a second, similar signal in October, theoretical studies proposed an explanation. The tremors, these studies suggested, were caused by standing waves, or seiches, that formed after landslides triggered huge tsunamis in a narrow waterway off the coast of Greenland.

Engineers at the University of Oxford, UK, have now confirmed this hypothesis. Using satellite altimetry data from the Surface Water Ocean Topography (SWOT) mission, the team constructed the first images of the seiches, demonstrating that they did indeed originate from landslide-triggered mega-tsunamis in Dickson Fjord, Greenland. While events of this magnitude are rare, the team say that climate change is likely to increase their frequency, making continued investments in advanced satellite missions essential for monitoring and responding to them.

An unprecedented view into the fjord

Unlike other altimeters, SWOT provides two-dimensional measurements of sea surface height down to the centimetre across the entire globe, including hard-to-reach areas like fjords, rivers and estuaries. For team co-leader Thomas Monahan, who studied the seiches as part of his PhD research at Oxford, this capability was crucial. “It gave us an unprecedented view into Dickson Fjord during the seiche events in September and October 2023,” he says. “By capturing such high-resolution images of sea-surface height at different time points following the two tsunamis, we could estimate how the water surface tilted during the wave – in other words, gauge the ‘slope’ of the seiche.”

The maps revealed clear cross-channel slopes with height differences of up to two metres. Importantly, these slopes pointed in opposite directions, showing that water was moving backwards as well as forwards across the channel. But that wasn’t the end of the investigation. “Finding the ‘seiche in the fjord’ was exciting but it turned out to be the easy part,” Monahan says. “The real challenge was then proving that what we had observed was indeed a seiche and not something else.”

Enough to shake the Earth for days

To do this, the Oxford engineers approached the problem like a game of Cluedo, ruling out other oceanographic “suspects” one by one. They also connected the slope measurements with ground-based seismic data that captured how the Earth’s crust moved as the wave passed through it. “By combining these two very different kinds of observations, we were able to estimate the size of the seiches and their characteristics even during periods in which the satellite was not overhead,” Monahan says.

Although no-one was present in Dickson Fjord during the seiches, the Oxford team’s estimates suggest that the event would have been terrifying to witness. Based on probabilistic (Bayesian) machine-learning analyses, the team say that the September seiche was initially 7.9 m tall, while the October one measured about 3.9 m.

“That amount of water sloshing back and forth over a 10-km section of fjord walls creates an enormous force,” Monahan says. The September seiche, he adds, produced a force equivalent to 14 Saturn V rockets launching at once, around 500 MN. “[It] was literally enough to shake the entire Earth for days,” he says.
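As a rough cross-check of that comparison, taking the Saturn V’s sea-level thrust to be about 35 MN (a standard figure, not one quoted in the article):

$$14 \times 3.5\times10^{7}\,\mathrm{N} \approx 4.9\times10^{8}\,\mathrm{N} \approx 500\,\mathrm{MN}$$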

What made these events so powerful was the geometry of the fjord, Monahan says. “A sharp bend near its outlet effectively trapped the seiches, allowing them to reverberate for days,” he explains. “Indeed, the repeated impacts of water against fjord walls acted like a hammer striking the Earth’s crust, creating long-period seismic waves that propagated around the globe and that were strong enough to be detected worldwide.”

Risk of tsunamigenic landslides will likely grow

As for what caused the seiches, Monahan suggests that climate change may have been a contributing factor. As glaciers thin, they undergo a process called de-buttressing, wherein the loss of ice removes support from the surrounding rock, leading it to collapse. It was likely this de-buttressing that caused two enormous landslides in Dickson Fjord within a month, and continued global warming will only increase the frequency of such events. “As these events become more common, especially in steep, ice-covered terrain, the risk of tsunamigenic landslides will likely grow,” Monahan says.

The researchers say they would now like to better understand how the seiches dissipated afterwards. “Although previous work successfully simulated how the mega-tsunamis stabilized into seiches, how they decayed is not well understood,” says Monahan. “Future research could make use of SWOT satellite observations as a benchmark to better constrain the processes behind dissipation.”

The findings, which are detailed in Nature Communications, show how top-of-the-line satellites like SWOT can fill these observational gaps, he adds. To fully leverage these capabilities, however, researchers need better processing algorithms tailored to complex fjord environments and new techniques for detecting and interpreting anomalous signals within these vast datasets. “We think scientific machine learning will be extremely useful here,” he says.

The post Earth-shaking waves from Greenland mega-tsunamis imaged for the first time appeared first on Physics World.


Magnetically controlled microrobots show promise for precision drug delivery

Multimodal locomotion Top panel: fabrication and magnetic assembly of permanent magnetic droplet-derived microrobots (PMDMs). Lower panel: magnetic fields direct PMDM chains through complex biological environments such as the intestine. (Courtesy: CC BY 4.0/Sci. Adv. 10.1126/sciadv.adw3172)

Microrobots provide a promising vehicle for precision delivery of therapeutics into the body. But there’s a fine balance needed between optimizing multifunctional cargo loading and maintaining efficient locomotion. A research collaboration headed up at the University of Oxford and the University of Michigan has now developed permanent magnetic droplet-derived microrobots (PMDMs) that meet both of these requirements.

The PMDMs are made from a biocompatible hydrogel incorporating permanent magnetic microparticles. The hydrogel – which can be tailored to each clinical scenario – can carry drugs or therapeutic cells, while the particles’ magnetic properties enable them to self-assemble into chains and perform a range of locomotion behaviours under external magnetic control.

“Our motivation was to design a microrobot system with adaptable motion capabilities for potential applications in drug delivery,” explains Molly Stevens from the University of Oxford, experimental lead on this study. “By using self-assembled magnetic particles, we were able to create reconfigurable, modular microrobots that could adapt their shape on demand – allowing them to manoeuvre through complex biological terrains to deliver therapeutic payloads.”

Building the microrobots

To create the PMDMs, Stevens and collaborators used cascade tubing microfluidics to rapidly generate ferromagnetic droplets (around 300 per minute) from the hydrogel and microparticles. Gravitational sedimentation of the 5 µm-diameter microparticles led to the formation of Janus droplets with distinct hydrogel and magnetic phases. The droplets were then polymerized and magnetized to form PMDMs of roughly 0.5 mm in diameter.

The next step involved self-assembly of the PMDMs into chains. The researchers demonstrated that exposure to a precessing magnetic field caused the microrobots to rapidly assemble into dimers and trimers before forming a chain of eight, with their dipole moments aligned. Exposure to various dynamic magnetic fields caused the chains to move via different modalities, including walking, crawling, swinging and lateral movement.
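The tendency to form chains follows from the standard point-dipole interaction energy (a textbook result rather than a calculation from the paper): for two magnetic moments $\mathbf{m}_1$ and $\mathbf{m}_2$ separated by $\mathbf{r}$,

$$U(\mathbf{r}) = \frac{\mu_0}{4\pi r^{3}}\left[\mathbf{m}_1\cdot\mathbf{m}_2 - 3\,(\mathbf{m}_1\cdot\hat{\mathbf{r}})(\mathbf{m}_2\cdot\hat{\mathbf{r}})\right]$$

which is lowest when the moments line up head-to-tail, favouring chain formation.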

The microrobots were able to ascend and descend stairs, and navigate obstacles including a 3-mm high railing, a 3-mm diameter cylinder and a column array. The reconfigurable PMDM chains could also adapt to confined narrow spaces by disassembling into shorter fragments and overcome tall obstacles by merging into longer chains.

Towards biomedical applications

By tailoring the hydrogel composition, the researchers showed that the microrobots could deliver different types of cargo with controlled dosage. PMDMs made from rigid polyethylene glycol diacrylate (PEGDA) could deliver fluorescent microspheres, for example, while soft alginate/gelatin hydrogels can be used for cell delivery.

PMDM chains also successfully transported human mesenchymal stem cell (hMSC)-laden Matrigel without compromising cell viability, highlighting their potential to deliver cells to specific sites for in vivo disease treatment.

To evaluate intestinal targeting, the researchers delivered PMDMs to ex vivo porcine intestine. Once inside, the microrobots assembled into chains and exhibited effective locomotion on the intestine surface. Importantly, the viscous and unstructured tissue surface did not affect chain assembly or motion. After navigation to the target site, exposing the PMDMs to the enzyme collagenase instigated controlled cargo release. Even after full degradation of the hydrogel phase, the chains retained integrity and locomotion capabilities.

The team also demonstrated programmable release of different cargoes, using hybrid chains containing rigid PEGDA segments and degradable alginate/gelatin segments. Upon exposure to collagenase, the cargo from the degradable domains exhibited burst release, while the slower degradation of PEGDA delayed the release of cargo in the PEGDA segments.

Biological environment Delivery of preassembled PMDM chains into a printed human cartilage model. The procedure consists of injections and assembly, locomotion, drug release and retrieval of PMDMs. Scale bars: 5 mm. (Courtesy: CC BY 4.0/Sci. Adv. 10.1126/sciadv.adw3172)

In another potential clinical application, the researchers delivered microrobots to 3D-printed human cartilage with an injury site. This involved catheter-based injection of PMDMs followed by application of an oscillating magnetic field to assemble the PMDMs into a chain. The chains could be navigated by external magnetic fields to the targeted injury site, where the hydrogel degraded and released the drug cargo.

After drug delivery, the team guided the microrobots back to the initial injection site and retrieved them using a magnetic catheter. This feature offers a major advantage over traditional microrobots, which often struggle to retrieve magnetic particles after cargo release, potentially triggering immune responses, tissue damage or other side effects.

“For microrobots to be clinically viable, they must not only perform their intended functions effectively but also do so safely,” explains co-first author Yuanxiong Cao from the University of Oxford. “The ability to retrieve the PMDM chains after they completed the intended therapeutic delivery enhances the biosafety of the system.”

Cao adds that while the focus for the intestine model was to demonstrate navigation and localized delivery, the precise control achieved over the microrobots suggests that “extraction is also feasible in this and other biomedically relevant environments”.

Predicting PMDM performance

Alongside the experiments, the team developed a computational platform, built using molecular dynamics simulations, to provide further insight into the collective behaviour of the PMDMs.

“The computational model was instrumental in predicting how individual microrobot units would self-assemble and respond to dynamic magnetic fields,” says Philipp Schoenhoefer, co-first author from the University of Michigan. “This allowed us to understand and optimize the magnetic interactions between the particles and anticipate how the robots would behave under specific actuation protocols.”

The researchers are now using these simulations to design more advanced microrobot structures with enhanced multifunctionality and mechanical resilience. “The next-generation designs aim to handle the more challenging in vivo conditions, such as high fluid shear and irregular tissue architectures,” Sharon Glotzer from the University of Michigan, simulation lead for the project, tells Physics World.

The microrobots are described in Science Advances.

The post Magnetically controlled microrobots show promise for precision drug delivery appeared first on Physics World.


Entangled expressions: where quantum science and art come together

What happens when you put a visual artist in the middle of a quantum physics lab? This month’s Physics World Stories podcast explores that very question, as host Andrew Glester dives into the artist-in-residence programme at the Yale Quantum Institute in the US.

Serena Scapagnini, 2025. (Credit: Filippo Silvestris)

Each year, the institute welcomes an artist to explore the intersections of art and quantum science, bridging the ever-fuzzy boundary between the humanities and the sciences. You will hear from the current artist-in-residence Serena Scapagnini, a visual artist and art historian from Italy. At Yale, she’s exploring the nature of memory, both human and quantum, through her multidisciplinary projects.

You’ll also hear from Florian Carle, managing director of the institute and the co-ordinator of the residency. Once a rocket scientist, Carle has always held a love of theatre and the arts alongside his scientific work. He believes art–science collaborations open new possibilities for engaging with quantum ideas, and that includes music – which you’ll hear in the episode.

Discover more about quantum art and science in the free-to-read Physics World Quantum Briefing 2025

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Entangled expressions: where quantum science and art come together appeared first on Physics World.


Exographer: a scientific odyssey in pixel form

In an era where video games often prioritize fast-paced action and instant gratification, Exographer offers a refreshing change. A contemplative journey that intertwines particle physics and interactive storytelling, this beautifully pixelated game invites players to explore a decaying alien civilization through the lens of scientific discovery, while testing both their dexterity and their intellect.

Exographer was developed by particle physicist and science-fiction author Raphaël Granier de Cassagnac and his video-game studio SciFunGames. At its core, it is a puzzle-platformer – where the player’s character has to move around an environment using platforms while solving puzzles. The character in question is Ini, an alien explorer who discovers a multifunctional camera in the opening scenes of the game’s narrative. Stranded on a seemingly deserted planet, Ini is tasked with unlocking the mystery of the world’s fallen civilization.

The camera quickly becomes central to gameplay, allowing for environmental analysis, teleportation to previously visited locations and, most intriguingly, the discovery of subatomic particles through puzzles inspired by Feynman diagrams. These challenges require players to match particle trajectories using various analytical tools, mirroring the investigative processes of real-world physicists. ​

It is in these puzzles that the particle physics really shines through. Beamlines have to be tracked and redirected to reveal more about the particles that make up this strange land and, with that, to expand Ini’s abilities to understand the world.

As you crack one puzzle, a door opens and off you pootle to another blockage or locked door. Players will doubtless, as I did, find themselves wandering around areas pondering how to proceed. A tip for those a little stuck: use the camera wherever a background seems a little different. In most circumstances, clues and cues will be waiting there.

Pixels and particles

The game’s environments are meticulously crafted, drawing inspiration from actual laboratories and observatories. I played the game on Nintendo Switch, but it is also available on several other platforms – including PS5, Xbox and Steam – and it looks pretty much identical on each. The pixel art style is not merely a visual choice but a thematic one, symbolizing elementary particles as the fundamental “pixels” of the universe. As players delve deeper, they encounter representations of particles including electrons, gluons and muons, each unlocking new abilities that alter gameplay and exploration.

Meanwhile, the character of Ini moves in a smooth and – for those gamers among us with a love of physics – realistic way. There is even a hint of lighter gravity as you hold down the button to activate a longer jump.

Game with depth An undersea puzzle in Exographer features a KM3NeT-inspired neutrino observatory. (Courtesy: SciFunGames)

What sets Exographer apart is its ability to educate without compromising entertainment. The integration of scientific concepts is seamless, offering players a glimpse into the world of particle physics without overwhelming them. However, it’s worth noting that some puzzles may present a steep learning curve, potentially posing challenges for those less familiar with scientific reasoning.

Complementing the game’s visual and intellectual appeal is its atmospheric soundtrack, composed by Yann Van Der Cruyssen, known for his work on the game Stray. As with Stray – where you take the role of a stray cat with a backpack – the music enhances the sense of wonder and discovery, underscoring the game’s themes of exploration and scientific inquiry. ​

Exographer is more than just a game; it’s an experience that bridges the gap between science and (pixelated) art. It challenges players to think critically, to explore patiently, and to appreciate the intricate beauty of the universe’s building blocks. For those willing to engage with its depth, Exographer offers a rewarding journey that lingers after the console is turned off.

The post Exographer: a scientific odyssey in pixel form appeared first on Physics World.


Scientists image excitons in carbon nanotubes for the first time

Researchers in Japan have directly visualized the formation and evolution of quasiparticles known as excitons in carbon nanotubes for the first time. The work could aid the development of nanotube-based nanoelectronic and nanophotonic devices.

Carbon nanotubes (CNTs) are rolled-up sheets of carbon atoms arranged in a hexagonal lattice just one atom thick. When exposed to light, they generate excitons, which are bound pairs of negatively charged electrons and positively charged “holes”. The behaviour of these excitons governs processes such as light absorption, emission and charge carrier transport that are crucial for CNT-based devices. However, because excitons are confined to extremely small regions in space and exist for only tens of femtoseconds (fs) before annihilating, they are very difficult to observe directly with conventional imaging techniques.

Ultrafast and highly sensitive

In the new work, a team led by Jun Nishida and Takashi Kumagai at the Institute for Molecular Science (IMS)/SOKENDAI, together with colleagues at the University of Tokyo and RIKEN, developed a technique for imaging excitons in CNTs. Known as ultrafast infrared scattering-type scanning near-field optical microscopy (IR s-SNOM), it first illuminates the CNTs with a short visible laser pulse to create excitons and then uses a time-delayed mid-infrared pulse to probe how these excitons behave.

“By scanning a sharp gold-coated atomic force microscope (AFM) tip across the surface and detecting the scattered infrared signal with high sensitivity, we can measure local changes in the optical response of the CNTs with 130-nm spatial resolution and around 150-fs precision,” explains Kumagai. “These changes correspond to where and how excitons are formed and annihilated.”
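To make the measurement scheme concrete, here is a minimal sketch in Python – not the IMS team’s code, and with invented illustrative numbers – of how such a pump–probe near-field scan is organized: raster the tip across the sample and, at each pixel, step the pump–probe delay and record the scattered signal.

```python
import numpy as np

def scattered_signal(x_nm, y_nm, delay_fs):
    """Stand-in for the detected near-field response: a localized exciton
    population (Gaussian spot) decaying on a ~100 fs timescale."""
    population = np.exp(-((x_nm - 200)**2 + (y_nm - 150)**2) / (2 * 130**2))
    return population * np.exp(-max(delay_fs, 0) / 100.0)

positions = [(x, y) for x in range(0, 400, 130) for y in range(0, 400, 130)]
delays_fs = np.arange(-100, 500, 150)  # ~150 fs steps

# Each entry maps a pixel (x, y) to its signal-versus-delay trace, i.e.
# where excitons form and how quickly they decay.
data = {pos: [scattered_signal(*pos, d) for d in delays_fs] for pos in positions}
print(len(data), "pixels,", len(delays_fs), "delay points per pixel")
```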

According to the researchers, the main challenge was to develop a measurement that was ultrafast and highly sensitive while also having a spatial resolution high enough to detect a signal from as few as around 10 excitons. “This required not only technical innovations in the pump-probe scheme in IR s-SNOM, but also a theoretical framework to interpret the near-field response from such small systems,” Kumagai says.

The measurements reveal that local strain and interactions between CNTs (especially in complex, bundled nanotube structures) govern how excitons are created and annihilated. Being able to visualize this behaviour in real time and real space makes the new technique a “powerful platform” for investigating ultrafast quantum dynamics at the nanoscale, Kumagai says. It also has applications in device engineering: “The ability to map where excitons are created and how they move and decay in real devices could lead to better design of CNT-based photonic and electronic systems, such as quantum light sources, photodetectors, or energy-harvesting materials,” Kumagai tells Physics World.

Extending to other low-dimensional systems

Kumagai thinks the team’s approach could be extended to other low-dimensional systems, enabling insights into local dynamics that have previously been inaccessible. Indeed, the researchers now plan to apply their technique to other 1D and 2D materials (such as semiconducting nanowires or transition metal dichalcogenides) and to explore how external stimuli like strain, doping, or electric fields affect local exciton dynamics.

“We are also working on enhancing the spatial resolution and sensitivity further, possibly toward single-exciton detection,” Kumagai says. “Ultimately, we aim to combine this capability with in operando device measurements to directly observe nanoscale exciton behaviour under realistic operating conditions.”

The technique is detailed in Science Advances.

The post Scientists image excitons in carbon nanotubes for the first time appeared first on Physics World.


A new path to robust edge states using heat and disorder

Topological insulators are materials that behave as insulators in their interior but support the flow of electrons along their edges or surfaces. These edge states are protected against weak disorder, such as impurities, but can be disrupted by strong disorder. Recently, researchers have explored a new class of materials known as topological Anderson insulators. In these systems, strong disorder leads to Anderson localization, which prevents wave propagation in the bulk while still allowing robust edge conduction.

The Fermi energy is the highest energy an electron can have in a material at absolute zero temperature. If the Fermi energy lies in a conductive region, the material will conduct; if it lies in a ‘gap’, the material will be insulating. In a conventional topological insulator, the Fermi energy sits within the band gap.

In topological Anderson insulators, it sits within the mobility gap rather than the conventional band gap, making the edge states highly stable. Electrons can exist in the mobility gap (unlike in the band gap), but they are localized and trapped. Until now, the transition from a topological insulator to a topological Anderson insulator has only been achieved through structural modifications, which limits the ability to tune the material’s properties.

In this study, the authors present both theoretical and experimental evidence that this phase transition can be induced by applying heat. Heating introduces energy exchange with the environment, making the system non-Hermitian. This approach provides a new way to control the topological state of a material without altering its structure. Further heating prompts a second phase transition, from a topological Anderson insulator to an Anderson insulator, where all electronic states become localized, and the material becomes fully insulating with no edge conduction.

This research deepens our understanding of how disorder influences topological phases and introduces a novel method for engineering and tuning these phases using thermal effects. It also provides a powerful tool for modulating electron conductivity through a simple, non-invasive technique.

Read the full article

Topological Anderson phases in heat transport

He Gao et al 2024 Rep. Prog. Phys. 87 090501

Do you want to learn more about this topic?

Interacting topological insulators: a review by Stephan Rachel (2018)

The post A new path to robust edge states using heat and disorder appeared first on Physics World.


Another win for lepton flavour universality

Lepton flavour universality is a principle in particle physics concerning how the three lepton flavours (electrons, muons and taus) interact with the fundamental forces of nature. Apart from effects due to the particles’ different masses, these interactions should be identical.

This idea is a crucial testable prediction of the Standard Model and any deviations might suggest new physics beyond it.

Although experiments have generally supported this principle, some recent results have shown tensions with its predictions.

The CMS collaboration at CERN therefore set out to analyse data from proton–proton collisions, using a special high-rate data stream designed to collect around 10 billion b-hadron decays.

They looked for signs of charged B mesons (bound states of a bottom antiquark and an up quark, or their antiparticles) decaying into a charged kaon plus an electron–positron or muon–antimuon pair.

If lepton flavour universality holds, the rates of these two decays should be almost equal.
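In the literature this comparison is usually phrased as a ratio of branching fractions (the standard definition, not spelled out in this summary):

$$R_K = \frac{\mathcal{B}(B^{\pm}\to K^{\pm}\mu^{+}\mu^{-})}{\mathcal{B}(B^{\pm}\to K^{\pm}e^{+}e^{-})}$$

with lepton flavour universality predicting $R_K \simeq 1$ up to small lepton-mass effects.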

The authors found exactly that. To within their experimental uncertainty, there was no evidence of one decay being more likely than the other.

These results provide further support for this principle and suggest that different avenues ought to be studied to seek physics beyond the Standard Model.

Read the full article

Test of lepton flavor universality in B± → K±μ+μ− and B± → K±e+e− decays in proton–proton collisions at √s = 13 TeV

CMS Collaboration 2024 Rep. Prog. Phys. 87 077802

The post Another win for lepton flavour universality appeared first on Physics World.


Tensions rise between US administration and science agencies

Stormy times Hundreds of staff at the National Science Foundation marked the agency’s 75th birthday in May with a group photo. (CC BY SA 4.0 Matthew Herron)

A total of 139 employees at the US Environmental Protection Agency (EPA) have been suspended after signing a “declaration of dissent” accusing Donald Trump’s administration of “undermining” the agency’s mission. The letter, dated 1 July, stated that the signatories “stand together against the current administration’s focus on harmful deregulation, mischaracterization of previous EPA activities, and disregard for scientific expertise”.

Addressed to EPA administrator Lee Zeldin, the letter was signed by more than 400 EPA workers, of whom 170 put their names to the document, with the rest choosing to remain anonymous. Zeldin suspended the employees on 3 July, with EPA officials telling them to provide contact information so the agency could be in touch with them while they are on leave.

Copied to leaders of the US Senate and House of Representatives, the letter was organized by the Stand Up For Science pressure group. The letter states that “EPA employees join in solidarity with employees across the Federal government in opposing this administration’s policies, including those that undermine the EPA mission of protecting human health and the environment.”

The document lists five “primary concerns”, including scientific consensus being ignored to benefit polluters, and public trust being undermined as EPA workers are diverted from protecting public health and the environment through objective, science-based policy.

The letter adds that the EPA’s progress in the US’s most vulnerable communities is being reversed through the cancellation of environmental justice programmes, while budget cuts to the Office of Research and Development, which helps support the agency’s rules on environmental protection and human health, mean it cannot meet the EPA’s science needs. The letter also points to a culture of fear at the EPA, with staff being forced to choose between their livelihood and well-being.

In response to the letter, Zeldin said he had a “ZERO tolerance policy for agency bureaucrats unlawfully undermining, sabotaging and undercutting the agenda of this administration”. An EPA statement, sent to Physics World, notes that the letter “contains information that misleads the public about agency business”, adding that the letter’s signatories “represent a small fraction of the thousands of [agency] employees”. On 18 July Zeldin then announced a plan to eliminate the EPA’s Office of Research and Development, which could lead to more than 1000 agency scientists being sacked.

Climate concerns

In late July, more than 280 NASA employees signed a similar declaration of dissent protesting against staff cuts at the agency as well as calling on the acting head of NASA not to make the budget cuts Trump proposed. Another example of the tension in US science took place in May when hundreds of staff from the National Science Foundation (NSF) gathered in front of NSF headquarters for a photo marking the agency’s 75th birthday. NSF officials, who had been criticized for seeking to cut the agency’s budget and staff, and slash the proportion of scientific grants’ costs allowed for ancillary expenses, refused to support the event with an official photographer.

Staff then used their own photographer, but they could only take a shot from a public space at the side of the building. In late June, the administration announced that the NSF will have to quit the building, which it has occupied since 2017. No new location for the headquarters has been announced, with NSF spokesperson Michelle Negrón declining to comment on the issue. The new tenant will be the Department of Housing and Urban Development.

The Department of Energy, meanwhile, has announced that it will hire three scientists who have expressed doubts about the scientific consensus on climate change – although details of the trio’s job descriptions remain unknown. They are Steven Koonin, a physicist at Stanford University’s Hoover Institution, along with atmospheric scientist John Christy, director of the Earth System Science Center at the University of Alabama in Huntsville, and Alabama meteorologist Roy Spencer.

The appointments come as the administration is taking steps to de-emphasize government research on climate and weather science. The proposed budget for financial year 2026 would close 10 labs belonging to the National Oceanic and Atmospheric Administration (NOAA). The NOAA’s National Weather Service has already lost 600 of its 4200 employees this year, while NASA has announced that it will no longer host the National Climate Assessment website globalchange.gov.

The post Tensions rise between US administration and science agencies appeared first on Physics World.


Making science go boom: Big Manny’s outreach journey

When lockdown hit, school lab technician Emanuel Wallace started posting videos of home science experiments on social media. Now, as Big Manny, he’s got over three million followers on Instagram and TikTok; won TikTok’s Education Creator of the Year 2024; and has created videos with celebrities like Prince William and Brian Cox. Taking his science communication beyond social media, he’s been on CBBC’s Blue Peter and Horrible Science; has made TV appearances on shows like This Morning and BBC Breakfast; and has even given talks at Buckingham Palace and the Houses of Parliament.

But he’s not stopped there. Wallace has also recently published a second book in his Science is Lit series, Awesome Electricity and Mad Magnets, which is filled with physics experiments that children can do at home. He talks to Sarah Tesh about becoming the new face of science communication, and where he hopes this whirlwind journey will go next.

Making science fun Big Manny (right) on ITV show This Morning with host Alison Hammond and Paddy McGuinness. (Courtesy: Ken McKay/ITV/Shutterstock)

What sparked your interest in science?

I’ve always been really curious. Ever since I was young, I had a lot of questions. I would, for example, open up my toys just so I could see what was inside and how they worked. Then when I was in year 8 I had a science teacher called Mr Carter, and in every lesson he was doing experiments, like exciting Bunsen burner ones. I would say that’s what ignited my passion for science. And naturally, I just gravitated towards science because it answered all the questions that I had.

Growing up, what were the kind of science shows that you were really interested in?

When I was about 11 the show that I used to love was How It’s Made. And there’s a science creator called NileRed – he creates chemistry videos, and he inspired me a lot. I used to watch him when I was growing up and then I actually got to meet him as well. He’s from Canada so when he came over, he came to my house and we did some experiments. To be inspired by him and then to do experiments with him, that was brilliant. I also used to watch a lot of Brian Cox when I was younger, and David Attenborough – I still watch Attenborough’s shows now.

You worked in a school for a while after your degrees at the University of East London – what made you go down that path rather than, say, staying in academia or going into industry?

Well, my bachelor’s and master’s degrees are in biomedical science, and my aspiration was to become a biomedical scientist working in a hospital lab, analysing patient samples. When I came out of university, I thought that working as a science technician at a school would be a great stepping stone to working as a biomedical scientist because I needed to gain some experience within a lab setting. So, the school lab was my entry point, then I was going to go into a hospital lab, and then work as a biomedical scientist.

Sparking interest Big Manny has now written his own series of children’s science books. (Courtesy: Penguin Books)

But my plans have changed a bit now. To become a registered biomedical scientist you need to do nine months in a hospital lab, and at the moment, I’m not sure if I can afford to take nine months off from my work doing content creation. I do still want to do it, but maybe in the future, who knows.

What prompted you to start making the videos on social media?

When I was working in schools, it was around the time of lockdown. There were school closures, so students were missing out on a lot of science – and science is a subject where to gain a full understanding, you can’t just read the textbook. You need to actually do the experiments so you can see the reactions in front of you, because then you’ll be more likely to retain the information.

I started to notice that students were struggling because of all the science that they had missed out on. They were doing a lot of Google classrooms and Zoom lessons, but it just wasn’t having the full impact. That’s when I took it upon myself to create science demonstration videos to help students catch up with everything they’d missed. Then the videos started to take off.

How do you come up with the experiments you feature in your videos?  If you’re hoping to help students, do you follow the school curriculum?

I would say right now there’s probably three main types of videos that I make. The first includes experiments that pertain to the national curriculum – the experiments that might come up in, say, the GCSE exams. I focus on those because that’s what’s going to be most beneficial to young people.

Secondly, I just do fun experiments. I might blow up some fruit or use fire or blow up a hydrogen balloon. Just something fun and visually engaging, something to get people excited and show them the power of science.

And then the third type of video that I make is where I’m trying to promote a certain message. For example, I did a video where I opened up a lithium battery, put it into water and we got an explosion, because I wanted to show people the dangers of not disposing of batteries correctly. I did another one where I showed people the effects of vaping on the lungs, and one where I melted down a knife and I turned it into a heart to persuade people to put down their knives and spread love instead.

Who would you say is your primary audience?

Well, I would say that my audience is quite broad. I get all ages watching my videos on social media, while my books are focused towards primary school children, aged 8 to 12 years. But I’ve noticed that those children’s parents are also interested in the experiments, and they might be in their 30s. So it’s quite a wide age range, and I try to cater for everyone.

In your videos, which of the sciences would you say is the easiest to demonstrate and which is the hardest?

I’d say that chemistry is definitely the easiest and most exciting because I can work with all the different elements and show how they react and interact with each other. I find that biology can sometimes be a bit tricky to demonstrate because, for example, a lot of biology involves the human body – things like organ systems, the circulatory system and the nervous system are all inside the body, while cells are so small we can’t really see them. But there’s a lot that I can do with physics because there’s forces, electricity, sound and light. So I would say chemistry is the easiest, then physics, and then biology is the hardest.

Do you have a favourite physics experiment that you do?

I would say my favourite physics experiment is the one with the Van de Graaff generator. I love that one – how the static electricity makes your hair stand up and then you get a little electric shock, and you can see the little electric sparks.

You’re becoming a big name in science communication – what does an average day look like for you now?

On an average day, I’m doing content creation. I will research some ideas, find some potential experiments that I might want to try. Then after that I will look at buying the chemicals and equipment that I need. From there, I’ll probably do some filming, which I normally just do in my garden. Straight after, I will edit all the clips together, add the voiceover, and put out the content on social media. One video can easily take the whole day – say about six or seven hours – especially if the experiment doesn’t go as planned and I need to tweak the method or pop out and get extra supplies.

In your videos you have a load of equipment and chemicals. Have you built up quite a laboratory of kit in your house now?

Yeah, I’ve got a lot of equipment. And some of it is restricted too, like there’s some heavily regulated substances. I had to apply for a licence to obtain certain chemicals because they can be used to make explosives, so I had to get clearance.

What are you hoping to achieve with your work?

I’ve got two main goals at the moment. One of them is bringing science to a live audience. Most people, they just see my content online, but I feel like if they see it in person and they see the experiments live, it could have an even bigger impact. I could excite even more people with science and get them interested. So that’s one thing that I’m focusing on at the moment, getting some live science events going.

I also want to do some longer-form videos because my current ones are quite short – they’re normally about a minute long. I realize that everyone learns in different ways. Some people like those short, bite-sized videos because they can gain a lot of information in a short space of time. But some people like a bit more detail – they like a more lengthy video where you flesh out scientific concepts. So that’s something that I would like to do in the form of a TV science show where I can present the science in more detail.

The post Making science go boom: Big Manny’s outreach journey appeared first on Physics World.


Spin-qubit control circuit stays cool

Researchers in Australia say that they have created the first CMOS chip that can control the operation of multiple spin qubits at ultralow temperatures. Through an advanced approach to generating the voltage pulses needed to control the qubits, a team led by David Reilly at the University of Sydney showed that control circuits can be integrated with qubits in a heterogeneous chip architecture. The design is a promising step towards a scalable platform for quantum computing.

Before practical quantum computers can become a reality, scientists and engineers must work out how to integrate large numbers (potentially millions) of qubits together – while preserving the quantum information as it is processed and exchanged. This is currently very difficult because the quantum nature of qubits (called coherence) tends to be destroyed rapidly by heat and other environmental noise.

One potential candidate for integration is the silicon spin qubit, whose advantages include tiny size, relatively long coherence times and compatibility with large-scale electronic control circuits.

To operate effectively, however, these systems need to be cooled to ultralow temperatures. “A decade or more ago we realized that developing cryogenic electronics would be essential to scaling-up quantum computers,” Reilly explains. “It has taken many design iterations and prototype chips to develop an approach to custom silicon that operates at 100 mK using only a few microwatts of power.”

Heat and noise

When integrating multiple spin qubits onto the same platform, each of them must be controlled and measured individually using integrated electronic circuits. These control systems not only generate heat, but also introduce electrical noise – both of which are especially destructive to quantum logic gates based on entanglement between pairs of qubits.

Recently, researchers have addressed this challenge by separating the hot, noisy control circuits from the delicate qubits they control. However, when the two systems are separated, long cables are needed to connect each qubit individually to the control system. This creates a dense network of interconnects that would prove extremely difficult and costly to scale up to connect millions of qubits.

For over a decade, Reilly’s team have worked towards a solution to this control problem. Now, they have shown that the voltage pulses needed to control spin qubits can be generated directly on a CMOS chip by moving small amounts of charge between closely spaced capacitors. This is effective at ultralow temperatures, allowing the on-board control of qubits.
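The basic principle – sketched generically here; the Sydney team’s circuit is of course more sophisticated – is charge conservation between capacitors. Connecting a capacitor $C_1$ charged to $V_1$ across a second capacitor $C_2$ held at $V_2$ leaves both at

$$V_{\mathrm{out}} = \frac{C_1 V_1 + C_2 V_2}{C_1 + C_2}$$

so shuttling small, well-defined packets of charge steps the output voltage in precise increments while dissipating very little power.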

CMOS chiplet

“We control spin qubits using a tightly integrated CMOS chiplet, addressing the interconnect bottleneck challenge that arises when the control is not integrated with qubits,” Reilly explains. “Via careful design, we show that the qubits hardly notice the switching of 100,000 transistors right next door.”

The result is a two-part chip architecture that, in principle, could host millions of silicon spin qubits. As a benchmark, Reilly’s team created two-qubit entangling gates on their chip. When they cooled the chip to the millikelvin temperatures required by the qubits, its control circuits carried out the operation just as flawlessly as previous systems with separated control circuits.

While the architecture is still some way from integrating millions of qubits onto the same chip, the team believes that this goal is a step closer.

“This work now opens a path to scaling up spin qubits since control systems can now be tightly integrated,” Reilly says. “The complexity of the control platform has previously been a major barrier to reaching the scale where these machines can be used to solve interesting real-world problems.”

The research is described in Nature.

The post Spin-qubit control circuit stays cool appeared first on Physics World.


A cosmic void may help resolve the Hubble tension

A large, low density region of space surrounding the Milky Way may explain one of the most puzzling discrepancies in modern cosmology. Known as the Hubble tension, the issue arises from conflicting measurements of how fast the universe is expanding. Now, a new study suggests that the presence of a local cosmic void could explain this mismatch, and significantly improves agreement with observations compared to the Standard Model of cosmology.

“Numerically, the local measurements of the expansion rate are 8% higher than expected from the early universe, which amounts to over six times the measurement uncertainty,” says Indranil Banik, a cosmologist at the University of Portsmouth and a collaborator on the study. “It is by far the most serious issue facing cosmology.”

The Hubble constant describes how fast the universe is expanding and it can be estimated in two main ways. One method involves looking far into the past by observing the cosmic microwave background (CMB). This is radiation that was created shortly after the Big Bang and permeates the universe to this day. The other method relies on the observation of relatively nearby objects, such as supernovae and galaxies, to measure how fast space is expanding in our own cosmic neighbourhood.
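For concreteness, the commonly quoted values (not given in the article) are roughly $67.4~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ from the CMB and $73~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ from local measurements, which reproduces the 8% gap Banik describes:

$$\frac{73.0 - 67.4}{67.4} \approx 0.08$$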

If the Standard Model of cosmology is correct, these two approaches should yield the same result. But they do not: local measurements suggest the universe is expanding faster than early-universe data imply, and the disagreement is too large to dismiss as experimental error.

Local skewing

One possible explanation is that something about our local environment is skewing the results. “The idea is that we are in a region of the universe that is about 20% less dense than average out to a distance of about one billion light years,” Banik explains. “There is actually a lot of evidence for a local void from number counts of various kinds of sources across nearly the whole electromagnetic spectrum, from radio to X-rays.”

Such a void would subtly affect how we interpret the redshifts of galaxies. This is the stretching of the wavelength of galactic light that reveals how quickly a galaxy is receding from us. In an underdense (relatively low-density) region, galaxies are effectively pulled outward by the gravity of surrounding denser areas. This motion adds to the redshift caused by the universe’s overall expansion, making the local expansion rate appear faster than it actually is.
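Schematically – a minimal illustration, not the paper’s full modelling – a galaxy at distance $d$ with an extra outward peculiar velocity $v_{\mathrm{pec}}$ has an observed recession speed $cz \approx H_0 d + v_{\mathrm{pec}}$, so the locally inferred expansion rate is

$$H_0^{\mathrm{inferred}} = \frac{cz}{d} \approx H_0 + \frac{v_{\mathrm{pec}}}{d}$$

which is biased high whenever the void drives a coherent outflow ($v_{\mathrm{pec}} > 0$).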

“The origin of such a [void] would trace back to a modest underdensity in the early universe, believed to have arisen from quantum fluctuations in density when the universe was extremely young and dense,” says Banik. However, he adds, “A void as large and deep as observed is not consistent with the standard cosmological model. You would need structure to grow faster than it predicts on scales larger than about one hundred million light-years”.

Testing the theory

To evaluate whether the void model holds up against data, Banik and his collaborator Vasileios Kalaitzidis at the UK’s University of St Andrews compared it with one of cosmology’s most precise measurement tools: baryon acoustic oscillations (BAOs). These are subtle ripples in the distribution of galaxies that were created by sound waves in the early universe and then frozen into the large-scale structure of space as it cooled.

Because these ripples provide a characteristic distance scale, they can be used as a “standard ruler” to track how the universe has expanded over time. By comparing the apparent size of this ruler as observed at different distances, cosmologists can map the universe’s expansion history. Crucially, if our galaxy lies inside a void, that would alter how the ruler appears locally, in a way that can be tested.
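A minimal sketch of the standard-ruler logic, assuming the commonly quoted ~150 Mpc sound horizon and a hypothetical measured angle:

```python
import math

# Standard-ruler logic: a known physical length subtending a measured angle
# gives a distance via the small-angle approximation. The sound horizon is
# the commonly quoted ~150 Mpc figure; the angle here is hypothetical.
r_sound = 150.0                 # Mpc, approximate comoving sound horizon
theta_obs = math.radians(4.0)   # hypothetical observed BAO angular scale

distance = r_sound / theta_obs
print(f"inferred distance: {distance:.0f} Mpc")  # ~2150 Mpc
# Repeating this at many redshifts maps out the expansion history.
```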

The researchers compared the predictions of their model with twenty years of BAO observations, and the results are striking. “BAO observations over the last twenty years show the void model is about one hundred million times more likely than the Standard Model of cosmology without any local void,” says Banik. “Importantly, the parameters of all these models were fixed without considering BAO data, so we were really just testing the predictions of each model.”
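The figure of “one hundred million times” is easier to interpret if it is read as a likelihood ratio; here is a rough translation into fit quality, assuming Gaussian statistics (the paper’s actual statistical treatment may differ):

```python
import math

# If "one hundred million times more likely" is read as a likelihood ratio K,
# the equivalent fit improvement is delta(chi^2) = 2 ln K under Gaussian stats.
K = 1e8
delta_chi2 = 2 * math.log(K)
print(f"delta chi^2 ~ {delta_chi2:.1f}")                         # ~36.8
print(f"~{math.sqrt(delta_chi2):.1f} sigma for one parameter")   # ~6.1
```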

What lies ahead

While the void model appears promising, Banik says that more data are needed. “Additional BAO observations at relatively short distances would help a lot because that is where a local void would have the greatest impact.” Other promising avenues include measuring galaxy velocities and refining galaxy number counts. “I would suggest that it can be essentially confirmed in the next five to ten years, since we are talking about the nearby universe after all.”

Banik is also analysing supernovae data to explore whether the Hubble tension disappears at greater distances. “We are testing if the Hubble tension vanishes in the high-redshift or more distant universe, since a local void would not have much effect that far out,” he says.

Despite the challenges, Banik remains optimistic. With improved surveys and more refined models, cosmologists may be closing in on a solution to the Hubble tension.

The research is described in Monthly Notices of the Royal Astronomical Society.

The post A cosmic void may help resolve the Hubble tension appeared first on Physics World.

  •  

Lee Packer: ‘There’s no fundamental physical reason why fusion energy won’t work’

The Cockcroft-Walton lecture series is a bilateral exchange between the Institute of Physics (IOP) and the Indian Physics Association (IPA). Running since 1998, it aims to promote dialogue on global challenges through physics.

Lee Packer, who has over 25 years of experience in nuclear science and technology and is an IOP Fellow, delivered the 2025 Cockcroft-Walton Lecture Series in April. Packer gave a series of lectures at the Bhabha Atomic Research Centre (BARC) in Mumbai, the Institute for Plasma Research (IPR) in Ahmedabad and the Inter-University Accelerator Centre (IUAC) in Delhi.

Packer is a fellow of the UK Atomic Energy Authority (UKAEA), where he works on nuclear aspects of fusion technology. He also works as a consultant to the International Atomic Energy Agency (IAEA) in Vienna, where he is based in the physics section of the department of nuclear sciences and applications.

Packer also holds an honorary professorship at the University of Birmingham, UK, where he lectures on nuclear fusion as part of its long-running MSc course in the physics and technology of nuclear reactors.

Below, Packer talks to Physics World about the trip, his career in fusion and what advice he has for early-career researchers.

When did you first become interested in physics?

I was fortunate to have some inspiring teachers at school who made physics feel both exciting and full of possibility. It really brought home how important teachers are in shaping future careers and they deserve far more recognition than they often receive. I went on to study physics at Salford University and during that time spent a year on industrial placement at the ISIS Neutron and Muon Source based at the Rutherford Appleton Laboratory (RAL). That year deepened my interest in applied nuclear science and highlighted the immense value of neutrons across real-world applications – from materials research and medicine to nuclear energy.

Can you tell me about your career to date?

I’ve specialized in applied nuclear science throughout my career, with a particular focus on neutronics – the analysis of neutron transport – and radiation detection applied to nuclear technologies. Over the past 25 years, I’ve worked across the nuclear sector – in spallation, fission and fusion – beginning in analytical and research roles before progressing to lead technical teams supporting a broad range of nuclear programmes.

When did you start working in fusion?

While I began my career in spallation and fission, the expertise I developed in neutronics made it a natural transition into fusion in 2008. It’s important to recognize that deuterium-tritium fuelled fusion power is a neutron-rich energy source – in fact, 80% of the energy released comes from neutrons. That means every aspect of fusion technology must be developed with the nuclear environment firmly in mind.
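For reference, the 80% figure follows directly from the energy split of the D–T reaction, in which the neutron carries 14.1 MeV of the 17.6 MeV released per fusion event:

```latex
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5~\mathrm{MeV}) + \mathrm{n}\,(14.1~\mathrm{MeV}),
\qquad \frac{14.1~\mathrm{MeV}}{17.6~\mathrm{MeV}} \approx 0.80
```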

What do you like about working in fusion energy?

Fusion is an inherently interdisciplinary challenge and there are many interesting and difficult problems to solve, which can make it both stimulating and rewarding. There’s also a strong and somewhat refreshing international spirit in fusion – the hard challenges mean collaboration is essential. I also like working with early-career scientists and engineers to share knowledge and experience. Mentoring and teaching is rewarding, and it’s crucial that we continue building the pipelines of talent needed for fusion to succeed.

Tell me about your trip to India to deliver the Cockcroft-Walton lecture series.

I was honoured to be selected to deliver the Cockcroft-Walton lecture series. Titled “Perspectives and challenges within the development of nuclear fusion energy”, the lectures explored the current global landscape of fusion R&D, technical challenges in areas such as neutronics and tritium breeding, and the importance of international collaboration. I shared some insights from activities within the UK and gave a global perspective. The reception was very positive – there’s strong enthusiasm within the Indian fusion community and they are making excellent contributions to global progress in fusion. The hosts were extremely welcoming, and I’d like to thank them for their hospitality and the fascinating technical tours at each of the institutes. It was an experience I won’t forget.

What are India’s strengths in fusion?

India has several strengths including a well-established technical community, major national laboratories such as IPR, IUAC and BARC, and significant experience in fusion through its domestic programme and direct involvement in ITER as one of the seven member states. There is strong expertise in areas such as nuclear physics, neutronics, materials, diagnostics and plasma physics.

Lee Packer meeting officials at BARC
Meeting points: Lee Packer meeting senior officials at the Bhabha Atomic Research Centre in Mumbai. (Courtesy: Indian Physics Association)

What could India improve?

Where India might improve could be in building further on its amazing potential – particularly its broader industrial capacity and developing its roadmap towards power plants. Common to all countries pursuing fusion, sustained investment in training and developing talented people will be key to long-term success.

When do you think we will see the first fusion reactor supplying energy to the grid?

I can’t give a definitive answer for when fusion will supply electricity to the grid as it depends on resolving some tough, complex technical challenges alongside sustained political commitment and long-term investment. There’s a broad range of views and industrial strategies being developed within the field. For example, the UK government’s recently published clean energy industrial strategy mentions the Spherical Tokamak for Energy Production programme, which aims to deliver a prototype fusion power plant by 2040 at West Burton, Nottinghamshire, at the site of a former coal power station. The Fusion Industry Association’s survey of private fusion companies reports that many are aiming for fusion-generated electricity by the late 2030s, though time projections vary.

There are others who say it may never happen?

Yes. Some point to several critical hurdles that still need to be addressed, offering more cautious perspectives and calling for greater realism. One such problem, close to my own interest in neutronics, is the need to demonstrate tritium-breeding blanket-technology systems and to develop lithium-6 supplies at the scale required by the industry.

What are the benefits of doing so?

The potential benefits for society are too significant to disregard on the grounds of difficulty alone. There’s no fundamental physical reason why fusion energy won’t work and the journey itself brings substantial value. The technologies developed along the way have potential for broader applications, and a highly skilled and adaptable workforce is developed with this.

What advice do you have for early-career physicists thinking about working in the field?

Fusion needs strong collaboration between people from across the board – physicists, engineers, materials scientists, modellers and more. It’s an incredibly exciting time to get involved. My advice would be to keep an open mind and seek out opportunities to work across these disciplines. Look for placements, internships, graduate or early-career positions and mentorship – and don’t be afraid to ask questions. There’s a brilliant international community in fusion, and a willingness to support those kick-starting their careers in this field. Join the effort to develop this technology and you’ll be part of something that’s not only intellectually stimulating and technically challenging but is also important for the future of the planet.

The post Lee Packer: ‘There’s no fundamental physical reason why fusion energy won’t work’ appeared first on Physics World.

  •  

UK ‘well positioned’ to exploit space manufacturing opportunities, says report

The UK should focus on being a “responsible, intelligent and independent leader” in space sustainability and can make a “major contribution” to the area. That’s the verdict of a new report from the Institute of Physics (IOP), which warns, however, that such a move is possible only with significant investment and government backing.

The report, published together with the Frazer-Nash Consultancy, examines the physics that underpins the space science and technology sector. It also looks at several companies that work on services such as position, navigation and timing (PNT), Earth observation, and satellite communications.

In 2021/22 PNT services contributed over 12%, or about £280bn, to the UK’s gross domestic product – and without them many critical national infrastructures such as the financial and emergency systems would collapse. The report says, however, that while the UK depends more than ever on global navigation satellite systems (GNSS) that reliance also exposes the country to its weaknesses.

“The scale and sophistication of current and potential PNT attacks has grown (such as increased GPS signal jamming on aeroplanes) and GNSS outages could become commonplace,” the report notes. “Countries and industries that address the issue of resilience in PNT will win the time advantage.”

Telecommunication satellite services contributed £116bn to the UK in 2021/22, while Earth observation and meteorological satellite services supported industries contributing an estimated £304bn. The report calls the future of Earth observation “bold and ambitious”, with satellite data resolving “the disparities with the quality and availability of on-the-ground data, exacerbated by irregular dataset updates by governments or international agencies”.

Future growth

As for future opportunities, the report highlights “in-space manufacturing”, with companies seeing “huge advantages” in making drugs, harvesting stem cells and growing crystals through in-orbit production lines. The report says that In-Orbit Servicing and Manufacturing could be worth £2.7bn per year to the UK economy but central to that vision is the need for “space sustainability”.

The report adds that the UK is “well positioned” to lead in sustainable space practices due to its strengths in science, safety and sustainability, which could lead to the creation of many “high-value” jobs. Yet this move, the report warns, demands an investment of time, money and expertise.

“This report captures the quiet impact of the space sector, underscoring the importance of the physics and the physicists whose endeavours underpin it, and recognising the work of IOP’s growing network of members who are both directly and indirectly involved in space tech and its applications,” says Alex Davies from the Rutherford Appleton Laboratory, who founded the IOP Space Group and is currently its co-chair.

Particle physicist Tara Shears from the University of Liverpool, who is IOP vice-president for science and innovation, told Physics World that future space tech applications are “exciting and important”. “With the right investment, and continued collaboration between scientists, engineers, industry and government, the potential of space can be unlocked for everyone’s benefit,” she says. “The report shows how physics hides in plain sight; driving advances in space science and technology and shaping our lives in ways we’re often unaware of but completely rely on.”

The post UK ‘well positioned’ to exploit space manufacturing opportunities, says report appeared first on Physics World.

  •  

Accounting for skin colour increases the accuracy of Cherenkov dosimetry

Cherenkov dosimetry is an emerging technique used to verify the dose delivered during radiotherapy, by capturing Cherenkov light generated when X-ray photons in the treatment beam interact with tissue in the patient. The initial intensity of this light is proportional to the deposited radiation dose – providing a means of non-contact in vivo dosimetry. The intensity emitted at the skin surface, however, is highly dependent on the patient’s skin colour, with increasing melanin absorbing more Cherenkov photons.

To increase the accuracy of dose measurements, researchers are investigating ways to calibrate the Cherenkov emission according to skin pigmentation. A collaboration headed up at Dartmouth College and Moffitt Cancer Center has now studied Cherenkov dosimetry in patients with a wide spectrum of skin tones. Reporting their findings in Physics in Medicine & Biology, they show how such a calibration can mitigate the effect of skin pigmentation.

“Cherenkov dosimetry is an interesting prospect because it gives us a completely passive, fly-on-the-wall approach to radiation dose verification. It does not require taping of detectors or wires to the patient, and allows for a broader sampling of the treatment area,” explains corresponding author Jacqueline Andreozzi. “The hope is that this would allow for safer, verifiable radiation dose delivery consistent with the treatment plan generated for each patient, and provide a means of assessing the clinical impact when treatment does not go as planned.”

Illustration of Cherenkov dosimetry
Cherenkov dosimetry: the intensity of Cherenkov light detected during radiotherapy is influenced by the individual’s melanin concentration. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/aded68)

A diverse patient population

Andreozzi, first author Savannah Decker and their colleagues examined 24 patients undergoing breast radiotherapy using 6 or 15 MV photon beams, or a combination of both energies.

During routine radiotherapy at Moffitt Cancer Center, the researchers measured the Cherenkov emission from the tissue surface (roughly 5 mm deep) using a time-gated, intensified CMOS camera installed in the bunker ceiling. To minimize effects from skin reactions, they analysed the earliest fraction of each patient’s treatment.

Medical physicist Savannah Decker
First author: medical physicist Savannah Decker. (Courtesy: Jacob Sunnerberg)

Patients with darker skin exhibited up to five times lower Cherenkov emission than those with lighter skin for the same delivered dose – highlighting the significant impact of skin pigmentation on Cherenkov-based dose estimates.

To assess each patient’s skin tone, the team used standard colour photography to calculate the relative skin luminance as a metric for pigmentation. A colour camera module co-mounted with the Cherenkov imaging system simultaneously recorded an image of each patient during their radiation treatments. The room lighting was standardized across all patient sessions and the researchers only imaged skin regions directly facing the camera.

In addition to skin pigmentation, subsurface tissue properties can also affect the transmission of Cherenkov light. Different tissue types – such as dense fibroglandular or less dense adipose tissue – have differing optical densities. To compensate for this, the team used routine CT scans to establish an institution-specific CT calibration factor (independent of skin pigmentation) for the diverse patient dataset, using a process based on previous research by co-author Rachael Hachadorian.

Following CT calibration, the Cherenkov intensity per unit dose showed a linear relationship with relative skin luminance, for both 6 and 15 MV beams. Encouraged by this observed linearity, the researchers generated linear calibration factors based on each patient’s skin pigmentation, for application to the Cherenkov image data. They note that the calibration can be incorporated into existing clinical workflows without impacting patient care.
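A minimal sketch of what such a calibration could look like in code; the arrays are hypothetical stand-ins for the per-patient measurements, not the study’s data:

```python
import numpy as np

# Hypothetical per-patient data, standing in for the study's measurements:
# relative skin luminance and raw Cherenkov intensity per unit delivered dose.
luminance = np.array([0.15, 0.25, 0.40, 0.55, 0.70, 0.85])
cherenkov_per_dose = np.array([210., 350., 560., 760., 990., 1180.])

# Fit the linear pigmentation response reported in the paper.
slope, intercept = np.polyfit(luminance, cherenkov_per_dose, 1)
predicted = slope * luminance + intercept

# Per-patient calibration: divide out the predicted pigmentation response,
# leaving a quantity proportional to dose alone.
calibrated = cherenkov_per_dose / predicted

# R^2, the goodness-of-fit figure quoted in the study.
residuals = cherenkov_per_dose - predicted
r_squared = 1 - residuals.var() / cherenkov_per_dose.var()
print(f"R^2 of the luminance fit: {r_squared:.2f}")
```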

Improving the accuracy

To test the impact of their calibration factors, the researchers first plotted the mean uncalibrated Cherenkov intensity as a function of mean surface dose (based on the projected dose from the treatment planning software for the first 5 mm of tissue) for all patients. For 6 MV beams, this gave an R² value (a measure of how much of the data’s variance the linear fit captures) of 0.81. For 15 MV treatments, R² was 0.17, indicating lower Cherenkov-to-dose linearity.

Applying the CT calibration to the diverse patient data did not improve the linearity. However, applying the pigmentation-based calibration had a significant impact, improving the R² values to 0.91 and 0.64 for 6 and 15 MV beams, respectively. The highest Cherenkov-to-dose linearity was achieved after applying both calibration factors, which resulted in R² values of 0.96 and 0.91 for 6 and 15 MV beams, respectively.

Using only the CT calibration, the average dose errors (the mean difference between the estimated and reference dose) were 38% and 62% for 6 and 15 MV treatments, respectively. The pigmentation-based calibration reduced these errors to 21% and 6.6%.

“Integrating colour imaging to assess patients’ skin luminance can provide individualized calibration factors that significantly improve Cherenkov-to-dose estimations,” the researchers conclude. They emphasize that this calibration is institution-specific – different sites will need to derive a calibration algorithm corresponding to their specific cameras, room lighting and beam energies.

Bringing quantitative in vivo Cherenkov dosimetry into routine clinical use will require further research effort, says Andreozzi. “In Cherenkov dosimetry, the patient becomes their own dosimeter, read out by a specialized camera. In that respect, it comes with many challenges – we usually have standardized, calibrated detectors, and patients are in no way standardized or calibrated,” Andreozzi tells Physics World. “We have to characterize the superficial optical properties of each individual patient in order to translate what the cameras see into something close to radiation dose.”

The post Accounting for skin colour increases the accuracy of Cherenkov dosimetry appeared first on Physics World.

  •  

Physicists take ‘snapshots’ of quantum gases in continuous space

Three teams of researchers in the US and France have independently developed a new technique to visualize the positions of atoms in real, continuous space, rather than at discrete sites on a lattice. By applying this method, the teams captured “snapshots” of weakly interacting bosons, non-interacting fermions and strongly interacting fermions and made in-situ measurements of the correlation functions that characterize these different quantum gases. Their work constitutes the first experimental measurements of these correlation functions in continuous space – a benchmark in the development of techniques for understanding fermionic and bosonic systems, as well as for studying strongly interacting systems.

Quantum many-body systems exhibit a rich and complex range of phenomena that cannot be described by the single-particle picture. Simulating such systems theoretically is thus rather difficult, as their degrees of freedom (and the corresponding size of their quantum Hilbert spaces) increase exponentially with the number of particles. Highly controllable quantum platforms like ultracold atoms in optical lattices are therefore useful tools for capturing and visualizing the physics of many-body phenomena.

The three research groups followed similar “recipes” in producing their atomic snapshots. First, they prepared a dilute quantum gas in an optical trap created by a lattice of laser beams. This lattice was configured such that the atoms experienced strong confinement in the vertical direction but moved freely in the xy-plane of the trap. Next, the researchers suddenly increased the strength of the lattice in the plane to “freeze” the atoms’ motion and project their positions onto a two-dimensional square lattice. Finally, they took snapshots of the atoms by detecting the fluorescence they produced when cooled with lasers. Importantly, the density of the gases was low enough that the separation between two atoms was larger than the spacing between the sites of the lattice, facilitating the measurement of correlations between atoms.

What does a Fermi gas look like in real space?

One of the three groups, led by Tarik Yefsah in Paris’ Kastler Brossel Laboratory (KBL), studied a non-interacting two-dimensional gas of fermionic lithium-6 (6Li) atoms. After confining a low-density cloud of these atoms in a two-dimensional optical lattice, Yefsah and colleagues registered their positions by applying a technique called Raman sideband laser cooling.

The KBL team’s experiment showed, for the first time, the shape of a parameter called the two-point correlator (g2) in continuous space. These measurements clearly demonstrated the existence of a “Fermi hole”: at small interatomic distances, the value of this two-point correlator tends to zero, but as the distance increases, it tends to one. This behaviour was expected, since the Pauli exclusion principle makes it impossible for two fermions with the same quantum numbers to occupy the same position. However, the paper’s first author, Tim de Jongh, who is now a postdoctoral researcher at the University of Colorado Boulder in the US, explains that being able to measure “the exact shape of the correlation function at the percent precision level” is new, and a distinguishing feature of their work.

The KBL team’s measurement also provides both two-body and three-body correlation functions for the atoms, making it possible to compare them directly. In principle, the technique could even be extended to correlations of arbitrarily high order.

What about a Bose gas?

Meanwhile, researchers directed by Wolfgang Ketterle of the Massachusetts Institute of Technology (MIT) developed and applied quantum gas microscopy to study how bosons bunch together. Unlike fermions, bosons do not obey the Pauli exclusion principle. In fact, if the temperature is low enough, they can enter a phase known as a Bose-Einstein condensate (BEC) in which their de Broglie wavelengths overlap and they occupy the same quantum state.

By confining a dilute bosonic gas of approximately 100 rubidium atoms in a sheet trap and cooling them to just above the critical temperature (Tc) for the onset of BEC, Ketterle and colleagues were able to make the first in situ measurement of the correlation length in a two-dimensional ultracold bosonic gas.  In contrast to Yefsah’s group, Ketterle and colleagues employed polarization cooling to detect the atoms’ positions. They also focused on a different correlation function; specifically, the second-order correlation function of bosonic bunching at T>Tc.

When the system’s temperature is high enough (54 nK above absolute zero, in this experiment), the correlation function is nearly 1, meaning that the atoms’ thermal de Broglie waves are too short to “notice” each other. But when the sample is cooled to a lower temperature of 6.4 nK, the thermal de Broglie wavelength becomes commensurate with the interparticle spacing r, and the correlation function exhibits the bunching behaviour expected for bosons in this regime, decreasing from its maximum value at r = 0 down to 1 as the interparticle spacing increases.

In an ideal system, the maximum value of the correlation function would be 2. However, in this experiment, the spatial resolution of the grid and the quasi-two-dimensional nature of the trapped gas reduce the maximum to 1.3. Enid Cruz Colón, a PhD student in Ketterle’s group, explains that the experiment is also subject to parity projection: the imaging registers only whether the number of atoms on a site is even or odd, so doubly occupied sites show up as empty. This directly shrinks the measured value of g2.
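In practice, extracting g2 from such snapshots boils down to comparing the histogram of measured pair separations with that of an uncorrelated gas at the same density. A minimal sketch with hypothetical positions (a real analysis would average many snapshots and reference draws):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pair_distances(pos):
    """All pairwise separations between 2D positions of shape (N, 2)."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.sqrt((diff**2).sum(-1))
    return d[np.triu_indices(len(pos), k=1)]

def g2_estimate(pos, box, bins):
    """Pair-correlation estimate: measured pair histogram divided by that
    of an uncorrelated (Poisson) gas at the same density (DD/RR)."""
    dd, edges = np.histogram(pair_distances(pos), bins=bins)
    ref = rng.uniform(0, box, size=pos.shape)   # uncorrelated reference gas
    rr, _ = np.histogram(pair_distances(ref), bins=edges)
    return dd / np.maximum(rr, 1), edges

# Hypothetical snapshot: 100 atoms in a 20 x 20 (lattice-site units) region.
atoms = rng.uniform(0, 20, size=(100, 2))
g2, edges = g2_estimate(atoms, box=20, bins=np.linspace(0.1, 5, 12))
print(np.round(g2, 2))  # ~1 everywhere for uncorrelated positions; bunching
# (bosons) pushes short-distance bins above 1, while the Fermi hole
# (fermions) pushes them toward 0.
```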

What does an interacting quantum gas look like in real space?

With Yefsah and colleagues focusing on fermionic correlations, and Ketterle’s group focusing on bosons, a third team led by MIT’s Martin Zwierlein found its niche by studying mixtures of bosons and fermions. Specifically, the team measured the pair correlation function for a mixture of a thermal Bose gas composed of sodium-23 (23Na) atoms and a degenerate Fermi gas of 6Li. As expected, they found that the probability of finding two particles together is enhanced for bosons and diminished for fermions.

In a further experiment, Zwierlein and colleagues studied a strongly interacting Fermi gas and measured its density–density correlation function. By tuning the strength of the interactions, they caused the atoms in this gas to pair up, probing the crossover to the BCS (Bardeen–Cooper–Schrieffer) regime associated with paired electrons in superconductors. For atoms in a BEC, the density–density correlation function shows a strong bunching tendency at short distances; in the BCS regime, by contrast, the correlations reveal the longer-range pairing of atoms into so-called Cooper pairs.

Ruixiao Yao, a PhD student in Zwierlein’s group and the paper’s first author, notes that by applying the new quantum gas microscopy technique to strongly interacting Fermi gases, the team has opened the door to applications in quantum simulation. Such strongly correlated systems, Yao highlights, are especially difficult to simulate on classical computers.

The three teams describe their work in separate papers in Physical Review Letters.

The post Physicists take ‘snapshots’ of quantum gases in continuous space appeared first on Physics World.

  •  

Diversity in UK astronomy and geophysics declining, finds survey

Women and ethnic-minority groups are still significantly underrepresented in UK astronomy and geophysics, with the fields becoming more white. That is according to the latest demographic survey conducted by the Royal Astronomical Society (RAS), which concludes that decades of initiatives to improve representation have “failed”.

Based on data collected in 2023, the survey reveals more people working in astronomy and solar-system science than ever before, although the geophysics community has shrunk since 2016. According to university admissions data acquired by the RAS, about 80% of students who started undergraduate astronomy and geophysics courses in 2022 were white, slightly less than the 83% overall proportion of white people in the UK.

However, among permanent astronomy and geophysics staff, 97% of British respondents to the RAS survey are white, up from 95% in 2016. The makeup of postgraduate students was similar, with 92% of British students – who accounted for 70% of postgraduate respondents – stating they are white, up from 87% in 2016.

The survey also finds that the proportion of women in professor, senior lecturer or reader roles increased from 2010 to 2023 in astronomy and solar-system science, but has stagnated at lecturer level in astronomy since 2016 and dropped in “solid Earth” geophysics to 19%. The picture is better at more junior levels, with women making up 28% of postdocs in astronomy and solar-system science and 34% in solid Earth geophysics.

A redouble of efforts

“I very much want to see far more women and people from minority ethnic groups working as astronomers and geophysicists, and we have to redouble our efforts to make that happen,” says Robert Massey, deputy executive director of the RAS, who co-authored the survey and presented its results at the National Astronomy Meeting 2025 in Durham last week.

RAS president Mike Lockwood agrees, stating that effective policies and strategies are now needed. “One only has to look at the history of science and mathematics to understand that talent can, has, and does come from absolutely anywhere in society, and our concern is that astronomy and geophysics in the UK is missing out on some of the best natural talent available to us,” Lockwood adds.

The post Diversity in UK astronomy and geophysics declining, finds survey appeared first on Physics World.

  •  

CP violation in baryons is seen for the first time at CERN

The first experimental evidence of the breaking of charge–parity (CP) symmetry in baryons has been obtained by CERN’s LHCb Collaboration. The result is consistent with the Standard Model of particle physics and could lead to constraints on theoretical attempts to extend the Standard Model to explain the excess of matter over antimatter in the universe.

Current models of cosmology say that the Big Bang produced a giant burst of matter and antimatter, the vast majority of which recombined and annihilated shortly afterwards. Today, however, the universe appears to be made almost exclusively of matter, with very little antimatter in evidence. This excess of matter is not explained by the Standard Model and its existence is an important mystery in physics.

In 1964, James Cronin, Valentine Fitch and colleagues at Princeton University in the US conducted an experiment on the decay of neutral K mesons. This showed that the weak interaction violated CP symmetry, indicating that matter and antimatter could behave differently. Fitch and Cronin bagged the 1980 Nobel Prize for Physics and the Soviet physicist Andrei Sakharov subsequently suggested that, if amplified at very high mass scales in the early universe, CP violation could have induced the matter–antimatter asymmetry shortly after the Big Bang.

Numerous observations of CP violation have subsequently been made in other mesonic systems. The phenomenon is now an accepted part of the Standard Model and is parametrized by the Cabibbo–Kobayashi–Maskawa (CKM) matrix. This describes the various probabilities of quarks of different generations changing into each other through the weak interaction – a process called mixing.

Tiny effect

However, the CP violation produced through the CKM mechanism is a much smaller effect than would have been required to create the matter left over by the Big Bang, as Xueting Yang of China’s Peking University explains.

“The number of baryons remaining divided by the number of photons produced when the baryons and antibaryons met and produced two photons is required to be about 10⁻¹⁰ in Big Bang theory…whereas this kind of quantity is only 10⁻¹⁸ in the Standard Model prediction.”

What is more, CP violation had never been observed in baryons. “Theoretically the prediction for baryon decay is very imprecise,” says Yang, who is a member of the LHCb collaboration. “It’s much more difficult to calculate it than the meson decays because there’s some interaction with the strong force.” Baryons (mostly protons and neutrons) make up almost all the hadronic matter in the universe, so this left open the slight possibility that the explanation might lie in some inconsistency between baryonic CP violation and the Standard Model prediction.

In the new work, Yang and colleagues at LHCb looked at the decays of beauty (or bottom) baryons and antibaryons. These heavy cousins of neutrons contain an up quark, a down quark and a beauty quark and were produced in proton–proton collisions at the Large Hadron Collider in 2011–2018. These baryons and antibaryons can decay via multiple channels. In one, a baryon decays to a proton, a positive K-meson and a pair of pions – or, conversely, an antibaryon decays to an antiproton, a negative K-meson and a pair of pions. CP violation should create an asymmetry between these processes, and the researchers looked for evidence of this asymmetry in the numbers of particles detected at different energies from all the collisions.
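The asymmetry at the heart of the measurement is a simple counting statistic. A minimal sketch with hypothetical yields (the real analysis also corrects for production and detection asymmetries and includes systematic uncertainties):

```python
import math

# Hypothetical signal yields for the baryon and antibaryon decay channels.
n_baryon, n_antibaryon = 38_000, 36_200

# CP asymmetry and its statistical (counting) uncertainty.
a_cp = (n_baryon - n_antibaryon) / (n_baryon + n_antibaryon)
sigma = math.sqrt((1 - a_cp**2) / (n_baryon + n_antibaryon))

print(f"A_CP = {a_cp:.3f} +/- {sigma:.3f} ({a_cp / sigma:.1f} sigma)")
```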

Standard Model prevails

The team found that the CP violation seen was consistent with the Standard Model and inconsistent with zero by 5.2σ. “The experimental result is more precise than what we can get from theory,” says Yang. Other LHCb researchers scrutinized alternative decay channels of the beauty baryon: “Their measurement results are still consistent with CP symmetry…There should be CP violation also in their decay channels, but we don’t have enough statistics to claim that the deviation is more than 5σ.”

The current data do not rule out any extensions of the Standard Model, says Yang, simply because none of those extensions make precise predictions about the overall degree of CP violation expected in baryons. However, the LHC is now in its third run, and the researchers hope to acquire information on, for example, the intermediate particles involved in the decay: “We may be able to provide some measurements that are more comparable for theories and which can provide some constraints on the Standard Model predictions for CP violation,” says Yang.

The research is described in a paper in Nature.

“It’s an important paper – an old type of CP violation in a new system,” says Tom Browder of the University of Hawaii. “Theorists will try to interpret this within the context of the Standard Model, and there have already been some attempts, but there are some uncertainties due to the strong interaction that preclude making a precise test.” He says the results could nevertheless potentially help to constrain extensions of the Standard Model, such as CP violating decays involving dark matter proposed by the late Ann Nelson at the University of Washington in Seattle and her colleagues.

The post CP violation in baryons is seen for the first time at CERN appeared first on Physics World.

  •  

Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies

This episode of the Physics World Weekly podcast features Travis Humble, who is director of the Quantum Science Center at Oak Ridge National Laboratory.

Located in the US state of Tennessee, Oak Ridge is run by the US Department of Energy (DOE). The Quantum Science Center links Oak Ridge with other US national labs, universities and companies.

Humble explains how these collaborations ensure that Oak Ridge’s powerful facilities and instruments are used to create new quantum technologies. He also explains how the lab’s expertise in quantum and conventional computing is benefiting the academic and industrial communities.

This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles. (Courtesy: American Elements)

The post Oak Ridge’s Quantum Science Center takes a multidisciplinary approach to developing quantum materials and technologies appeared first on Physics World.

  •  

Leprechauns on tombstones: your favourite physics metaphors revealed

Physics metaphors don’t work, or so I recently claimed. Metaphors always fail; they cut corners in reshaping our perception. But are certain physics metaphors defective simply because they cannot be experimentally confirmed? To illustrate this idea, I mentioned the famous metaphor for how the Higgs field gives particles mass, which is likened to fans mobbing – and slowing – celebrities as they walk across a room.

I know from actual experience that this is false. Having been within metres of filmmaker Spike Lee, composer Stephen Sondheim, and actors Mia Farrow and Denzel Washington, I’ve seen fans have many different reactions to the presence of nearby celebrities in motion. If the image were strictly true, I’d have to check which celebrities were about each morning to know what the hadronic mass would be that day.

I therefore invited Physics World readers to propose other potentially empirically defective physics metaphors, and received dozens of candidates. Technically, many are similes rather than metaphors, but most readers, myself included, use the two terms interchangeably. Some of these metaphors/similes were empirically confirmable and others not.

Shoes and socks

Michael Elliott, a retired physics lecturer from Oxford Polytechnic, mentioned a metaphor from Jakob Schwichtenberg’s book No-Nonsense Quantum Mechanics that used shoes and socks to explain the meaning of “commutation”. It makes no difference, Schwichtenberg wrote, if you put your left sock on first and then your right sock; in technical language the two operations are said to commute. However, it does make a difference which order you put your sock and shoe on.

“The ordering of the operations ‘putting shoes on’ and ‘putting socks on’ therefore matters,” Schwichtenberg had written, meaning that “the two operations do not commute.” Empirically verifiable, Elliott concluded triumphantly.
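The same point can be checked numerically if the operations are represented as matrices, for which commuting means AB = BA; the sock and shoe labels below are purely illustrative:

```python
import numpy as np

# Two diagonal matrices: like the left and right sock, the order of
# application makes no difference, so they commute.
left_sock = np.diag([1, 2])
right_sock = np.diag([3, 4])
print(np.allclose(left_sock @ right_sock, right_sock @ left_sock))  # True

# A sock-like and a shoe-like operation: here order matters, so the
# operations do not commute (Pauli X and Z are the classic example).
sock = np.array([[0, 1], [1, 0]])    # Pauli X
shoe = np.array([[1, 0], [0, -1]])   # Pauli Z
print(np.allclose(sock @ shoe, shoe @ sock))  # False
```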

A metaphor that was used back in 1981 by CERN physicist John Bell in a paper addressed to colleagues requires more footgear and imagination. Bell’s friend and colleague Reinhold Bertlmann from the University of Vienna was a physicist who always wore mismatched socks, and in the essay “Bertlmann’s socks and the nature of reality” Bell explained the Einstein–Podolsky–Rosen (EPR) paradox and Bell’s theorem in terms of those socks.

If Bertlmann stepped into a room and an observer noticed that the sock on his first foot was pink, one could be sure the other was not-pink, illustrating the point of the EPR paper. Bell then suggested that, when put in the wash, pairs of socks and washing temperatures could behave analogously to particle pairs and magnet angles in a way that conveyed the significance of his theorem. Bell bolstered this conclusion with a scenario involving correlations between spikes of heart attacks in Lille and Lyon. I am fairly sure, however, that Bell never empirically tested this metaphor, and I wonder what the result would be.

Out in space, the favourite cosmology metaphor of astronomer and astrophysicist Michael Rowan-Robinson is the “standard candle” that’s used to describe astronomical objects of roughly fixed luminosity. Standard candles can be used to determine astronomical distances and are thus part of the “cosmological distance ladder” – Rowan-Robinson’s own metaphor – towards measuring the Hubble constant.

Retired computer programmer Ian Wadham, meanwhile, likes Einstein’s metaphor of being in a windowless spacecraft towed by an invisible being who gives the ship a constant acceleration. “It is impossible for you to tell whether you are standing in a gravitational field or being accelerated,” Wadham writes. Einstein used the metaphor effectively – even though, as an atheist, he was convinced that he would be unable to test it.

I was also intrigued by a comment from Dilwyn Jones, a consultant in materials science and engineering, who cited a metaphor from the 1939 book The Third Policeman by Irish novelist Flann O’Brien. Jones first came across O’Brien’s metaphor in Walter J Moore’s 1962 textbook Physical Chemistry. Atoms, says a character in O’Brien’s novel, are “never standing still or resting but spinning away and darting hither and thither and back again, all the time on the go”, adding that “they are as lively as twenty leprechauns doing a jig on top of a tombstone”.

But as Jones pointed out, that particular metaphor “can only be tested on the Emerald Isle”.

Often metaphors entertain as much as inform. Clare Byrne, who teaches at a high school in St Albans in the UK, tells her students that delocalized electrons are like stray dogs – “hanging around the atoms, but never belonging to any one in particular”. They could, however, she concedes, “be easily persuaded to move fast in the direction of a nice steak”.

Giving metaphors legs

I ended my earlier column on metaphors by referring to poet Matthew Arnold’s fastidious correction of a description in his 1849 poem ”The Forsaken Merman”. After it was published, a friend pointed out to Arnold his mistaken use of the word “shuttle” rather than “spindle” when describing “the king of the sea’s forlorn wife at her spinning-wheel” as she lets the thing slip in her grief.

The next time the poem was published, Arnold went out of his way to correct this. Poets, evidently, find it imperative to be factual in metaphors, and I wondered, why shouldn’t scientists? The poet Kevin Pennington was outraged by my remark.

“Metaphors in poetry are not the same as metaphors used in science,” he insisted. “Science has one possible meaning for a metaphor. Poetry does not.” Poetic metaphors, he added, are “modal”, having many possible interpretations at the same time – “kinda like particles can be in a superposition”.

I was dubious. “Superposition” suggests that poetic meanings are probabilistic, even arbitrary. But Arnold, I thought, was aiming at something specific when the king’s wife drops the spindle in “The Forsaken Merman”. After all, wouldn’t I be misreading the poem to imagine his wife thinking, “I’m having fun and in my excitement the thing slipped out of my hand!”

My Stony Brook colleague Elyse Graham, who is a professor of English, adapted a metaphor used by her former Yale professor Paul Fry. “A scientific image has four legs”, she said, “a poetic image three”. A scientific metaphor, in other words, is as stable as a four-legged table, structured to evoke a specific, shared understanding between author and reader.

A poetic metaphor, by contrast, is unstable, seeking to evoke a meaning that connects with the reader’s experiences and imagination, which can be different from the author’s within a certain domain of meaning. Graham pointed out, too, that the true metaphor in Arnold’s poem is not really the spinning wheel, the wife and the dropped spindle but the entirety of the poem itself, which is what Arnold used to evoke meaning in the reader.

That’s also the case with O’Brien’s atom-leprechaun metaphor. It shows up in the novel not to educate the reader about atomic theory but to invite a certain impression of the worldview of the science-happy character who speaks it.

The critical point

In his 2024 book Waves in an Impossible Sea: How Everyday Life Emerges from the Cosmic Ocean, physicist Matt Strassler coined the term “physics fib” or ”phib”. It refers to an attempted “short, memorable tale” that a physicist tells an interested non-physicist that amounts to “a compromise between giving no answer at all and giving a correct but incomprehensible one”.

The criterion for whether a metaphor succeeds or fails does not depend on whether it can pass an empirical test, but on the interaction between speaker or author and audience; how much the former has to compromise depends on the audience’s interest in and understanding of the subject. Metaphors are interactions. Byrne was addressing high-school students; Schwichtenberg was aiming at interested non-physicists; Bell was speaking to physics experts. Their effectiveness, to use one final metaphor, does not depend on empirical grounding but on impedance matching; that is, they step down the “load” so that the “signal” will not be lost.

The post Leprechauns on tombstones: your favourite physics metaphors revealed appeared first on Physics World.

  •  

How to keep the second law of thermodynamics from limiting clock precision

The second law of thermodynamics demands that if we want to make a clock more precise – thereby reducing the disorder, or entropy, in the system – we must add energy to it. Any increase in energy, however, necessarily increases the amount of waste heat the clock dissipates to its surroundings. Hence, the more precise the clock, the more the entropy of the universe increases – and the tighter the ultimate limits on the clock’s precision become.

This constraint might sound unavoidable – but is it? According to physicists at TU Wien in Austria, Chalmers University of Technology, Sweden, and the University of Malta, it is in fact possible to turn this seemingly inevitable consequence on its head for certain carefully designed quantum systems. The result: an exponential increase in clock accuracy without a corresponding increase in energy.

Solving a timekeeping conundrum

Accurate timekeeping is of great practical importance in areas ranging from navigation to communication and computation. Recent technological advancements have brought clocks to astonishing levels of precision. However, theorist Florian Meier of TU Wien notes that these gains have come at a cost.

“It turns out that the more precisely one wants to keep time, the more energy the clock requires to run to suppress thermal noise and other fluctuations that negatively affect the clock,” says Meier, who co-led the new study with his TU Wien colleague Marcus Huber and a Chalmers experimentalist, Simone Gasparinetti. “In many classical examples, the clock’s precision is linearly related to the energy the clock dissipates, meaning a clock twice as accurate would produce twice the (entropy) dissipation.”

Clock’s precision can grow exponentially faster than the entropy

The key to circumventing this constraint, Meier continues, lies in one of the knottiest aspects of quantum theory: the role of observation. For a clock to tell the time, he explains, its ticks must be continually observed. It is this observation process that causes the increase in entropy. Logically, therefore, making fewer observations ought to reduce the degree of increase – and that’s exactly what the team showed.

“In our new work, we found that with quantum systems, if designed in the right way, this dissipation can be circumvented, ultimately allowing exponentially higher clock precision with the same dissipation,” Meier says. “We developed a model that, instead of using a classical clock hand to show the time, makes use of a quantum particle coherently travelling around a ring structure without being observed. Only once it completes a full revolution around the ring is the particle measured, creating an observable ‘tick’ of the clock.”

The clock’s precision can thus be improved by letting the particle travel through a longer ring, Meier adds. “This would not create more entropy because the particle is still only measured once every cycle,” he tells Physics World. “The mathematics here is of course much more involved, but what emerges is that, in the quantum case, the clock’s precision can grow exponentially faster than the entropy. In the classical analogue, in contrast, this relationship is linear.”
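Schematically, the contrast between the two scalings looks like this; a toy comparison only, not the paper’s actual model:

```python
import math

# Schematic scaling comparison: clock precision N versus entropy S
# dissipated per tick (arbitrary units; illustrative only).
for s in range(1, 6):
    n_classical = s              # classical clocks: N grows linearly with S
    n_quantum = math.exp(s)      # quantum ring clock: exponential growth
    print(f"S = {s}: classical N ~ {n_classical}, quantum N ~ {n_quantum:.0f}")
```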

“Within reach of our technology”

Although such a clock has not yet been realized in the laboratory, Gasparinetti says it could be made by arranging many superconducting quantum bits in a line.

“My group is an experimental group that studies superconducting circuits, and we have been working towards implementing autonomous quantum clocks in our platform,” he says. “We have expertise in all the building blocks that are needed to build the type of clock proposed in this work: generating quasithermal fields in microwave waveguides and coupling them to superconducting qubits; detecting single microwave photons (the clock ‘ticks’); and building arrays of superconducting resonators that could be used to form the ‘ring’ that gives the proposed clock its exponential boost.”

While Gasparinetti acknowledges that demonstrating this advantage experimentally will be a challenge, he isn’t daunted. “We believe it is within reach of our technology,” he says.

Solving a future problem

At present, dissipation is not the main factor limiting the performance of state-of-the-art clocks. As clock technology continues to advance, however, Meier says we are approaching a point where dissipation could become more significant. “A useful analogy here is in classical computing,” he explains. “For many years, heat dissipation was considered negligible, but in today’s data centres that process vast amounts of information, dissipation has become a major practical concern.

“In a similar way, we anticipate that for certain applications of high-precision clocks, dissipation will eventually impose limits,” he adds. “Our clock highlights some fundamental physical principles that can help minimize such dissipation when that time comes.”

The clock design is detailed in Nature Physics.

The post How to keep the second law of thermodynamics from limiting clock precision appeared first on Physics World.

  •  

Spacecraft can navigate using light from just two stars

NASA’s New Horizons spacecraft has been used to demonstrate simple interstellar navigation by measuring the parallax of just two stars. An international team was able to determine the location and heading of the spacecraft using observations made from space and the Earth.

Developed by an international team of researchers, the technique could one day be used by other spacecraft exploring the outermost regions of the solar system or even provide navigation for the first truly interstellar missions.

New Horizons visited the Pluto system in 2015 and has now passed through the Kuiper Belt in the outermost solar system.

Now, NOIRLab’s Tod Lauer and colleagues have created a navigation technique for the spacecraft by choosing two of the nearest stars for parallax measurements. These are Proxima Centauri, which is just 4.2 light-years away, and Wolf 359 at 7.9 light-years. On 23 April 2020, New Horizons imaged star-fields containing the two stars, while on Earth astronomers did the same.

At that time, New Horizons was 47.1 AU (seven billion kilometres) from Earth, as measured by NASA’s Deep Space Network. The intention was to replicate that distance determination using parallax.

Difficult measurement

The 47.1 AU separation between Earth and New Horizons meant that each vantage point observed Proxima and Wolf 359 in a slightly different position relative to the background stars. This displacement is the parallax angle, which the observations showed to be 32.4 arcseconds for Proxima and 15.7 arcseconds for Wolf 359 at the time of measurement.

By applying simple trigonometry using the parallax angle and the known distance to the stars, it should be relatively straightforward to triangulate New Horizons’ position. In practice, however, the team struggled to make it work. It was the height of the COVID-19 pandemic, and finding observatories that were still open and could perform the observations on the required night was not easy.

Edward Gomez, of the UK’s Cardiff University and the international Las Cumbres Observatory, recalls the efforts made to get the observations. “Tod Lauer contacted me saying that these two observations were going to be made, and was there any possibility that I could take them with the Las Cumbres telescope network?” he tells Physics World.

In the end, Gomez was able to image Proxima with Las Cumbres’ telescope at Siding Spring in Australia. Meanwhile, Wolf 359 was observed by the University of Louisville’s Manner Telescope at Mount Lemmon Observatory in Arizona. At the same time, New Horizons’ Long Range Reconnaissance Imager (LORRI) took pictures of both stars, and all three observations were analysed using a 3D model of the stellar neighbourhood based on data from the European Space Agency’s star-measuring Gaia mission.

The project was more a proof-of-concept than an accurate determination of New Horizons’ position and heading, with the team describing the measurements as “educational”.

“The reason why we call it an educational measurement is because we don’t have a high degree of, first, precision, and secondly, reproducibility, because we’ve got a small number of measurements, and they weren’t amazingly precise,” says Gomez. “But they still demonstrate the parallax effect really nicely.”

New Horizons’ position was calculated to within 0.27 AU, which is not especially useful for navigating towards a trans-Neptunian object. The measurements were also able to ascertain New Horizons’ heading to an accuracy of 0.4°, relative to the precise value derived from Deep Space Network signals.
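The published angles let anyone reproduce the rough geometry. A minimal sketch, using approximate catalogue distances for the two stars:

```python
import math

AU_PER_LY = 63241.1                      # astronomical units per light-year
RAD_PER_ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

# (distance in light-years, measured parallax displacement in arcsec);
# the distances are approximate catalogue values.
stars = {
    "Proxima Centauri": (4.25, 32.4),
    "Wolf 359": (7.86, 15.7),
}

for name, (d_ly, shift_arcsec) in stars.items():
    # Small-angle triangulation: baseline component = distance x angle.
    baseline_au = d_ly * AU_PER_LY * shift_arcsec * RAD_PER_ARCSEC
    print(f"{name}: ~{baseline_au:.0f} AU")  # ~42 AU and ~38 AU

# Each star's parallax measures only the component of the 47.1 AU
# Earth-spacecraft baseline perpendicular to that star's line of sight,
# which is why both values come in below 47.1 AU; combining the two
# directions is what pins down the spacecraft's position.
```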

Just two stars

But the fact that only two stars were needed is significant, explains Gomez. “The good thing about this method is just having two close stars as our reference stars. The handed-down wisdom normally is that you need loads and loads [of stars], but actually you just need two and that’s enough to triangulate your position.”

There are more accurate ways to navigate, such as pulsar measurements, but these require more complex and larger instrumentation on a spacecraft – not just an optical telescope and a camera. While pulsar navigation has been demonstrated on the International Space Station in low-Earth orbit, this is the first time that any method of interstellar navigation has been demonstrated for a much more distant spacecraft.

Today, more than five years after the parallax observations, New Horizons is still speeding out of the solar system. It has cleared the Kuiper Belt and today is 61 AU from Earth.

When asked if the parallax measurements will be made again under better circumstances, Gomez replied: “I hope so. Now that we’ve written a paper in The Astronomical Journal that’s getting some interest, hopefully we can reproduce it, but nothing has been planned so far.”

In a way, the parallax measurements have brought Gomez full-circle. “When I was doing [high school] mathematics more years ago than I care to remember, I was a massive Star Trek fan and I did a three-dimensional interstellar navigation system as my mathematics project!”

Now here he is, as part of a team using the stars to guide our own would-be interstellar emissary.


The post Spacecraft can navigate using light from just two stars appeared first on Physics World.

  •