
Nickel-enhanced biomaterial becomes stronger when wet

Synthetic materials such as plastics are designed to be durable and water resistant. But the processing required to achieve these properties results in a lack of biodegradability, leading to an accumulation of plastic pollution that affects both the environment and human health. Researchers at the Institute for Bioengineering of Catalonia (IBEC) are developing a possible replacement for plastics: a novel biomaterial based on chitin, the second most abundant natural polymer on Earth.

“Every year, nature produces on the order of 10¹¹ tonnes of chitin, roughly equivalent to more than three centuries of today’s global plastic production,” says study leader Javier G Fernández. “Chitin and [its derivative] chitosan are the ultimate natural engineering polymers. In nature, variations of this material produce stiff insect wings enabling flight, elastic joints enabling extraordinary jumping in grasshoppers, and armour-like protective exoskeletons in lobsters or clams.”

But while biomaterials provide a more environmentally friendly alternative to conventional plastics, most biological materials weaken when exposed to water. In this latest work, Fernández and first author Akshayakumar Kompa took inspiration from nature and developed a new biomaterial that increases its strength when in contact with water, while maintaining its natural biodegradability.

Metal matters

In the exoskeletons of insects and crustaceans, chitin is secreted in a gel-like form into water and then transitions into a hard structure. Following a chance observation that removing zinc from a sandworm’s fangs caused them to soften in water, Kompa and Fernández investigated whether adding a different transition metal, nickel, to chitosan could have the opposite effect.

By mixing nickel chloride solution (at concentrations from 0.6 to 1.4 M) with dispersions of chitosan extracted from discarded shrimp shells, the researchers entrapped varying amounts of nickel within the chitosan structure. Fourier-transform infrared spectra of resulting chitosan films revealed the presence of nickel ions, which form weak hydrogen bonds with water molecules and increase the biomaterial’s capacity to bond with water.

“In our films, water molecules form reversible bridges between polymer chains through weak interactions that can rapidly break and reform under load,” Fernández explains. “That fast reconfiguration is what gives the material high strength and toughness under wet conditions: essentially a built-in, stress-activated ‘self-rearrangement’ mechanism. Nickel ions act as stabilizing anchors for these water-mediated bridges, enabling more and longer-range connections and making inter-chain connectivity more robust”.

The nickel-doped chitosan samples had tensile strengths of between 30 and 40 MPa, similar to that of standard plastics. Adding low concentrations of nickel did not significantly impact the mechanical properties of the films. Concentrations of 1 M or more, however, preserved the material’s strength while increasing its toughness (the ability to stretch before breaking) – a key goal in the field of structural materials and a feature unique to biological composites.

Increased strength Testing a nickel-doped chitosan film using a 20 kg dumbbell. (Courtesy: Institute for Bioengineering of Catalonia)

Upon immersion in water, the nickel-doped films exhibited greater tensile strength, increasing from 36.12±2.21 MPa when dry to 53.01±1.68 MPa, moving into the range of higher-performance engineering plastics. In particular, samples created from an optimal 0.8 M nickel concentration almost doubled in strength when wet (and were used for the remainder of the team’s experiments).

Scaling production

The manufacturing process involves an initial immersion in water, followed by drying for 24 h and then re-wetting. During the first immersion, any nickel ions that are not incorporated into the material’s functional bridging network are released into the water, ensuring that nickel is present only where it is structurally useful.

The researchers developed a zero-waste production cycle in which this water is used as a primary component for fabricating the next object. “The expelled nickel is recovered and used to make the next batch of material, so the process operates at essentially 100% nickel utilization across batches,” says Fernández.

Zero waste production The team created structures including a 3 m² nickel-doped chitosan film and a cup that can retain water as effectively as common plastics. (Courtesy: Institute for Bioengineering of Catalonia)

They used this process to produce various nickel-doped chitosan objects, including watertight containers and a 1 m² film that could support a 20 kg weight after 24 h of water immersion. They also created a 244 × 122 cm film with similar mechanical behaviour to the smaller samples, demonstrating the potential for rapid scaling to ecologically relevant scales. A standard half-life test revealed that after approximately four months buried in garden soil, half of the material had biodegraded.
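
Taking that figure at face value and assuming simple first-order decay (our extrapolation; the paper reports only the four-month soil-burial result), the surviving mass fraction follows f(t) = (1/2)^(t/t_half) with t_half ≈ 4 months – roughly 12% of the material would remain after one year and under 2% after two.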

The researchers suggest that the biomaterial’s first real-world use may be in sectors such as agriculture and fishing that require strong, water-compatible and ultimately biodegradable materials, likely for packaging, coatings and other water-exposed applications. Both nickel and chitosan are already employed within biomedicine, making medicine another possible target, although any new medical product will require additional regulatory and performance validation.

The team is currently setting up a 1000 m² lab facility in Barcelona, scheduled to open in 2028, for academia–industry collaborations in sustainable bioengineering research. Fernández suggests that we are moving towards a “biomaterial age”, defined by the ability to “control, integrate, and broadly use biomaterials and biological principles within engineering applications”.

“Over the last 20 years, working on bioinspired manufacturing, we have been able to produce the largest bioprinted objects in the world, demonstrated pathways for resource-secure and sustainable production in urban environments, and even explored how these approaches can support interplanetary colonization,” he tells Physics World. “Now we are achieving material properties that were considered out of reach by designing the material to work with its environment, rather than isolating itself from it.”

The researchers report their findings in Nature Communications.


2D materials help spacecraft electronics resist radiation damage

Electronics made from certain atomically thin materials can survive harsh radiation environments up to 100 times longer than traditional silicon-based devices. This finding, which comes from researchers at Fudan University in Shanghai, China, could bring significant benefits for satellites and other spacecraft, which are prone to damage from intense cosmic radiation.

Cosmic radiation consists of a mixture of heavy ions and cosmic rays, which are high-energy protons, electrons and atomic nuclei. The Earth’s magnetic field protects us from 99.9% of this ionizing radiation, and our atmosphere significantly attenuates the rest. Space-based electronics, however, have no such protection, and this radiation can damage or even destroy them.

Adding radiation shielding to spacecraft mitigates these harmful effects, but the extra weight and power consumption increases the spacecraft’s costs. “This conflicts with the requirements of future spacecraft, which call for lightweight and cost-effective architectures,” says team leader Peng Zhou, a physicist in Fudan’s College of Integrated Circuits and Micro-Nano Electronics. “Implementing radiation tolerant electronic circuits is therefore an important challenge and if we can find materials that are intrinsically robust to this radiation, we could incorporate these directly into the design of onboard electronic circuits.”

Promising transition-metal dichalcogenides

Previous research had suggested that 2D materials might fit the bill, with transistors based on transition-metal dichalcogenides appearing particularly promising. Within this family of materials, 2D molybdenum disulphide (MoS₂) proved especially robust to irradiation-induced defects, and Zhou points out that its electrical, mechanical and thermal properties are also highly attractive for space applications.

The studies that revealed these advantages were, however, largely limited to simulations and ground-based experiments. This meant they were unable to fully replicate the complex and dynamic radiation fields such circuits would encounter under real space conditions.

Better than NMOS transistors

In their work, Zhou and colleagues set out to fill this gap. After growing monolayer 2D MoS₂ using chemical vapour deposition, they used this material to fabricate field-effect transistors. They then exposed these transistors to 10 Mrad of gamma-ray irradiation and looked for changes to their structure using several techniques, including cross-sectional transmission electron microscopy (TEM) imaging and corresponding energy-dispersive spectroscopy (EDS) mapping.

These measurements indicated that the 2D MoS₂ in the transistors was about 0.7 nm thick (typical for a monolayer structure) and showed no obvious signs of defects or damage. Subsequent Raman characterization on five sites within the MoS₂ film confirmed the devices’ structural integrity.

The researchers then turned their attention to the transistors’ electrical properties. They found that even after irradiation, the transistors’ on-off ratios remained ultra-high, at about 10⁸. They note that this is considerably better than that of similarly sized silicon N-channel metal–oxide–semiconductor (NMOS) transistors fabricated through a CMOS process, for which the on-off ratio decreased by a factor of more than 4000 after the same 10 Mrad irradiation.

The team also found that the MoS₂ system consumes only about 49.9 mW per channel, making its power requirement at least five times lower than that of the NMOS device. This is important owing to the strict energy limitations and stringent power budgets of spacecraft, Zhou says.

Surviving the space environment

In their final experiment, the researchers tested their MoS₂ structures on a spacecraft orbiting at an altitude of 517 km, similar to the low-Earth orbit of many communication satellites. These tests showed that the bit-error rate in data transmitted from the structures remained below 10⁻⁸ even after nine months of operation, which Zhou says indicates significant radiation tolerance and long-term stability. Indeed, based on test data, electronic devices made from these 2D materials could operate for 271 years in geosynchronous orbit – 100 times longer than conventional silicon electronics.
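
For context, that comparison implies a conventional silicon counterpart would survive only around 271/100 ≈ 2.7 years under the same geosynchronous-orbit conditions – an inference from the article’s own figures rather than a number reported by the team.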

“The discovery of intrinsic radiation tolerance in atomically thin 2D materials, and the successful on-orbit validation of the atomic-layer semiconductor-based spaceborne radio-frequency communication system have opened a uniquely promising pathway for space electronics leveraging 2D materials,” Zhou says. “And their exceptionally long operational lifetimes and ultra-low power consumption establishes the unique competitiveness of 2D electronic systems in frontier space missions, such as deep-space exploration, high-Earth-orbit satellites and even interplanetary communications.”

The researchers are now working to optimize these structures by employing advanced fabrication processes and circuit designs. Their goal is to improve certain key performance parameters of spaceborne radio-frequency chips employed in inter-satellite and satellite-to-ground communications. “We also plan to develop an atomic-layer semiconductor-based radiation-tolerant computing platform, providing core technological support for future orbital data centres, highly autonomous satellites and deep-space probes,” Zhou tells Physics World.

The researchers describe their work in Nature.


Rethinking how quantum phases change

In this work, the researchers theoretically explore how quantum materials can transition continuously from one ordered state to another, for example, from a magnetic phase to a phase with crystalline or orientational order. Traditionally, such order‑to‑order transitions were thought to require fractionalisation, where particles effectively split into exotic components. Here, the team identifies a new route that avoids this complexity entirely.

Their mechanism relies on two renormalisation‑group fixed points in the system colliding and annihilating, which reshapes the flow of the system and removes the usual disordered phase. A separate critical fixed point, unaffected by this collision, then becomes the new quantum critical point linking the two ordered phases. This allows for a continuous, seamless transition without invoking fractionalised quasiparticles.
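
A textbook cartoon of fixed-point annihilation (our illustration, not the authors’ specific model): consider a coupling g whose renormalisation-group flow is dg/dl = α − g². For α > 0 the flow has two fixed points at g* = ±√α; as α is tuned towards zero they approach one another, collide at α = 0 and annihilate, leaving no fixed point and qualitatively reorganizing the flow. The mechanism described above is of this collide-and-annihilate type, with the usual disordered phase disappearing along with the annihilated fixed points.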

The authors show that this behaviour could occur in several real or realistic systems, including rare‑earth pyrochlore iridates, kagome quantum magnets, quantum impurity models and even certain versions of quantum chromodynamics. A striking prediction of the mechanism is a strong asymmetry in energy scales on the two sides of the transition, such as a much lower critical temperature and a smaller order parameter where the order emerges from fixed‑point annihilation.

This work reveals a previously unrecognised kind of quantum phase transition, expands the landscape beyond the usual Landau-Ginzburg-Wilson framework, which is the standard theory for phase transitions, and offers new ways to understand and test the behaviour of complex quantum systems.

Read the full article

Continuous order-to-order quantum phase transitions from fixed-point annihilation

David J Moser and Lukas Janssen 2025 Rep. Prog. Phys. 88 098001

Do you want to learn more about this topic?

Dynamical quantum phase transitions: a review by Markus Heyl (2018)


How a Single Parameter Reveals the Hidden Memory of Glass

Unlike crystals, whose atoms arrange themselves in tidy, repeating patterns, glass is a non‑equilibrium material. A glass is formed when a liquid is cooled so quickly that its atoms never settle into a regular pattern, instead forming a disordered, unstructured arrangement.

In this process, as temperature decreases, atoms move more and more slowly. Near a certain temperature – the glass transition temperature – the atoms move so slowly that the material effectively stops behaving like a liquid and becomes a glass.

This isn’t a sharp, well‑defined transition like water turning to ice. Instead, it’s a gradual slowdown: the structure appears solid long before the atoms would theoretically cease to rearrange.

This slowdown can be extrapolated to predict the temperature at which the material’s internal rearrangement would take infinitely long. This hypothetical point is known as the ideal glass transition. It cannot be reached in practice, but it provides an important reference for understanding how glasses behave.
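
A standard way to formalize this extrapolation (a common empirical form, not necessarily the fit used in this work) is the Vogel–Fulcher–Tammann relation τ(T) = τ₀ exp[A/(T − T₀)], in which the structural relaxation time τ diverges as the temperature T approaches T₀. That divergence temperature is one operational definition of the ideal glass transition.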

Despite years of research, it’s still not clear exactly how glass properties depend on how it was made – how fast it was cooled, how long it aged, or how it was mechanically disturbed. Each preparation route seems to give slightly different behaviour.

For decades, scientists have struggled to find a single measure that captures all these effects. How do you describe, in one number, how disordered a glass is?

Recent research has emerged that provides a compelling answer: a configurational distance metric. This is a way of measuring how far the internal structure of a piece of glass is from a well‑defined reference state.

When the researchers used this metric, they could neatly collapse data from many different experiments onto a single curve. In other words, they found a single physical parameter controlling the behaviour.

This worked across a wide range of conditions: glasses cooled at different rates, allowed to age for different times, or tested under different strengths and durations of mechanical probing.

As long as the experiments were conducted above the ideal glass transition temperature, the metric provided a unified description of how the material dissipates energy.

This insight is significant. It suggests that even though glass never fully reaches equilibrium, its behaviour is still governed by how close it is to this idealised transition point. In other words, the concept of the kinetic ideal glass transition isn’t just theoretical: it leaves a measurable imprint on real materials.

This research offers a powerful new way to understand and predict the mechanical behaviour of glasses in everyday technologies, from smartphone screens to industrial coatings.

Read the full article

Order parameter for non-equilibrium dissipation and ideal glass

Junying Jiang, Liang Gao and Hai-Bin Yu 2025 Rep. Prog. Phys. 88 118002


Challenges in CO₂ Reduction Selectivity Measurements by Hydrodynamic Methods

 

Electrochemical CO₂ reduction converts CO₂ to higher-value products using an electrocatalyst and could pave the way for electrification of the chemical industry. A key challenge for CO₂ reduction is its poor selectivity (faradaic efficiency) due to competition with the hydrogen evolution reaction in aqueous electrolytes. Rotating ring-disk electrode (RRDE) experiments have become a popular method to quantify faradaic efficiencies, especially for gold electrocatalysts. However, such measurements suffer from poor inter-laboratory reproducibility. This work identifies the causes of variability in RRDE selectivity measurements by comparing protocols with different electrochemical methods, reagent purities, and glassware cleaning procedures. Electroplating of electrolyte impurities onto the disk and ring surfaces was identified as a major contributor to electrocatalyst deactivation. These results highlight the need for standardized and cross-laboratory validation of CO₂RR selectivity measurements using RRDE. Researchers implementing this technique for CO₂RR selectivity measurements need to be cognizant of electrode deactivation and its potential impacts on faradaic efficiencies and the overall conclusions of their work.
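
For orientation (the generic RRDE relation, quoted by us rather than taken from the abstract): CO produced at the disk is re-oxidized at the ring, and the faradaic efficiency for CO is typically estimated as FE(CO) = i_ring/(N × i_disk), where N is the collection efficiency; the two-electron counts of the reduction and re-oxidation cancel. Any impurity-driven deactivation of the disk or ring therefore feeds directly into the measured currents and hence the inferred selectivity.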

Maria Kelly

Maria Kelly is a Jill Hruby Postdoctoral Fellow at Sandia National Laboratories. She earned her PhD in Professor Wilson Smith’s research group at the University of Colorado Boulder and the National Renewable Energy Laboratory. Her doctoral work focused on characterization of carbon dioxide conversion interfaces using analytical electrochemical and in situ scanning probe methods. Her research interests broadly encompass advancing experimental measurement techniques to investigate the near-electrode environment during electrochemical reactions.

 

 


Time crystal emerges in acoustic tweezers

Acoustic tweezers A purple bead is suspended in mid-air by sound waves emanating from the black circular speakers. (Courtesy: NYU’s Center for Soft Matter Research)

Pairs of nonidentical particles trapped in adjacent nodes of a standing wave can harvest energy from the wave and spontaneously begin to oscillate, researchers in the US have shown. What is more, these interactions appear to violate Newton’s third law. The researchers believe their system, which is a simple example of a classical time crystal, could offer an easy way to measure mass with high precision. It might also, they hope, provide insights into emergent periodic phenomena in nature.

Acoustic tweezers use sound waves to create a potential-energy well that can hold an object in place – they are the acoustic analogue of optical tweezers. In the case of a single trapped object, this can be treated as a dissipationless process, in which the particle neither gains nor loses energy from the trapping wave.

In the new work, David Grier of New York University, together with graduate student Mia Morrell and undergraduate Leela Elliott, created an ultrasound standing wave in a cavity and levitated two objects (beads) in adjacent nodes.

“Ordinarily, you’d say ‘OK, they’re just going to sit there quietly and do nothing’,” says Grier; “And if the particles are identical, that’s exactly what’s going to happen.”

Breaking the law

If the two particles differ in size, material or any other property that affects acoustic scattering, they can spontaneously begin to oscillate. Even more surprisingly, this motion appears unconstrained by the requirement that momentum be conserved – Newton’s third law.

“Who ordered that?”, muses Grier.

The periodic oscillation, which has a frequency parametrized only by the properties of the particles and independent of the trapping frequency, forms a very simple type of emergent active matter called a time crystal.

The trio analysed the behaviour of adjacent particles trapped in this manner using the laws of classical mechanics, and discovered an important subtlety had been missed. When identical particles are trapped in nearby nodes, they interact by scattering waves, but the interactions are equal and opposite and therefore cancel.

“The part that had never been worked out before in detail is what happens when you have two particles with different properties interacting with each other,” says Grier. “And if you put in the hard work, which Mia and Leela did, what you find is that to the first approximation there’s nothing out of the ordinary.” At the second order, however, the expansion contains a nonreciprocal term. “That opens up all sorts of opportunities for new physics, and one of the most striking and surprising outcomes is this time crystal.”

Stealing energy

This nonreciprocity arises because, if one particle is more strongly affected by the mutual scattering than the other, it can be pushed farther away from the node of the standing wave and pick up potential energy, which can then be transferred through scattering to the other particle. “The unbalanced forces give the levitated particles the opportunity to steal some energy from the wave that they ordinarily wouldn’t have had access to,” explains Grier. The wave also carries away the missing momentum, resolving the apparent violation of Newton’s third law.

If it were acting in isolation, this energy input would make the oscillations unstable and throw the particles out of the nodes. However, energy is removed by viscosity: “If everything is absolutely right, the rate at which the particles consume energy exactly balances the rate at which they lose energy to viscous drag, and if you get that perfect, delicious balance, then the particles can jiggle in place forever, taking the fuel from the wave and dumping it back into the system as heat.” This can be stable indefinitely.
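
The balance Grier describes can be captured in a minimal toy model (our construction, not the team’s equations): two damped oscillators in their trap potentials with an antisymmetric – that is, nonreciprocal – coupling, so the “action” on one particle is not matched by an equal and opposite “reaction” on the other. A sketch in Python, with the coupling tuned to the knife-edge value where energy input matches viscous loss:

    import numpy as np
    from scipy.integrate import solve_ivp

    omega0 = 1.0             # trap (node) frequency
    gamma = 0.1              # viscous drag coefficient
    kappa = gamma * omega0   # nonreciprocal coupling at the balance point

    def rhs(t, y):
        # Particle 1 feels +kappa*x2 while particle 2 feels -kappa*x1:
        # the coupling is antisymmetric, so it can do net work on the pair
        x1, v1, x2, v2 = y
        a1 = -omega0**2 * x1 - gamma * v1 + kappa * x2
        a2 = -omega0**2 * x2 - gamma * v2 - kappa * x1
        return [v1, a1, v2, a2]

    sol = solve_ivp(rhs, (0, 500), [1.0, 0.0, 0.0, 0.0], max_step=0.05)

    # At kappa = gamma*omega0 the oscillation neither grows nor decays
    amp = np.hypot(sol.y[0], sol.y[2])
    print("amplitude early:", amp[200], "late:", amp[-200])

Setting kappa below this value lets drag win and the motion dies away; above it, the oscillation grows until effects beyond this linear sketch take over.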

The researchers have filed a patent application for the use of the system to measure particle masses with microgram-scale precision from the oscillation frequency. Beyond this, they hope the phenomenon will offer insights into emergent periodic phenomena across timescales in nature: “Your neurons fire at kilohertz, but the pacemaker in your heart hopefully goes about once per second,” explains Grier.

The research is described in Physical Review Letters.

“When I read this I got somehow surprised,” says Glauber Silva of The Federal University of Alagoas in Brazil; “The whole thing of how to get energy from the surrounding fields and produce motion of the coupled particles is something that the theoretical framework of this field didn’t spot before.”

“I’ve done some work in the past, both in simulations and in optical systems that are analogous to this, where similar things happen, but not nearly as well controlled as in this particular experiment,” says Dustin Kleckner of University of California, Merced. He believes this will open up a variety of further questions: “What happens if you have more than two? What are the rules? How do we understand what’s going on and can we do more interesting things with it?” he says. 


Giant barocaloric cooling effect offers a new route to refrigeration

A new cooling technique based on the principles of dissolution barocaloric cooling could provide an environmentally friendly alternative to existing refrigeration methods. With a cooling capacity of 67 J/g and an efficiency of nearly 77%, the method developed by researchers from the Institute of Metal Research of the Chinese Academy of Sciences can reduce the temperature of a sample by 27 K in just 20 seconds – far more than is possible with standard barocaloric materials.

Traditional refrigeration relies on vapour-compression cooling. This technology has been around since the 19th century, and it relies on a fluid changing phase. Typically, an expansion valve allows a liquid refrigerant to evaporate into a gas, absorbing heat from its surroundings as it does so. A compressor then forces the refrigerant back into the liquid state, releasing the heat.

While this process is effective, it consumes a lot of electricity, and there is little room left for improvement: after more than a century of refinement, the vapour-compression cycle is fast approaching the maximum efficiency set by the Carnot limit. The refrigerants are also often toxic, contributing to environmental damage.

In recent years, researchers have been exploring caloric cooling as a possible alternative. Caloric cooling works by controlling the entropy, or disorder, within a material using magnetic or electric fields, mechanical forces or applied pressure. The last option, known as barocaloric cooling, is in some ways the most promising. However, most of the known barocaloric materials are solids, which suffer from poor heat transfer efficiency and limited cooling capacity. Transferring heat in and out of such materials is therefore slow.

A liquid system

The new technique overcomes this limitation thanks to a fundamental thermodynamic process called endothermic dissolution. The principle of endothermic dissolution is that when a salt dissolves in a solvent, some of the bonds in the solvent break. Breaking those bonds takes energy, and so the solvent cools down – sometimes dramatically.

In the new work, researchers led by metallurgist and materials scientist Bing Li discovered a way to reverse this process by applying pressure. They began by dissolving a salt, ammonium thiocyanate (NH₄SCN), in water. When they applied pressure to the resulting solution, the salt precipitated out (an exothermic process) in line with Le Chatelier’s principle, which states that when a system in chemical equilibrium is disturbed, it will adjust itself to a new equilibrium by counteracting as far as possible the effect of the change.

When they then released the pressure, the salt re-dissolved almost immediately. This highly endothermic process absorbs a massive amount of heat, causing the temperature of the solution to drop by nearly 27 K at room temperature, and by up to 54 K at higher temperatures.
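
A quick back-of-envelope check (ours, with an assumed heat capacity): if the quoted cooling capacity of 67 J/g is absorbed entirely within the solution, the expected temperature drop is ΔT ≈ q/c_p ≈ 67/2.5 ≈ 27 K for a near-saturated salt solution with a specific heat of roughly 2.5 J g⁻¹ K⁻¹ – consistent with the reported room-temperature figure.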

A chaotropic salt

Li and colleagues did not choose NH₄SCN by chance. The material is a chaotropic agent, meaning that it disrupts hydrogen bonding, and it is highly soluble in water, which helps to maximize the amount present in the solution during that part of the cooling cycle. It also has a large enthalpy of solution, meaning that its temperature drops dramatically when it dissolves. Finally, and most importantly, it is highly sensitive to applied pressures in the range of hundreds of megapascals, which is within the capacity of conventional hydraulic systems.

Li says that he and his colleagues’ approach, which they detail in Nature, could encourage other researchers to find similar techniques that likewise do not rely on phase transitions. As for applications, he notes that because aqueous NH₄SCN barocaloric cooling works well at high temperatures, it could be suited to the demanding thermal management requirements of AI data centres. Other possibilities include air conditioning in domestic and industrial vehicles and buildings.

There are, however, some issues that need to be resolved before such cooling systems find their way onto the market. NH₄SCN and similar salts are corrosive, which could damage refrigerator components. The high pressures required in the current system could also prove damaging over the long run, Li adds.

To address these and other drawbacks, the researchers now plan to study other such near-saturated solutions at the atomic level, with a particular focus on how they respond to pressure. “Such fundamental studies are vital if we are to optimize the performance of these fluids as refrigerants,” Li tells Physics World.


The hidden footprint of hydrogen

Hydrogen is considered a clean fuel because it produces water rather than carbon dioxide when burned, and it is seen as a promising route toward lower emissions. It is especially valuable for replacing fossil fuels in industrial processes that require extremely high temperatures and are difficult to electrify. Although hydrogen itself is not a greenhouse gas like carbon dioxide, methane, or nitrous oxide (gases that trap heat in the Earth’s atmosphere), it can still indirectly contribute to warming. Normally, hydroxyl radicals, which are highly reactive atmospheric molecules made of one oxygen and one hydrogen atom with an unpaired electron, break down methane into carbon dioxide and water. But when hydroxyl radicals react with hydrogen instead, fewer radicals are available to remove methane, allowing methane to persist longer in the atmosphere and increasing its warming effect.

This study examines how hydrogen leakage in hydrogen‑based energy systems could influence the planet. The researchers analysed 23 different U.S. future scenarios, including some that eliminate fossil fuels entirely. They estimated how much hydrogen might leak in each scenario, compared those leaks to the remaining carbon dioxide and methane emissions, and calculated how much additional emissions reduction and/or carbon removal would be needed to offset the warming from hydrogen under low, medium, and high leak rates, and over both short‑term and long‑term warming timescales.

They found that although hydrogen leaks do contribute to warming, their impact is much smaller than the warming from the remaining carbon dioxide and methane in all scenarios. Hydrogen’s warming effect appears much larger over a 20-year period because its short‑lived chemical interactions amplify methane and ozone quickly, even though its long‑term impact remains relatively modest. Only small increases in carbon dioxide removal or small reductions in other emissions are needed to offset the warming caused by hydrogen leaks. However, because estimates of hydrogen leakage rates vary widely in the scientific literature, improved measurement and monitoring are essential.
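
The offset bookkeeping the authors perform can be sketched in a few lines (our illustration: the demand, leak rates and hydrogen GWP values below are placeholders, since published estimates vary widely, and none of these numbers is taken from the paper):

    # CO2-equivalent warming from leaked hydrogen, for a range of assumptions
    def co2e_from_h2_leak(h2_leaked_mt, gwp_h2):
        """Mt CO2e per year attributable to leaked H2."""
        return h2_leaked_mt * gwp_h2

    h2_demand_mt = 50.0                        # hypothetical annual H2 use (Mt)
    for leak_rate in (0.01, 0.05, 0.10):       # low / medium / high leakage
        for horizon, gwp in (("20-yr", 35.0), ("100-yr", 12.0)):  # illustrative
            extra = co2e_from_h2_leak(h2_demand_mt * leak_rate, gwp)
            print(f"leak {leak_rate:.0%}, {horizon}: offset ~{extra:.0f} Mt CO2e/yr")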

Read the full article

Estimating the climate impacts of hydrogen emissions in a net-zero US economy

Ansh N Nasta et al 2025 Prog. Energy 7 045001

Do you want to learn more about this topic?

Hydrogen storage in liquid hydrogen carriers: recent activities and new trends by Tolga Han Ulucan et al. (2023)


Transfer learning could help muon tomography identify illicit nuclear materials

Machine learning could help us use cosmic muons to peer inside large objects such as nuclear reactors. Developed by researchers in China, the technique is capable of identifying target materials such as uranium even if they are coated with other materials.

The muon is a subatomic particle that is essentially a heavier version of the electron. Huge numbers of cosmic muons are created in Earth’s atmosphere when cosmic rays collide with gas molecules. Thousands of cosmic muons per second rain down on every square metre of Earth’s surface and these particles can penetrate tens to hundreds of metres through solid materials.

As a result, cosmic muons are used to peer inside large objects such as nuclear reactors, volcanoes and ancient pyramids. This involves placing detectors next to an object and detecting muons that have passed through or scattered within the object. Detector data are then processed using a tomography algorithm to create a 3D image of the object’s interior.

Illicit nuclear materials

Muons tend to scatter more from high-atomic-number materials, so the technique is particularly sensitive to the presence of materials such as uranium. As a result, it has been used to create systems for the detection of illicit nuclear materials hidden in freight containers.

Muon tomography is relatively straightforward when the object is of simple construction – such as a pyramid built of stone and containing voids. Producing useful images of more complex targets – such as a freight container full of unknown objects – is much more difficult. The conventional computational approach is to calculate the muon-scattering physics of many different materials and combine these data with muon-tracking algorithms. This, however, tends to require huge computational resources.

Supervised machine learning has been used to reduce the computational overhead, but this requires prior knowledge of the target materials – limiting efficacy when imaging unknown and concealed materials. What is more, many materials in complex objects are coated with other materials and these coatings can affect muon scattering.

Now, Liangwen Chen at the Institute of Modern Physics of the Chinese Academy of Sciences and colleagues have used a technique called transfer learning to improve cosmic muon tomography of objects that contain coated materials. The idea of transfer learning is to begin with knowledge of the muon-scattering parameters of bare, uncoated materials and use machine learning to predict the parameters of coated materials. Chen and colleagues believe that this is the first application of transfer learning to muon tomography.

Monte Carlo simulations

The team began by creating a database describing how cosmic muons interact with representative materials with a wide range of atomic numbers. This was done by using Geant4 to do Monte Carlo simulations of how muons interact as they pass through materials. Geant4 is the most recent incarnation of the GEANT series of computer simulations, which have been used for over 50 years to design particle detectors and interpret the data that they produce.

Chen and colleagues used Geant4 to calculate how muons are scattered within nine materials ranging from magnesium (atomic number 12) to uranium (atomic number 92). These included common elements such as aluminium, copper and iron. The geometry of the scattering involves incoming cosmic muons with energies of 1 GeV and incident angles that are typical of cosmic muons. After scattering from a material target, the simulation assumes that the muons travel through two successive detectors, which measure the scattering angles. Data were generated for bare targets of the nine materials, as well as the nine materials coated with aluminium and polyethylene. Each simulation involved 500,000 muons passing through a target.

These data were then sampled using an inverse cumulative distribution function, as well as integration and interpolation. This is done to convert the data to a form that is optimal for training a neural network.
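
The resampling step described here is standard inverse transform sampling; a minimal sketch (our reconstruction for illustration – the binning and spectrum shape below are made up, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    edges = np.linspace(0.0, 50.0, 201)    # scattering-angle bin edges (mrad)
    counts = np.exp(-edges[:-1] / 8.0)     # toy simulated angle histogram

    pdf = counts / counts.sum()            # normalize to a probability mass
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])  # the "integration" step

    def sample_angles(n):
        """Map uniform draws through the inverse CDF; np.interp supplies
        the interpolation between tabulated points."""
        return np.interp(rng.uniform(size=n), cdf, edges)

    print(sample_angles(5))   # angles distributed like the histogram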

To use these data, the team created two lightweight neural-network frameworks for transfer learning: one based on fine-tuning, the other on a domain-adversarial neural network. According to the team, both frameworks were able to identify correlations between muon scattering-angle distributions and different target materials. Crucially, this was the case even when the target materials were coated in aluminium or polyethylene.
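
To make the fine-tuning idea concrete, here is a minimal PyTorch-style sketch (our construction, not the authors’ code; the feature dimension and architecture are assumptions): a classifier pretrained on bare-target scattering data is adapted to coated targets by freezing its feature extractor and retraining only the output head.

    import torch
    import torch.nn as nn

    N_FEATURES = 64    # e.g. binned scattering-angle statistics (assumed)
    N_MATERIALS = 9    # Mg ... U, as in the study

    features = nn.Sequential(
        nn.Linear(N_FEATURES, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
    )
    head = nn.Linear(128, N_MATERIALS)
    model = nn.Sequential(features, head)

    # Stage 1: pretrain the whole model on bare-target data (loop omitted)

    # Stage 2: fine-tune on coated-target data, preserving the learned
    # bare-target scattering features
    for p in features.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def finetune_step(x, y):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    x = torch.randn(32, N_FEATURES)            # stand-in coated-target batch
    y = torch.randint(0, N_MATERIALS, (32,))   # stand-in material labels
    print(finetune_step(x, y))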

Chen explains, “Transfer learning allows us to preserve the fundamental physical characteristics of muon scattering while efficiently adapting to unknown environments under shielding”.

Chen and colleagues are now trying to apply their process to more complicated scattering geometries. They also plan to include detector effects and targets made of several materials.

“By integrating simulation, physics, and data-driven learning, this research opens new pathways for applying artificial intelligence to nuclear science and security technologies,” says Chen.

The research is described in Nuclear Science and Techniques.


Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’

Katie Perry studied physics at the University of Surrey in the UK, staying on there to do a PhD. While at Surrey, she worked with the nuclear physicist Daphne Jackson, who was the first female physics professor in the UK. Perry later worked in science communication – both as a science writer and in public relations.

She is currently chief executive of the Daphne Jackson Trust – a charity that supports returners to research careers after a break of at least two years for family, caring or health reasons. It offers fellowships to support people to overcome the challenges of returning, ensuring that their skills, talent, training and career promise are not lost.

What skills do you use every day in your job?

One of the most important skills is multitasking and working in an agile and flexible way. I’m often travelling to meetings, conferences and other events so I have to work wherever I am, whether it’s on a train, in a hotel or at the office. How I work reminds me of a moment I had towards the end of my physics degree when suddenly everything I’d been learning seemed to fit together; I could see both the detail and the bigger picture. It’s the same now. I have to switch quickly from one project or task to another, while keeping oversight of the overall direction and operation of the charity.

I am a strong advocate for part time and flexible working, not just for me, but for all my staff and the Daphne Jackson fellows. As a manager, a key skill is to see the person and their value – not just the hours they are working. Communication and networking skills are also vital as much of my role involves developing collaborations and working with stakeholders. I could be meeting a university vice chancellor, attending a networking reception, talking to our fellows or ensuring the trust complies with charity governance – all in one day.

What do you like best and least about your job?

I love my current role, and at the risk of sounding a little cheesy, it’s because of the trust’s amazing staff and the inspiring returners we support. The fact that I knew Daphne Jackson means that leading the organization is personal to me. I’m always blown away by how inspirational, dedicated, motivated and talented our fellows are and I love supporting them to return to successful research careers. It’s a privilege to lead the charity, helping to understand the challenges and barriers that returners face – and finding ways to overcome them.

Leading a small charity requires a broad set of skills. I enjoy the variety but it’s a challenge because you’re not so much a “chief executive officer” as a “chief everything officer”. I don’t have huge teams of people to help me with, say, human resources, finance or health and safety, which means I sometimes struggle to do them as well as I’d like. It’s therefore important to have a good work-life balance, which is why I recently took up golf. I’ve yet to have a work meeting while out practising my swing, but one day my diary might say I’m “on a course”!

What do you know today that you wish you had known when you were starting out in your career?

If I could go back in time, I’d tell myself – like I now tell my daughter – that it’s fine not to have a defined career path or plan. Sure, it helps to have an idea of what you want to do, but you have to live and work a little to discover what you like and – more importantly – don’t like. Careers these days are highly non-linear. Unexpected life events happen so you have to adapt, just as our Daphne Jackson fellows have done.

If someone had said to me in my 20s, when I was planning a career in science communication, that I’d be a charity chief executive I wouldn’t have believed them. But here I am running a charity founded in memory of the physicist who was such a great mentor to me during my PhD. When one door closes, a window often opens – so don’t be afraid to set off in a new direction. It can be scary, but it’s often worth the effort.

I’d also tell my younger self to network like crazy. So many opportunities have opened up because I love speaking to people. You never know who you might meet at events or what making new connections can lead to. Finally, I wish I’d known that “impostor syndrome” will always be with you – and that it’s okay to feel that way provided you recognize it and manage it. Chances are, you may never defeat it completely.


Quantum scientists release ‘manifesto’ opposing the militarization of quantum research

More than 250 quantum scientists have signed a “manifesto” opposing the use of quantum research for military purposes. The statement – quantum scientists for disarmament – expresses a “deep concern” about the current geopolitical situation and “categorically rejects” the militarization of quantum research or its use in population control and surveillance. The signatories now call for an open debate about the ethical implications of quantum research.

While quantum science has the potential to improve many different areas – from sensors and medicine to computing – some are concerned about its applications for military purposes. These include quantum key distribution and cryptographic networks for communication as well as quantum clocks and sensing for military navigation and positioning.

Marco Cattaneo from the University of Helsinki in Finland, who co-authored the manifesto, says that even the potential applications of quantum technologies in warfare can be used to militarize universities and research agendas, which he says is already happening. He notes it is not unusual for scientists to openly discuss military applications at conferences or to include such details in scientific papers.

“We are already witnessing restrictions on research collaborations with fellow quantum scientists from countries that are geopolitically opposed or ambiguous with respect to the European Union, such as Russia or China,” says Cattaneo. “When talking with our non-European colleagues, we also realized that these concerns are global and multifaceted.”

Long-term aims

The idea for a manifesto originated during a quantum-information workshop that was held in Benasque in Spain between June and July 2025.

“During a session on science policy, we realized that many of us shared the same concerns about the growing militarization of quantum science and academia,” Cattaneo recalls. “As physicists, we have a strong – and terrible – historical example that can guide our actions: the development of nuclear weapons, and the way the physics community organized to oppose them and to push for their control and abolition.”

Cattaneo says that the first goal of the manifesto is to address the militarization of quantum research, which he calls “the elephant in the room”. The document also aims to raise awareness and open a debate within the community and create a forum where concerns can be shared.

“A longer-term goal is to prevent, or at least to limit and critically address, research on quantum technologies for military purposes,” says Cattaneo. He notes that “one concrete proposal” is to push public universities and research institutes to publish a database of all projects with military goals or military funding, which, he says, “would be a major step forward.”

Cattaneo claims the group is “not naïve” and understands that stopping the technology’s military application completely will not be possible. “Even if military uses of some quantum technologies cannot be completely stopped, we can still advocate for excluding them from public universities, for abolishing classified quantum research in public research institutions, and for creating associations and committees that review and limit the militarization of quantum technologies,” he adds.


India announces three new telescopes in the Himalayan desert

India has unveiled plans to build two new optical-infrared telescopes and a dedicated solar telescope in the Himalayan desert region of Ladakh. The three new facilities, expected to cost INR 35bn (about £284m), were announced by the Indian finance minister Nirmala Sitharaman on 1 February.

First up is a 3.7 m optical-infrared telescope, which is expected to come online by 2030. It will be built near the existing 2 m Himalayan Chandra Telescope (HCT) at Hanle, about 4500 m above sea level. Astronomers use the HCT for a wide range of investigations, including stellar evolution, galaxy spectroscopy, exoplanet atmospheres and time-domain studies of supernovae, variable stars and active galactic nuclei.

“The arid and high-altitude Ladakh desert is firmly established as among the world’s most attractive sites for multiwavelength astronomy,” Annapurni Subramaniam, director of the Indian Institute of Astrophysics (IIA) in Bangalore, told Physics World. “HCT has demonstrated both site quality and opportunities for sustained and competitive science from this difficult location.”

The 3.7 m telescope is a stepping stone towards a proposed 13.7 m National Large Optical-Infrared Telescope (NLOT), which is expected to open in 2038. “NLOT is intended to address contemporary astronomy goals, working in synergy with major domestic and international facilities,” says Maheswar Gopinathan, a scientist at the IIA, which is leading all three projects.

Gopinathan says NLOT’s large collecting area will enable research on young stellar systems, brown dwarfs and exoplanets, while also allowing astronomers to detect faint sources and to rapidly follow up extreme cosmic events and gravitational wave detections.

Along with India’s upgraded Giant Metrewave Radio Telescope, a planned gravitational-wave observatory in the country and the Square Kilometre Array in Australasia and South Africa, Gopinathan says that NLOT “will usher in a new era of multimessenger and multiwavelength astronomy.”

The third telescope to be supported is the 2 m National Large Solar Telescope (NLST), which will be built near Pangong Tso lake 4350 m above sea level. Also expected to come online by 2030, the NLST is an advance on India’s existing 50 cm telescope at the Udaipur Solar Observatory, which provides a spatial resolution of about 100 km. Scientists also plan to combine NLST observations with data from Aditya-L1, India’s space-based solar observatory, which launched in 2023.

“We have two key goals [with NLST],” says Dibyendu Nandi, an astrophysicist at the Indian Institute of Science Education and Research in Kolkata, “to probe small-scale perturbations that cascade into large flares or coronal mass ejections and improve our understanding of space weather drivers and how energy in localised plasma flows is channelled to sustain the ubiquitous magnetic fields.”

While bolstering India’s domestic astronomical capabilities, scientists say the Ladakh telescopes – located between observatories in Europe, the Americas, East Asia and Australia – would significantly improve global coverage of transient and variable phenomena.


Black hole is born with an infrared whimper

A faint flash of infrared light in the Andromeda galaxy was emitted at the birth of a stellar-mass black hole – according to a team of astronomers in the US. Kishalay De at Columbia University and the Flatiron Institute, and colleagues, noticed that the flash was followed by the rapid dimming of a once-bright star. They say that the star collapsed, with almost all of its material falling into a newly forming black hole. Their analysis suggests that there may be many more such black holes in the universe than previously expected.

When a massive star runs out of fuel for nuclear fusion it can no longer avoid gravitational collapse. As it implodes, such a star is believed to emit an intense burst of neutrinos, whose energy can be absorbed by the star’s outer layers.

In some cases, this energy is enough to tear material away from the core, triggering spectacular explosions known as core-collapse supernovae. Sometimes, however, this energy transfer is insufficient to halt the collapse, which continues until a stellar-mass black hole is created. These stellar deaths are far less dramatic than supernovae, and are therefore very difficult to observe.

Observational evidence for these stellar-mass black holes includes their gravitational influence on the motions of stars, and the gravitational waves emitted when they merge. So far, however, their initial formation has proven far more difficult to observe.

Mysterious births

“While there is consensus that these objects must be formed as the end products of the lives of likely very massive stars, there has remained little convincing observational evidence of watching stars turn into black holes,” De explains. “As a result, we don’t even have constraints on questions as fundamental as which stars can turn into black holes.”

The main problem is the low-key nature of the stellar implosions. While core-collapse supernovae shine brightly in the sky, “finding an individual star disappearing in a galaxy is remarkably difficult,” De says. “A typical galaxy has 100 billion stars in it, and being able to spot one that disappears makes it very challenging.”

Fortunately, it is believed that these stars do not vanish without a trace. “Whenever a black hole does form from the near complete inward collapse of a massive star, its very outer envelope must be still ejected because it is too loosely bound to the star,” De explains. As it expands and cools, models predict that this ejected material should emit a flash of infrared radiation – vastly dimmer than a supernova, but still bright enough for infrared surveys to detect.

To search for these flashes, De’s team examined data from NASA’s NEOWISE infrared survey and several other telescopes. They identified a near-infrared flash that was observed in 2014 and closely matched their predictions for a collapsing star. That flash was emitted by a supergiant star in the Andromeda galaxy.

Nowhere to be seen

Between 2017 and 2022, the star dimmed rapidly before disappearing completely across all regions of the electromagnetic spectrum. “This star used to be one of the most luminous stars in the Andromeda Galaxy, and now it was nowhere to be seen,” says De.

“Astronomers can spot supernovae billions of light years away – but even at this remarkable proximity, we didn’t see any evidence of an explosive supernova,” De says. “This suggests that the star underwent a near pure implosion, forming a black hole.”

The team also examined a previously-observed dimming in a galaxy 10 times more distant. While several competing theories had emerged to explain that disappearance, the pattern of dimming bore a striking resemblance to their newly-validated model, strongly suggesting that this event too signalled the birth of a stellar-mass black hole.

Because these events occurred so recently in ordinary galaxies like Andromeda, De’s team believe that similar implosions must be happening routinely across the universe – and they hope that their work will trigger a new wave of discoveries.

“The estimated mass of the star we observed is about 13 times the mass of the Sun, which is lower than what astronomers have assumed for the mass of stars that turn into black holes,” De says. “This fundamentally changes our understanding of the landscape of black hole formation – there could be many more black holes out there than we estimate.”

The research is described in Science.


International Year of Quantum Science and Technology draws to a close

The International Year of Quantum Science and Technology (IYQ) has officially closed following a two-day event in Accra, Ghana. The year has seen hundreds of events worldwide celebrating the science and applications of quantum physics.

Officially launched in February at the headquarters of the UN Educational, Scientific and Cultural Organization (UNESCO) in Paris, IYQ has involved hundreds of organizations – including the Institute of Physics, which publishes Physics World.

The year 2025 was chosen for an international year dedicated to quantum physics as it marks the centenary of the initial development of quantum mechanics by Werner Heisenberg. A range of international and national events have been held touching on quantum in everything from communications and computing to medicine and the arts.

One of the highlights of the year was a workshop on 9–14 June 2025 in Helgoland – the island off the coast of Germany where Heisenberg made his breakthrough exactly 100 years earlier. It was attended by more than 300 top quantum physicists, including four Nobel prize-winners, who gathered for talks, poster sessions and debates.

Another was the IOP’s two-day conference – Quantum Science and Technology: The First 100 Years; Our Quantum Future – held at the Royal Institution in London in November.

The closing event in Ghana, held on 10–11 February, was attended by government officials, UNESCO directors, physicists and representatives from international scientific societies, including the IOP. They discussed UNESCO’s official 2025 IYQ report, heard a reading of the winning entry in the IYQ 2025 poetry contest and attended an exhibition with displays from IYQ sponsors.

Organizers behind the IYQ hope its impact will be felt for many years to come. “The entire 2025 year was filled with impactful events happening all over the world. It has been a wonderful experience working alongside such dedicated and distinguished colleagues,” notes Duke University physicist Emily Edwards, who is a member of the IYQ steering committee. “We are thrilled to see the enthusiasm continue through to 2026 with the closing ceremony and are proud that a strong foundation has been laid for the years ahead.”

The UN has declared “international years” since 1959, to draw attention to topics deemed to be of worldwide importance. In recent years, there have been a number of successful science-based themes, including physics (2005), astronomy (2009), chemistry (2011), crystallography (2014) and light and light-based technologies (2015).

  • Read our two free-to-read quantum briefings, published in May and October, which feature articles on the history, mystery and industry of quantum mechanics.
  • Rewatch our Physics World Live: Quantum held in June that included a discussion of how technological developments have created a whole new ecosystem of “quantum 2.0” businesses


Asteroid deflection: why we need to get it right the first time

Science fiction became science fact in 2022 when NASA’s DART mission took the first steps towards creating a planetary defence system that could someday protect Earth from a catastrophic asteroid collision. However, much more work on asteroid deflection is needed from the latest generation of researchers – including Rahil Makadia, who has just completed a PhD in aerospace engineering at the University of Illinois at Urbana-Champaign.

In this episode of the Physics World Weekly podcast, Makadia talks about his work on how we could deflect asteroids away from Earth. We also chat about the potential threats posed by near-Earth asteroids – from shattered windows to global destruction.

Makadia’s stresses the importance of getting a deflection right the first time, because his calculations reveal that a poorly deflected asteroid could return to Earth someday. In November, he published a paper that explored how a bad deflection could send an asteroid into a “keyhole” that guarantees its return.

But it is not all doom and gloom: Makadia points out that our current understanding of near-Earth asteroids suggests that no major collision will occur for at least 100 years. So even if there is a threat on the horizon, we have plenty of time to develop deflection strategies and technologies.

The post Asteroid deflection: why we need to get it right the first time appeared first on Physics World.

Fluid gears make their debut

Flowing fluids that act like the interlocking teeth of mechanical gears offer a possible route to novel machines that suffer less wear-and-tear than traditional devices. This is the finding of researchers at New York University (NYU) in the US, who have been studying how fluids transmit motion and force between two spinning solid objects. Their work sheds new light on how one such object, or rotor, causes another object to rotate in the liquid that surrounds it – sometimes with counterintuitive results.

“The surprising part in our work is that the direction of motion may not be what you expect,” says NYU mathematician Leif Ristroph, who led the study together with mathematical physicist Jun Zhang. “Depending on the exact conditions, one rotor can cause a nearby rotor to spin in the opposite direction, like a pair of gears pressed together. For other cases, the rotors spin in the same direction, as if they are two pulleys connected by a belt that loops around them.”

Making gear teeth using fluids

Gears have been around for thousands of years, with the first records dating back to 3000 BC. While they have advanced over time, their teeth are still made from rigid materials and are prone to wearing out and breaking.

Ristroph says that he and Zhang began their project with a simple question: might it be possible to avoid this problem by making gears that don’t have teeth, and in fact don’t even touch, but are instead linked together by a fluid? The idea, he points out, is not unprecedented. Flowing air and water are commonly used to rotate structures such as turbines, so developing fluid gears to facilitate that rotation is in some ways a logical next step.

To test their idea, the researchers carried out a series of measurements aimed at determining how parameters like the spin rate and the distance between spinning objects affect the motion produced. In these measurements, they immersed the rotors – solid cylinders – in an aqueous glycerol solution with a controllable viscosity and density. They began by rotating one cylinder while allowing the other one to spin in response. Then they placed the cylinders at varying distances from each other and rotated the active cylinder at different speeds.

“The active cylinder should generate fluid flows and could therefore in principle cause rotation of the passive one,” says Ristroph, “and this is exactly what we observed.”

When the cylinders were very close to each other, the NYU team found that the fluid flows functioned like gear teeth – in effect, they “gripped” the passive rotor and caused it to spin in the opposite direction to the active one. However, when the cylinders were spaced farther apart and the active cylinder spun faster, the flows looped around the outside of the passive cylinder like a belt around a pulley, producing rotation in the same direction as the active cylinder.
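
The article does not give the experiment’s parameter values, but a natural dimensionless handle on “spun faster” is the rotational Reynolds number Re = Ωa²/ν of the active cylinder. Below is a minimal sketch with purely hypothetical rotor size, spin rates and viscosity (not numbers from the NYU experiments):

import math

def rotational_reynolds(rpm, radius_m, nu_m2s):
    """Re = omega * a^2 / nu for a cylinder of radius a spinning at omega."""
    omega = rpm * 2 * math.pi / 60.0   # convert rpm to rad/s
    return omega * radius_m ** 2 / nu_m2s

# Hypothetical values: a 1 cm rotor in a glycerol-water mix (nu ~ 1e-4 m^2/s)
for rpm in (10, 100, 1000):
    print(f"{rpm:5d} rpm -> Re = {rotational_reynolds(rpm, 0.01, 1e-4):.2f}")

Higher Re means fluid inertia plays a bigger role in shaping the flow, consistent with the belt-like mode appearing at faster spin rates.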

A model involving gear-like and belt-like modes

Ristroph says the team’s main difficulty was figuring out how to perform such measurements with the necessary precision. “Once we got into the project, an early challenge was to make sure we could make very precise measurements of the rotations, which required a special way to hold the rotors using air bearings,” he explains. Team member Jesse Smith, a PhD student and first author of a paper in Physical Review Letters about the research, was “brilliant in figuring out every step in this process”, Ristroph adds.

Another challenge the researchers faced was figuring out how to interpret their findings. This led them to develop a model involving “gear-like” and “belt-like” modes of induced rotations. Using this model, they showed that, at least in principle, fluid gears could replace regular gears and pulley-and-belt systems – though Ristroph suggests that applications such as transmitting rotations within a machine or keeping time in a mechanical device might be especially well suited.

In general, Ristroph says that fluid gears offer many advantages over mechanical ones. Notably, they cannot become jammed or wear out due to grinding. But that isn’t all: “There has been a lot of recent interest in designing new types of so-called active materials that are composed of many particles, and one class of these involves spinning particles in a fluid,” he explains. “Our results could help to understand how these materials behave based on the interactions between the particles and the flows they generate.”

The NYU researchers say their next step will be to study more complex fluids. “For example, a slurry of corn starch is an everyday example of a shear-thickening fluid and it would be interesting to see if this helps the rotors better ‘grip’ one another and therefore transmit the motions/forces more effectively,” Ristroph says. “We are also numerically simulating the processes, which should allow us to investigate things like non-circular shapes of the rotors or more than just two rotors,” he tells Physics World.

The post Fluid gears make their debut appeared first on Physics World.

New quantum-enabled proteins could improve biosensing

A new class of biomolecules called magneto-sensitive fluorescent proteins, or MFPs, could improve imaging of biological processes inside living cells and potentially underpin innovative therapies.

The fluorescent proteins commonly used in biological studies respond solely to light being shone at them. But because that light gets scattered by tissues, it is difficult to determine exactly where the resulting fluorescence originates. By contrast, the MFPs created by a team led by Harrison Steel, head of the Engineered Biotechnology Research Group at the University of Oxford in the UK, fluoresce partly in response to highly predictable magnetic fields and radio waves that pass through biological tissues without deflection.

Schematic showing MFP sensor operation
Sensor schematic An MFP excited by blue light emits green fluorescence, the intensity of which can be modulated by applying appropriate magnetic or radiofrequency fields. (Courtesy: Gabriel Abrahams)

To detect where MFPs are located within living cells, the researchers apply both a static magnetic field with a precisely known gradient and a radiofrequency (RF) signal, which modulate the fluorescence triggered via excitation by a light-emitting diode (LED).

The emitted fluorescence is brightest whenever the RF is in resonance with a transition energy of the entangled electron system present within the MFP. Since the resonance frequency depends on the surrounding magnetic field strength, the brightness reveals the protein’s location.
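
As a toy illustration of that localization principle, assume a linear resonance shift f_res = f0 + γB and a linear field profile B(x) = B0 + Gx; all the numbers below are hypothetical placeholders rather than values from the paper:

def position_from_resonance(f_res_hz, f0_hz, gamma_hz_per_t, b0_t, grad_t_per_m):
    """Invert f_res = f0 + gamma * (B0 + G * x) to recover position x (metres)."""
    b_local = (f_res_hz - f0_hz) / gamma_hz_per_t   # field at the emitter
    return (b_local - b0_t) / grad_t_per_m          # position along the gradient

# Example: 2 GHz zero-field transition, 28 GHz/T electron-like slope,
# 1 mT bias field and a 0.1 T/m gradient; emitter sitting at x = 2 mm.
f_meas = 2.0e9 + 28e9 * (1e-3 + 0.1 * 0.002)
x = position_from_resonance(f_meas, 2.0e9, 28e9, 1e-3, 0.1)
print(f"recovered position: {x * 1e3:.2f} mm")      # -> 2.00 mm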

As detailed in their recent Nature paper, the researchers engineered the MFPs by “directed evolution”: starting with a DNA sequence, making two to three thousand variants of it, and selecting the variants with the best fluorescence response to magnetic fields before repeating the entire process multiple times. The resulting proteins were tested via ODMR (optically detected magnetic resonance) and MFE (magnetic-field effect) experiments, revealing that they could be detected in single living cells and sense their local microenvironment.

Importantly, these MFPs can be made in research labs using a straightforward biological technique. “This is a totally different way of coming up with new quantum materials compared to other engineering efforts for quantum sensors like nitrogen vacancies [in diamonds] which need to be manufactured in highly specialized facilities,” explains first author Gabriel Abrahams, a doctoral student in Steel’s research group. Abrahams helped develop quantum diamond microscopes during his master’s in physics at the Quantum Nano Sensing Lab in Melbourne, Australia before moving on to the Oxford Interdisciplinary Bioscience Doctoral Training Programme.

The MFPs were inspired by the work of study co-authors Maria Ingaramo and Andy York, both then working for Calico Life Sciences. They had observed a small change in fluorescence when a magnet interacted with a quantum-enabled protein, explains Abrahams. “That was really cool! I hadn’t seen anything like that, and there were clearly potential applications if it could be made better,” he says.

Steel tells Physics World that “a lot of the past work in quantum biology was with fragile proteins, often at cryogenic temperatures. Surprisingly you could easily measure these MFPs in single living cells every few minutes as they can work for a long time at room temperature”. Furthermore, using MFPs only requires adding a magnet to existing fluorescence microscopy equipment, allowing new data to be cost-effectively obtained.

“For instance, you might use three or four fluorescent proteins to tag natural processes in a mammalian cell in a petri dish to see when they are being used and where they go. We could instead tag with 10 or 15 MFPs, allowing you to measure extra targets by just applying a magnetic field,” Steel explains.

Quantum engineer Peter Maurer from the University of Chicago in the US, who was not involved in the study, is enthusiastic about these new MFPs. “By combining magnetic fields and fluorescence, this work establishes an exciting new imaging modality with broad potential for future evolution. Notably, similar approaches could be directly applicable to qubits [quantum bits], such as the fluorescent protein qubits our team published in Nature last year,” he says.

Next, Steel intends to improve their instrumentation for using MFPs – much of which was adapted from researchers investigating how birds navigate via the Earth’s magnetic field. Future MFP applications could include microbiome studies sensing where bacteria travel in our bodies, and the development of highly controllable actuators for drug delivery. “If you would like to turn on the protein’s ability to bind to a cancer cell, for example, you could simply put a magnet on the outside of a person in the right location,” he concludes.

The post New quantum-enabled proteins could improve biosensing appeared first on Physics World.

Duke of Edinburgh informed about physics and the green economy at visit to Institute of Physics in London

Four photos of Prince Edward at the IOP's "Physics Powering the Green Economy" event
Royal approval (Clockwise from top left) The Duke of Edinburgh with IOP group chief executive Tom Grinyer; talking to Selina Ambrose from Promethean Particles; the exhibition he toured; and speaking after the panel debate. (Courtesy: Carmen Valino)

The Duke of Edinburgh visited the headquarters of the Institute of Physics (IOP) in central London on 5 February to learn about the role that physics plays in supporting the green economy.

The event was attended by about 100 business leaders, policy chiefs, senior physicists, and IOP and IOP Publishing staff. It highlighted how physics research is helping to deliver clean energy solutions and support economic growth.

A total of 12 companies took part in an exhibition that was visited by the duke. They included two carbon-capture firms – Nellie Technologies and Promethean Particles – as well as the fusion firm Tokamak Energy and Sunamp, which makes non-flammable “thermal batteries”.

The other firms were Intelligent Energy, Matoha Instrumentation, NESO, Oxford Instruments, Inductive Power Projection, QBA, Reclinker and Treeconomy.

The event included a panel debate chaired by Tara Shears, the IOP’s vice-president for science and innovation.

It featured ex-BP boss John Browne, who now works in green energy, along with Sizewell C energy-strategy director David Cole, Nellie Technologies founder Stephen Millburn, solar-cell physicist Jenny Nelson from Imperial College, and Emily Nurse from the UK’s Climate Change Committee.

After the debate, the duke said the event had showcased “some of the brilliant ideas that are trying to solve some really challenging issues through creativity and imagination”. He expressed particular delight that people are central to that mission.

“Our ability to evolve the right skills for the future has been well demonstrated here,” he said. “It comes down to creating the right climate to allow these ideas to flourish and come to market. We simply cannot drop this issue.”

Tom Grinyer, group chief executive of the IOP, reminded delegates that physics is fundamental to the UK economy. “We’re seeing how research is translating into real-world solutions that matter today, from clean power and climate intelligence, to advanced materials and future technologies,” he said.

But he warned that long-term investment in young people will be vital to create the physicists and business leaders who can tackle those challenges.

The post Duke of Edinburgh informed about physics and the green economy at visit to Institute of Physics in London appeared first on Physics World.

Earthquake-sensing network detects space debris as it falls to Earth

Graphic showing the path of space debris re-entering the atmosphere
Re-entry path The debris trajectory superimposed on a contour plot of the terrain beneath it. (Courtesy: S Economon and B Fernando)

When chunks of space debris make their fiery descent through the Earth’s atmosphere, they leave a trail of shock waves in their wake. Geophysicists have now found a way to exploit this phenomenon, using open-source seismic data from a network of earthquake sensors to monitor the waves produced by China’s Shenzhou-15 module as it fell to Earth in April 2024. The method is valuable, they say, because it makes it possible to follow debris – which can be hazardous to humans and animals – in near-real time as it travels towards the surface.

“We’re at the situation today where more and more spacecraft are re-entering the Earth’s atmosphere on a daily basis,” says team member Benjamin Fernando, a postdoctoral researcher at Johns Hopkins University in the US. “The problem is that we don’t necessarily know what happens to the fragments this space debris produces – whether they all break up in the atmosphere or if some of them reach the ground.”

Piggybacking on a network of earthquake sensors

As the Shenzhou-15 module re-entered the atmosphere, it began to disintegrate, producing debris that travelled at supersonic speeds (between Mach 25 and 30) over the US cities of Santa Barbara, California and Las Vegas, Nevada. The resulting sonic booms produced vibrations strong enough to be picked up by a network of 125 seismic stations spread over Nevada and Southern California.

Fernando and his colleague Constantinos Charalambous at Imperial College London in the UK used freely available data from these stations to measure the arrival times of the largest sonic-boom signals. Based on these data, they produced a contour map of the path the debris took and the direction in which it propagated. They also determined the altitude of the module along its path, using the ratio of the speed of sound to the apparent speed at which the wavefront generated by its supersonic flight swept over the seismic stations. Finally, they used a best-fit seismic inversion model to estimate where remnants of the module may have landed and the speed at which they travelled over the ground.
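
A minimal sketch of the wavefront-fitting step (synthetic station data, not the authors’ code): fit a plane wave t_i = t0 + p·x_i to the arrival times, then convert the apparent trace speed into an incidence angle via sin θ = c/v_app:

import numpy as np

c = 340.0                                 # speed of sound (m/s), sea-level value
stations = np.array([[0, 0], [5e3, 0], [0, 5e3], [7e3, 4e3]])  # x, y in metres

# Synthetic arrivals from a wavefront with slowness p_true and origin time t0
p_true = np.array([1.2e-3, 0.5e-3])       # s/m; |p| < 1/c, so v_app > c
t = stations @ p_true + 10.0

# Least-squares fit of [p_x, p_y, t0]
A = np.hstack([stations, np.ones((len(stations), 1))])
px, py, t0 = np.linalg.lstsq(A, t, rcond=None)[0]

v_app = 1.0 / np.hypot(px, py)            # apparent speed along the ground
theta = np.degrees(np.arcsin(c / v_app))  # incidence angle from vertical
print(f"trace speed {v_app:.0f} m/s, incidence {theta:.1f} deg from vertical")

Straight-ray geometry then turns the incidence angle and the wavefront’s ground position into an altitude estimate.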

The analyses revealed that the module travelled roughly 20–30 kilometres south of the trajectory that US Space Command had predicted based on measurements of the module’s orbit alone. The seismic data also showed that the module gradually disintegrated into smaller pieces rather than undergoing a single explosive disassembly.

Advantages of accurate tracking

To obtain an estimate of the object’s trajectory within seconds or minutes, the researchers had to simplify their calculations by ignoring the effects of wind and temperature variations in the lower troposphere (the lowest layer of the Earth’s atmosphere). This simplification also did away with the need to simulate the path of wave signals through the atmosphere, which was essential for previous techniques that relied on radar data to follow objects decaying in low Earth orbit. These older techniques, Fernando adds, produced predictions of the objects’ landing sites that could, in the worst cases, be out by thousands of kilometres.

The availability of accurate, near-real time debris tracking could be particularly helpful in cases where the debris is potentially harmful. As an example, Fernando cites an incident in 1996, when debris from the Russian Mars 96 spacecraft fell out of orbit. “People thought it burned up and [that] its radioactive power source landed intact in the ocean,” he says. “They tried to track it at the time, but its location was never confirmed. More recently, a group of scientists found artificial plutonium in a glacier in Chile that they believe is evidence the power source burst open during the descent and contaminated the area.”

Though Fernando emphasizes that it’s rare for debris to contain radioactive material, he argues “we’d benefit from having additional tracking tools” when it does.

Towards an automated algorithm for trajectory reconstruction

Fernando had previously used seismometers to track natural meteoroids, comets and asteroids on both Earth and Mars. In the latter case, he used data from InSight, a NASA Mars mission equipped with a seismometer.

“The meteoroids hitting the Red Planet were a really good seismic source for us,” he explains. “We detected the sonic booms from them breaking up and, occasionally, would actually detect the impact of them hitting the ground. We realized that we could actually apply those same techniques to studying space debris on Earth.

“This is an excellent example of a technique whose expertise we really perfected for a planetary-science, pure-science application. And then we were able to apply it to a really relevant, challenging problem here on Earth,” he tells Physics World.

The scientists say that in the longer term, they hope to develop an algorithm that automatically reconstructs the trajectory of an object. “At the moment, we’re having to find the sonic booms and analyse the data ‘by hand’,” Fernando says. “That’s obviously very slow, even though we’re getting better.”

A better solution, Fernando continues, would be to develop a machine learning tool that can find sonic booms in the data when a re-entry is expected, and then use those data to reconstruct the trajectory of an object. They are currently applying for funding to explore this option in a follow-up study.

Beyond that, there’s also the question of what to do with the data once they have it. “Who would we send the data to?” Fernando asks rhetorically. “Who needs to know about these events? If there’s a plane crash, hurricane, or similar, there are already good international frameworks in place for dealing with these events. It’s not clear to me, however, that such a framework for dealing with space debris has caught up with reality – either in terms of regulations or the response when such an event does happen.”

The current research is described in Science.

The post Earthquake-sensing network detects space debris as it falls to Earth appeared first on Physics World.

‘Relief’ as industrial megaproject in Chile that threatened world’s darkest skies is cancelled

A proposed industrial-scale green hydrogen and ammonia project in Chile that astronomers warned could cause “irreparable damage” to the clearest skies in the world has been cancelled. The decision by AES Andes, a subsidiary of the US power company AES Corporation, to shelve plans for the INNA complex has been welcomed by the European Southern Observatory (ESO).

AES Andes submitted an Environmental Impact Assessment for the green hydrogen project in December 2024. Expected to cover more than 3000 hectares, it would have been located just a few kilometres from ESO’s Paranal Observatory in Chile’s Atacama Desert, which is one of the world’s most important astronomical research sites due to its stable atmosphere and lack of light pollution.

That same month, ESO conducted its own impact assessment, concluding that INNA would increase light pollution above Paranal’s Very Large Telescope by at least 35% and by more than 50% above the southern site of the Cherenkov Telescope Array Observatory (CTAO).

Once built, the CTAO will be the world’s most powerful ground-based observatory for very-high-energy gamma-ray astronomy.

ESO director general Xavier Barcons had warned that the hydrogen project would have posed a major threat to “the performance of the most advanced astronomical facilities anywhere in the world”.

On 23 January, however, AES Andes announced that it would discontinue plans to develop the INNA complex. The firm stated that after a review of its project portfolio it had chosen to focus instead on renewable energy and energy storage. On 6 February AES Andes sent a letter to Chile’s Environmental Assessment Service requesting that INNA not be evaluated, formally confirming the end of the project.

Barcons says that ESO is “relieved” about the decision, adding that the case highlights the urgent need to establish clear protection measures in the areas around astronomical observatories.

Barcons notes that green-energy projects as well as other industrial projects can be “fully compatible” with astronomical observatories as long as the facilities are located at sufficient distances away.

Romano Corradi, director of the Gran Telescopio Canarias, which is located at the Roque de los Muchachos Observatory, La Palma, Spain, told Physics World that he was “delighted” with the decision.

Corradi adds that while it is unclear whether preserving the night-sky darkness of the region was a factor in the decision to cancel the project, he hopes that global pressure to defend dark skies played a role.

The post ‘Relief’ as industrial megaproject in Chile that threatened world’s darkest skies is cancelled appeared first on Physics World.

What shape is a uranium nucleus?

High-energy heavy-ion collisions, conducted at particle colliders such as CERN’s Large Hadron Collider (LHC) and BNL’s Relativistic Heavy Ion Collider (RHIC), are able to produce a state of matter called a quark-gluon plasma (QGP).

A QGP is believed to have existed just after the Big Bang. The building blocks of protons and neutrons – quarks and gluons – were not confined inside particles as usual but instead formed a hot, dense, strongly interacting soup.

Studying this state of matter helps us understand the strong nuclear force, the early universe, and how matter evolved into the forms we see today.

To understand the QGP created in a particle collider, you need to know the initial conditions – in this case, the shape and structure of the heavy nuclei that collided.

A major complicating factor here is that most atomic nuclei are deformed. They are not spherical but rather squashed and ellipsoidal or even pear-shaped.

Collisions of deformed nuclei with different orientations bring in a large amount of randomness and therefore hinder our ability to describe the initial conditions of the QGP.

A new method called imaging-by-smashing was developed by the STAR experiment at RHIC, where atomic nuclei are smashed together at extremely high speeds. By studying the patterns in the debris from these collisions, researchers can infer the original shape of the nuclei.
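
Concretely, those patterns are quantified through anisotropic-flow coefficients v_n in the azimuthal distribution of emitted particles, dN/dφ ∝ 1 + 2Σ_n v_n cos n(φ − Ψ_n). Below is a toy version of the generic two-particle estimator, with made-up statistics and input anisotropy; it illustrates the technique, not the STAR analysis itself:

import numpy as np

rng = np.random.default_rng(1)
v2_true, n_events, mult = 0.08, 500, 200   # hypothetical anisotropy and statistics

def sample_event(v2, m):
    """Rejection-sample m azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2 phi)."""
    phis = []
    while len(phis) < m:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * np.cos(2.0 * phi):
            phis.append(phi)
    return np.array(phis)

# Two-particle estimator: <cos 2(phi_i - phi_j)> = (|Q2|^2 - M) / (M (M - 1))
c2 = []
for _ in range(n_events):
    phi = sample_event(v2_true, mult)
    q2 = np.sum(np.exp(2j * phi))
    c2.append((abs(q2) ** 2 - mult) / (mult * (mult - 1)))

print(f"v2 estimate: {np.sqrt(np.mean(c2)):.3f} (input {v2_true})")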

In this latest study, they compared collisions between two types of nuclei: uranium-238, which has a strongly deformed shape, and gold-197, which is nearly spherical.

The differences between uranium and gold helped isolate the effects of uranium’s deformation. Their results matched predictions from advanced hydrodynamic simulations and earlier low-energy experiments.

Most interestingly, they found hints that uranium might possess a pear-like (octupole) shape, in addition to its dominant football-like (quadrupole) shape. This feature had not previously been observed in high-energy collisions.

This method is still new, but in the future it could give us key insights into nuclear structure throughout the periodic table. These measurements probe nuclei at energy scales orders of magnitude higher than traditional methods, potentially revealing how nuclear structure evolves across very different energy regimes.

Read the full article

Imaging nuclear shape through anisotropic and radial flow in high-energy heavy-ion collisions – IOPscience

The STAR Collaboration, 2025 Rep. Prog. Phys. 88 108601

The post What shape is a uranium nucleus? appeared first on Physics World.

Wave scattering explained

In quantum mechanics, a quantum state is a complete description of a system’s physical properties.

If the system changes slowly and returns to its original physical configuration, then its quantum state also returns to its original form except for a phase factor.

In a pioneering work in 1984, physicist Michael Berry discovered that this factor can be separated into two parts: the dynamic and the geometric phase.

The usual dynamic phase depends on energy and time and was already well understood. The new part, the geometric phase (or Berry phase, after its discoverer), arises purely from the geometry of the path that the state takes through parameter space.
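
In textbook notation (the standard result, not the new paper’s), a state carried slowly around a closed loop C in parameter space over a time T returns as

\[
  |\psi(T)\rangle = e^{i\phi_{\mathrm{dyn}}}\, e^{i\gamma}\, |\psi(0)\rangle,
  \qquad
  \phi_{\mathrm{dyn}} = -\frac{1}{\hbar}\int_0^T E(t)\,\mathrm{d}t,
  \qquad
  \gamma = i\oint_C \langle \psi(\mathbf{R})\,|\,\nabla_{\mathbf{R}}\,\psi(\mathbf{R})\rangle\cdot\mathrm{d}\mathbf{R},
\]

where γ depends only on the geometry of the loop, not on how fast it is traversed.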

The Berry phase has profound implications across physics, appearing in phenomena like the quantum Hall effect, molecular dynamics, and polarised light. It reveals deep connections between geometry, topology, and physical observables.

In a recent paper, this concept was extended from wave evolution to certain wave-scattering events, in which waves bounce off or pass through materials and acquire shifts in their properties.

In order to do this, the authors used a mathematical tool called a scattering matrix. The matrix encodes all the possible outcomes of a scattering process – reflection, transmission, or deflection – based on the system’s properties.

They showed that these wave shifts can also be split into dynamic and geometric parts. Importantly, this splitting can be done in a way that doesn’t depend on arbitrary choices (that is, it is gauge-invariant).

The team demonstrated their idea with known examples like light passing through a changing waveplate, beams reflecting off surfaces, and time delays in 1D systems.
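
A generic numerical illustration of such a gauge-invariant splitting (the standard discrete Berry-phase construction for an eigenvector family, not the authors’ scattering-matrix formalism): transport an eigenvector of a 2×2 matrix family around a closed parameter loop and read off the geometric phase from a Wilson-loop product of overlaps, which is unchanged by any per-point phase choice:

import numpy as np

def eigvec_plus(alpha):
    """Eigenvector (eigenvalue +1) of n.sigma with n = (sin a, 0, cos a)."""
    return np.array([np.cos(alpha / 2.0), np.sin(alpha / 2.0)])

alphas = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
vecs = [eigvec_plus(a) for a in alphas]
vecs.append(vecs[0])                     # close the loop in a fixed gauge

# Wilson-loop product of overlaps: a gauge-invariant geometric phase
overlaps = [np.vdot(vecs[k], vecs[k + 1]) for k in range(len(alphas))]
gamma = -np.angle(np.prod(overlaps))
print(f"geometric phase = {gamma:.4f} rad (expected magnitude pi for this loop)")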

Their approach not only describes known phenomena, but also reveals new physical features and uncovers previously unnoticed connections.

Going forward, identifying the geometric and dynamic origins of various scattering-induced shifts offers new ways to control wave-scattering phenomena.

This could have applications in photonics, imaging, quantum computing, and micromanipulation.

Read the full article

Dynamic and geometric shifts in wave scattering – IOPscience

K. Y. Bliokh et al, 2025 Rep. Prog. Phys. 88 107901


The post Wave scattering explained appeared first on Physics World.

Dual-tracer PET enables biologically individualized radiotherapy

Radiation therapy is usually delivered by prescribing the same radiation dose for each particular type of tumour. But this “one-size-fits-all” approach does not account for a tumour’s intrinsic radiosensitivity and heterogeneity and can lead to recurrence and treatment failure. Researchers in Sweden and Germany are now investigating whether biologically individualized radiotherapy plans, created using PET images of a patient’s tumour biology, can improve treatment outcomes.

The research team – headed up by Marta Lazzeroni from Stockholm University – studied 28 patients with advanced head-and-neck squamous cell carcinoma (HNSCC). All patients underwent two pre-treatment PET/CT scans, using 18F-fluoromisonidazole (FMISO) and 18F-FDG as tracers to respectively quantify radioresistance and tumour cellularity (the percentage of clonogenic cells) – both critical factors that influence treatment response.

“FMISO provides information on hypoxia-related radioresistance, but tumour control also strongly depends on the number of clonogenic cells, which is not captured by hypoxia imaging alone,” Lazzeroni explains. “To our knowledge, this is the first study to combine FMISO and FDG PET within a unified radiobiological framework to guide biologically individualized dose escalation.”

For each patient, the researchers used FMISO uptake to derive voxel-level maps of oxygen partial pressure (pO2) in the tumour and define a hypoxic target volume (HTV). The FDG scans were used to estimate spatial variations in clonogenic tumour cell density, which directly influence the dose required to realise a given tumour control probability (TCP).

Based on individual tumour profiles, the team used automated planning to create volumetric-modulated arc therapy plans comprising 35 fractions with an integrated boost. The plans delivered escalated dose to radioresistant subvolumes (the HTV), while maintaining clinically acceptable sparing of organs-at-risk. The PET datasets were used to calculate the prescribed dose required to achieve a TCP of 95%.
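
A minimal sketch of the kind of model behind such a prescription, assuming the common linear-quadratic Poisson TCP with an oxygen-enhancement-ratio (OER) dependence on pO2. The model form is standard radiobiology, but every number below is a made-up placeholder rather than a value from the study:

import numpy as np

alpha, beta = 0.35, 0.035        # generic LQ radiosensitivity (Gy^-1, Gy^-2)
oer_max, k_mmhg = 3.0, 3.0       # hypothetical OER ceiling and half-effect pO2
n_frac = 35                      # number of fractions, as in the study's plans

def oer(po2):
    """Dose-modifying factor: ~1 when well oxygenated, ~oer_max as pO2 -> 0."""
    return oer_max * (po2 + k_mmhg) / (oer_max * po2 + k_mmhg)

def tcp(dose_gy, po2_mmhg, clonogens):
    """Poisson TCP over voxels: TCP = prod_i exp(-N_i * SF_i)."""
    a_eff = alpha / oer(po2_mmhg)                # hypoxia reduces radiosensitivity
    b_eff = beta / oer(po2_mmhg) ** 2
    d = dose_gy / n_frac                         # dose per fraction
    sf = np.exp(-(a_eff + b_eff * d) * dose_gy)  # LQ survival over all fractions
    return float(np.exp(-np.sum(clonogens * sf)))

po2 = np.array([40.0, 5.0, 1.0])     # oxic, mildly and severely hypoxic voxels
n_cl = np.array([1e6, 1e6, 1e6])     # clonogenic cells per voxel (hypothetical)
print(f"uniform 70 Gy:  TCP = {tcp(np.full(3, 70.0), po2, n_cl):.2f}")
print(f"hypoxic boost:  TCP = {tcp(np.array([70.0, 78.0, 84.0]), po2, n_cl):.2f}")

Escalating dose only where the imaging indicates radioresistance recovers most of the lost control probability, which is the core logic of biologically individualized dose painting.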

Meeting clinical feasibility

The automated planning pipeline achieved high-quality treatment plans for all patients without manual intervention. The average EQD2 (the dose delivered in 2 Gy fractions that’s biologically equivalent to the total dose) to the HTV was boosted to 81±3.2 Gy, and all 28 plans met the clinical constraints for protecting the brainstem, spinal cord and mandible. Parotid glands were spared in 75% of cases, with the remainder being glands that overlapped the target.
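
For reference, EQD2 follows from the standard linear-quadratic conversion (a textbook definition; the study’s exact α/β choices are not quoted here):

\[
  \mathrm{EQD}_2 = D\,\frac{d + \alpha/\beta}{2\,\mathrm{Gy} + \alpha/\beta},
\]

where D is the total physical dose and d the dose per fraction.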

Lazzeroni and colleagues suggest that these results confirm the overall clinical feasibility of their personalized dose-escalation strategy and demonstrate how biology-guided prescriptions could be integrated into existing treatment planning workflows.

The researchers also performed a radiobiologic evaluation of the treatment plans to see whether the optimized dose distribution achieved the desired target control. For this, they calculated the TCP based on the planned dose distribution, the PET-derived radioresistance data and clonogenic cell density maps. For all patients, the plans achieved model-predicted TCP values exceeding 90%, a notable improvement on tumour control rates reported in the clinical literature for HNSCC, which are typically around 60%.

The proposed strategy is based on pre-treatment PET images, but biological changes during treatment – including temporal and spatial variations in tumour hypoxia – could impact its effectiveness. In future, the researchers suggest that longitudinal imaging, such as PET/CT scans at weeks 3 and 5, could be used to monitor evolving tumour biology and inform adaptive replanning. This is particularly relevant in HNSCC, where tumour shrinkage and reoxygenation are common, and where updated imaging is required to determine whether dose escalation or de-escalation is appropriate to maintain tumour control and optimize normal tissue sparing.

The researchers point out that as the biology-guided dose prescriptions were planned but not delivered, prospective trials will be required to assess whether the observed dosimetric and biologic gains translate to improved patient outcomes.

“This study was designed as a feasibility and modelling investigation, and the next step is prospective clinical validation,” Lazzeroni tells Physics World. “Based on the promising results of this approach, prospective clinical trials are currently in the planning phase within the group led by Anca-L Grosu in Germany. These trials will focus on integrating longitudinal PET imaging during treatment to enable biologically adaptive radiotherapy.”

The results are published in the Journal of Nuclear Medicine.

The post Dual-tracer PET enables biologically individualized radiotherapy appeared first on Physics World.

Entanglement reveals the difficulty of computational problems

Entanglement is a key resource for quantum computation and quantum technologies, but it can also tell us much about a computational problem. That is the conclusion of a recent paper by Achim Kempf and Einar Gabbassov – who are applied mathematicians at Canada’s University of Waterloo and are affiliated with Waterloo’s Institute for Quantum Computing and the Perimeter Institute for Theoretical Physics. Writing in Quantum Science and Technology, Gabbassov and Kempf show how entanglement plays a fundamental role in determining both the efficiency and the hardness of quantum computation problems.

They considered the role of entanglement in adiabatic quantum computing, which can be pictured in terms of a landscape of hills and valleys whose shape depends on the problem to be solved. A point on the landscape represents a candidate solution to the problem. This could be a configuration of possible states of three qubits, for example, or “a possible schedule for truck routes, or a particular shape for a pharmaceutical molecule”, says Kempf. The actual solution to the problem is then the lowest (deepest) point in the landscape, which corresponds to the lowest energy (the minimum).

This minimum is easy to find if the landscape is smooth and has only one valley. The problem is harder if there are multiple valleys (a rugged landscape), since you might get stuck in a valley you believe to be the deepest, but which is not, and then you would have to climb out of it.

In a classical computation, every possible valley must be checked one by one to find the deepest. However, Kempf explains that “in adiabatic quantum computing, the computer keeps track of all the valleys at once, by connecting them internally using entanglement”. Classically, many possibilities just mean many independent guesses at the deepest valley. With quantum effects, when one part of the landscape shifts, it affects the whole landscape all at once. He explains that instead of checking each valley one by one, we can check them all simultaneously, significantly increasing the speed at which the lowest point in the landscape is found.

Shapeshifting landscape

When given a difficult problem with many valleys, there is a risk of getting stuck in a valley that is shallow and not being able to climb out and find the lowest energy state. Adiabatic quantum computing gets around this issue through a clever shapeshifting of the landscape.

The process starts with an easy landscape, comprising only one valley. Since the solution is simple, the deepest valley, corresponding to the lowest energy state, is occupied quickly. Gradually the landscape is changed to contain more and more valleys, more closely approximating the more complicated landscape whose lowest point is the solution.

The lowest point changes with each change in the landscape, but the trick is that if the changes in the landscape are small enough, the deepest part of the landscape – and therefore the lowest energy state – will always be occupied. This is the basic principle of adiabatic quantum computing, which is often used in resource allocation, routing and logistics, and machine learning, where there can be huge numbers of possible variable configurations.
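
A toy numerical version of this schedule, assuming the standard interpolation H(s) = (1 − s)H_easy + sH_problem with a random four-qubit “landscape” standing in for a real problem, tracks how the gap between the lowest two energy levels narrows at the hardest point of the computation:

import numpy as np

n = 4                                      # qubits; dimension 2^n = 16
rng = np.random.default_rng(0)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(qubit, op):
    """Embed a single-qubit operator into the full 2^n-dimensional space."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == qubit else np.eye(2))
    return out

h_easy = -sum(op_on(i, sx) for i in range(n))      # one smooth valley
h_problem = np.diag(rng.normal(size=2 ** n))       # random rugged landscape

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * h_easy + s * h_problem)
    gaps.append(evals[1] - evals[0])

i_min = int(np.argmin(gaps))
print(f"minimum gap {gaps[i_min]:.3f} at s = {i_min / 100:.2f} (the bottleneck)")

The smaller this minimum gap, the more slowly the landscape must be reshaped for the system to stay in the deepest valley.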

Difficulty and computation time

In their work, Gabbassov and Kempf explore how the amount of entanglement required to find the deepest valley links to the difficulty and time needed to complete the problem.

A difficult problem would be a rugged landscape consisting of multiple valleys of similar depth located far apart from one another. To occupy the lowest energy state, we need to occupy all these valleys simultaneously. The entanglement needed to do this is greater, since the interconnectedness between the valleys is harder to maintain when they are further apart (when they have a large Hamming distance). The problem is also harder to solve because it is more difficult to discern which of these valleys is the deepest when they have similar depths – that is, when they are close in energy. This added difficulty is reflected both in the need for a greater amount of entanglement to keep track of the valleys and in the greater amount of time needed to distinguish the depths of the valleys and find the deepest one.

Gabbassov and Kempf show that a large amount of entanglement is needed at these difficult, bottleneck points of the computation, where it becomes harder to keep track of the valleys and more time is required to avoid falling into the wrong one. This is also where classical computation would normally slow down. Quantum effects are therefore most valuable at precisely these points, which makes them essential for identifying when and where adiabatic quantum computation can provide a genuine advantage over classical methods.
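
Continuing the same toy model (a random four-qubit instance, not one of the paper’s examples), one can compute the entanglement entropy of the instantaneous ground state across a half-chain cut; it vanishes at the start and end of the schedule and peaks in between, in line with the bottleneck picture:

import numpy as np

n = 4
rng = np.random.default_rng(0)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(qubit, op):
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == qubit else np.eye(2))
    return out

h_easy = -sum(op_on(i, sx) for i in range(n))
h_problem = np.diag(rng.normal(size=2 ** n))

def half_chain_entropy(state):
    """Von Neumann entropy (bits) across the cut between qubits 0-1 and 2-3."""
    schmidt = np.linalg.svd(state.reshape(4, 4), compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    _, vecs = np.linalg.eigh((1 - s) * h_easy + s * h_problem)
    print(f"s = {s:.2f}: half-chain entropy = {half_chain_entropy(vecs[:, 0]):.3f} bits")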

Kempf summarizes this as: “The hardness of any computational problem directly translates into the corresponding widespreadness of entanglement the quantum computer needs to keep track of all the valleys so that it can find the minimum point. Calculational hardness therefore means the need for sophisticated entanglement. Since entanglement is a precious and fragile resource, a hard problem that requires a lot of it can only be solved slowly.”

Entanglement therefore proves to be a useful tool not just for significantly increasing the speed of a computation, but also for characterizing a problem’s difficulty. As Gabbassov notes, “if we want to devise faster quantum algorithms, we should look not just at the amount of entanglement but also at how this entanglement redistributes/flows”, and therefore at the structure of the problem. Their work shows that the amount of entanglement used as a resource is more subtle than just providing a general computational speed-up.


The post Entanglement reveals the difficulty of computational problems appeared first on Physics World.
