
Fluid gears make their debut

12 February 2026 at 13:00

Flowing fluids that act like the interlocking teeth of mechanical gears offer a possible route to novel machines that suffer less wear-and-tear than traditional devices. This is the finding of researchers at New York University (NYU) in the US, who have been studying how fluids transmit motion and force between two spinning solid objects. Their work sheds new light on how one such object, or rotor, causes another object to rotate in the liquid that surrounds it – sometimes with counterintuitive results.

“The surprising part in our work is that the direction of motion may not be what you expect,” says NYU mathematician Leif Ristroph, who led the study together with mathematical physicist Jun Zhang. “Depending on the exact conditions, one rotor can cause a nearby rotor to spin in the opposite direction, like a pair of gears pressed together. For other cases, the rotors spin in the same direction, as if they are two pulleys connected by a belt that loops around them.”

Making gear teeth using fluids

Gears have been around for thousands of years, with the first records dating back to 3000 BC. While they have advanced over time, their teeth are still made from rigid materials and are prone to wearing out and breaking.

Ristroph says that he and Zhang began their project with a simple question: might it be possible to avoid this problem by making gears that don’t have teeth, and in fact don’t even touch, but are instead linked together by a fluid? The idea, he points out, is not unprecedented. Flowing air and water are commonly used to rotate structures such as turbines, so developing fluid gears to facilitate that rotation is in some ways a logical next step.

To test their idea, the researchers carried out a series of measurements aimed at determining how parameters like the spin rate and the distance between spinning objects affect the motion produced. In these measurements, they immersed the rotors – solid cylinders – in an aqueous glycerol solution with a controllable viscosity and density. They began by rotating one cylinder while allowing the other one to spin in response. Then they placed the cylinders at varying distances from each other and rotated the active cylinder at different speeds.

“The active cylinder should generate fluid flows and could therefore in principle cause rotation of the passive one,” says Ristroph, “and this is exactly what we observed.”

When the cylinders were very close to each other, the NYU team found that the fluid flows functioned like gear teeth – in effect, they “gripped” the passive rotor and caused it to spin in the opposite direction as the active one. However, when the cylinders were spaced farther apart and the active cylinder spun faster, the flows looped around the outside of the passive cylinder like a belt around a pulley, producing rotation in the same direction as the active cylinder.
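Whether the coupling behaves like gears or like a belt depends on the balance between viscous and inertial effects in the fluid, which is naturally framed in terms of a rotational Reynolds number. The sketch below computes this quantity for illustrative parameter values; the numbers (and the use of Re as the control parameter) are assumptions for intuition, not taken from the paper.

```python
# A rough, illustrative estimate of the rotational Reynolds number for a
# cylinder spinning in an aqueous glycerol solution. All values below are
# assumptions for illustration; the study's actual parameters may differ.
rho = 1200.0    # kg/m^3, density of a glycerol-water mixture (assumed)
mu = 0.5        # Pa.s, viscosity, tunable by changing the mixture (assumed)
R = 0.01        # m, rotor radius (assumed)
omega = 10.0    # rad/s, spin rate of the active rotor (assumed)

Re = rho * omega * R**2 / mu
print(f"Re = {Re:.1f}")   # order 1: viscous and inertial effects compete
```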

A model involving gear-like and belt-like modes

Ristroph says the team’s main difficulty was figuring out how to perform such measurements with the necessary precision. “Once we got into the project, an early challenge was to make sure we could make very precise measurements of the rotations, which required a special way to hold the rotors using air bearings,” he explains. Team member Jesse Smith, a PhD student and first author of a paper in Physical Review Letters about the research, was “brilliant in figuring out every step in this process”, Ristroph adds.

Another challenge the researchers faced was figuring out how to interpret their findings. This led them to develop a model involving “gear-like” and “belt-like” modes of induced rotations. Using this model, they showed that, at least in principle, fluid gears could replace regular gears and pulley-and-belt systems – though Ristroph suggests that applications such as transmitting rotation in a machine or keeping time in a mechanical device might be especially well suited.

In general, Ristroph says that fluid gears offer many advantages over mechanical ones. Notably, they cannot become jammed or wear out due to grinding. But that isn’t all: “There has been a lot of recent interest in designing new types of so-called active materials that are composed of many particles, and one class of these involves spinning particles in a fluid,” he explains. “Our results could help to understand how these materials behave based on the interactions between the particles and the flows they generate.”

The NYU researchers say their next step will be to study more complex fluids. “For example, a slurry of corn starch is an everyday example of a shear-thickening fluid and it would be interesting to see if this helps the rotors better ‘grip’ one another and therefore transmit the motions/forces more effectively,” Ristroph says. “We are also numerically simulating the processes, which should allow us to investigate things like non-circular shapes of the rotors or more than just two rotors,” he tells Physics World.

Earthquake-sensing network detects space debris as it falls to Earth

11 February 2026 at 14:00
Re-entry of space debris. Courtesy: S Economon and B Fernando

When chunks of space debris make their fiery descent through the Earth’s atmosphere, they leave a trail of shock waves in their wake. Geophysicists have now found a way to exploit this phenomenon, using open-source seismic data from a network of earthquake sensors to monitor the waves produced by China’s Shenzhou-15 module as it fell to Earth in April 2024. The method is valuable, they say, because it makes it possible to follow debris – which can be hazardous to humans and animals – in near-real time as it travels towards the surface.

“We’re at the situation today where more and more spacecraft are re-entering the Earth’s atmosphere on a daily basis,” says team member Benjamin Fernando, a postdoctoral researcher at Johns Hopkins University in the US. “The problem is that we don’t necessarily know what happens to the fragments this space debris produces – whether they all break up in the atmosphere or if some of them reach the ground.”

Piggybacking on a network of earthquake sensors

As the Shenzhou-15 module re-entered the atmosphere, it began to disintegrate, producing debris that travelled at supersonic speeds (between Mach 25 and 30) over the US cities of Santa Barbara, California and Las Vegas, Nevada. The resulting sonic booms produced vibrations strong enough to be picked up by a network of 125 seismic stations spread across Nevada and Southern California.

Fernando and his colleague Constantinos Charalambous at Imperial College London in the UK used freely available data from these stations to measure the arrival times of the largest sonic boom signals. Based on these data, they produced a contour map of the path the debris took and the direction in which it propagated. They also determined the module’s altitude by comparing the speed of sound with the apparent speed at which the wavefront generated by its supersonic flight swept across the seismic stations. Finally, they used a best-fit seismic inversion model to estimate where remnants of the module may have landed and the speed at which they travelled over the ground.
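To give a flavour of the kind of array processing involved, the sketch below fits a plane wavefront to sonic-boom arrival times recorded at a handful of stations, recovering the apparent sweep speed and an incidence angle. The station layout, arrival-time picks and sound speed are invented for illustration; the study’s actual inversion is considerably more sophisticated.

```python
# Fit a plane wavefront t_i = t0 + s.x_i to arrival times at an array of
# seismic stations, where s is the horizontal slowness vector.
import numpy as np

c_sound = 0.34   # km/s near the surface (assumed)
xy = np.array([[0, 0], [40, 5], [80, 12], [20, 60], [60, 55]], float)  # km
t = np.array([0.0, 10.2, 20.5, 18.1, 23.9])   # s, hypothetical picks

A = np.hstack([np.ones((len(t), 1)), xy])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
t0, s = coef[0], coef[1:]

v_app = 1.0 / np.linalg.norm(s)    # apparent horizontal sweep speed, km/s
# The ratio of sound speed to apparent speed gives the wavefront's
# incidence angle (measured from the vertical):
theta = np.degrees(np.arcsin(min(1.0, c_sound / v_app)))
print(f"apparent speed {v_app:.1f} km/s, incidence angle {theta:.1f} deg")
```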

The analyses revealed that the module travelled roughly 20-30 kilometres south of the trajectory that US Space Command had predicted based on measurements of the module’s orbit alone. The seismic data also showed that the module gradually disintegrated into smaller pieces rather than undergoing a single explosive disassembly.

Advantages of accurate tracking

To obtain an estimate of the object’s trajectory within seconds or minutes, the researchers had to simplify their calculations by ignoring the effects of wind and temperature variations in the lower troposphere (the lowest layer of the Earth’s atmosphere). This simplification also did away with the need to simulate the path of wave signals through the atmosphere, which was essential for previous techniques that relied on radar data to follow objects decaying in low Earth orbit. These older techniques, Fernando adds, produced predictions of the objects’ landing sites that could, in the worst cases, be out by thousands of kilometres.

The availability of accurate, near-real time debris tracking could be particularly helpful in cases where the debris is potentially harmful. As an example, Fernando cites an incident in 1996, when debris from the Russian Mars 96 spacecraft fell out of orbit. “People thought it burned up and [that] its radioactive power source landed intact in the ocean,” he says. “They tried to track it at the time, but its location was never confirmed. More recently, a group of scientists found artificial plutonium in a glacier in Chile that they believe is evidence the power source burst open during the descent and contaminated the area.”

Though Fernando emphasizes that it’s rare for debris to contain radioactive material, he argues “we’d benefit from having additional tracking tools” when it does.

Towards an automated algorithm for trajectory reconstruction

Fernando had previously used seismometers to track natural meteoroids, comets and asteroids on both Earth and Mars. In the latter case, he used data from InSight, a NASA Mars mission equipped with a seismometer.

“The meteoroids hitting the Red Planet were a really good seismic source for us,” he explains. “We detected the sonic booms from them breaking up and, occasionally, would actually detect the impact of them hitting the ground. We realized that we could actually apply those same techniques to studying space debris on Earth.

“This is an excellent example of a technique that we really perfected the expertise for a planetary science kind of pure science application. And then we were able to apply it to a really relevant, challenging problem here on Earth,” he tells Physics World.

The scientists say that in the longer term, they hope to develop an algorithm that automatically reconstructs the trajectory of an object. “At the moment, we’re having to find the sonic booms and analyse the data ‘by hand’,” Fernando says. “That’s obviously very slow, even though we’re getting better.”

A better solution, Fernando continues, would be to develop a machine learning tool that can find sonic booms in the data when a re-entry is expected, and then use those data to reconstruct the trajectory of an object. They are currently applying for funding to explore this option in a follow-up study.

Beyond that, there’s also the question of what to do with the data once they have it. “Who would we send the data to?” Fernando asks rhetorically. “Who needs to know about these events? If there’s a plane crash, hurricane, or similar, there are already good international frameworks in place for dealing with these events. It’s not clear to me, however, that such a framework for dealing with space debris has caught up with reality – either in terms of regulations or the response when such an event does happen.”

The current research is described in Science.

Samples from the far side of the Moon shed light on lunar asymmetry

10 February 2026 at 10:00

The near and far sides of the Moon are very different in their chemical composition, their magmatic activity and the thickness of their crust. The reasons for this difference are not fully understood, but a new study of rocks brought back to Earth by China’s Chang’e-6 mission has provided the beginnings of an answer. According to researchers at the Chinese Academy of Sciences (CAS) in Beijing, who measured iron and potassium isotopes in four samples from the Moon’s gigantic South Pole-Aitken Basin (SPA), the discrepancy likely stems from the giant meteorite impact that created the basin.

China has been at the forefront of lunar exploration in recent years, beginning in 2007 with the launch of the lunar orbiter Chang’e-1. Since then, it has carried out several uncrewed missions to the lunar surface. In 2019, one of these, Chang’e-4, became the first craft to touch down on the far side of the Moon, landing in the SPA’s Von Kármán crater. The SPA, a 2500-km-wide feature, extends from the near to the far side of the Moon and is one of the oldest known impact craters in our solar system, with an estimated age of between 4.2 and 4.3 billion years.

Next in the series was Chang’e-5, which launched in November 2020 and subsequently returned 1.7 kg of samples from the near side of the Moon – the first lunar samples brought back to Earth in nearly 50 years. Hot on the heels of this feat came the return of samples from the far side of the Moon aboard Chang’e-6 after it launched on 3 May 2024.

A hypothesis that aligns with previous results

When scientists at the CAS Institute of Geology and Geophysics and colleagues analysed these samples, they found that the ratio of potassium-41 to potassium-39 is greater in the samples from the SPA basin than in samples from the near side collected by Chang’e-5 and NASA’s Apollo missions. According to study leader Heng-Ci Tian, this potassium isotope ratio is a relic of the giant impact that formed this basin.

Tian explains that the impact created such intense temperatures and pressures that many of the volatile elements in the Moon’s crust and mantle – including potassium – evaporated and escaped into space. “Since the lighter potassium-39 isotope would more readily evaporate than the heavier potassium-41 isotope, the impact produced this greater ratio of potassium-41 to potassium-39,” says Tian. He adds that this explanation is also supported by earlier results, such as Chang’e-6’s discovery that the mantle on the far side contains less water than the near side.
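Evaporative loss of this kind is usually modelled as Rayleigh fractionation, in which the isotope ratio of the residue evolves as R/R0 = f^(α−1), where f is the fraction of the element remaining and α is the fractionation factor. The sketch below works through the arithmetic with assumed values; neither the fractionation factor nor the evaporated fraction comes from the study.

```python
# Rayleigh fractionation: enrichment of the heavy isotope in the residue.
import numpy as np

alpha = np.sqrt(39.0 / 41.0)  # ideal kinetic fractionation for 41K/39K (assumed)
f = 0.5                       # fraction of potassium remaining (assumed)

R_over_R0 = f ** (alpha - 1.0)          # residue ratio relative to initial
delta41K = (R_over_R0 - 1.0) * 1000.0   # enrichment in per mil
print(f"41K/39K enrichment of residue: ~{delta41K:.0f} per mil")
```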

Before drawing this conclusion, the researchers, who report their work in the Proceedings of the National Academy of Sciences, needed to rule out several other possible explanations. The options they considered included whether irradiation of the lunar surface by cosmic rays could have produced an unusual isotopic ratio, and whether magma melting, cooling and eruptive processes could have changed the composition of the basaltic rocks. They also examined the possibility that contamination from meteorites could be responsible. Ultimately, though, they concluded that these processes would have had only negligible effects.

The effects of the impact

Tian says the team’s work represents the first evidence that an impact event of this size can volatilize materials deep within the Moon. But that’s not all. The findings also offer the first direct evidence that large impacts play an important role in transforming the Moon’s crust and mantle. Fewer volatiles, for example, would limit volcanic activity by suppressing magma formation – something that would explain why the lunar far side contains so few of the vast volcanic plains, or maria, that appear dark to us when we look at the Moon’s near side from Earth.

“The loss of moderately volatile elements – and likely also highly volatile elements – would have suppressed magma generation and volcanic eruptions on the far side,” Tian says. “We therefore propose that the SPA impact contributed, at least partially, to the observed hemispheric asymmetry in volcanic distribution.”

Technical challenges

Having hypothesized that moderately volatile elements could be an effective means of tracing lunar impact effects, Tian and colleagues were eager to use the Chang’e-6 samples to investigate how such a large impact affects the shallow and deep lunar interior. But it wasn’t all smooth sailing. “A major technical challenge was that the Chang’e‑6 samples consist mainly of fine-grained materials, making it difficult to select large individual grains,” he recalls. “To overcome this, we developed an ultra‑low‑consumption potassium isotope analytical protocol, which ultimately enabled high‑precision potassium isotope measurements at the milligram level.”

The current results are preliminary, and the researchers plan to analyse additional moderately volatile element isotopes to verify their conclusions. “We will also combine these findings with numerical modelling to evaluate the global-scale effects of the SPA impact,” Tian tells Physics World.

Unusual astronomical event might be a ‘superkilonova’

9 February 2026 at 10:00

An unusual signal initially picked up by the LIGO and Virgo gravitational wave detectors and subsequently by optical telescopes around the world may have been a “superkilonova” – that is, a kilonova that took place inside a supernova. According to a team led by astronomers at the California Institute of Technology (Caltech) in the US, the event in question happened in a galaxy 1.3 billion light years away and may have been the result of a merger between two or more neutron stars, including at least one that was less massive than our Sun. If confirmed, it would be the first superkilonova ever recorded – something that study leader Mansi Kasliwal says would advance our understanding of what happens to massive stars at the end of their lives.

A supernova is the explosion of a massive star that occurs either because the star no longer produces enough energy to prevent gravitational collapse, or because it suddenly acquires large amounts of matter from a disintegrating nearby object. Such explosions are an important source of heavy elements such as carbon and iron in the universe.

Kilonovae are slightly different. They occur when two neutron stars collide, producing heavier elements such as gold and uranium. Both types of events can be detected from the ripples they produce in spacetime – gravitational waves – and from the light they give off that propagates across the cosmos to Earth-based observers.

An unusual new kilonova candidate

The first – and so far only – confirmed observation of a kilonova came in 2017 as a result of two neutron stars colliding. This event, known as GW170817, produced gravitational waves that were detected by the Laser Interferometer Gravitational-wave Observatory (LIGO), which is operated by the US National Science Foundation, and by its European partner Virgo. Several ground-based and space telescopes also observed light signals associated with GW170817, which enabled astronomers to build a clear picture of what happened.

Kasliwal and colleagues now believe they have evidence for a second kilonova with a very different cause. They say that this event, initially called ZTF 25abjmnps and then renamed AT2025ulz, could be a kilonova driven by a supernova – something that has never been observed before, although theorists predicted it was possible.

A chain of detections

The gravitational waves from AT2025ulz reached Earth on 18 August 2025 and were picked up by the LIGO detectors in Louisiana and Washington and by Virgo in Italy. Scientists there quickly alerted colleagues to the signal, which appeared to be coming from a merger between two objects, one of which was unusually small. A few hours later, the Zwicky Transient Facility (ZTF) at California’s Palomar Observatory identified a rapidly fading red object 1.3 billion light-years away that appeared to be in the same location as the gravitational wave source.

Several other telescopes that were previously part of the Kasliwal-led GROWTH (Global Relay of Observatories Watching Transients Happen) programme, including the W M Keck Observatory in Hawaiʻi and the Fraunhofer telescope at the Wendelstein Observatory in Germany, picked up the event’s trail. Their observations confirmed that the light eruption had faded fast and glowed at red light wavelengths – exactly what was observed with GW170817.

A few days later, though, something changed. AT2025ulz grew brighter, and telescopes began to pick up hydrogen lines in its light spectra – a finding that suggested it was a Type IIb (stripped-envelope core-collapse) supernova, not a kilonova.

Two possible explanations

Kasliwal, however, remained puzzled. While she agrees that AT2025ulz does not resemble GW170817, she argues that it also doesn’t look like a run-of-the-mill supernova.

In her view, that leaves two possibilities. The first involves a process called fragmentation in which a rapidly spinning star explodes in a supernova and collects a disc of material around it as it collapses. This disc material subsequently aggregates into a tiny neutron star in much the same way as planets form. The second possibility is that a rapidly spinning massive star exploded as a supernova and then split into two tiny neutron stars, both much less massive than our Sun, which later merged. In other words, a supernova may have produced twin neutron stars that then merged to make a kilonova inside it – that is, a superkilonova.

“We have never seen any hints of anything like this before,” Kasliwal says. “It is amazing to me that nature may make tiny neutron stars smaller than a solar mass and more than one neutron star may be born inside a stripped-envelope supernova.”

While Kasliwal describes the data as “tantalizing”, she acknowledges that firm evidence for a superkilonova would require nebular infrared spectroscopy from either the W M Keck Observatory or the James Webb Space Telescope (JWST). On this occasion, that’s not going to be possible, as the event occurred outside JWST’s visibility window and was too far away for Keck to gather infrared spectra – though Kasliwal says the team did get “beautiful” optical spectra from Keck.

The only real way to test the superkilonova theory would be to find more events like AT2025ulz, and Kasliwal is hoping to do just that. “We will be keeping a close eye on any future events in which there are hints that the neutron star is sub-solar and look hard for a young stripped envelope supernova that could have exploded at the same time,” she tells Physics World. “Future superkilonova discoveries will open up this entirely new avenue into our understanding of what happens to massive stars.”

The study is detailed in The Astrophysical Journal Letters.

Metasurfaces create super-sized neutral atom arrays for quantum computing

6 February 2026 at 10:00

A new way of creating arrays of ultracold neutral atoms could make it possible to build quantum computers with more than 100,000 quantum bits (qubits) – two orders of magnitude higher than today’s best machines. The approach, which was demonstrated by physicists at Columbia University in the US, uses optical metasurfaces to generate the forces required to trap and manipulate the atoms. According to its developers, this method is much more scalable than traditional techniques for generating arrays of atomic qubits.

“Neutral atom arrays have become a leading quantum technology, notably for quantum computing, where single atoms serve as qubits,” explains atomic physicist Sebastian Will, who co-led the study with his Columbia colleague Nanfang Yu. “However, the technology available so far to make these arrays limits array sizes to about 10,000 traps, which corresponds to a maximum of 10,000 atomic qubits.”

Building on a well-established technique

In common with standard ways of constructing atomic qubit arrays, the new method relies on a well-established technique known as optical tweezing. The principle of optical tweezing is that highly focused laser beams generate forces at their focal points that are strong enough to trap individual objects – in this case, atoms.

To create many such trapping sites while maintaining tight control of the laser’s light field, scientists typically use devices called spatial light modulators (SLMs) and acousto-optic deflectors (AODs) to split a single moderately intense laser beam into many lower-intensity ones. Such arrays have previously been used to trap thousands of atoms at once. In 2025, for example, researchers at the California Institute of Technology in the US created arrays containing up to 6100 trapped atoms – a feat that Will describes as “an amazing achievement”.

A superposition of tens of thousands of flat lenses

In the new work, which is detailed in Nature, Will, Yu and colleagues replaced these SLMs and AODs with flat optical surfaces made up of two-dimensional arrays of nanometre-sized “pixels”. These so-called metasurfaces can be thought of as a superposition of tens of thousands of flat lenses. When a laser beam hits them, it produces tens of thousands of focal points in a unique pattern. And because the pixels in the Columbia team’s metasurfaces are smaller than the wavelength of light they are manipulating (300 nm compared to 520 nm), Yu explains that they can use these metasurfaces to generate tweezer arrays directly, without the need for additional bulky and expensive equipment.

The Columbia researchers demonstrated this by trapping atoms in several highly uniform two-dimensional (2D) patterns, including a square lattice with 1024 trapping sites; patterns shaped like quasicrystals and the Statue of Liberty with hundreds of sites; and a circle made up of atoms spaced less than 1.5 microns apart. They also created a 3.5 mm diameter metasurface that contains more than 100 million pixels and used it to generate a 600 × 600 array of trapping sites. “This is two orders of magnitude beyond the capabilities of current technologies,” Yu says.
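The quoted figures are easy to sanity-check. Treating the metasurface aperture as a square of 300 nm pixels slightly overcounts (a circular aperture gives about π/4 as many), but either way the total comfortably exceeds 100 million:

```python
# Back-of-envelope check of the numbers quoted above.
d_meta = 3.5e-3    # m, metasurface diameter
pitch = 300e-9     # m, pixel size

n_pixels = (d_meta / pitch) ** 2       # square-aperture estimate
print(f"~{n_pixels:.1e} pixels")       # ~1.4e8, i.e. well over 100 million
print(f"{600 * 600} trapping sites")   # the 600 x 600 array
```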

Another advantage of using metasurfaces, Will adds, is that they are “extremely resilient” to high laser intensities. “This is what is needed to trap hundreds of thousands of neutral atom qubits,” he explains. “Metasurfaces’ laser power handling capabilities go several orders of magnitude beyond the state of the art with SLMs and AODs.”

Laying the groundwork

For arrays of up to 1000 focal points, the researchers showed that their metasurface-generated arrays can trap single atoms with a high level of control and precision and with high single-atom detection fidelity. This is essential, they say, because it demonstrates that the arrays’ quality is high enough to be useful for quantum computing.

While they are not there yet, Will says that the metasurface atomic tweezer arrays they developed “lay the critical groundwork for realizing neutral-atom quantum computers that operate with more than 100,000 qubits”. These high numbers, he adds, will be essential for realizing quantum computers that can achieve “quantum advantage” by outperforming classical computers. “The large number of qubits also allows for more ‘redundancy’ in the system to realize highly-efficient quantum error correction codes, which can make quantum computing – which is usually fragile – more resilient,” he says.

The Columbia team is now working on further improving the quality of their metasurfaces. “On the atomic arrays side, we will now try to actually fill such arrays with more than 100 000 atoms,” Will tells Physics World. “Doing this will require a much more powerful laser than we currently have, but it’s in a realistic range.”

Schrödinger cat state sets new size record

5 February 2026 at 10:00
Massively quantum: The University of Vienna’s Multi-Scale Cluster Interference Experiment (MUSCLE), where researchers detected quantum interference in massive nanoparticles. (Courtesy: S Pedalino / Uni Wien)

Classical mechanics describes our everyday world of macroscopic objects very well. Quantum mechanics is similarly good at describing physics on the atomic scale. The boundary between these two regimes, however, is still poorly understood. Where, exactly, does the quantum world stop and the classical world begin?

Researchers in Austria and Germany have now pushed the line further towards the macroscopic regime by showing that metal nanoparticles made up of thousands of atoms clustered together continue to obey the rules of quantum mechanics in a double-slit-type experiment. At over 170 000 atomic mass units, these nanoparticles are heavier than some viroids and proteins – a fact that study leader Sebastian Pedalino, a PhD student at the University of Vienna, says demonstrates that quantum mechanics remains valid at this scale and alternative models are not required.

Multiscale cluster interference

According to the rules of quantum mechanics, even large objects behave as delocalized waves. However, we do not observe this behaviour in our daily lives because the characteristic length over which this behaviour extends – the de Broglie wavelength λdB = h/mv, where h is Planck’s constant, m is the object’s mass and v is its velocity – is generally much smaller than the object itself.

In the new work, a team led by Vienna’s Markus Arndt and Stefan Gerlich, in collaboration with Klaus Hornberger at the University of Duisburg-Essen, created clusters of sodium atoms in a helium-argon mixture at 77 K in an ultrahigh vacuum. The clusters each contained between 5000 and 10 000 atoms and travelled at velocities of around 160 m s−1, giving them de Broglie wavelengths of between 10 and 22 femtometres (1 fm = 10−15 m).
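These numbers are mutually consistent: inserting the quoted cluster sizes and velocity into the de Broglie relation λdB = h/mv recovers the quoted wavelength range, as the short check below shows.

```python
# Consistency check of the de Broglie wavelengths quoted above.
h = 6.626e-34      # J s, Planck's constant
amu = 1.661e-27    # kg, atomic mass unit
m_Na = 23.0 * amu  # mass of one sodium atom
v = 160.0          # m/s, cluster velocity

for n_atoms in (5000, 10000):
    lam = h / (n_atoms * m_Na * v)
    print(f"{n_atoms} atoms: lambda = {lam / 1e-15:.0f} fm")  # ~22 and ~11 fm
```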

To observe matter-wave interference in objects with such ultra-short de Broglie wavelengths, the team used an interferometer containing three diffraction gratings constructed with deep ultraviolet laser beams in a so-called Talbot–Lau configuration. The first grating channels the clusters through narrow gaps, from which their wave function expands. This wave is then modulated by the second grating, resulting in interference that produces a measurable striped pattern at the third grating.

This result implies that a cluster’s location is not fixed as it propagates through the apparatus. Instead, its wave function is spread over a span dozens of times larger than the cluster itself, meaning that it is in a superposition of locations rather than occupying a fixed position in space. This is known as a Schrödinger cat state, in reference to the famous thought experiment in which the physicist Erwin Schrödinger imagined a cat in a sealed box that is both dead and alive at once.

Pushing the boundaries for quantum experiments

The Vienna-Duisburg-Essen researchers characterized their experiment by calculating a quantity known as macroscopicity that combines the duration of the quantum state (its coherence time), the mass of the object in that state and the degree of separation between states. In this work, which they detail in Nature, the macroscopicity reached a value of 15.5 – an order of magnitude higher than the previous best reported measurement of this kind.

Arndt explains that this milestone was reached thanks to a long-term research programme that aims to push quantum experiments to ever higher masses and complexity. “The motivation is simply that we do not yet know if quantum mechanics is the ultimate theory or if it requires any modification at some mass limit,” he tells Physics World. While several speculative theories predict some degree of modification, he says, “as experimentalists our task is to be agnostic and see what happens”.

Arndt notes that the team’s machine is very sensitive to small forces, which can generate notable deflections of the interference fringes. In the future, he thinks this effect could be exploited to characterize the properties of materials. In the longer term, this force-sensing capability could even be used to search for new particles.

Interpretations and adventures

While Arndt says he is “impressed” that these mesoscopic objects – which are in principle easy to see and even to localize under a scattering microscope – can be delocalized on a scale more than 10 times their size if they are isolated and non-interacting, he is not entirely surprised. The challenge, he says, lies in understanding what it means. “The interpretation of this phenomenon, the duality between this delocalization and the apparently local nature in the act of measurement, is still an open conundrum,” he says.

Looking ahead, the researchers say they would now like to extend their research to higher mass objects, longer coherence times, higher force sensitivity and different materials, including nanobiological materials as well as other metals and dielectrics. “We still have a lot of work to do on sources, beam splitters, detectors, vibration isolation and cooling,” says Arndt. “This is a big experimental adventure for us.”

Interactions between dark matter and neutrinos could resolve a cosmic discrepancy

4 February 2026 at 14:00

Hints of non-gravitational interactions between dark matter and “relic” neutrinos in the early universe have emerged in a study of astronomical data from different periods of cosmic history. The study was carried out by cosmologists in Poland, the UK and China, and team leader Sebastian Trojanowski of Poland’s NCBJ and NCAC PAS notes that future telescope observations could verify or disprove these hints of a deep connection between dark matter and neutrinos.

Dark matter and neutrinos play major roles in the evolution of cosmic structures, but they are among the universe’s least-understood components. Dark matter is thought to make up over 25% of the universe’s mass, but it has never been detected directly; instead, its existence is inferred from its gravitational interactions. Neutrinos, for their part, are fundamental subatomic particles that have a very low mass and interact only rarely with normal matter.

Analysing data from different epochs

According to the standard (ΛCDM) model of cosmology, dark matter and neutrinos do not interact with each other. The work of Trojanowski and colleagues challenges this model by proposing that dark matter and neutrinos may have interacted in the past, when the universe was younger and contained many more neutrinos than it does today.

This proposal, they say, was partly inspired by a longstanding cosmic conundrum. Measurements of the early universe suggest that structures such as galaxies should have grown more rapidly than ΛCDM predicts. At the same time, observations of today’s universe indicate that matter is slightly less densely packed than expected. This suggests a slight mismatch between early and late measurements.

To explore the impact that dark matter-neutrino interactions (νDM) would have on this mismatch, a team led by Trojanowski’s colleague Lei Zu analysed data from different epochs of the universe’s evolution. Data from the young (high-redshift) universe came from two instruments – the ground-based Atacama Cosmology Telescope and the space-based Planck Telescope, which the European Space Agency operated from 2009 to 2013 – that were designed to study the afterglow of the Big Bang, which is known as the cosmic microwave background (CMB). Data from the older (low-redshift, or z < 3.5) universe, meanwhile, came from a variety of sources, including galaxy maps from the Sloan Digital Sky Survey and weak gravitational lensing data from the Dark Energy Survey (DES) conducted with the Dark Energy Camera on the Victor M Blanco Telescope in Chile.

“New insight into how structure formed in the universe”

Drawing on these data, the team calculated that an interaction strength u ≈ 10−4 between dark matter and neutrinos would be enough to resolve the discrepancy. The statistical significance of this result is nearly 3σ, which team member Yue-Lin Sming Tsai of the Purple Mountain Observatory in Nanjing, China says was “largely achieved by incorporating the high-precision weak lensing data from the DES with the weak lensing component”.
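To put a physical scale on u: in the νDM literature the interaction strength is commonly parametrized as u = (σνDM/σThomson) × (100 GeV/mDM). Assuming the study follows this convention – the definition is not given here, so treat this as illustrative only – the implied cross-sections for a few hypothetical dark matter masses are:

```python
# Translate the dimensionless interaction strength u into a cross-section,
# ASSUMING the common nuDM parametrization (not confirmed by the article):
# u = (sigma_nuDM / sigma_Thomson) * (100 GeV / m_DM).
sigma_T = 6.65e-25   # cm^2, Thomson cross-section
u = 1e-4             # the best-fit interaction strength quoted above

for m_DM_GeV in (1e-3, 1.0, 100.0):   # hypothetical DM masses
    sigma = u * sigma_T * (m_DM_GeV / 100.0)
    print(f"m_DM = {m_DM_GeV:g} GeV: sigma ~ {sigma:.1e} cm^2")
```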

While this is not high enough to definitively disprove the ΛCDM model, the researchers say it does show that the model is incomplete and requires further investigation. “Our study shows that interactions between dark matter and neutrinos could help explain this difference, offering new insight into how structure formed in the universe,” explains team member Eleonora Di Valentino, a senior research fellow at the University of Sheffield, UK.

Trojanowski adds that the ΛCDM model has been under growing pressure in recent years, while the Standard Model of particle physics cannot explain the nature of dark matter. “These two theories need to be extended to resolve these problems, and studying dark matter-neutrino interactions is a promising way to achieve this goal,” he says.

The team’s result, he continues, adds to the “massive amount of data” suggesting that we are reaching the limits of the standard cosmological model and may be at the dawn of understanding physics beyond it. “We illustrate that we likely need to bridge cosmological data and fundamental particle physics to describe the universe across different scales and so resolve current anomalies,” he says.

Two worlds

One of the challenges of doing this, Trojanowski adds, is that the two fields involved – cosmological data analysis and theoretical astroparticle physics – are very different. “Each field has its own approach to problem-solving and even its own jargon,” he says. “Fortunately, we had a great team and working together was really fun.”

The researchers say that data from future telescope observations, such as those from the Simonyi Survey Telescope at the Vera C Rubin Observatory (formerly known as the Large Synoptic Survey Telescope, LSST) and the China Space Station Telescope (CSST), could place more stringent tests on their hypothesis. Data from CMB experiments and weak lensing surveys, which map the distribution of mass in the universe by analysing how distant galaxies distort light, could also come in useful.

They detail their present research in Nature Astronomy.

Uranus and Neptune may be more rocky than icy, say astrophysicists

27 January 2026 at 14:00

Our usual picture of Uranus and Neptune as “ice giant” planets may not be entirely correct. According to new work by scientists at the University of Zürich (UZH), Switzerland, the outermost planets in our solar system may in fact be rock-rich worlds with complex internal structures – something that could have major implications for our understanding of how these planets formed and evolved.

Within our solar system, planets fall into three categories based on their internal composition. Mercury, Venus, Earth and Mars are deemed terrestrial rocky planets; Jupiter and Saturn are gas giants; and Uranus and Neptune are ice giants.

An agnostic approach

The new work, which was led by PhD student Luca Morf in UZH’s astrophysics department, challenges this last categorization by numerically simulating the two planets’ interiors as a mixture of rock, water, hydrogen and helium. Morf explains that this modelling framework is initially “agnostic” – meaning unbiased – about what the density profiles of the planets’ interiors should be. “We then calculate the gravitational fields of the planets so that they match with observational measurements to infer a possible composition,” he says.

This process, Morf continues, is then repeated and refined to ensure that each model satisfies several criteria. The first criterion is that the planet should be in hydrostatic equilibrium, meaning that its internal pressure is enough to counteract its gravity and keep it stable. The second is that the planet should have the gravitational moments observed in spacecraft data. These moments describe the gravitational field of a planet, which is complex because planets are not perfect spheres.

The final criterion is that the modelled planets need to be thermodynamically and compositionally consistent with known physics. “For example, a simulation of the planets’ interiors must obey equations of state, which dictate how materials behave under given pressure and temperature conditions,” Morf explains.
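As an illustration of the first criterion, the toy sketch below integrates the hydrostatic-equilibrium equation dP/dr = −Gm(r)ρ(r)/r² for a crude two-layer planet. The layer densities and radii are invented for illustration and are far simpler than the density profiles the UZH models explore.

```python
# Toy hydrostatic-equilibrium integration for a two-layer planet.
import numpy as np

G = 6.674e-11                # m^3 kg^-1 s^-2
R = 2.5e7                    # m, roughly Uranus-sized (assumed)
r = np.linspace(1.0, R, 20000)
dr = r[1] - r[0]
rho = np.where(r < 0.6 * R, 4000.0, 1000.0)   # rocky core, lighter envelope

m = np.cumsum(4 * np.pi * r**2 * rho * dr)    # enclosed mass m(r)
dPdr = -G * m * rho / r**2                    # hydrostatic equilibrium
P = -np.cumsum(dPdr[::-1] * dr)[::-1]         # integrate inward from P(R) ~ 0
print(f"central pressure ~ {P[0]:.1e} Pa")
```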

After each iteration, the researchers adjust the density profile of each planet and test it to ensure that the model continues to adhere to the three criteria. “We wanted to bridge the gap between existing physics-based models that are overly constrained and empirical approaches that are too simplified,” Morf explains. Avoiding strict initial assumptions about composition, he says, “lets the physics and data guide the solution [and] allows us to probe a larger parameter space.”

A wide range of possible structures

Based on their models, the UZH astrophysicists concluded that the interiors of Uranus and Neptune could have a wide range of possible structures, encompassing both water-rich and rock-rich configurations. More specifically, their calculations yield rock-to-water ratios of between 0.04 and 3.92 for Uranus and between 0.20 and 1.78 for Neptune.

Slices of different pies: According to models developed with “agnostic” initial assumptions, Uranus (top) and Neptune (bottom) could be composed mainly of water ice (blue areas), but they could also contain substantial amounts of silicon dioxide rock (brown areas). (Courtesy: Luca Morf)

The models, which are detailed in Astronomy and Astrophysics, also contain convective regions with ionic water pockets. The presence of such pockets could explain the fact that Uranus and Neptune, unlike Earth, have more than two magnetic poles, as the pockets would generate their own local magnetic dynamos.

Traditional “ice giant” label may be too simple

Overall, the new findings suggest that the traditional “ice giant” label may oversimplify the true nature of Uranus and Neptune, Morf tells Physics World. Instead, these planets could have complex internal structures with compositional gradients and different heat transport mechanisms. Though much uncertainty remains, Morf stresses that Uranus and Neptune – and, by extension, similar intermediate-class planets that may exist in other solar systems – are so poorly understood that any new information about their internal structure is valuable.

A dedicated space mission to these outer planets would yield more accurate measurements of the planets’ gravitational and magnetic fields, enabling scientists to refine the limited existing observational data. In the meantime, the UZH researchers are looking for more solutions for the possible interiors of Uranus and Neptune and improving their models to account for additional constraints, such as atmospheric conditions. “Our work will also guide laboratory and theoretical studies on the way materials behave in general at high temperatures and pressures,” Morf says.

New sensor uses topological material to detect helium leaks

26 January 2026 at 10:00

A new sensor detects helium leaks by monitoring how sound waves propagate through a topological material – no chemical reactions required. Developed by acoustic scientists at Nanjing University, China, the innovative, physics-based device is compact, stable, accurate and capable of operating at very low temperatures.

Helium is employed in a wide range of fields, including aerospace, semiconductor manufacturing and medical applications as well as physics research. Because it is odourless, colourless, and inert, it is essentially invisible to traditional leak-detection equipment such as adsorption-based sensors. Specialist helium detectors are available, but they are bulky, expensive and highly sensitive to operating conditions.

A two-dimensional acoustic topological material

The new device created by Li Fan and colleagues at Nanjing consists of nine cylinders arranged in three sub-triangles with tubes in between the cylinders. The corners of the sub-triangles touch and the tubes allow air to enter the device. The resulting two-dimensional system has a so-called “kagome” structure and is an example of a topological material – that is, one that contains special, topologically protected, states that remain stable even if the bulk structure contains minor imperfections or defects. In this system, the protected states are the corners.

To test their setup, the researchers placed speakers under the corners that send sound waves into the structure and make the gas within it vibrate at a certain frequency (the resonance frequency). When they replaced the air in the device with helium, the sound waves travelled faster, changing the vibration frequency. Measuring this shift in frequency enabled the researchers to calculate the concentration of helium in the device.
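The underlying scaling is straightforward: an acoustic mode’s frequency is proportional to the sound speed of the gas filling the structure, and helium’s sound speed is nearly three times that of air. The sketch below estimates the shift using a textbook ideal-gas mixing rule; the baseline frequency is an assumption, not the device’s actual operating point.

```python
# Estimate how a corner-state resonance shifts as helium displaces air,
# using mole-fraction-weighted ideal-gas heat capacities and molar masses.
import numpy as np

Rgas, T = 8.314, 293.0
M_he, M_air = 0.004, 0.029              # kg/mol
Cv_he, Cv_air = 1.5 * Rgas, 2.5 * Rgas  # monatomic vs (mostly) diatomic

def sound_speed(x_he):
    Cv = x_he * Cv_he + (1 - x_he) * Cv_air
    gamma = (Cv + Rgas) / Cv
    M = x_he * M_he + (1 - x_he) * M_air
    return np.sqrt(gamma * Rgas * T / M)

f0 = 4000.0   # Hz, assumed resonance frequency in air
for x in (0.0, 0.1, 0.5, 1.0):
    print(f"{100 * x:3.0f}% He: f ~ {f0 * sound_speed(x) / sound_speed(0):.0f} Hz")
```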

Many advantages over traditional gas sensors

Fan explains that the device works because the interface/corner states are impacted by the properties of the gas within it. This mechanism has many advantages over traditional gas sensors. First, it does not rely on chemical reactions, making it ideal for detecting inert gases like helium. Second, the sensor is not affected by external conditions and can therefore work at extremely low temperatures – something that is challenging for conventional sensors that contain sensitive materials. Third, its sensitivity to the presence of helium does not change, meaning it does not need to be recalibrated during operation. Finally, it detects frequency changes quickly and rapidly returns to its baseline once helium levels decrease.

As well as detecting helium, Fan says the device can also pinpoint the direction a gas leak is coming from. This is because when helium begins to fill the device, the corner closest to the source of the gas is impacted first. Each corner thus acts as an independent sensing point, giving the device a spatial sensing capability that most traditional detectors lack.

Other gases could be detected

Detecting helium leaks is important in fields such as semiconductor manufacturing, where the gas is used for cooling, and in medical imaging systems that operate at liquid helium temperatures. “We think our work opens an avenue for inert gas detection using a simple device and is an example of a practical application for two-dimensional acoustic topological materials,” says Fan.

While the new sensor was fabricated to detect helium, the same mechanism could also be employed to detect other gases such as hydrogen, he adds.

Spurred on by these promising preliminary results, which they report in Applied Physics Letters, the researchers plan to extend their fabrication technique to create three-dimensional acoustic topological structures. “These could be used to orientate the corner points so that helium can be detected in 3D space,” says Fan. “Ultimately, we are trying to integrate our system into a portable structure that can be deployed in real-world environments without complex supporting equipment,” he tells Physics World.

Shining laser light on a material produces subtle changes in its magnetic properties

21 January 2026 at 15:00

Researchers in Switzerland have found an unexpected new use for an optical technique commonly used in silicon chip manufacturing. By shining a focused laser beam onto a sample of material, a team at the Paul Scherrer Institute (PSI) and ETH Zürich showed that it was possible to change the material’s magnetic properties on a scale of nanometres – essentially “writing” these magnetic properties into the sample in the same way as photolithography etches patterns onto wafers. The discovery could have applications for novel forms of computer memory as well as fundamental research.

In standard photolithography – the workhorse of the modern chip manufacturing industry – a light beam passes through a transmission mask and projects an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create finely detailed structures.

In the new work, Laura Heyderman and colleagues in the joint PSI-ETH Zürich Mesoscopic Systems group began by placing a thin film of a magnetic material in a standard photolithography machine, but without a photoresist. They then scanned a focused laser beam with a wavelength of 405 nm over the surface of the sample, modulating its intensity to deliver varying doses of light. This process is known as direct write laser annealing (DWLA), and it makes it possible to heat areas of the sample that measure just 150 nm across.

In each heated area, thermal energy from the laser is deposited at the surface and partially absorbed by the film down to a depth of around 100 nm. The remainder dissipates through a silicon substrate coated in 300-nm-thick silicon oxide. However, the thermal conductivity of this substrate is low, which maximizes the temperature increase in the film for a given laser fluence. The researchers also sought to keep the temperature increase as uniform as possible by using thin-film heterostructures with a total thickness of less than 20 nm.
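A back-of-envelope estimate shows why modest laser fluences suffice to anneal such thin films: if the absorbed energy stays in the heated volume on the dwell timescale, the temperature rise is the fluence divided by the film’s volumetric heat capacity times its thickness. All numbers below are illustrative assumptions, not the team’s parameters.

```python
# Rough peak temperature rise of a laser-heated thin film (adiabatic limit).
F_abs = 50.0    # J/m^2, absorbed fluence (assumed)
d = 100e-9      # m, absorption depth quoted above
rho = 8000.0    # kg/m^3, typical metal-film density (assumed)
c_p = 450.0     # J/(kg K), typical specific heat (assumed)

dT = F_abs / (rho * c_p * d)
print(f"peak temperature rise ~ {dT:.0f} K")
```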

Crystallization and interdiffusion effects

Members of the PSI-ETH Zürich team applied this technique to several technologically important magnetic thin-film systems, including ferromagnetic CoFeB/MgO, ferrimagnetic CoGd and synthetic antiferromagnets composed of Co/Cr, Co/Ta or CoFeB/Pt/Ru. They found that DWLA induces both crystallization and interdiffusion effects in these materials. During crystallization, the orientation of the sample’s magnetic moments gradually changes, while interdiffusion alters the magnetic exchange coupling between the layers of the structures.

The researchers say that both phenomena could have interesting applications. The magnetized regions in the structures could be used in data storage, for example, with the direction of the magnetization (“up” or “down”) corresponding to the “1” or “0” of a bit of data. In conventional data-storage systems, these bits are switched with a magnetic field, but team member Jeffrey Brock explains that the new technique allows electric currents to be used instead. This is advantageous because electric currents are easier to produce than magnetic fields, while data storage devices switched with electricity are both faster and capable of packing more data into a given space.

Team member Lauren Riddiford says the new work builds on previous studies by members of the same group, which showed it was possible to make devices suitable for computer memory by locally patterning magnetic properties. “The trick we used here was to locally oxidize the topmost layer in a magnetic multilayer,” she explains. “However, we found that this works only in a few systems and only produces abrupt changes in the material properties. We were therefore brainstorming possible alternative methods to create gradual, smooth gradients in material properties, which would open possibilities to even more exciting applications, and realized that we could perform local annealing with a laser originally made for patterning polymer resist layers for photolithography.”

Riddiford adds that the method proved so fast and simple to implement that the team’s main challenge was to investigate all the material changes it produced. Physical characterization methods for ultrathin films can be slow and difficult, she tells Physics World.

The researchers, who describe their technique in Nature Communications, now hope to use it to develop structures that are compatible with current chip-manufacturing technology. “Beyond magnetism, our approach can be used to locally modify the properties of any material that undergoes changes when heated, so we hope researchers using thin films for many different devices – electronic, superconducting, optical, microfluidic and so on – could use this technique to design desired functionalities,” Riddiford says. “We are looking forward to seeing where this method will be implemented next, whether in magnetic or non-magnetic materials, and what kind of applications it might bring.”

Gravitational lensing sheds new light on Hubble constant controversy

16 January 2026 at 11:00

By studying how light from eight distant quasars is gravitationally lensed as it propagates towards Earth, astronomers have calculated a new value for the Hubble constant – a parameter that describes the rate at which the universe is expanding. The result agrees more closely with previous “late-universe” probes of this constant than it does with calculations based on observations of the cosmic microwave background (CMB) in the early universe, strengthening the notion that we may be misunderstanding something fundamental about how the universe works.

The universe has been expanding ever since the Big Bang nearly 14 billion years ago. We know this, in part, because of observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, Hubble discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The relationship between this speed and the galaxies’ distance is known as the Hubble constant, H0.

Astronomers have developed several techniques for measuring H0. The problem is that different techniques deliver different values. According to measurements made by the European Space Agency’s Planck satellite of CMB radiation “left over” from the Big Bang, the value of H0 is about 67 kilometres per second per megaparsec (km/s/Mpc), where one Mpc is about 3.3 million light-years. In contrast, “distance-ladder” measurements, such as those made by the SH0ES collaboration using observations of type Ia supernovae, yield a value of about 73 km/s/Mpc. This discrepancy is known as the Hubble tension.
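One way to appreciate the size of the discrepancy is through the Hubble time 1/H0, a rough proxy for the age of the universe: the two measured values imply ages that differ by more than a billion years.

```python
# Hubble time 1/H0 implied by the two classes of measurement.
km_per_Mpc = 3.086e19    # km in one megaparsec
s_per_Gyr = 3.156e16     # seconds in a gigayear

for H0 in (67.0, 73.0):  # km/s/Mpc
    t_H = km_per_Mpc / H0 / s_per_Gyr
    print(f"H0 = {H0:.0f} km/s/Mpc -> Hubble time ~ {t_H:.2f} Gyr")
```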

Time-delay cosmography

In the latest work, the TDCOSMO collaboration, which includes astronomers Kenneth Wong and Eric Paic of the University of Tokyo, Japan, measured H0 using a technique called time-delay cosmography. This well-established method dates back to 1964 and uses the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that from our perspective, these objects appear distorted.

“This is called gravitational lensing, and if the circumstances are right, we’ll actually see multiple distorted images, each of which will have taken a slightly different pathway to get to us, taking different amounts of time,” Wong explains.

By looking for changes in these images that are identical but slightly out of sync, astronomers can measure the differences in the time required for light from the objects to reach Earth. Then, by combining these data with estimates of the distribution of mass in the distorting galactic lens, they can calculate H0.
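The key geometric quantity is the “time-delay distance” DΔt = (1 + zl)DlDs/Dls, which a measured delay pins down and which scales as 1/H0 in a fixed background cosmology. The sketch below shows the rescaling logic in flat ΛCDM; the redshifts and the “measured” distance are invented for illustration.

```python
# Convert a (hypothetical) measured time-delay distance into H0 by
# rescaling a reference flat-LambdaCDM prediction, since D_dt ~ 1/H0.
import numpy as np
from scipy.integrate import quad

def comoving(z, H0, Om=0.3):
    c = 299792.458   # km/s
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + 1 - Om)
    return (c / H0) * quad(lambda zp: 1 / E(zp), 0, z)[0]   # Mpc

def D_dt(zl, zs, H0):
    Dl = comoving(zl, H0) / (1 + zl)    # angular-diameter distances
    Ds = comoving(zs, H0) / (1 + zs)
    Dls = (comoving(zs, H0) - comoving(zl, H0)) / (1 + zs)  # flat universe
    return (1 + zl) * Dl * Ds / Dls

zl, zs = 0.5, 2.0   # hypothetical lens and source redshifts
D_obs = 3200.0      # Mpc, hypothetical measured time-delay distance
H0 = 70.0 * D_dt(zl, zs, 70.0) / D_obs   # rescale the reference H0
print(f"H0 ~ {H0:.1f} km/s/Mpc")
```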

A real tension, not a measurement artefact

Wong and colleagues measured the light from eight strongly lensed quasars using various telescopes, including the James Webb Space Telescope (JWST), the Keck Telescopes and the Very Large Telescope (VLT). They also made use of observations of the Sloan Lens ACS (SLACS) sample with Keck and of the Strong Lensing Legacy Survey (SL2S) sample.

Based on these measurements, they obtained an H0 value of roughly 71.6 km/s/Mpc, which is more consistent with late-universe observations (such as those from SH0ES) than early-universe ones (such as those from Planck). Wong explains that this discrepancy supports the idea that the Hubble tension arises from real physics, not just some unknown error in the various methods. “Our measurement is completely independent of other methods, both early- and late-universe, so if there are any systematic uncertainties in those, we should not be affected by them,” he says.

The astronomers say that the SLACS and SL2S sample data are in excellent agreement with the new TDCOSMO-2025 sample, while the new measurements improve the precision of H0 to 4.6%. However, Paic notes that nailing down the value of H0 to a level that would “definitely confirm” the Hubble tension will require a precision of 1-2%. “This could be possible by increasing the number of objects observed as well as ruling out any systematic errors as yet unaccounted for,” he says.

Wong adds that while the TDCOSMO-2025 dataset contains its own uncertainties, multiple independent measurements should, in principle, strengthen the result. “One of the largest sources of uncertainty is the fact that we don’t know exactly how the mass in the lens galaxies is distributed,” he explains. “It is usually assumed that the mass follows some simple profile that is consistent with observations, but it is hard to be sure and this uncertainty can directly influence the values we calculate.”

The biggest hurdle, Wong adds, will “probably be addressing potential sources of systematic uncertainty, making sure we have thought of all the possible ways that our result could be wrong or biased and figuring out how to handle those uncertainties.”

The study is detailed in Astronomy & Astrophysics.

Quantum state teleported between quantum dots at telecoms wavelengths

14 January 2026 at 17:00

Physicists at the University of Stuttgart, Germany, have teleported a quantum state between photons generated by two different semiconductor quantum dot light sources located several metres apart. Though the distance involved in this proof-of-principle “quantum repeater” experiment is small, members of the team describe the feat as a prerequisite for future long-distance quantum communications networks.

“Our result is particularly exciting because such a quantum Internet will encompass these types of distant quantum nodes and will require quantum states that are transmitted among these different nodes,” explains Tim Strobel, a PhD student at Stuttgart’s Institute of Semiconductor Optics and Functional Interfaces (IHFG) and the lead author of a paper describing the research. “It is therefore an important step in showing that remote sources can be effectively interfaced in this way in quantum teleportation experiments.”

In the Stuttgart study, one of the quantum dots generates a single photon while the other produces a pair of photons that are entangled – meaning that the quantum state of one photon is closely linked to the state of the other, no matter how far apart they are. One of the photons in the entangled pair then travels to the other quantum dot and interferes with the photon generated there. This interference enables a joint measurement of the two photons, which transfers the state carried by the single photon onto the distant “partner” photon from the pair.
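For readers who want to see the protocol itself, here is a minimal numerical sketch of textbook one-qubit teleportation using numpy. It illustrates the logic the experiment realizes with photons – the input state, random seed and variable names are arbitrary choices, not parameters of the Stuttgart setup:

```python
import numpy as np

# Qubit 0 carries the state to teleport; qubits 1 and 2 are the entangled pair.
psi = np.array([0.6, 0.8j])                       # arbitrary single-photon state
bell = {                                          # the four Bell states
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}
state = np.kron(psi, bell["Phi+"])                # |psi>_0 (x) |Phi+>_12

# Pauli correction to apply to qubit 2, keyed by the Bell-measurement outcome.
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
correction = {"Phi+": I2, "Phi-": Z, "Psi+": X, "Psi-": Z @ X}

# Bell-state measurement on qubits 0 and 1 (the photons that interfere).
amps = {}
for name, b in bell.items():
    proj = np.kron(b.conj(), np.eye(2))           # <b|_01 (x) I_2, shape (2, 8)
    amps[name] = proj @ state                     # unnormalized state of qubit 2

probs = np.array([np.vdot(v, v).real for v in amps.values()])
outcome = np.random.default_rng(1).choice(list(bell), p=probs / probs.sum())

out = amps[outcome] / np.linalg.norm(amps[outcome])
recovered = correction[outcome] @ out             # the distant photon's state
print(outcome, f"fidelity = {abs(np.vdot(psi, recovered))**2:.6f}")  # 1.000000
```

Whatever the (random) measurement outcome, the corrected state of qubit 2 matches the input state with unit fidelity – the essence of teleportation.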

Quantum frequency converters

Strobel says the most challenging part of the experiment was making photons from two remote quantum dots interfere with each other. Such interference is only possible if the two particles are indistinguishable, meaning they must be similar in every regard, be it in their temporal shape, spatial shape or wavelength. In practice, however, each quantum dot is unique, especially in terms of its spectral properties, and each one emits photons at slightly different wavelengths.

To close the gap, the team used devices called quantum frequency converters to precisely tune the wavelength of the photons and match them spectrally. The researchers also used the converters to shift the original wavelengths of the photons emitted from the quantum dots (around 780 nm) to a wavelength commonly used in telecommunications (1515 nm) without altering the quantum state of the photons. This offers further advantages: “Being at telecommunication wavelengths makes the technology compatible with the existing global optical fibre network, an important step towards real-life applications,” Strobel tells Physics World.
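As a quick plausibility check on those numbers: if the converters rely on difference-frequency generation – a common way to build quantum frequency converters, though the article does not specify the team’s exact scheme – photon energy conservation fixes the pump wavelength needed to bridge 780 nm and 1515 nm:

```python
# Energy conservation in difference-frequency generation:
# 1/lam_in = 1/lam_out + 1/lam_pump  (photon energies scale as 1/wavelength)
lam_in, lam_out = 780.0, 1515.0                  # nm, values from the article
lam_pump = 1 / (1 / lam_in - 1 / lam_out)
print(f"required pump wavelength ~ {lam_pump:.0f} nm")   # ~1608 nm
```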

Proof-of-principle experiment

In this work, the quantum dots were separated by an optical fibre just 10 m in length. However, the researchers aim to push this to considerably greater distances in the future. Strobel notes that the Stuttgart study was published in Nature Communications back-to-back with an independent work carried out by researchers led by Rinaldo Trotta of Sapienza University in Rome, Italy. The Rome-based group demonstrated quantum state teleportation across the Sapienza University campus at shorter wavelengths, enabled by the brightness of their quantum-dot source.

“These two papers that we published independently strengthen the measurement outcomes, demonstrating the maturity of quantum dot light sources in this domain,” Strobel says. Semiconducting quantum dots are particularly attractive for this application, he adds, because as well as producing both single and entangled photons on demand, they are also compatible with other semiconductor technologies.

Fundamental research pays off

Simone Luca Portalupi, who leads the quantum optics group at IHFG, notes that “several years of fundamental research and semiconductor technology are converging into these quantum teleportation experiments”. For Peter Michler, who led the study team, the next step is to leverage these advances to bring quantum-dot-based teleportation technology out of a controlled laboratory environment and into the real world.

Strobel points out that there is already some precedent for this, as one of the group’s previous studies showed that they could maintain photon entanglement across a 36-km fibre link deployed across the city of Stuttgart. “The natural next step would be to show that we can teleport the state of a photon across this deployed fibre link,” he says. “Our results will stimulate us to improve each building block of the experiment, from the sample to the setup.”

Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously

9 January 2026 at 11:00

A new microscope that can simultaneously measure both forward- and backward-scattered light from a sample could allow researchers to image both micro- and nanoscale objects at the same time. The device could be used to observe structures as small as individual proteins, as well as the environment in which they move, say the researchers at the University of Tokyo who developed it.

“Our technique could help us link cell structures with the motion of tiny particles inside and outside cells,” explains Kohki Horie of the University of Tokyo’s department of physics, who led this research effort. “Because it is label-free, it is gentler on cells and better for long observations. In the future, it could help quantify cell states, holding potential for drug testing and quality checks in the biotechnology and pharmaceutical industries.”

Detecting forward and backward scattered light at the same time

The new device combines two powerful imaging techniques routinely employed in biomedical applications: quantitative phase microscopy (QPM) and interferometric scattering (iSCAT).

QPM measures forward-scattered (FS) light – that is, light waves that travel in the same direction as before they were scattered. This technique is excellent at imaging structures in the Mie scattering region (greater than 100 nm, referred to as microscale in this study). This makes it ideal for visualizing complex structures such as biological cells. It falls short, however, when it comes to imaging structures in the Rayleigh scattering region (smaller than 100 nm, referred to as nanoscale in this study).

The second technique, iSCAT, detects backward-scattered (BS) light. This is light that’s reflected back towards the direction from which it came and which predominantly contains Rayleigh scattering. As such, iSCAT exhibits high sensitivity for detecting nanoscale objects. Indeed, the technique has recently been used to image single proteins, intracellular vesicles and viruses. It cannot, however, image microscale structures because of its limited ability to detect in the Mie scattering region.
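The size scaling behind this division of labour is simple to state. In the Rayleigh regime a particle’s polarizability grows with its volume, so the scattered intensity falls off as the sixth power of diameter – but the interferometric cross-term that iSCAT records falls off only as the scattered field, i.e. as the third power. A toy calculation (the numbers are illustrative, not from the paper) makes the difference stark:

```python
# Signal relative to a 100 nm reference particle in the Rayleigh regime:
# pure scattered intensity ~ d^6, interferometric (iSCAT) cross-term ~ d^3.
for d in (100, 50, 20, 10):
    pure, interf = (d / 100) ** 6, (d / 100) ** 3
    print(f"d = {d:3d} nm: intensity {pure:.1e}, iSCAT cross-term {interf:.1e}")
```

For a 10 nm particle the pure scattering signal drops by a factor of a million, while the interferometric signal drops by only a thousand – which is why interferometric detection reaches single proteins.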

The team’s new bidirectional quantitative scattering microscope (BiQSM) is able to detect both FS and BS light at the same time, thereby overcoming these previous limitations.

Cleanly separating the signals from FS and BS

The BiQSM system illuminates a sample through an objective lens from two opposite directions and detects both the FS and BS light using a single image sensor. The researchers use the spatial-frequency multiplexing method of off-axis digital holography to capture both images simultaneously. The biggest challenge, says Horie, was to cleanly separate the signals from FS and BS light in the images while keeping noise low and avoiding mixing between them.
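The multiplexing idea can be demonstrated in a few lines of numpy: two fields ride on differently tilted reference beams in a single camera frame, producing two carrier fringe patterns whose sidebands sit at different points in Fourier space and can be cropped apart. The geometry, carriers and test objects below are invented for illustration – they are not the BiQSM parameters:

```python
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n]
fs_field = np.exp(1j * 0.5 * np.exp(-((x - 96)**2 + (y - 128)**2) / 400))  # toy phase object (FS)
bs_field = 0.2 * np.exp(-((x - 160)**2 + (y - 128)**2) / 200)              # toy scatterer (BS)

# Each field interferes with its own tilted reference, giving distinct carriers.
carrier1 = np.exp(1j * 2 * np.pi * 40 * x / n)     # fringes along x
carrier2 = np.exp(1j * 2 * np.pi * 40 * y / n)     # fringes along y
hologram = np.abs(1 + fs_field * carrier1 + bs_field * carrier2) ** 2

# Demultiplex: isolate each off-axis order in Fourier space and transform back.
F = np.fft.fftshift(np.fft.fft2(hologram))

def extract(cx, cy, r=20):
    """Crop one sideband and return the recovered complex field."""
    mask = (x - (n // 2 + cx)) ** 2 + (y - (n // 2 + cy)) ** 2 < r ** 2
    return np.fft.ifft2(np.fft.ifftshift(F * mask))

fs_rec = extract(40, 0)   # sideband of carrier1 -> forward-scattered image
bs_rec = extract(0, 40)   # sideband of carrier2 -> backward-scattered image
```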

Horie and colleagues Keiichiro Toda, Takuma Nakamura and team leader Takuro Ideguchi tested their technique by imaging live cells. They were able to visualize micron-sized cell structures, including the nucleus, nucleoli and lipid droplets, as well as nanoscale particles. They compared the FS and BS results using the scattering-field amplitude (SA), defined as the ratio of the amplitude of the scattered wave to that of the incident illumination wave.

“SA characterizes the light scattered in both the forward and backward directions within a unified framework,” says Horie, “so allowing for a direct comparison between FS and BS light images.”

Spurred on by their findings, which are detailed in Nature Communications, the researchers say they now plan to study even smaller particles such as exosomes and viruses.

Tetraquark measurements could shed more light on the strong nuclear force

8 January 2026 at 11:00

The Compact Muon Solenoid (CMS) Collaboration has made the first measurements of the quantum properties of a family of three “all-charm” tetraquarks that was recently discovered at the Large Hadron Collider (LHC) at CERN. The findings could shed more light on the properties of the strong nuclear force, which holds protons and neutrons together in nuclei, and could in turn help us better understand how ordinary matter forms.

In recent years, the LHC has discovered dozens of massive particles called hadrons, which are made of quarks bound together by the strong force. Quarks come in six types: up, down, charm, strange, top and bottom. Most observed hadrons comprise two or three quarks (called mesons and baryons, respectively). Physicists have also observed exotic hadrons that comprise four or five quarks – the tetraquarks and pentaquarks, respectively. Those seen so far usually contain a charm quark and its antimatter counterpart (a charm antiquark), with the remaining two or three quarks being up, down or strange quarks, or their antiquarks.

Identifying and studying tetraquarks and pentaquarks helps physicists to better understand how the strong force binds quarks together. This force also binds protons and neutrons in atomic nuclei.

Physicists are still divided as to the nature of these exotic hadrons. Some models suggest that their quarks are tightly bound via the strong force, so making these hadrons compact objects. Others say that the quarks are only loosely bound. To confuse things further, there is evidence that in some exotic hadrons, the quarks might be both tightly and loosely bound at the same time.

Now, new findings from the CMS Collaboration suggest that the all-charm tetraquarks are tightly bound, though they do not completely rule out other models.

Measuring quantum numbers

In their work, which is detailed in Nature, CMS physicists studied all-charm tetraquarks. These comprise two charm quarks and two charm antiquarks and were produced by colliding protons at high energies at the LHC. Three states of this tetraquark have been identified at the LHC: X(6600), X(6900) and X(7100), where the numbers denote their approximate mass in millions of electron volts. The team measured the fundamental properties of these tetraquarks, including their quantum numbers: parity (P), charge conjugation (C) and total angular momentum (J). P determines whether a particle has the same properties as its spatial mirror image; C whether it has the same properties as its antiparticle; and J, the total angular momentum of the hadron. These numbers provide information on the internal structure of a tetraquark.

The researchers used a version of a well-known technique called angular analysis, which is similar to the technique used to characterize the Higgs boson. This approach focuses on the angles at which the decay products of the all-charm tetraquarks are scattered.

“We call this technique quantum state tomography,” explains CMS team member Chiara Mariotti of INFN Torino in Italy. “Here, we deduce the quantum state of an exotic state X from the analysis of its decay products. In particular, the angular distributions in the decay X → J/ψJ/ψ, followed by J/ψ decays into two muons, serve as analysers of polarization of the two J/ψ particles,” she explains.

The researchers analysed all-charm tetraquarks produced at the CMS experiment between 2016 and 2018. They calculated that J is likely to be 2 and that P and C are both +1. This combination of properties is expressed as 2++.
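In the compact notation particle physicists use for these quantum numbers, the measurement reads:

```latex
J^{PC} = 2^{++}
\qquad \text{for} \qquad
X \to J/\psi\, J/\psi \to (\mu^{+}\mu^{-})(\mu^{+}\mu^{-})
```

where the decay chain on the right is the one whose angular distributions served as the polarization analysers.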

Result favours tightly-bound quarks

“This result favours models in which all four quarks are tightly bound,” says particle physicist Timothy Gershon of the UK’s University of Warwick, who was not involved in this study. “However, the question is not completely put to bed. The sample size in the CMS analysis is not sufficient to exclude fully other possibilities, and additionally certain assumptions are made that will require further testing in future.”

Gershon adds, “These include assumptions that all three states have the same quantum numbers, and that all correspond to tetraquark decays to two J/ψ mesons with no additional particles not included in the reconstruction (for example there could be missing photons that have been radiated in the decay).”

Further studies with larger data samples are warranted, he adds. “Fortunately, CMS as well as both the LHCb and the ATLAS collaborations [at CERN] already have larger samples in hand, so we should not have to wait too long for updates.”

Indeed, the CMS Collaboration is now gathering more data and exploring additional decay modes of these exotic tetraquarks. “This will ultimately improve our understanding of how this matter forms, which, in turn, could help refine our theories of how ordinary matter comes into being,” Mariotti tells Physics World.

Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound

7 January 2026 at 10:00

Sound waves can make small objects hover in the air, but applying this acoustic levitation technique to an array of objects is difficult because the objects tend to clump together. Physicists at the Institute of Science and Technology Austria (ISTA) have now overcome this problem thanks to hybrid structures that emerge from the interplay between attractive acoustic forces and repulsive electrostatic ones. By proving that it is possible to levitate many particles while keeping them separated, the finding could pave the way for advances in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics.

In acoustic levitation, particles ranging in size from tens of microns to millimetres are drawn up into the air and confined by an acoustic force. The origins of this force lie in the momentum that the applied acoustic field transfers to a particle as sound waves scatter off its surface. While the technique works well for single particles, multiple particles tend to aggregate into a single dense object in mid-air because the sound waves they scatter can, collectively, create an attractive interaction between them.
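For a particle much smaller than the acoustic wavelength, this trapping force is conventionally derived from the Gor’kov potential – a standard result in acoustofluidics rather than anything specific to this study:

```latex
U = 2\pi R^{3}\left[\frac{f_{1}}{3\rho_{0}c_{0}^{2}}\,\langle p^{2}\rangle
  - \frac{f_{2}\,\rho_{0}}{2}\,\langle v^{2}\rangle\right],
\qquad \mathbf{F} = -\nabla U
```

Here R is the particle radius, ρ0 and c0 are the fluid’s density and sound speed, ⟨p²⟩ and ⟨v²⟩ are the mean-square acoustic pressure and velocity at the particle’s position, and f1 and f2 are contrast factors set by the particle’s compressibility and density relative to the fluid.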

Keeping particles separated

Led by Scott Waitukaitis, the ISTA researchers found a way to avoid this so-called “acoustic collapse” by using a tuneable repulsive electrostatic force to counteract the attractive acoustic one. They began by levitating a single silver-coated poly(methyl methacrylate) (PMMA) microsphere 250‒300 µm in diameter above a reflector plate coated with a transparent and conductive layer of indium tin oxide (ITO). They then imbued the particle with a precisely controlled amount of electrical charge by letting it rest on the ITO plate with the acoustic field off, but with a high-voltage DC potential applied between the plate and a transducer. This produces a capacitive build-up of charge on the particle, and the amount of charge can be estimated from Maxwell’s solutions for two contacting conductive spheres (assuming, in the calculations, that the lower plate acts like a sphere with infinite radius).
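A rough numerical version of that estimate uses Maxwell’s classical result for a conducting sphere in contact with a plane electrode in a uniform field, Q = (2π³/3)ε0R²E. The voltage and plate separation below are hypothetical values chosen purely for illustration; only the sphere size comes from the experiment:

```python
import numpy as np

eps0 = 8.854e-12        # F/m, vacuum permittivity
R = 137.5e-6            # m, radius of a 275-um sphere (mid-range of 250-300 um)
V, gap = 2e3, 10e-3     # V and m: hypothetical DC voltage and plate separation
E = V / gap             # uniform-field estimate

Q = (2 * np.pi**3 / 3) * eps0 * R**2 * E    # Maxwell contact-charge formula
print(f"estimated charge ~ {Q * 1e12:.2f} pC")   # ~0.69 pC for these numbers
```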

The next step in the process is to switch on the acoustic field and, after just 10 ms, add the electric field to it. During the short period in which both fields are on, and provided the electric field is strong enough, either field can launch the particle towards the centre of the levitation setup. The electric field is then switched off. A few seconds later, the particle levitates stably in the trap, with a charge given, in principle, by Maxwell’s approximations.

A visually mesmerizing dance of particles

This charging method works equally well for multiple particles, allowing the researchers to load particles into the trap with high efficiency and virtually any charge they want, limited only by the breakdown voltage of the surrounding air. Indeed, the physicists found they could tune the charge to levitate particles separately or collapse them into a single, dense object. They could even create hybrid states that mix separated and collapsed particles.

And that wasn’t all. According to team member Sue Shi, a PhD student at ISTA and the lead author of a paper in PNAS about the research, the most exciting moment came when they saw the compact parts of the hybrid structures spontaneously begin to rotate, while the expanded parts remained in one place while oscillating in response to the rotation. The result was “a visually mesmerizing dance,” Shi says, adding that “this is the first time that such acoustically and electrostatically coupled interactions have been observed in an acoustically levitated system.”

As well as having applications in areas such as materials science and micro-robotics, Shi says the technique developed in this work could be used to study non-reciprocal effects that lead to the particles rotating or oscillating. “This would pave the way for understanding more elusive and complex non-reciprocal forces and many-body interactions that likely influence the behaviours of our system,” Shi tells Physics World.

New hybrid state of matter is a mix of solid and liquid

6 January 2026 at 16:00

The boundary between a substance’s liquid and solid phases may not be as clear-cut as previously believed. A new state of matter that is a hybrid of both has emerged in research by scientists at the University of Nottingham, UK, and the University of Ulm, Germany, and they say the discovery could have applications in catalysis and other thermally-activated processes.

In liquids, atoms move rapidly, sliding over and around each other in a random fashion. In solids, they are fixed in place. The transition between the two states – solidification – occurs when this random atomic motion gives way to an ordered crystalline structure.

At least, that’s what we thought. Thanks to a specialist microscopy technique, researchers led by Nottingham’s Andrei Khlobystov found that this simple picture isn’t entirely accurate. In fact, liquid metal nanoparticles can contain stationary atoms – and as the liquid cools, their number and position play a significant role in solidification.

Some atoms remain stationary

The team used a method called spherical and chromatic aberration-corrected high-resolution transmission electron microscopy (Cc/Cs-corrected HRTEM) at the low-voltage SALVE instrument at Ulm to study melted metal nanoparticles (such as platinum, gold and palladium) deposited on an atomically thin layer of graphene. This carbon-based material acted as a sort of “hob” for heating the particles, says team member Christopher Leist, who was in charge of the HRTEM experiments. “As they melted, the atoms in the nanoparticles began to move rapidly, as expected,” Leist says. “To our surprise, however, we found that some atoms remained stationary.”

At high temperatures, these static atoms bind strongly to point defects in the graphene support. When the researchers used the electron beam from the transmission microscope to increase the number of these defects, the number of stationary atoms within the liquid increased, too. Khlobystov says that this had a knock-on effect on how the liquid solidified: when the stationary atoms are few in number, a crystal forms directly from the liquid and continues to grow until the entire particle has solidified. When their numbers increase, the crystallization process cannot take place and no crystals form.

“The effect is particularly striking when stationary atoms create a ring (corral) that surrounds and confines the liquid,” he says. “In this unique state, the atoms within the liquid droplet are in motion, while the atoms forming the corral remain motionless, even at temperatures well below the freezing point of the liquid.”

Unprecedented level of detail

The researchers chose to use Cc/Cs-corrected HRTEM in their study because minimizing spherical and chromatic aberrations through specialized hardware installed on the microscope enabled them to resolve single atoms in their images.

“Additionally, we can control both the energy of the electron beam and the sample temperature (the latter using MEMS-heated chip technology),” Khlobystov explains. “As a result, we can study metal samples at temperatures of up to 800 °C, even in a molten state, without sacrificing atomic resolution. We can therefore observe atomic behaviour during crystallization while actively manipulating the environment around the metal particles using the electron beam or by cooling the particles. This level of detail under such extreme conditions is unprecedented.”

Effect could be harnessed for catalysis

The Nottingham-Ulm researchers, who report their work in ACS Nano, say they obtained their results by chance while working on an EPSRC-funded project on 1-2 nm metal particles for catalysis applications. “Our approach involves assembling catalysts from individual metal atoms, utilizing on-surface phenomena to control their assembly and dynamics,” explains Khlobystov. “To gain this control, we needed to investigate the behaviour of metal atoms at varying temperatures and within different local environments on a support material.

“We suspected that the interplay between vacancy defects in the support and the sample temperature creates a powerful mechanism for controlling the size and structure of the metal particles,” he tells Physics World. “Indeed, this study revealed the fundamental mechanisms behind this process with atomic precision.”

The experiments were far from easy, he recalls, with one of the key challenges being to identify a thin, robust and thermally conductive support material for the metal. Happily, graphene meets all these criteria.

“Another significant hurdle to overcome was to be able to control the number of defect sites surrounding each particle,” he adds. “We successfully accomplished this by using the TEM’s electron beam not just as an imaging tool, but also as a means to modify the environment around the particles by creating defects.”

The researchers say they would now like to explore whether the effect can be harnessed for catalysis. To do this, Khlobystov says it will be essential to improve control over defect production and its scale. “We also want to image the corralled particles in a gas environment to understand how the phenomenon is influenced by reaction conditions, since our present measurements were conducted in a vacuum,” he adds.

Quantum photonics network passes a scaling-up milestone

6 January 2026 at 10:00

Physicists in the UK have succeeded in routing and teleporting entangled states of light between two four-user quantum networks – an important milestone in the development of scalable quantum communications. Led by Mehul Malik and Natalia Herrera Valencia of Heriot-Watt University in Edinburgh, Scotland, the team achieved this thanks to a new method that uses light-scattering processes in an ordinary optical fibre to program a circuit. This approach, which is radically different from conventional methods based on photonic chips, allows the circuit to function as a programmable entanglement router that can implement several different network configurations on demand.

The team performed the experiments using commercially-available optical fibres, which are multi-mode structures that scatter light via random linear optical processes. In simple terms, Herrera Valencia explains that this means the light tends to ricochet chaotically through the fibres along hundreds of internal pathways. While this effect can scramble entanglement, researchers at the Institut Langevin in Paris, France, had previously found that the scrambling can be calculated by analysing how the fibre transmits light. What is more, the light-scattering processes in such a medium can be harnessed to make programmable optical circuits – which is exactly what Malik, Herrera Valencia and colleagues did.
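The underlying principle can be sketched in a few lines: model the fibre as a transmission matrix T acting on its guided modes. Once T has been measured, pre-shaping the input undoes the scrambling and routes light wherever you like. The mode count and target below are arbitrary illustrations of the idea, not the team’s wavefront-shaping apparatus:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # number of guided modes (illustrative)

# Model the fibre as a random unitary: QR decomposition of a complex Gaussian.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(A)

target = np.zeros(n, dtype=complex)
target[7] = 1.0                          # send all the light into mode 7

shaped_input = T.conj().T @ target       # pre-compensate the scrambling
output = T @ shaped_input
print(f"power in mode 7: {abs(output[7])**2:.3f}")   # 1.000
```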

“Top-down” approach

The researchers explain that this “top-down” approach simplifies the circuit’s architecture because it separates the layer where the light is controlled from the layer in which it is mixed. Using waveguides for transporting and manipulating the quantum states of light also reduces optical losses. The result is a reconfigurable multi-port device that can distribute quantum entanglement between many users simultaneously in multiple patterns, switching between different channels (local connections, global connections or both) as required.

A further benefit is that the channels can be multiplexed, allowing many quantum processors to access the system at the same time. The researchers say this is similar to multiplexing in classical telecommunications networks, which makes it possible to send huge amounts of data through a single optical fibre using different wavelengths of light.

Access to a large number of modes

Although controlling and distributing entangled states of light is key for quantum networks, Malik says it comes with several challenges. One of these is that conventional methods based on photonic chips cannot be scaled up easily. They are also very sensitive to imperfections in how they’re made. In contrast, the waveguide-based approach developed by the Heriot-Watt team “opens up access to a large number of modes, providing significant improvements in terms of achievable circuit size, quality and loss,” Malik tells Physics World, adding that the approach also fits naturally with existing optical fibre infrastructures.

Gaining control over the complex scattering process inside a waveguide was not easy, though. “The main challenge was the learning curve and understanding how to control quantum states of light inside such a complex medium,” Herrera Valencia recalls. “It took time and iteration, but we now have the precise and reconfigurable control required for reliable entanglement distribution, and even more so for entanglement swapping, which is essential for scalable networks.”

While the Heriot-Watt team used the technique to demonstrate flexible quantum networking, Malik and Herrera Valencia say it might also be used for implementing large-scale photonic circuits. Such circuits could have many applications, ranging from machine learning to quantum computing and networking, they add.

Looking ahead, the researchers, who report their work in Nature Photonics, say they are now aiming to explore larger-scale circuits that can operate on more photons and light modes. “We would also like to take some of our network technology out of the laboratory and into the real world,” says Malik, adding that Herrera Valencia is leading a commercialization effort in that direction.

Will this volcano explode, or just ooze? A new mechanism could hold some answers

15 December 2025 at 17:00

Bubbling up: A schematic representation of a volcanic system and a snapshot of one of the team’s experiments. The shear-induced bubbles are marked with red ellipses. (Courtesy: O Roche)

An international team of researchers has discovered a new mechanism that can trigger the formation of bubbles in magma – a major driver of volcanic eruptions. The finding could improve our understanding of volcanic hazards by improving models of magma flow through conduits beneath Earth’s surface.

Volcanic eruptions are thought to occur when magma deep within the Earth’s crust decompresses. This decompression allows volatile chemicals dissolved in the magma to escape in gaseous form, producing bubbles. The more bubbles there are in the viscous magma, the faster it will rise, until eventually it tears itself apart.

“This process can be likened to a bottle of sparkling water containing dissolved volatiles that exsolve when the bottle is opened and the pressure is released,” explains Olivier Roche, a member of the volcanology team at the Magmas and Volcanoes Laboratory (LMV) at the Université Clermont Auvergne (UCA) in France and lead author of the study.

Magma shearing forces could induce bubble nucleation

The new work, however, suggests that this explanation is incomplete. In their study, Roche and colleagues at UCA, the French National Research Institute for Sustainable Development (IRD), Brown University in the US and ETH Zurich in Switzerland began with the assumption that the mechanical energy in magma comes from the pressure gradient between the nucleus of a gas bubble and the ambient liquid. “However, mechanical energy may also be provided by shear stress in the magma when it is in motion,” Roche notes. “We therefore hypothesized that magma shearing forces could induce bubble nucleation too.”
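The scale of that energy source is easy to estimate: a viscous fluid sheared at rate γ̇ sustains a stress τ = ηγ̇, and with it a comparable mechanical energy density. The viscosity and shear rate below are generic, magma-like values chosen for illustration – they are not measurements from the study:

```python
eta = 1e4          # Pa*s, an illustrative viscosity for a silicic magma
gamma_dot = 10.0   # 1/s, a hypothetical shear rate near a conduit wall
tau = eta * gamma_dot                          # viscous shear stress
print(f"viscous shear stress ~ {tau:.0e} Pa")  # 1e+05 Pa for these values
# The available stress, and hence the mechanical energy that can help open
# bubble nuclei, grows linearly with both viscosity and shear rate.
```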

To test their theory, the researchers reproduced the internal movements of magma in liquid polyethylene oxide saturated with carbon dioxide at 80 °C. They then set up a device to observe bubble nucleation in situ while the material was experiencing shear stress. They found that the energy provided by viscous shear is large enough to trigger bubble formation – even if decompression isn’t present.

The effect, which the team calls shear-induced bubble nucleation, depends on the magma’s viscosity and on the amount of gas it contains. According to Roche, the presence of this effect could help researchers determine whether an eruption is likely to be explosive or effusive. “Understanding which mechanism is at play is fundamental for hazard assessment,” he says. “If many gas bubbles grow deep in the volcano conduit in a volatile-rich magma, for example, they can combine with each other and form larger bubbles that then open up degassing conduits connected to the surface.

“This process will lead to effusive eruptions, which is counterintuitive (but supported by some earlier observations),” he tells Physics World. “It calls for the development of new conduit flow models to predict eruptive style for given initial conditions (essentially volatile content) in the magma chamber.”

Enhanced predictive power

By integrating this mechanism into future predictive models, the researchers aim to develop tools that better anticipate the intensity of eruptions, allowing scientists and local authorities to improve the way they manage volcanic hazards.

Looking ahead, they are planning new shear experiments on liquids that contain solid particles, mimicking crystals that form in magma and are believed to facilitate bubble nucleation. In the longer term, they plan to study combinations of shear and compression, though Roche acknowledges that this “will be challenging technically”.

They report their present work in Science.

Leftover gamma rays produce medically important radioisotopes

12 December 2025 at 10:00

The “leftover” gamma radiation produced when the beam of an electron accelerator strikes its target is usually discarded. Now, however, physicists have found a new use for it: generating radioactive isotopes for diagnosing and treating cancer. The technique, which piggybacks on an already-running experiment, uses bremsstrahlung from an accelerator facility to trigger nuclear reactions in a layer of zinc foil. The products of these reactions include copper isotopes that are hard to make using conventional techniques, meaning that the approach could reduce their costs and expand access to treatments.

Radioactive nuclides are commonly used to treat cancer, and so-called theranostic pairs are especially promising. These pairs occur when one isotope of an element provides diagnostic imaging while another delivers therapeutic radiation – a combination that enables precision tumour targeting to improve treatment outcomes.

One such pair is 64Cu and 67Cu: the former emits positrons that can identify tumours in PET scans while the latter produces beta particles that can destroy cancerous cells. They also have a further clinical advantage in that copper binds to antibodies and other biomolecules, allowing the isotopes to be delivered directly into cells. Indeed, these isotopes have already been used to treat cancer in mice, and early clinical studies in humans are underway.

“Wasted” photons might be harnessed

Researchers led by Mamad Eslami of the University of York, UK, have now put forward a new way to make both isotopes. Their method exploits the fact that gamma rays generated by the intense electron beams in particle accelerator experiments interact only weakly with matter (relative to electrons or neutrons, at least). This means that many of them pass right through their primary target and into a beam dump. These “wasted” photons still carry enough energy to drive further nuclear reactions, though, and Eslami and colleagues realized that they could be harnessed to produce 64Cu and 67Cu.

Eslami and colleagues tested their idea at the Mainz Microtron, an electron accelerator at Johannes Gutenberg University Mainz in Germany. “We wanted to see whether GeV-scale bremsstrahlung, already available at the electron accelerator, could be used in a truly parasitic configuration,” Eslami says. The real test, he adds, was whether they could produce 67Cu alongside the primary experiment, which was using the same electron beam and photon field to study hadron physics, without disturbing it or degrading the beam conditions.

The answer turned out to be “yes”. What’s more, the researchers found that their approach could produce enough 67Cu for medical applications in about five days – roughly equal to the time required for a nuclear reactor to produce the equivalent amount of another important medical radionuclide, lutetium-177.
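That “about five days” figure reflects generic activation arithmetic: the activity of a radionuclide produced at a constant rate approaches saturation once the irradiation has lasted a few half-lives. In the sketch below, only the 67Cu half-life (about 61.8 hours) is a physical constant; the production rate R is a made-up placeholder:

```python
import numpy as np

t_half = 61.8 * 3600                    # s, half-life of Cu-67
lam = np.log(2) / t_half                # decay constant
R = 1e7                                 # reactions/s, hypothetical production rate

for days in (1, 2, 5, 10):
    t = days * 86400
    A = R * (1 - np.exp(-lam * t))      # activity at end of irradiation (Bq)
    print(f"{days:2d} d: {A / R:5.1%} of saturation")
```

After five days the activity is already about three quarters of its saturation value, so irradiating for much longer brings rapidly diminishing returns.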

Improving nuclear medicine treatments and reducing costs

“Our results indicate that, under suitable conditions, high-energy electron and photon facilities that were originally built for nuclear or particle physics experiments could also be used to produce 67Cu and other useful radionuclides,” Eslami tells Physics World. In practice, however, Eslami adds that this will only be realistic at sites with strong, well-characterized bremsstrahlung fields. High-power multi-GeV electron facilities such as the planned Electron-Ion Collider at Brookhaven National Laboratory in the US, or a high-repetition laser-plasma electron source, are two possibilities.

Even with this restriction, team member Mikhail Bashkanov is excited about the advantages. “If we could do away with the necessity of using nuclear reactors to produce medical isotopes and solely generate them with high-energy photon beams from laser-plasma accelerators, we could significantly improve nuclear medicine treatments and reduce their costs,” Bashkanov says.

The researchers, who detail their work in Physical Review C, now plan to test their method at other electron accelerators, especially those with higher beam power and GeV-scale beams, to quantify the 67Cu yields they can expect to achieve in realistic target and beam-dump configurations. In parallel, Eslami adds, they want to explore parasitic operation at emerging laser-plasma-driven electron sources that are being developed for muon tomography. They would also like to link their irradiation studies to target design, radiochemistry and timing constraints to see whether the method can deliver clinically useful activities of 67Cu and other useful isotopes in a reliable and cost-effective way.

Astronomers observe a coronal mass ejection from a distant star

11 December 2025 at 10:00

The Sun regularly produces energetic outbursts of electromagnetic radiation called solar flares. When these flares are accompanied by flows of plasma, they are known as coronal mass ejections (CMEs). Now, astronomers at the Netherlands Institute for Radio Astronomy (ASTRON) have spotted a similar event occurring on a star other than our Sun – the first unambiguous detection of a CME outside our solar system.

Astronomers have long predicted that the radio emissions associated with CMEs from other stars should be detectable. However, Joseph Callingham, who led the ASTRON study, says that he and his colleagues needed the highly sensitive low-frequency radio telescope LOFAR – plus ESA’s XMM-Newton space observatory and “some smart software” developed by Cyril Tasse and Philippe Zarka at the Observatoire de Paris-PSL, France – to find one.

A short, intense radio signal from StKM 1-1262

Using these tools, the team detected short, intense radio signals from a star located around 40 light-years away from Earth. This star, called StKM 1-1262, is very different from our Sun. At only around half of the Sun’s mass, it is classed as an M-dwarf star. It also rotates 20 times faster and boasts a magnetic field 300 times stronger. Nevertheless, the burst it produced had the same frequency, timing and polarization properties as a solar type II burst – the plasma emission that astronomers identify with a fast CME when it comes from the Sun.
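Type II bursts are plasma emission: radio waves generated at (or near a harmonic of) the local plasma frequency, which depends only on the electron density. As a CME-driven shock climbs outward through the thinning corona, the emission therefore drifts to lower frequencies, and the drift rate encodes the ejection speed:

```latex
f_{\mathrm{pe}} = \frac{1}{2\pi}\sqrt{\frac{n_{e}\,e^{2}}{\varepsilon_{0}\,m_{e}}}
\approx 8.98\,\mathrm{kHz}\,\sqrt{n_{e}\,[\mathrm{cm^{-3}}]}
```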

“This work opens up a new observational frontier for studying and understanding eruptions and space weather around other stars,” says Henrik Eklund, an ESA research fellow working at the European Space Research and Technology Centre (ESTEC) in Noordwijk, Netherlands, who was not involved in the study. “We’re no longer limited to extrapolating our understanding of the Sun’s CMEs to other stars.”

Implications for life on exoplanets

The high speed of this burst – around 2400 km/s – would be atypical for our own Sun, with only around 1 in every 20 solar CMEs reaching that level. However, the ASTRON team says that M-dwarfs like StKM 1-1262 could emit CMEs of this type as often as once a day.

Spotting a distant coronal mass ejection: An artist’s impression of XMM-Newton. (Courtesy: ESA/C Carreau)

According to Eklund, this has implications for extraterrestrial life, as most of the known planets in the Milky Way are thought to orbit stars of this type, and such bursts could be powerful enough to strip their atmospheres. “It seems that intense space weather may be even more extreme around smaller stars – the primary hosts of potentially habitable exoplanets,” he says. “This has important implications for how these planets keep hold of their atmospheres and possibly remain habitable over time.”

Erik Kuulkers, a project scientist at XMM-Newton who was also not directly involved in the study, suggests that this atmosphere-stripping ability could modify the way we hunt for life in stellar systems akin to our Solar System. “A planet’s habitability for life as we know it is defined by its distance from its parent star – whether or not it sits within the star’s ‘habitable zone’, a region where liquid water can exist on the surface of planets with suitable atmospheres,” Kuulkers says. “What if that star was especially active, regularly producing CMEs, however? A planet regularly bombarded by these ejections might lose its atmosphere entirely, leaving behind a barren, uninhabitable world, despite its orbit being ‘just right’.”

Kuulkers adds that the study’s results also contain lessons for our own Solar System. “Why is there still life on Earth despite the violent material being thrown at us?” he asks. “It is because we are safeguarded by our atmosphere.”

Seeking more data

The ASTRON team’s next step will be to look for more stars like StKM 1-1262, which Kuulkers agrees is a good idea. “The more events we can find, the more we learn about CMEs and their impact on a star’s environment,” he says. Additional observations at other wavelengths “would help”, he adds, “but we have to admit that events like the strong one reported on in this work don’t happen too often, so we also need to be lucky enough to be looking at the right star at the right time.”

For now, the ASTRON researchers, who report their work in Nature, say they have reached the limit of what they can detect with LOFAR. “The next step is to use the next generation Square Kilometre Array, which will let us find many more such stars since it is so much more sensitive,” Callingham tells Physics World.

Scientists explain why ‘seeding’ clouds with silver iodide is so efficient

10 December 2025 at 09:58

Silver iodide crystals have long been used to “seed” clouds and trigger precipitation, but scientists have never been entirely sure why the material works so well for that purpose. Researchers at TU Wien in Austria are now a step closer to solving the mystery thanks to a new study that characterized surfaces of the material in atomic-scale detail.

“Silver iodide has been used in atmospheric weather modification programs around the world for several decades,” explains Jan Balajka from TU Wien’s Institute of Applied Physics, who led this research. “In fact, it was chosen for this purpose as far back as the 1940s because of its atomic crystal structure, which is nearly identical to that of ice – it has the same hexagonal symmetry and very similar distances between atoms in its lattice structure.”

The basic idea, Balajka continues, originated with the 20th-century American atmospheric scientist Bernard Vonnegut, who suggested in 1947 that introducing small silver iodide (AgI) crystals into a cloud could provide nuclei for ice to grow on. But while Vonnegut’s proposal worked (and helped to inspire his brother Kurt’s novel Cat’s Cradle), this simple picture is not entirely accurate. The stumbling block is that nucleation occurs at the surface of a crystal, not inside it, and the atomic structure of an AgI surface differs significantly from its interior.
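The closeness of that match can be put in numbers using the commonly tabulated lattice constants for β-AgI and hexagonal ice Ih – values taken from standard crystallographic references, not from this study:

```python
a_agi, c_agi = 4.59, 7.51   # angstrom, beta-AgI (wurtzite structure)
a_ice, c_ice = 4.52, 7.36   # angstrom, hexagonal ice Ih
for axis, agi, ice in (("a", a_agi, a_ice), ("c", c_agi, c_ice)):
    print(f"{axis}-axis mismatch: {100 * (agi - ice) / ice:.1f}%")
# ~1.5% and ~2.0% -- small enough for AgI to template hexagonal ice.
```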

A task that surface science has solved

To investigate further, Balajka and colleagues used high-resolution atomic force microscopy (AFM) and advanced computer simulations to study the atomic structure of 2‒3 nm diameter AgI crystals when they are broken into two pieces. The team’s measurements revealed that both freshly cleaved surfaces differed from the structure found inside the crystal.

More specifically, team member Johanna Hütner, who performed the experiments, explains that when an AgI crystal is cleaved, the silver atoms end up on one side while the iodine atoms appear on the other. This has implications for ice growth, because while the silver side maintains a hexagonal arrangement that provides an ideal template for the growth of ice layers, the iodine side reconstructs into a rectangular pattern that no longer lattice-matches the hexagonal symmetry of ice crystals. The iodine side is therefore incompatible with the epitaxial growth of hexagonal ice.

“Our work solves this decades-long controversy of the surface vs bulk structure of AgI, and shows that structural compatibility does matter,” Balajka says.

Difficult experiments

According to Balajka, the team’s experiments were far from easy. Many experimental methods for studying the structure and properties of material surfaces are based on interactions with charged particles such as electrons or ions, but AgI is an electrical insulator, which “excludes most of the tools available,” he explains. Using AFM enabled them to overcome this problem, he adds, because this technique detects interatomic forces between a sharp tip and the surface and does not require a conductive sample.

Another problem is that AgI is photosensitive and decomposes when exposed to visible light. While this property is useful in other contexts – AgI was a common ingredient in early photographic plates – it created complications for the TU Wien team. “Conventional AFM setups make use of optical laser detection to map the topography of a sample,” Balajka notes.

To avoid destroying their sample while studying it, the researchers therefore had to use a non-contact AFM based on a piezoelectric sensor that detects electrical signals and does not require optical readout. They also adapted their setup to operate in near-darkness, using only red light while manipulating the AgI to ensure that stray light did not degrade the samples.

The computational modelling part of the work introduced yet another hurdle to overcome. “Both Ag and I are atoms with a high number of electrons in their electron shells and are thus highly polarizable,” Balajka explains. “The interaction between such atoms cannot be accurately described by standard computational modelling methods such as density functional theory (DFT), so we had to employ highly accurate random-phase approximation (RPA) calculations to obtain reliable results.”

Highly controlled conditions

The researchers acknowledge that their study, which is detailed in Science Advances, was conducted under highly controlled conditions – ultrahigh vacuum, low pressure and temperature and a dark environment – that are very different from those that prevail inside real clouds. “The next logical step for us is therefore to confirm whether our findings hold under more representative conditions,” Balajka says. “We would like to find out whether the structure of AgI surfaces is the same in air and water, and if not, why.”

The researchers would also like to better understand the atomic arrangement of the rectangular reconstruction of the iodine surface. “This would complete the picture for the use of AgI in ice nucleation, as well as our understanding of AgI as a material overall,” Balajka says.

Memristors could measure a single quantum of resistance

9 December 2025 at 10:52

A proposed new way of defining the standard unit of electrical resistance would do away with the need for strong magnetic fields when measuring it. The new technique is based on memristors, which are programmable resistors originally developed as building blocks for novel computing architectures, and its developers say it would considerably simplify the experimental apparatus required to measure a single quantum of resistance for some applications.

Electrical resistance is a physical quantity that represents how much a material opposes the flow of electrical current. It is measured in ohms (Ω), and since 2019, when the base units of the International System of Units (SI) were most recently revised, the ohm has been defined in terms of the von Klitzing constant h/e², where h and e are the Planck constant and the charge on an electron, respectively.

To measure this resistance with high precision, scientists use the fact that the von Klitzing constant is related to the quantized change in the Hall resistance of a two-dimensional electron system (such as the one that forms in a semiconductor heterostructure) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in a material like GaAs or AlGaAs, it shows up at fields of around 10 tesla. Generating such high fields typically requires a superconducting electromagnet, however.
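Since the 2019 revision fixed the numerical values of h and e exactly, the von Klitzing constant – and the closely related conductance quantum G0 = 2e²/h, the quantity at the centre of the memristor work described below – can be computed to any desired precision:

```python
h = 6.62607015e-34       # J*s, exact since the 2019 SI redefinition
e = 1.602176634e-19      # C, exact

R_K = h / e**2           # von Klitzing constant
G0 = 2 * e**2 / h        # conductance quantum (factor 2 from spin degeneracy)
print(f"R_K = {R_K:.6f} ohm")      # 25812.807459 ohm
print(f"G0  = {G0 * 1e6:.6f} uS")  # 77.480917 uS
```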

A completely different approach

Researchers connected to a European project called MEMQuD are now advocating a completely different approach. Their idea is based on memristors, which are programmable resistors that “remember” their previous resistance state even after they have been switched off. This previous resistance state can be changed by applying a voltage or current.

In the new work, a team led by Gianluca Milano of Italy’s Istituto Nazionale di Ricerca Metrologica (INRiM); Vitor Cabral of the Instituto Português da Qualidade; and Ilia Valov of the Institute of Electrochemistry and Energy Systems at the Bulgarian Academy of Sciences studied a device based on memristive nanoionics cells made from conducting filaments of silver. When an electrical field is applied to these filaments, their conductance changes in distinct, quantized steps.

The MEMQuD team reports that the quantum conductance levels achieved in this set-up are precise enough to be exploited as intrinsic standard values. Indeed, a large inter-laboratory comparison confirmed that the values deviated by just -3.8% and 0.6% from the agreed SI values for the fundamental quantum of conductance, G0, and 2G0, respectively. The researchers attribute this precision to tight, atomic-level control over the morphology of the nanochannels responsible for quantum conductance effects, which they achieved by electrochemically polishing the silver filaments into the desired configuration.

A national metrology institute condensed into a microchip

The researchers say their results are building towards a concept known as an “NMI-in-a-chip” – that is, condensing the services of a national metrology institute into a microchip. “This could lead to measuring devices that have their resistance references built-in directly into the chip,” says Milano, “so doing away with complex measurements in laboratories and allowing for devices with zero-chain traceability – that is, those that do not require calibration since they have embedded intrinsic standards.”

Yuma Okazaki of Japan’s National Institute of Advanced Industrial Science and Technology (AIST), who was not involved in this work, says that the new technique could indeed allow end users to directly access a quantum resistance standard.

“Notably, this method can be demonstrated at room temperature and under ambient conditions, in contrast to conventional methods that require cryogenic and vacuum equipment, which is expensive and requires a lot of electrical power,” Okazaki says. “If such a user-friendly quantum standard becomes more stable and its uncertainty is improved, it could lead to a new calibration scheme for ensuring the accuracy of electronics used in extreme environments, such as space or the deep ocean, where traditional quantum standards that rely on cryogenic and vacuum conditions cannot be readily used.”

The MEMQuD researchers, who report their work in Nature Nanotechnology, now plan to explore ways to further decrease deviations from the agreed SI values for G0 and 2G0. These include better material engineering, an improved measurement protocol, and strategies for topologically protecting the memristor’s resistance.
